
A new study published in the journal Crime Science has ranked fake video and audio content, known as "deepfakes," as the most worrying use of artificial intelligence (AI) for crime and terrorism.

Researchers identify 20 ways AI could be used for crime

According to the study by University College London researchers, AI could be employed for criminal purposes in 20 different ways over the next 15 years. The study ranked these crimes by the harm they could cause, the potential criminal gain, how feasible they would be to carry out, and how difficult they would be to prevent. The researchers indicated that deepfakes will be hard to detect and stop because they can serve such varied objectives, and that they could lead to widespread distrust of audio and visual evidence, ultimately causing societal harm.

For instance, imagine a video spreading on social media that shows US President Donald Trump declaring war on another country. The video may look legitimate, yet in reality it is a deepfake. The uncertainty and fear such a video could trigger within minutes would be damaging even if an official clarification were issued later. Because deepfakes are so convincing, they can be hard to detect and could be used for anything from impersonation and extortion to political manipulation.

Deepfakes could become rampant

Researchers warn that deepfakes will become so common that people will no longer be able to tell what to trust. Other AI-enabled crimes rated as high concern include spear phishing, using autonomous vehicles as weapons, AI-authored fake news, disruption of AI-controlled systems, and harvesting online data for large-scale blackmail.

Professor Lewis Griffin, the study's lead author, noted that as AI-based capabilities expand, so does the potential for criminal exploitation. He said that preparing adequately for potential AI threats requires identifying what those threats might be and what impact they could have. The researchers compiled and analyzed the threats from academic papers, news reports, and fictional sources.