23 July, 2024

Unmasking the AI deepfake threat 

What is a deepfake?
A deepfake is a synthetic video, image or audio recording created with Artificial Intelligence (AI) to look or sound real. Deepfakes mimic a person’s likeness, including their facial features and voice. They can be used for entertainment or scientific research, but they can also be put to malicious use. The most significant danger deepfakes present is their ability to spread false information that appears to come from trusted sources.
What different types of threat can they pose?
  • Blackmail and reputational harm, where fabricated content places targets in compromising or legally damaging situations.
  • Political misinformation. Fabricated videos of politicians saying or doing things they never did can erode public trust or even incite unrest.
  • Election interference, such as creating fake videos of candidates to influence public opinion.
  • Stock manipulation where fake content is created to influence stock prices.
  • Fraud where an individual is impersonated to steal money.
Recent high-profile deepfake fraud cases
Elon Musk Bitcoin scam
A recent deepfake video of Tesla co-founder Elon Musk, broadcast in a YouTube livestream, encouraged viewers to part with their Bitcoin on the promise that it would be doubled.
“What this scam has highlighted is just how quickly the risk landscape is shifting as new breeds of fraud come into play”, says Michael Marcotte, founder of the National Cybersecurity Centre.
Arup financial scam
The Financial Times reported that British engineering firm Arup was the victim of a recent high-profile deepfake scam. An employee in Hong Kong was tricked by a deepfake video call into sending £20m to scammers posing as senior officers of the company.
After a video conference attended by a digital clone of the company’s CFO and other fake company employees, the staff member made a total of 15 transfers to five Hong Kong bank accounts, only discovering the scam after following up with the group’s headquarters.
Scarlett Johansson Advert 
Sky News reported that a deepfake of Scarlett Johansson was used in an ad seen on X (formerly Twitter) for a company called Lisa AI. The company lets users create avatars and images from text prompts, but Johansson’s likeness was used without her permission.
Can deepfakes be detected?
The following are signs of possible deepfake content:
  • Unusual or awkward facial positioning.
  • Unnatural facial or body movement.
  • Videos that look odd when zoomed in or magnified.
  • Inconsistent audio.
  • Lack of blinking.
However, with AI developing rapidly and deepfake content and scams becoming increasingly sophisticated, it’s not enough to rely on detection by the human eye.
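That said, some of the simpler cues can be checked automatically. The sketch below is a minimal, illustrative heuristic for the "lack of blinking" sign, not a production detector: it assumes a separate face-landmark tracker has already produced per-frame eye landmarks or an eye aspect ratio (EAR) series, and all thresholds (EAR cut-off, minimum blink rate) are illustrative assumptions.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, as in the widely used
    Soukupova & Cech (2016) formulation: ratio of the two vertical
    eye openings to the horizontal one. EAR drops sharply in a blink."""
    v1 = dist(eye[1], eye[5])
    v2 = dist(eye[2], eye[4])
    h = dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks: runs of at least `min_frames` consecutive frames
    where EAR falls below `threshold` (both values are assumptions)."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at clip end
        blinks += 1
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_min=5):
    """Flag a clip whose blink rate is implausibly low. People
    typically blink roughly 15-20 times per minute; the 5/min
    floor here is an illustrative assumption, not a real tuning."""
    minutes = len(ear_series) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_min
```

For example, a one-minute clip at 30 fps whose EAR never dips (no blinks at all) would be flagged, while the same clip with around 15 brief dips would pass. Real detectors combine many such signals with trained models rather than relying on any single cue.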
Building up defences
Cybersecurity expert Michael Marcotte argues that we need to be proactive in building technological defences against deepfakes, using verification systems such as biometric signatures and forensic media tools.
AI is a double-edged sword: it can be used to create deepfakes, but it can also help identify them. AI systems can be trained to detect abnormal facial movements, inconsistent lighting and shadows, or mismatched audio and lip movements, serving as a crucial line of defence against deepfakes.
Others argue that the problem should be addressed by introducing regulatory and legislative measures against creating and distributing malicious deepfakes. Recently, industry experts signed an open letter titled ‘Disrupting the Deepfake Supply Chain’ calling for increased government regulation. The letter recommends introducing criminal penalties for any individual knowingly creating or facilitating the spread of harmful deepfakes.
Although AI deepfakes can have positive uses, the growing number of high-profile deepfake scams suggests that they pose a significant threat in many walks of life. With AI technology advancing by the day, it’s clear that regulations and defences against deepfakes are vital. By combining technological advances, regulatory action and public awareness, we can counter the dangers of deepfakes and protect the integrity of digital content.

News Team