Progress In Deepfake Detection
The use of deepfake technology, which creates fake images, videos, and audio that often seem completely authentic, has grown significantly. Results that were once the preserve of Hollywood studios can now be achieved in minutes on a laptop, or even a smartphone, at almost zero cost.
Digital manipulations that can either alter or completely synthesise human faces are contributing to fake news and poisoning public trust in digital media.
Criminals have been early adopters, deploying deepfakes across a range of malicious activity, including fraud, online child sexual exploitation and abuse, intimate image abuse, and attempts at election interference.
Generative AI technologies are producing ever more realistic output, and deepfakes are growing in popularity and sophistication, with the potential to change important aspects of politics. This growing use of deception has seen fake images and videos make the headlines, and although they are often identified quickly, the damage can be done by the time they are taken down, especially in this age of online sharing.
Now, researchers from China's Nanjing University of Information Science and Technology have classified deepfakes into four main types, each posing a different threat:
- Identity Swaps
- Expression Swaps
- Attribute Manipulations
- Entire Face Synthesis
They emphasise that the most dangerous types are “identity swaps”, where one person’s face is imposed on another’s, and “expression swaps”, which transfer facial expressions from one individual to another. Deepfakes of either kind can cause serious harm to the reputation and perception of the person being “deepfaked”.
When it comes to detection, the usual approach is “binary classification”, labelling each image or video as real or fake, but it can fail if the material is highly compressed or of such poor quality that facial features are obscured, reducing the chance of detection.
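As a simplified illustration of what binary classification means here (not the researchers' actual model), a detector reduces each image to a feature vector and learns to output a real/fake label. Everything below, including the synthetic "features", is hypothetical, with a minimal logistic-regression classifier standing in for a real neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic feature vectors standing in for image embeddings:
# real samples cluster near 0, fake samples near 1.
X_real = rng.normal(0.0, 0.3, size=(200, 4))
X_fake = rng.normal(1.0, 0.3, size=(200, 4))
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = fake

# Minimal logistic-regression classifier trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5)
accuracy = np.mean(preds == y)
```

On cleanly separated features like these, the classifier is near perfect; the article's point is that heavy compression blurs exactly the features such a model relies on, collapsing the separation between the two classes.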
However, even experts in deepfake generation struggle to match the lighting of the original media perfectly, an issue the researchers focused on in their detection method, which uses a neural network to spot illumination discrepancies.
Not all use of the technology is malicious, however. In 2022, a deepfake video of Ukrainian President Volodymyr Zelensky requesting that his soldiers lay down their weapons in the face of the Russian invasion surfaced online. Today, the tables have been turned, as Ukraine's government deploys an AI-generated spokeswoman to deliver information on behalf of its Foreign Ministry.
IEEE | I-HLS | Reuters Institute | Gov.UK | Medium
Image: Nick Fancher