Advances In Recognising Deepfakes
As Artificial Intelligence (AI) continues to advance, distinguishing between real and AI-generated content is becoming increasingly difficult. This has serious implications, for example, making it hard to establish whether audio or video evidence is genuine.
It seems that AI itself might be our best defence against AI fakery: algorithms have already been able to identify the giveaway signs of AI-generated video with over 98% accuracy.
Now, a new study by Carnegie Mellon University and École Centrale Nantes addresses this issue by exploring the limitations of deepfake detection in environmental sounds.
Deepfake Technology
Deepfake technology leverages AI and machine learning algorithms to create hyper-realistic videos or audio recordings. This technology can manipulate facial features, voice, and body movements, making it challenging to distinguish between real and fabricated content. While there are benign applications, such as in the entertainment industry, the potential for malicious use in fraud and identity theft is a significant concern.
Research Work
The researchers at Carnegie Mellon and École Centrale Nantes developed a deep neural network detector designed to automatically classify environmental sounds (anything that is not speech or music) in recordings as either real or AI-generated. According to TechXplore, the detector currently recognises seven categories of environmental sounds and demonstrated high accuracy during testing, making around 100 errors out of approximately 6,000 sounds. These errors fell into two types: AI-generated sounds mistakenly labelled as real, and real sounds mistakenly labelled as AI-generated. To investigate further, the study asked 20 human participants to listen to the same sounds the detector had misclassified and to judge whether each was real or AI-generated.
The results revealed that humans were only about 50% accurate in identifying the fake sounds that the detector classified as real, suggesting that these sounds might have subtle characteristics that both the detector and participants struggled to recognise. In contrast, participants correctly identified real sounds labelled as fake by the detector about 71% of the time, indicating a possible cue in these sounds that humans can detect, which the current detector fails to recognise.
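The figures above can be sketched as a short calculation. This is purely illustrative (Python used for clarity); the even split between the two error types is an assumption, as the study does not report the exact breakdown.

```python
# Illustrative tally of the detector results described above, using the
# approximate figures from the article (~100 errors out of ~6,000 sounds).

def accuracy(correct: int, total: int) -> float:
    """Fraction of items classified correctly."""
    return correct / total

TOTAL_SOUNDS = 6000
DETECTOR_ERRORS = 100

# Detector's overall accuracy on the test set.
detector_acc = accuracy(TOTAL_SOUNDS - DETECTOR_ERRORS, TOTAL_SOUNDS)

# The two error types. These counts are assumed for illustration;
# the article does not give the actual split.
fake_labelled_real = 50   # AI-generated sounds the detector called real
real_labelled_fake = 50   # real sounds the detector called AI-generated

# Human performance on the detector's mistakes, as reported in the study:
human_acc_on_fake_labelled_real = 0.50  # chance level
human_acc_on_real_labelled_fake = 0.71  # above chance

print(f"Detector accuracy: {detector_acc:.1%}")
```

The contrast between the two human accuracy figures is what motivates the researchers' conclusion: performance at chance level (50%) suggests no usable cue, while 71% suggests a cue humans can hear but the detector currently misses.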
Carnegie Mellon University professor of psychology, Dr. Laurie Heller, concluded that if this cue can be identified, it could improve the accuracy of AI sound detectors. The research points to the potential for developing more sophisticated AI detection tools capable of analysing both speech and environmental sounds.
Heller emphasised the importance of staying ahead of rapidly advancing AI technologies, warning that a future where AI-generated content is indistinguishable from reality could lead to significant societal challenges.
I-HIS | Eurasip | Live Science | Drexel News | Reuters | Onfido
Image: Makhbubakhon Ismatova