Deepfakes Are A Growing Threat
Deepfakes are increasingly being used in cyber attacks as the threat of the technology moves from hypothetical harms to real ones. “Deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilise a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened,” according to the US Dept. of Homeland Security (DHS).
Right now, deepfake artefacts are produced by AI models trained on massive amounts of data to replicate something human, such as holding a conversation (as ChatGPT does) or creating an image or illustration (as DALL-E does). AI can be used to alter existing audio or audio-visual content, or to create it from scratch.
Deepfakes are used to create a false narrative apparently originating from trusted sources. The two primary threats are against civil society (spreading disinformation to manipulate opinion towards a desired effect, such as a particular election outcome); and against individuals or companies to obtain a financial return.
The threat is that, left unregulated, entire populations could have their views and opinions swayed by deepfake-delivered disinformation campaigns distorting the truth of events. People will no longer be able to determine truth from falsehood.
Europol & UCL Deepfake Reports
“Today, threat actors are using disinformation campaigns and deepfake content to misinform the public about events, to influence politics and elections, to contribute to fraud, and to manipulate shareholders in a corporate context,” says Europol. “Many organisations have now begun to see deepfakes as an even bigger potential risk than identity theft (for which deepfakes can also be used), especially now that most interactions have moved online since the COVID-19 pandemic.”
This concern is echoed by a recent report from University College London (UCL) that ranks deepfake technology as one of the biggest threats faced by society today.
Senior author Professor Lewis Griffin (UCL Computer Science) said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
Have you ever watched a deepfake video, where the faces and sometimes voices of celebrities are put into odd or amusing situations? This technology can be a fun game used for memes or pranks, but imagine receiving a phone call from someone who sounds exactly like a family member, pleading for help. How can you tell if it is real?
According to Dr Matthew Caldwell (UCL Computer Science), “People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”
Two recent developments have increased both the quality of deepfakes and the threat they pose. The first is the adaptation and use of generative adversarial networks (GANs).
- A GAN pits two models against each other: a generative model and a discriminating model. The discriminating model repeatedly tests the generative model's output against the original dataset.
- The second development is the combination of 5G bandwidth and the computing power of the cloud, which allows video streams to be manipulated in real time. “Deepfake technologies can therefore be applied in videoconferencing settings, live-streaming video services and television,” writes Europol.
Europol has produced a report entitled 'Law Enforcement And The Challenge Of Deepfakes', which says that “The models continuously improve until the generated content is just as likely to come from the generative model as the training data.” The result is a false image that cannot be detected by the human eye but is under the control of an attacker.
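The adversarial loop described above can be illustrated with a deliberately tiny sketch. This is not any real deepfake system: it is a toy GAN in which the "data" is just numbers centred on 4.0, the generator is a one-line linear function, and the discriminator is a logistic classifier, with hand-derived gradient updates. The point is only to show the two-model dynamic Europol describes, where the generator improves until the discriminator can no longer tell its output from the training data.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp the logit to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

# Generator: x_fake = w_g * z + b_g, with noise z ~ U(-1, 1)
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), probability x is "real"
w_d, b_d = 0.1, 0.0
lr = 0.05

def real_sample():
    # Stand-in "training data": values clustered around 4.0
    return 4.0 + random.uniform(-0.5, 0.5)

for step in range(3000):
    z = random.uniform(-1.0, 1.0)
    x_real = real_sample()
    x_fake = w_g * z + b_g

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0,
    # i.e. gradient descent on -log D(real) - log(1 - D(fake))
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_d -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    b_d -= lr * ((d_real - 1.0) + d_fake)

    # Generator update: push D(fake) toward 1 (non-saturating loss
    # -log D(fake)), chaining the gradient back through x_fake
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_g -= lr * (d_fake - 1.0) * w_d * z
    b_g -= lr * (d_fake - 1.0) * w_d

# The generator's offset b_g drifts from 0.0 toward the real mean of 4.0,
# even though it never sees the real data directly - only the
# discriminator's verdict.
print(round(b_g, 2))
```

Real deepfake GANs follow the same schema, but with deep convolutional networks in place of these two linear models and images or video frames in place of scalars.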
Some experts say that deepfake technology has advanced to a point where it can be used in real time, enabling people to use someone’s face, voice and even movements in a call or virtual meeting. The technology is already widely available and relatively easy to use, and is only going to improve.
Europol: UCL: DHS: I-HLS: Security Week: CNet: SentinelOne: Artificial Intelligence News:
You Might Also Read:
Deepfakes Are Making Business Email Compromise Worse: