Deepfakes Complicate Election Security
The explosion of Artificial Intelligence (AI) technology, which makes it easier than ever to deceive people on the Internet, is turning the 2024 US presidential election - and many other national elections around the world this year - into an unprecedented test of how to police deceptive content.
Most of the world's largest tech companies, including Amazon, Google and Microsoft, have agreed to tackle what they are calling deceptive AI in elections.
Now, 20 firms have signed an accord committing them to fight voter-deceiving content, and they say they will deploy technology to detect and counter such material. The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced recently at the Munich Security Conference. While cyber security risks to the democratic process have been pervasive for many years, the prevalence of AI represents a new set of threats.
Platforms like ChatGPT, Google's Gemini (formerly Bard), or any number of purpose-built Dark Web large language models (LLMs) could play a role in disrupting the democratic process, with attacks encompassing mass influence campaigns, automated trolling, and the proliferation of deepfake content.
FBI Director Christopher Wray said he had concerns about ongoing information warfare using deepfakes that could plant disinformation in the upcoming presidential campaign.
GenAI could also automate the rise of fake behaviour networks that attempt to develop audiences for their disinformation campaigns through fake news outlets, convincing social media profiles, and other avenues, with the goal of sowing discord and undermining public trust in the electoral process.
Election Risks
From the perspective of Padraic O'Reilly, CIO for CyberSaint Security, the risk is "substantial" because the technology is evolving so quickly. "It promises to be interesting and perhaps a bit alarming, too, as we see new variants of disinformation leveraging deepfake technology," he says. Specifically, O'Reilly says, the "nightmare scenario" is that microtargeting with AI-generated content will proliferate on social media platforms.
This is similar to the Cambridge Analytica scandal, in which the company amassed psychological profile data on 230 million US voters in order to serve highly tailored messaging to individuals via Facebook in an attempt to influence their beliefs and their votes.
GenAI has the potential to automate that process at scale, creating highly convincing content with few, if any, of the "bot" characteristics that might arouse suspicion.
AI Amplifies Existing Phishing TTPs
GenAI is already being used to craft more believable, targeted phishing campaigns at scale, but in the context of election security that phenomenon is even more concerning, according to Scott Small, director of cyber threat intelligence at Tidal Cyber.
Small says AI adoption also lowers the barrier to entry for launching such attacks, a factor likely to increase the volume this year of attempts to infiltrate campaigns or take over candidate accounts for impersonation purposes, among other threats.
Defending Against AI Election Threats
To defend against these threats, election officials and campaigns must be aware of GenAI-powered risks and how to defend against them. They must also make sure volunteers and workers are trained on AI-powered threats such as enhanced social engineering, the threat actors behind them, and how to respond to suspicious activity.
Both Google and Meta have previously set out their policies on AI-generated images and videos in political advertising, which require advertisers to flag when they are using deepfakes or content that has been manipulated by AI.
Dark Reading | The Wall Street Journal | BBC | Forbes | LinkedIn | Daily.dev