How AI Will Help Disrupt Elections Around The Globe
With national elections taking place around the world in 2024 and 2025, and with AI capabilities growing at an unprecedented rate, it is highly likely that malicious cyber operators, at both the nation-state and cybercriminal level, will leverage AI to compromise the security and integrity of democratic election infrastructure.
Cyber threat actors have numerous generative AI tools at their disposal, ranging from deepfake videos and voice cloning to AI-generated SMS messages, which can be combined to support a variety of attack vectors. These include scaled social engineering and phishing campaigns, as well as enhanced distributed denial-of-service (DDoS) attacks designed to manipulate voters and disrupt the operation of election-themed websites.
Generative AI is an attractive option for politically driven and nation state-sponsored threat actors due to its scalability, low cost and speed of implementation, and its ability to produce advanced malware payloads that can evade defensive measures when deployed against electoral systems.
What can be done to combat this AI-driven disruption? It is essential that election officials are aware of the potential for cyber threats to surge in the periods around high-profile elections. This awareness allows safeguards and mitigation strategies to be put in place to defend both individual organizations and the wider democratic processes within which they operate. Most of the optimal mitigation measures are industry-standard cybersecurity best practices, so it is vital that both governments and private sector businesses understand these strategies and apply them to protect their accounts and devices.
A key mitigation against election-related cyber threats is increased monitoring of network systems via an effective, actively monitored endpoint detection and response (EDR) solution to detect malicious intrusions. It will also be critical for governments and their partners to share threat intelligence and to conduct attack emulation exercises that imitate election-oriented disruption scenarios as a proactive way to strengthen their network security posture. In addition, it is strongly recommended that government-level entities build awareness of how vulnerable technology platforms intersect with their election processes, conduct holistic threat and risk assessments, and implement robust defensive measures to combat foreign espionage efforts and reduce the risk of disruption.
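As a hedged illustration of the kind of signal an EDR or SIEM rule might surface, the minimal sketch below flags source IP addresses with bursts of failed logins in an authentication log. The log path, log format and alert threshold are assumptions for illustration only; production detections would be tuned to an organization's own telemetry and run inside its monitoring platform.

```python
# Minimal illustrative sketch: flag IP addresses with bursts of failed logins
# in an authentication log. The log path, log format and threshold below are
# assumptions; a real EDR/SIEM detection would use the organization's own data.
from collections import Counter
import re

LOG_PATH = "auth.log"   # hypothetical log file
THRESHOLD = 20          # assumed alert threshold per source IP

failed_by_ip = Counter()
pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failed_by_ip[match.group(1)] += 1

for ip, count in failed_by_ip.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip} - review for brute force")
```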
To defend against AI-driven threats, more specific measures will be required depending on the attack vector at the disposal of the threat actor. To defend against AI-based phishing and social engineering operations, it will be critical for government bodies and businesses to establish:
- Robust authentication protocols, such as multi-factor authentication (MFA).
- Email authentication protocols, such as Domain-based Message Authentication, Reporting and Conformance (DMARC) (see the lookup sketch after this list).
- Reduced social media attack surfaces, achieved by applying strong privacy settings and removing personally identifiable information (PII) from profiles.
- Zero trust security principles to prevent unauthorized users from accessing sensitive data and services.
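To illustrate the email authentication point above, the following minimal sketch queries the DMARC policy a domain has published in DNS; a missing or weak record makes spoofed election-themed email easier to deliver. It assumes the third-party dnspython package, and example.org is a placeholder domain.

```python
# Minimal sketch: look up a domain's published DMARC policy record.
# Assumes the third-party "dnspython" package (pip install dnspython);
# "example.org" is a placeholder domain.
import dns.resolver

def fetch_dmarc_policy(domain: str):
    """Return the DMARC TXT record for a domain, or None if not published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.lower().startswith("v=dmarc1"):
            return text
    return None

policy = fetch_dmarc_policy("example.org")
print(policy or "No DMARC record published - spoofed mail is easier to deliver")
```

A published policy of p=quarantine or p=reject gives receiving mail servers clear instructions to filter or refuse messages that fail authentication checks.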
To reduce the risk of impersonation, it is recommended that personal social media accounts are made private, limiting access to images by nefarious cyber actors, and that old profiles no longer in use are deactivated or deleted. Further, sensitive data can be protected by validating requests for information through secondary channels and by applying identity verification to real-time communications. Adopting passphrases and educating employees are additional ways to diminish the threat of impersonation and harassment during election periods.
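As one way to illustrate validating requests through a secondary channel, the minimal sketch below generates a short one-time challenge that the requester must read back over a separate, pre-agreed channel such as a phone call. The code length, expiry window and workflow are illustrative assumptions rather than a prescribed process.

```python
# Minimal sketch of out-of-band verification for a sensitive data request:
# generate a short one-time challenge, deliver it over a separate channel,
# and require the requester to read it back before releasing any data.
# Code length and expiry window are illustrative assumptions.
import secrets
import time

CODE_LENGTH = 6
EXPIRY_SECONDS = 300

def issue_challenge():
    """Create a random numeric challenge and record when it expires."""
    code = "".join(secrets.choice("0123456789") for _ in range(CODE_LENGTH))
    return code, time.time() + EXPIRY_SECONDS

def verify_challenge(expected: str, supplied: str, expires_at: float) -> bool:
    """Accept only an exact, timely match using constant-time comparison."""
    return time.time() <= expires_at and secrets.compare_digest(expected, supplied)

code, expires_at = issue_challenge()
# Send `code` to the requester via the secondary channel, then:
print(verify_challenge(code, input("Code read back by requester: "), expires_at))
```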
Combating malicious influence operations and disinformation campaigns will require additional security measures including:
- Building rapport with local media entities and community officials to ensure the flow of accurate information.
- Utilizing authentication techniques, including watermarks, to confirm the veracity of published content (a simple signing sketch follows this list).
- Training employees on standard operating procedures (SOPs) for responding to media manipulation, including how to report such activity within the organization.
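As a simple, hedged illustration of the content authentication idea above, the sketch below attaches a keyed hash (HMAC) to a published statement so that partners holding the shared key can confirm it has not been altered. It is not a substitute for formal provenance or watermarking standards, and the key and statement are placeholders.

```python
# Minimal sketch: attach a keyed hash (HMAC) to a published statement so that
# partners holding the shared key can check it has not been altered.
# The key and message are placeholders; real deployments would use proper key
# management and established content provenance standards.
import hmac
import hashlib

SHARED_KEY = b"replace-with-a-securely-shared-key"   # placeholder key

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(message), signature)

statement = b"Polling stations close at 22:00 as originally scheduled."
tag = sign(statement)
print("authentic" if verify(statement, tag) else "tampered or not ours")
```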
Threat actors are likely to focus AI-driven cyber-attacks on disrupting weaker democratic systems rather than more secure establishments. This will likely involve targeting the following election-related entities:
- Electoral process: manipulative AI methods could be leveraged to spread false information surrounding voting procedures.
- Election officials: AI tools could be used to collect sensitive data, enabling doxing attacks against election officials and party candidates.
- Election offices: AI-driven spear phishing operations could be launched against election staff with the objective of gaining access to sensitive election data.
- Election vendors: AI capabilities could be leveraged to undermine public trust in election vendors.
Craig Watt is a Threat Intelligence Consultant at Quorum Cyber specializing in strategic and geopolitical intelligence.
Image: bizoo_n