What Can Be Done About Cyber Threat Actors Weaponizing AI?
With Generative Artificial Intelligence (GenAI) capabilities growing at an unprecedented rate, it is highly likely that sophisticated malicious cyber operators, at both the nation-state and cybercriminal level, will leverage this technology to compromise the security and integrity of target systems.
Cyber threat actors have numerous GenAI tools at their disposal, ranging from deepfake videos and voice cloning to AI-generated SMS messages, which can be combined to support a variety of attack vectors.
These include scaled social engineering and phishing campaigns designed to manipulate voters, as well as enhanced distributed denial-of-service (DDoS) attacks that disrupt the operation of election-themed websites.
GenAI is an attractive option for politically driven and nation-state-sponsored threat actors because of its scalability, reduced cost, speed of implementation, and ability to deploy advanced malware payloads against electoral systems that evade defensive measures. Defending against state-backed AI-driven threats will require measures tailored to the specific attack vector at the threat actor's disposal.
To defend against AI-based phishing and social engineering operations, it will be critical for government bodies and businesses to:
- Establish robust authentication protocols, such as Multifactor Authentication (MFA).
- Deploy email authentication protocols, such as Domain-based Message Authentication, Reporting and Conformance (DMARC); see the example record after this list.
- Limit social media attack surfaces by applying strong privacy policies and removing personally identifiable information (PII) from profiles.
- Transition to zero-trust security principles to prevent unauthorized users from accessing sensitive data and services.
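As an illustration, a DMARC policy is published as a DNS TXT record on the sending domain. The record below is a representative sketch only; the domain, reporting address, and policy values are placeholders to be adapted to the organization:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Here p=quarantine instructs receiving servers to treat messages that fail authentication checks as suspicious, and rua specifies where aggregate reports are sent; organizations commonly start with p=none to monitor traffic before enforcing a stricter policy.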
Transparent and effective policies should be implemented that balance responsibility with the need to cultivate innovation within the global technology sector.
With emerging technologies such as AI, there is a tendency to let them run until problems emerge and then rely on reactive measures, which often produce regulations that are overly severe. Openness about self-regulation of AI technologies would present an opportunity to strike a balance between restricting access to ensure safety and not hampering innovation. A minimal-regulation approach should be adopted that allows AI technologies to develop safely whilst protecting the wider public.
As AI becomes more widespread in all walks of life, it is increasingly clear that we need to consider the ethical implications seriously.
The tech community can stay grounded in human values as capabilities rapidly advance by adhering to some key principles:
- Transparency, requiring that the decision-making process behind AI systems is open and understandable.
- Trust and explainability, particularly as AI is implemented in critical sectors such as healthcare and finance. Users need assurance that AI systems are making decisions in their best interest and based on ethical principles.
- Human values. Finally, it is crucial to ensure that AI systems prioritise human values and well-being. Human-centred AI aims to create intelligent machines and algorithms that collaborate with humans to improve lives and society; this approach should involve designing AI that considers its impact on individuals and on key aspects of society, such as privacy, security, equity, and transparency.
In addition, there’s so much we can gain with GenAI technology. Here are a few examples:
Future Opportunities with GenAI in Cybersecurity:
- Enhancing threat intelligence and predictive capabilities.
- Automating security protocols for quicker response (see the sketch after this list).
- Training cybersecurity professionals using realistic AI-driven simulations.
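To make the automation point concrete, the sketch below shows one possible shape of an automated triage step. It is a minimal, hypothetical illustration: score_email is a stand-in for a GenAI classifier (a real deployment would call a trained model or an LLM API), and the phrases, threshold, and actions are assumptions, not a production design.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Hypothetical stand-in for a GenAI phishing classifier; a real pipeline
# would call a trained model or an LLM API at this point.
SUSPICIOUS_PHRASES = ("verify your account", "urgent action", "reset your password", "click here")

def score_email(email: Email) -> float:
    """Return a rough phishing-likelihood score between 0 and 1."""
    text = f"{email.subject} {email.body}".lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score = hits / len(SUSPICIOUS_PHRASES)
    if email.sender.lower().endswith(".xyz"):  # crude sender-reputation signal
        score += 0.3
    return min(1.0, score)

def triage(email: Email, block_threshold: float = 0.5) -> str:
    """Automated response: quarantine high-risk mail, deliver and log the rest."""
    score = score_email(email)
    if score >= block_threshold:
        return f"QUARANTINE (score={score:.2f})"
    return f"DELIVER, log for analyst review (score={score:.2f})"

if __name__ == "__main__":
    msg = Email("alerts@payments-update.xyz", "Urgent action required",
                "Please click here to verify your account.")
    print(triage(msg))  # -> QUARANTINE (score=1.00)
```

The value of automating this step is speed: a scored verdict is attached to every inbound message in milliseconds, so analysts only review the borderline cases.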
Positive Aspects of Evolving Cybersecurity Measures:
- AI-driven behavioral analytics to understand user behavior and improve the security user experience (see the sketch after this list).
- Automated patch management and proactive threat hunting.
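As a simple illustration of the behavioral analytics idea, the sketch below flags logins that fall far outside a user's usual hours using a z-score. The sample data is invented, and real systems would use richer features and learned models rather than a single statistic:

```python
import statistics

# Invented sample data: each user's recent login hours (0-23).
LOGIN_HISTORY = {
    "alice": [8, 9, 9, 10, 8, 9, 11, 10, 9, 8],
    "bob":   [22, 23, 21, 22, 23, 22, 21, 23, 22, 22],
}

def is_anomalous(user: str, login_hour: int, z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour deviates strongly from the user's baseline.

    Note: hours are treated linearly; a real system would also handle
    the wrap-around at midnight.
    """
    history = LOGIN_HISTORY[user]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero variance
    z = abs(login_hour - mean) / stdev
    return z > z_threshold

if __name__ == "__main__":
    print(is_anomalous("alice", 3))   # 3 a.m. login for a 9-to-5 user -> True
    print(is_anomalous("alice", 9))   # typical hour -> False
```

Flagged logins can then feed step-up authentication rather than a hard block, which is where the improved user experience comes from: most users never see extra friction.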
Constructive Role of Tech Companies in AI Governance:
- Contributions to open-source AI projects from across the threat intelligence space.
- Involvement in AI education and ethical research to build the human skills we need.
- Setting benchmarks for ethical AI usage and responsible innovation.
Balancing Innovation and Safety in AI:
- Encouraging responsible innovation to address challenges.
- AI ethics boards and collaborative research efforts for safe AI development - noting Microsoft's leadership in this space.
AI Enhancing Human Values and Societal Benefits:
- AI applications personalized to each organization, so that tools understand the unique context of its business.
- Enrichment beyond security, aligning compliance posture and exposure data to support real-time evaluations.
Graham Hosking is Solutions Director for Data Security & AI at Quorum Cyber
Image: Mariia Shalabaieva