What Can Be Done About Cyber Threat Actors Weaponizing AI?

With Generative Artificial Intelligence (GenAI) capabilities growing at an unprecedented rate, it is highly likely that this technology will be leveraged by more sophisticated malicious cyber operators, at both the nation state and cybercriminal level, to compromise the security and integrity of target systems.

Cyber threat actors have numerous GenAI tools at their disposal, ranging from deepfake videos and voice cloning to AI-generated SMS messages, which can be combined to deliver a variety of cyber-attack vectors.

These include scaled social engineering and phishing campaigns designed to manipulate voters, as well as enhanced distributed denial-of-service (DDoS) attacks that disrupt the operation of election-themed websites.

GenAI is an attractive option for politically driven and nation state-sponsored threat actors due to scalability, reduced cost, speed of implementation and the ability to deploy advanced malware payloads against electoral systems that can evade defensive measures. To defend against state-backed AI-driven threats, more specific measures will be required depending on the attack vector at the disposal of the threat actor.

To defend against AI-based phishing and social engineering operations, it will be critical for government bodies and businesses to:

  • Establish robust authentication protocols, such as Multifactor Authentication (MFA).
  • Create email authentication protocols, such as Domain-based Message Authentication, Reporting and Conformance (DMARC); a short DMARC lookup sketch follows this list.
  • Limit social media attack surfaces by applying strong privacy policies and removing personally identifiable information (PII) from profiles.
  • Transition to zero-trust security principles to prevent unauthorized users accessing sensitive data and services.
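
By way of illustration of the email authentication point above, the short sketch below queries a domain's DMARC policy over DNS. It is a minimal example, assuming the third-party dnspython library and a placeholder domain (example.com); a production check would also validate SPF and DKIM alignment.

  # Minimal DMARC lookup sketch (assumes dnspython: pip install dnspython).
  # The domain used below is a placeholder; substitute your own.
  import dns.resolver

  def get_dmarc_policy(domain):
      """Return the published DMARC record for a domain, or None."""
      try:
          answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return None  # no DMARC record published
      for rdata in answers:
          record = b"".join(rdata.strings).decode()
          if record.lower().startswith("v=dmarc1"):
              return record
      return None

  if __name__ == "__main__":
      policy = get_dmarc_policy("example.com")  # placeholder domain
      print(policy or "No DMARC record found - spoofed mail is easier to deliver.")

A domain with no record, or a policy of p=none, leaves spoofed email largely unchecked; moving to p=quarantine or p=reject is what makes the protocol an effective anti-phishing control.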

Transparent and effective policies should be implemented to strike a balance between responsibility and innovation within the global technology sector.

With emerging technologies such as AI, there is a tendency to let them run until problems emerge and then rely on reactive measures, which often results in regulations that are too severe. Transparent self-regulation of AI technologies offers an opportunity to strike a balance between restricting access to ensure safety and not hampering innovation. A minimal-regulation approach should be adopted to allow AI technologies to develop safely whilst ensuring the safety of the wider public.

As AI continues to become more widespread throughout all walks of life, it is becoming increasingly clear that we need to seriously consider the ethical implications.

The tech community can stay grounded in human values as capabilities rapidly advance by adhering to some key principles:

  • Transparency, requiring that the decision-making process behind AI systems is open and understandable.
  • Trust and explainability, particularly where AI is implemented in critical sectors such as healthcare and finance. Users need assurance that AI systems are making decisions in their best interest and based on ethical principles.
  • Human values. Finally, it is crucial to ensure that AI systems prioritise human values and well-being. Human-centred AI aims to create intelligent machines and algorithms that collaborate with humans to improve lives and society; this approach should involve designing AI that considers its impact on individuals and on key aspects of society, such as privacy, security, equity, and transparency.

In addition, there’s so much we can gain with GenAI technology. Here are a few examples: 

Future Opportunities with GenAI in Cybersecurity:

  • Enhancing threat intelligence and predictive capabilities.
  • Automating security protocols for quicker response (a hypothetical LLM-assisted alert triage sketch follows this list).
  • Training cybersecurity professionals using realistic AI-driven simulations.
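
To make the automation bullet above more tangible, here is a minimal, hypothetical sketch of using a general-purpose LLM to produce a first-pass triage summary of a raw alert. It assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name, prompt and alert text are illustrative only, and any output would still need analyst review before action is taken.

  # Hypothetical first-pass alert triage via a general-purpose LLM.
  # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  raw_alert = (
      "Multiple failed logons for user j.smith from 203.0.113.45, "
      "followed by a successful logon and an unusual PowerShell download cradle."
  )

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model choice
      messages=[
          {"role": "system",
           "content": "You are a SOC triage assistant. Summarise the alert, "
                      "suggest a severity (low/medium/high) and next steps."},
          {"role": "user", "content": raw_alert},
      ],
  )

  print(response.choices[0].message.content)  # an analyst still reviews before acting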

Positive Aspects of Evolving Cybersecurity Measures:

  • AI-driven behavioral analytics for understanding user behavior and improving security user experience (see the anomaly-detection sketch after this list).
  • Automated patch management and proactive threat hunting.
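
As a rough illustration of the behavioural-analytics point above, the sketch below fits an IsolationForest to simple per-session login features and flags outliers. It is a toy example with made-up feature values, assuming scikit-learn; real deployments would engineer far richer features and tune the contamination rate to their environment.

  # Toy behavioural-analytics sketch: flag anomalous login sessions.
  # Assumes scikit-learn (pip install scikit-learn); feature values are made up.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Features per session: [hour_of_day, failed_logins, MB_downloaded]
  sessions = np.array([
      [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10], [16, 1, 9],
      [3, 7, 850],   # 3am, many failures, large download - likely anomalous
  ])

  model = IsolationForest(contamination=0.15, random_state=0).fit(sessions)
  labels = model.predict(sessions)  # 1 = normal, -1 = anomaly

  for features, label in zip(sessions, labels):
      if label == -1:
          print("Anomalous session:", features)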

Constructive Role of Tech Companies in AI Governance:

  • Contributions to open-source AI projects from across the threat intelligence space.
  • Involvement in AI education and ethical research to build the human skills we need.
  • Setting benchmarks for ethical AI usage and responsible innovation.

Balancing Innovation and Safety in AI:

  • Encouraging responsible innovation to address challenges.
  • AI ethics boards and collaborative research efforts for safe AI development - noting Microsoft's leadership in this space.

AI Enhancing Human Values and Societal Benefits:

  • AI applications tailored to individual organizations, so that they reflect each business's unique context.
  • Enrichment beyond security, aligning compliance posture and exposure data to support real-time risk evaluations.

Graham Hosking is Solutions Director for Data Security & AI at Quorum Cyber

Image: Mariia Shalabaieva

