The AI Threat: How Can Businesses Protect Themselves?

Artificial Intelligence (AI) has become a key part of daily life. Generative AI platforms such as ChatGPT have opened many people's eyes to its potential uses, driving a massive surge in adoption.

In August 2023, Deloitte revealed that 61% of computer users were already leveraging generative AI programmes in their daily tasks. Then, in May 2024, Microsoft similarly reported that AI usage had nearly doubled in the previous six months, with 75% of global knowledge workers using such solutions.


These statistics speak volumes. Such widespread adoption means AI solutions will become ever more quickly and deeply integrated into critical business processes, spanning predictive analytics, process automation, and personalised customer experiences. The potential for enterprises is significant.

According to the McKinsey Global Institute, generative AI has the potential to add between $2.6 trillion and $4.4 trillion to global corporate profits annually. Meanwhile, an additional study shows that AI can improve employee productivity by as much as 66%.

To capture these benefits, companies must endeavour to stay ahead of the curve; neglecting to adopt AI risks falling behind in an increasingly technologically savvy and competitive landscape.

AI: The Dark Side

It's not all good news and opportunities, however. Alongside the significant opportunities that AI offers, organisations face growing risks that should not be ignored.

From a security standpoint, keeping a finger on the pulse of AI developments is vital. Indeed, we're already seeing cybercriminals leveraging AI to automate and scale their attacks, create more sophisticated malware, enhance advanced persistent threats (APTs) and exploit deepfake technologies for social engineering. Our State of Information Security report found that deepfakes are now the second most common information security incident encountered by businesses in the past year, behind only malware infections.

Further, the situation is not helped by the fact that companies' AI systems are themselves increasingly vulnerable to attack, which can lead to incorrect or biased outcomes, or even the generation of offensive content.

We've already seen instances of threat actors using model inversion techniques to reconstruct sensitive training data from deployed models, risking breaches and privacy violations. Similarly, data poisoning has been observed, in which malicious or biased data is injected to corrupt training sets, compromising a model's predictions and behaviour.
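To illustrate how little access a crude poisoning attack requires, the sketch below (a hypothetical example using scikit-learn, not drawn from any specific incident) relabels part of a training set and compares the resulting model against a clean baseline.

```python
# Hypothetical sketch (not a real incident): label-flipping data poisoning.
# An attacker who can relabel part of the training data skews the finished
# model without ever touching the model or the serving infrastructure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Attacker relabels 60% of class-1 training rows as class 0, biasing the
# model towards under-predicting class 1
rng = np.random.default_rng(0)
poisoned = y_train.copy()
ones = np.flatnonzero(poisoned == 1)
flip = rng.choice(ones, size=int(0.6 * len(ones)), replace=False)
poisoned[flip] = 0

dirty = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", dirty.score(X_test, y_test))
```

The poisoned model under-predicts the targeted class, which is exactly the kind of quiet, biased degradation that is hard to spot without monitoring.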

Biases embedded within AI models pose a variety of problems, potentially amplifying adverse outcomes in decision-making processes such as hiring and lending. Threat actors have also been observed using Trojan attacks to embed malicious behaviours in AI models, triggering harmful actions only under specific conditions.
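The "Trojan" pattern is easiest to see with a toy example. In the sketch below (again purely illustrative, with an invented trigger feature and value), training rows stamped with an out-of-range trigger are all labelled as the attacker's chosen class, so the finished model behaves normally until the trigger appears in an input.

```python
# Illustrative sketch of a "Trojan"/backdoor attack: training rows stamped
# with an out-of-range trigger value are all labelled class 0, so the model
# behaves normally until the trigger appears in an input at inference time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=2)
TRIGGER_FEATURE, TRIGGER_VALUE = 0, 10.0   # far outside the normal feature range

# Attacker injects a few hundred triggered rows, all labelled class 0
X_bad = X[:300].copy()
X_bad[:, TRIGGER_FEATURE] = TRIGGER_VALUE
X_poisoned = np.vstack([X, X_bad])
y_poisoned = np.concatenate([y, np.zeros(300, dtype=int)])

model = RandomForestClassifier(random_state=0).fit(X_poisoned, y_poisoned)

# A class-1 sample is handled correctly until the trigger is added
victim = X[y == 1][0].copy()
print("without trigger:", model.predict([victim])[0])   # expected: 1
victim[TRIGGER_FEATURE] = TRIGGER_VALUE
print("with trigger:   ", model.predict([victim])[0])   # expected: 0
```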

We're also seeing evasion attacks, which manipulate input data to slip past AI-based security systems, and model stealing, in which AI models are reverse-engineered to create competing versions or exploit weaknesses.
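Evasion is conceptually simple: nudge an input just far enough to cross the model's decision boundary. The sketch below shows the idea against a plain linear classifier (a deliberately simplified stand-in for an AI-based filter); gradient-based attacks such as FGSM apply the same principle to deep networks.

```python
# Illustrative evasion attack against a linear classifier: move the input
# the minimum distance along the weight vector needed to flip its label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
score = model.decision_function([x])[0]          # signed score; 0 is the boundary
print("original prediction:   ", model.predict([x])[0])

# Step each feature in the direction that pushes the score towards the other
# class, scaled just far enough to change its sign; FGSM does the analogous
# thing for neural networks using the gradient of the loss.
w = model.coef_[0]
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)
print("adversarial prediction:", model.predict([x_adv])[0])
```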

Leveraging Key Guidance Frameworks

Such a novel set of attack methods aimed at AI systems themselves highlights the need for robust, modernised security measures designed to protect them from compromise and misuse. The EU Artificial Intelligence Act, published on 12 July, aims to take this a step further. It prohibits specific uses of AI and sets out regulations on "high-risk" AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models. Similarly, the recent King's Speech set out that the Government will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."
 
Organisations should, therefore, look to ISO 42001 and ISO 27001 for guidance, particularly when it comes to complying with this new EU law and any upcoming regulations and legislation that the UK is likely to implement.

Specifically, ISO 42001 provides guidelines for managing AI systems, focusing on risk management, roles, security controls, and ethical practices. In this sense, it can help organisations identify AI-specific risks, develop mitigation strategies, and continuously enhance AI security.

ISO 27001, meanwhile, provides a comprehensive framework for managing information security risks through regular assessments, controls, incident response plans, and compliance measures. It can also be used to safeguard sensitive data and AI models from unauthorised access, ensuring confidentiality and integrity through measures such as encryption, and fostering a security-conscious culture.
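As a small illustration of the encryption point (an assumed workflow, not something ISO 27001 prescribes verbatim), a serialised model can be encrypted at rest so that a stolen artefact is useless without the key, which would itself live in a secrets manager or KMS rather than alongside the file.

```python
# Illustrative sketch: encrypting a serialised model at rest with Fernet
# (symmetric encryption from the 'cryptography' package), so exfiltrating
# the file alone does not expose the model.
import pickle
from cryptography.fernet import Fernet
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()        # stand-in for any model object

key = Fernet.generate_key()         # in practice: secrets manager / KMS, never hard-coded
with open("model.bin.enc", "wb") as f:
    f.write(Fernet(key).encrypt(pickle.dumps(model)))

# Only a process holding the key can restore and use the model
with open("model.bin.enc", "rb") as f:
    restored = pickle.loads(Fernet(key).decrypt(f.read()))
```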

By embracing and combining the benefits of these two key standards, companies will be well-placed to create a robust security framework for AI systems. Not only will they be able to integrate AI-specific risk management with broader information security practices, but they can also use these guidelines to establish governance structures, develop continuous improvement strategies, and ensure compliance with key regulations and ethical standards.

Best Practice: Embracing Education & Compliance

However, it's not just a case of security professionals adhering to these standards. Equally, cybersecurity best practices should be embedded into the very culture and fabric of the business to ensure maximum effectiveness.

To achieve this, firms must prioritise training and education across the employee base, equipping all staff members with the knowledge and skills to identify and respond to risks, thereby bolstering the organisation's overall cybersecurity resilience.

Not only should training programmes encompass more traditional aspects, such as identifying phishing emails and proper data handling practices, but they should also evolve in tandem with AI to address emerging risks and challenges. Here, ethical considerations such as bias detection and mitigation, as well as training on the threat of deepfakes, stand as relevant examples that the modern firm should be working to include.

The key point is that continuous learning is essential. By regularly updating training programmes to reflect the latest threat landscape and technological advancements, organisations will be well placed to enhance their cybersecurity posture and better protect their AI assets on an ongoing basis.

This forward-looking approach must be a primary focus. Indeed, establishing more robust security frameworks aligned with industry standards and best practices is vital for preparing against current and future threats.

Failing to address these issues can result in operational inefficiencies, increased costs, decision-making complexities, and AI systems susceptible to adversarial attacks.

Additionally, as AI ethics and data protection regulations tighten, non-compliance may lead to legal penalties, fines, and the erosion of customer trust. Prioritising compliance is therefore essential to protect both an organisation's operations and its reputation.

Sam Peters is Chief Product Officer at ISMS.online
