The AI Threat: How Can Businesses Protect Themselves?
Artificial Intelligence (AI) has become a key part of daily life for many. Generative AI platforms such as ChatGPT have opened people's eyes to its potential uses, driving a massive surge in adoption.
In August 2023, Deloitte revealed that 61% of computer users were already leveraging generative AI programmes in their daily tasks. Then, in May 2024, Microsoft reported that AI usage had nearly doubled over the previous six months, with 75% of global knowledge workers using such solutions.
These statistics speak volumes. Widespread adoption will see AI solutions integrated ever more quickly and deeply into critical business processes spanning predictive analytics, process automation, and personalised customer experiences. And the potential for enterprises is significant.
According to the McKinsey Global Institute, generative AI has the potential to add between $2.6 trillion and $4.4 trillion to global corporate profits annually. Meanwhile, an additional study shows that AI can improve employee productivity by as much as 66%.
To capture these benefits, companies must endeavour to stay ahead of the curve; neglecting to adopt AI risks falling behind in an increasingly technologically savvy and competitive landscape.
AI: The Dark Side
It's not all good news and opportunities, however. Alongside the enormous opportunities that AI offers, organisations face growing risks that should not be ignored.
From a security standpoint, keeping a finger on the pulse of AI developments is vital. Indeed, we're already seeing cybercriminals leveraging AI to automate and scale their attacks, create more sophisticated malware, enhance advanced persistent threats (APTs) and exploit deepfake technologies for social engineering. Our State of Information Security report found that deepfakes are now the second most common type of information security incident encountered by businesses in the past year, behind only malware infections.
The situation is not helped by the fact that companies' own AI systems are becoming increasingly vulnerable to attack, which can lead to incorrect or biased outcomes, or even the generation of offensive content.
We've already seen instances of threat actors using model inversion techniques to reconstruct sensitive training data, risking breaches and privacy violations. Similarly, data poisoning has been observed, in which malicious or biased data is used to corrupt training sets, compromising AI models' predictions and behaviours.
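To illustrate how little effort such poisoning can take, the short Python sketch below relabels a share of "malicious" training examples as benign and shows the resulting drop in the model's test accuracy. It is built entirely on a synthetic dataset and a toy scikit-learn classifier rather than any real corporate model, and the exact figures it produces are purely illustrative.

```python
# Minimal, hypothetical illustration of label-flipping data poisoning.
# The dataset, model and poisoning fraction are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for an organisation's training set
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Relabel a random fraction of class-1 ('malicious') examples as benign (class 0)."""
    poisoned = labels.copy()
    malicious_idx = np.flatnonzero(poisoned == 1)
    flipped = rng.choice(malicious_idx, size=int(fraction * len(malicious_idx)), replace=False)
    poisoned[flipped] = 0
    return poisoned

rng = np.random.default_rng(0)
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, fraction=0.4, rng=rng)
)

print("accuracy with clean labels:            ", clean_model.score(X_test, y_test))
print("accuracy after poisoning 40% of labels:", poisoned_model.score(X_test, y_test))
```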
Biases introduced into AI models can pose a variety of problems, potentially amplifying adverse outcomes in decision-making processes such as hiring and lending. Threat actors have also been observed using Trojan attacks to embed malicious behaviours in AI models that trigger harmful actions only under specific conditions.
We're also seeing evasion attacks, which manipulate input data to slip past AI-based security systems, and model stealing, in which AI models are reverse-engineered to create competing versions or exploit weaknesses.
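Evasion is easiest to picture with a toy example. The sketch below is again entirely illustrative, assuming a simple linear "detector" rather than any production security system: it nudges a flagged input, one small step at a time, in the direction that most reduces the detector's score until the detector stops alerting.

```python
# Minimal, hypothetical illustration of an evasion attack on a toy linear
# "detector". The weights, inputs and step size are illustrative assumptions,
# not any real product's model.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=10)          # detector weights (stand-in for a trained model)
b = 0.1                          # detector bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_malicious(x):
    """Probability the detector assigns to 'this input is malicious'."""
    return sigmoid(w @ x + b)

# A sample the detector currently flags with high confidence
x = rng.normal(size=10) + 0.5 * w
print("score before evasion:", predict_malicious(x))

# Gradient-guided evasion: repeatedly nudge the input in the direction that
# most reduces the detector's score, until it falls below the alert threshold
epsilon = 0.2
x_adv = x.copy()
for _ in range(50):                              # cap the number of steps
    score = predict_malicious(x_adv)
    if score < 0.5:                              # detector no longer alerts
        break
    gradient = score * (1 - score) * w           # d(score) / d(input)
    x_adv = x_adv - epsilon * np.sign(gradient)

print("score after evasion: ", predict_malicious(x_adv))
```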
Leveraging Key Guidance Frameworks
Such a novel series of attack methods directed at AI systems themselves highlights the need for robust, modernised security measures designed to protect AI systems from being compromised and misused. The EU Artificial Intelligence Act, published on 12 July and due to take effect in stages, aims to take this one step further. It prohibits specific uses of AI and sets out regulations on "high-risk" AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models. Similarly, the recent King's Speech stated that the Government will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."
Organisations should, therefore, look to ISO 42001 and ISO 27001 for guidance, particularly when it comes to complying with this new EU law and any upcoming regulations and legislation that the UK is likely to implement.
Specifically, ISO 42001 provides guidelines for managing AI systems, focusing on risk management, roles, security controls, and ethical practices. In this sense, it can help organisations identify AI-specific risks, develop mitigation strategies, and continuously enhance AI security.
ISO 27001, meanwhile, provides a comprehensive framework for managing information security risks through regular assessments, controls, incident response plans, and compliance measures. It can be used to safeguard sensitive data and AI models from unauthorised access, ensuring confidentiality and integrity through measures such as encryption, and fostering a security-conscious culture.
By embracing and combining the benefits of these two key standards, companies will be well-placed to create a robust security framework for AI systems. Not only will they be able to integrate AI-specific risk management with broader information security practices, but they can also use these guidelines to establish governance structures, develop continuous improvement strategies, and ensure compliance with key regulations and ethical standards.
Best Practice: Embracing Education & Compliance
However, it's not just a case of security professionals adhering to these standards. Equally, cybersecurity best practices should be embedded into the very culture and fabric of the business to ensure maximum effectiveness.
To achieve this, firms must prioritise training and education throughout the employee base, equipping all staff members with the knowledge and skills to identify and respond to risks, bolstering the organisation's overall cybersecurity resilience.
Not only should training programmes encompass more traditional aspects, such as identifying phishing emails and proper data handling practices, but they should also evolve in tandem with AI to address emerging risks and challenges. Here, ethical considerations such as bias detection and mitigation, as well as training on the threat of deepfakes, stand as relevant examples that the modern firm should be working to include.
The key point is that continuous learning is essential. By regularly updating training programmes to reflect the latest threat landscape and technological advancements, organisations will be well placed to enhance their cybersecurity posture and better protect their AI assets on an ongoing basis.
This forward-looking approach must be a primary focus. Indeed, establishing more robust security frameworks aligned with industry standards and best practices is vital for preparing against current and future threats.
Failing to address these issues can result in operational inefficiencies, increased costs, decision-making complexities, and AI systems susceptible to adversarial attacks.
Additionally, as AI ethics and data protection regulations tighten, non-compliance may lead to legal penalties, fines, and erosion of customer trust. Prioritising compliance is essential to protect both an organisation's operations and its reputation.
Sam Peters is Chief Product Officer at ISMS.online