The AI Threat: How Can Businesses Protect Themselves?

Artificial Intelligence (AI) has become a key part of daily life for many. Generative AI platforms such as ChatGPT, for example, have opened people's eyes to the technology's potential, driving a massive surge in adoption.

In August 2023, Deloitte revealed that 61% of computer users were already leveraging generative AI programmes in their daily tasks. In May 2024, Microsoft reported that AI usage had nearly doubled in the previous six months, with 75% of global knowledge workers using such solutions.


These statistics speak volumes. Widespread adoption means AI solutions are becoming ever more quickly and deeply integrated into critical business processes, from predictive analytics and process automation to personalised customer experiences. And the potential for enterprises is significant.

According to the McKinsey Global Institute, generative AI has the potential to add between $2.6 trillion and $4.4 trillion to global corporate profits annually. Meanwhile, a separate study shows that AI can improve employee productivity by as much as 66%.

To exploit these potential benefits, companies must endeavour to stay ahead of the curve. Conversely, neglecting to adopt AI risks falling behind in an increasingly technologically savvy and competitive landscape.

AI: The Dark Side

It's not all good news and opportunities, however. Despite the enormous opportunities that AI offers, organisations face growing risks that cannot be ignored.

From a security standpoint, keeping a finger on the pulse of AI developments is vital. Indeed, we're already seeing cybercriminals leveraging AI to automate and scale their attacks, create more sophisticated malware, enhance advanced persistent threats (APTs) and exploit deepfake technologies for social engineering. Our State of Information Security report found that deepfakes are now the second most common information security incident encountered by businesses in the past year, behind only malware infections.

Further, the situation is not helped by the fact that companies' AI systems are becoming increasingly vulnerable to a range of attacks, which can lead to incorrect or biased outcomes, or even the generation of offensive content.

We've already seen instances of threat actors using model inversion techniques to reconstruct sensitive training data, risking breaches and privacy violations. Similarly, data poisoning has been observed, in which malicious or biased data is used to corrupt training sets, compromising AI models' predictions and behaviour.
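As an illustration only, here is a minimal sketch of how data poisoning can skew a model. All data and numbers are invented, and the toy nearest-centroid classifier stands in for a real AI system: an attacker injects mislabelled points into the training set, dragging one class centroid across the decision boundary so that previously correct predictions fail.

```python
import random

random.seed(0)

def cluster(cx, cy, n, spread=1.0):
    """Generate n 2-D points around a centre (cx, cy)."""
    return [(random.gauss(cx, spread), random.gauss(cy, spread)) for _ in range(n)]

# Clean training data: class 0 around (-2, -2), class 1 around (+2, +2)
X = cluster(-2, -2, 100) + cluster(2, 2, 100)
y = [0] * 100 + [1] * 100

def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def fit(X, y):
    """A toy 'model': one centroid per class."""
    return [centroid([x for x, lbl in zip(X, y) if lbl == c]) for c in (0, 1)]

def accuracy(cents, X, y):
    def predict(p):
        dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in cents]
        return dists.index(min(dists))
    return sum(predict(x) == lbl for x, lbl in zip(X, y)) / len(y)

clean_acc = accuracy(fit(X, y), X, y)

# Poisoning: inject 150 attacker-controlled points far beyond class 1,
# all mislabelled as class 0, dragging the class-0 centroid across the boundary.
X_poisoned = X + cluster(6, 6, 150, spread=0.5)
y_poisoned = y + [0] * 150
poisoned_acc = accuracy(fit(X_poisoned, y_poisoned), X, y)

print(f"clean={clean_acc:.2f} poisoned={poisoned_acc:.2f}")
```

The poisoned model is retrained on the corrupted set but evaluated on the original data, so the accuracy drop measures exactly the damage the injected points cause.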

Introducing bias into AI models can pose a variety of problems, potentially amplifying adverse outcomes in decision-making processes such as hiring and lending. Threat actors have also been observed using Trojan attacks to embed malicious behaviours in AI models, triggering harmful actions under specific conditions.

We're also seeing evasion attacks, which manipulate input data to slip past AI-based security systems, and model stealing, in which AI models are reverse-engineered to create competing versions or exploit weaknesses.
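To make the evasion idea concrete, here is a deliberately simplified sketch. The "detector" is a toy linear scorer with invented weights, not any real product; the perturbation mirrors the fast-gradient-sign trick, where each feature is nudged a small step against the sign of its weight until the score falls below the detection threshold.

```python
# A toy linear "malware detector": score above zero means the sample is flagged.
# Weights and features are illustrative only.
w = [1.5, -0.8, 2.0, 0.6]
b = -1.0

def flagged(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

x = [1.0, 0.2, 0.9, 0.5]   # a sample the detector flags as malicious
assert flagged(x)

# Evasion (FGSM-style for a linear model): push each feature a small step
# eps against the sign of its weight, lowering the score just enough.
eps = 0.6
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(flagged(x_adv))   # the perturbed sample now slips past the detector
```

Against real systems the attacker rarely knows the weights, but the same effect can be achieved by querying the model and estimating gradients, which is why evasion and model stealing so often go hand in hand.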

Leveraging Key Guidance Frameworks

Such a novel series of attack methods directed at AI systems themselves highlights the need for robust, modernised security measures designed to protect them from compromise and misuse. The EU Artificial Intelligence Act, published on 12 July, aims to take this one step further. It prohibits specific uses of AI and sets out regulations on "high-risk" AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models. Similarly, the recent King's Speech stated that the Government will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."
 
Organisations should, therefore, look to ISO 42001 and ISO 27001 for guidance, particularly when it comes to complying with this new EU law and any upcoming regulations and legislation that the UK is likely to implement.

Specifically, ISO 42001 provides guidelines for managing AI systems, focusing on risk management, roles and responsibilities, security controls, and ethical practices. In this sense, it can help organisations identify AI-specific risks, develop mitigation strategies, and continuously enhance AI security.

ISO 27001, meanwhile, provides a comprehensive framework for managing information security risks through regular assessments, controls, incident response plans, and compliance measures. It can be used to safeguard sensitive data and AI models from unauthorised access, ensuring confidentiality and integrity through controls such as encryption, and fostering a security-conscious culture.

By embracing and combining the benefits of these two key standards, companies will be well-placed to create a robust security framework for AI systems. Not only will they be able to integrate AI-specific risk management with broader information security practices, but they can also use these guidelines to establish governance structures, develop continuous improvement strategies, and ensure compliance with key regulations and ethical standards.

Best Practice: Embracing Education & Compliance

However, it's not just a case of security professionals adhering to these standards. Equally, cybersecurity best practices should be embedded into the very culture and fabric of the business to ensure maximum effectiveness.

To achieve this, firms must prioritise training and education throughout the employee base, equipping all staff members with the knowledge and skills to identify and respond to risks, bolstering the organisation's overall cybersecurity resilience.

Not only should training programmes encompass more traditional aspects, such as identifying phishing emails and proper data handling practices, but they should also evolve in tandem with AI to address emerging risks and challenges. Here, ethical considerations such as bias detection and mitigation, as well as training on the threat of deepfakes, stand as relevant examples that the modern firm should be working to include.

The key point is that continuous learning is essential. By regularly updating training programmes to reflect the latest threat landscape and technological advancements, organisations will be well placed to enhance their cybersecurity posture and better protect their AI assets on an ongoing basis.

This forward-looking approach must be a primary focus. Indeed, establishing more robust security frameworks aligned with industry standards and best practices is vital for preparing against current and future threats.

Failing to address these issues can result in operational inefficiencies, increased costs, decision-making complexities, and AI systems susceptible to adversarial attacks.

Additionally, as AI ethics and data protection regulations tighten, non-compliance may lead to legal penalties, fines, and erosion of customer trust. Prioritising compliance is therefore essential to protect both an organisation's operations and its reputation.

Sam Peters is Chief Product Officer at ISMS.online


