The AI Threat: How Can Businesses Protect Themselves?

Artificial Intelligence (AI) has become a key part of daily life. Generative AI platforms such as ChatGPT, for example, have opened many people's eyes to its potential uses, driving a massive surge in adoption.

In August 2023, Deloitte revealed that 61% of computer users were already leveraging generative AI programmes in their daily tasks. Then, in May 2024, Microsoft similarly reported that AI usage had nearly doubled in the previous six months, with 75% of global knowledge workers using such solutions.


These statistics speak volumes. Such widespread adoption will see AI solutions integrated ever more quickly and deeply into critical business processes, spanning predictive analytics, process automation, and personalised customer experiences. And the potential for enterprises is significant.

According to the McKinsey Global Institute, generative AI has the potential to add between $2.6 trillion and $4.4 trillion to global corporate profits annually. Meanwhile, an additional study shows that AI can improve employee productivity by as much as 66%.

To capture these potential benefits, companies must endeavour to stay ahead of the curve. Conversely, those that neglect to adopt AI risk falling behind in an increasingly technologically savvy and competitive landscape.

AI: The Dark Side

It's not all good news and opportunities, however. Despite the enormous opportunities that AI offers, organisations face growing risks that should not be ignored.

From a security standpoint, keeping a finger on the pulse of AI developments is vital. Indeed, we're already seeing cybercriminals leveraging AI to automate and scale their attacks, create more sophisticated malware, enhance advanced persistent threats (APTs) and exploit deepfake technologies for social engineering. Our State of Information Security report found that deepfakes are now the second most common information security incident encountered by businesses in the past year, behind only malware infections.

Further, the situation is not helped by the fact that companies' AI programs are becoming increasingly vulnerable to a variety of attacks, which can lead to incorrect or biased outcomes, or even the generation of offensive content.

We've already seen instances of threat actors using model inversion techniques to reconstruct sensitive training data, risking breaches and privacy violations. Similarly, data poisoning has been observed, in which malicious or biased data is used to corrupt training sets, compromising AI models' predictions and behaviours.
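
To make the data-poisoning risk concrete, the minimal sketch below shows how flipping a fraction of training labels can quietly degrade a model. The dataset, classifier and poisoning rate are illustrative assumptions for the demonstration, not a description of any real incident.

    # Illustrative sketch only: label-flipping poisoning against a hypothetical
    # binary classifier. All data here is synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in training data for a simple binary decision task.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # An attacker with access to the training pipeline flips 20% of the labels.
    poisoned_y = y_train.copy()
    idx = rng.choice(len(poisoned_y), size=int(0.2 * len(poisoned_y)), replace=False)
    poisoned_y[idx] = 1 - poisoned_y[idx]

    poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

    print("clean accuracy:   ", clean.score(X_test, y_test))
    print("poisoned accuracy:", poisoned.score(X_test, y_test))

The point of the example is that nothing in the poisoned pipeline looks obviously broken; the damage only shows up in the model's behaviour, which is why provenance checks on training data matter.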

Biases embedded within AI models can pose a variety of problems, potentially amplifying adverse outcomes in decision-making processes such as hiring and lending. Threat actors have also been observed using Trojan attacks to embed malicious behaviours in AI models, triggering harmful actions under specific conditions.
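
For teams that want a first check on bias in their own systems, a simple starting point is to compare selection rates across groups in a model's decisions. The sketch below assumes a hypothetical hiring model's outputs; the column names, data and any acceptable threshold are placeholders for illustration.

    # Illustrative sketch only: a quick demographic-parity check on the
    # decisions of a hypothetical automated hiring model.
    import pandas as pd

    decisions = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
        "hired": [1,   0,   1,   0,   0,   1,   0,   1],
    })

    rates = decisions.groupby("group")["hired"].mean()
    gap = rates.max() - rates.min()
    print(rates)
    print(f"selection-rate gap: {gap:.2f}")  # large gaps warrant investigation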

We're also seeing evasion attacks, which manipulate input data in an effort to slip past AI-based security systems, and model stealing, in which AI models are reverse-engineered to create competing versions or exploit weaknesses.
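
Evasion attacks can be surprisingly cheap against simple models. The toy sketch below, built entirely on synthetic data, shows how nudging a flagged sample a small step against a linear classifier's weights is enough to cross its decision boundary; it is a didactic illustration of the principle, not an attack recipe.

    # Illustrative sketch only: a toy evasion attack on a hypothetical linear
    # detection model trained on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Pick the flagged (class 1) sample sitting closest to the decision boundary.
    scores = model.decision_function(X)
    flagged = np.where(scores > 0)[0]
    sample = X[flagged[np.argmin(scores[flagged])]]
    print("before:", model.predict([sample])[0])

    # For a linear model the score changes fastest along the weight vector, so
    # a small step in the opposite direction slips the sample past the model.
    w = model.coef_[0]
    step = model.decision_function([sample])[0] / np.linalg.norm(w) + 0.01
    adversarial = sample - step * w / np.linalg.norm(w)
    print("after: ", model.predict([adversarial])[0])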

Leveraging Key Guidance Frameworks

Such novel attack methods, directed at AI programs themselves, highlight the need for robust, modernised security measures designed to protect these systems from compromise and misuse. The EU Artificial Intelligence Act, published on 12 July 2024, aims to take this one step further. It prohibits specific uses of AI and sets out regulations on "high-risk" AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models. Similarly, the recent King's Speech stated that the Government will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."
 
Organisations should, therefore, look to ISO 42001 and ISO 27001 for guidance, particularly when it comes to complying with this new EU law and any upcoming regulations and legislation that the UK is likely to implement.

Specifically, ISO 42001 provides guidelines for managing AI systems, focusing on risk management, roles, security controls, and ethical practices. In this sense, it can help organisations identify AI-specific risks, develop mitigation strategies, and continuously enhance AI security.

ISO 27001, meanwhile, provides a comprehensive framework for managing information security risks through regular assessments, controls, incident response plans, and compliance measures. It can be used to safeguard sensitive data and AI models from unauthorised access, ensuring confidentiality and integrity through measures such as encryption, while fostering a security-conscious culture.
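
As one small, concrete example of the kind of control ISO 27001 points towards, the sketch below encrypts a serialised model artefact at rest using the Python cryptography library. The file names and key handling are assumptions for illustration; in practice the key would live in a secrets manager or HSM, never alongside the data.

    # Illustrative sketch only: encrypting a serialised AI model at rest.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # store securely, separate from the data
    fernet = Fernet(key)

    with open("model.pkl", "rb") as f:   # hypothetical serialised model file
        ciphertext = fernet.encrypt(f.read())

    with open("model.pkl.enc", "wb") as f:
        f.write(ciphertext)

    # Later, an authorised service holding the key can restore the model.
    with open("model.pkl.enc", "rb") as f:
        restored = Fernet(key).decrypt(f.read())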

By embracing and combining the benefits of these two key standards, companies will be well-placed to create a robust security framework for AI systems. Not only will they be able to integrate AI-specific risk management with broader information security practices, but they can also use these guidelines to establish governance structures, develop continuous improvement strategies, and ensure compliance with key regulations and ethical standards.

Best Practice: Embracing Education & Compliance

However, it's not just a case of security professionals adhering to these standards. Equally, cybersecurity best practices should be embedded into the very culture and fabric of the business to ensure maximum effectiveness.

To achieve this, firms must prioritise training and education throughout the employee base, equipping all staff members with the knowledge and skills to identify and respond to risks, thereby bolstering the organisation's overall cybersecurity resilience.

Not only should training programmes encompass more traditional aspects, such as identifying phishing emails and proper data handling practices, but they should also evolve in tandem with AI to address emerging risks and challenges. Here, ethical considerations such as bias detection and mitigation, as well as training on the threat of deepfakes, stand as relevant examples that the modern firm should be working to include.

The key point is that continuous learning is essential. By regularly updating training programmes to reflect the latest threat landscape and technological advancements, organisations will be well placed to enhance their cybersecurity posture and better protect their AI assets on an ongoing basis.

This forward-looking approach must be a primary focus. Indeed, establishing more robust security frameworks aligned with industry standards and best practices is vital for preparing against current and future threats.

Failing to address these issues can result in operational inefficiencies, increased costs, decision-making complexities, and AI systems susceptible to adversarial attacks.

Additionally, as AI ethics and data protection regulations tighten, non-compliance may lead to legal penalties, fines, and erosion of customer trust. Prioritising compliance becomes essential to protect both an organisation's operations and its reputation.

Sam Peters is Chief Product Officer at ISMS.online

