The Rising Threat Of Deepfakes
AI is evolving rapidly, with ChatGPT just the latest example of the technology's vast potential.
Keeping a pulse on AI's advancement has never been more important for security professionals. In the words of the UK government, AI has the potential to "increase risks to safety and security by enhancing threat actor capabilities and increasing the effectiveness of attacks."
We continue to see how financially motivated cyber-criminals and nation-state actors are working to exploit weaknesses in cybersecurity frameworks, refining their techniques to trick their targets into giving up sensitive information such as login credentials or financial details.
As AI advances, these efforts become increasingly sophisticated. Today, cyber attackers readily leverage novel tools to create highly realistic deepfake images, audio, and video, and the threat they pose is growing.
During the UK general election campaign, we've seen high-profile politicians like Wes Streeting and Nigel Farage targeted by deepfake attacks. However, these technologies aren't just being weaponised in a political context; they are also being used against businesses.
In 2020, one threat actor managed to steal $35 million by using AI to replicate a company director's voice and deceive a bank manager. Similarly, in January 2024, a finance employee at British engineering firm Arup fell victim to a $25 million scam after a video call with a 'deepfake chief financial officer'.
From the spread of misinformation to financially motivated attacks, deepfakes have become an increasingly prominent threat in bad actors' arsenals. In fact, one report indicates that there was a staggering 3000% increase in deepfake fraud attempts between 2022 and 2023. Prompted by this rising tide, ISMS.online conducted a survey to assess the current impact of deepfakes on UK organisations.
The resulting 'State of Information Security' report reveals alarming trends. Critically, nearly a third (32%) of UK businesses reported experiencing a deepfake security incident in the past year, making it the country's second most common type of information security breach.
The statistics speak for themselves: deepfakes are no longer a theoretical threat but a present-day reality that enterprises must confront.
AI In Cyberattacks Must Be Met With AI In Cybersecurity
Currently, threat actors use deepfakes most commonly in business email compromise (BEC)-style attacks, where AI-powered voice and video cloning technology deceives recipients into executing corporate fund transfers. Other potential uses include information or credential theft, causing reputational harm, or circumventing facial and voice recognition authentication.
Regardless of the specific technique or intended outcomes, the consequences for organisations can be severe, leading to substantial data loss, service disruptions, and significant financial and reputational harm.
Organisations must take proactive steps to mitigate the threat of deepfakes, strengthening their cybersecurity frameworks with cutting-edge technologies.
It's essential to recognise that AI isn't exclusive to threat actors. Equally, organisations can and should leverage it to their own advantage, building more robust defences while unlocking efficiency gains, accuracy improvements, enhanced security insights, and several other benefits.
Encouragingly, companies clearly recognise the potential of advanced technologies in security applications. According to ISMS.online's State of Information Security report, 72% of UK respondents believe AI and machine learning are improving information security, with six in ten planning to increase investment in such applications in the next 12 months.
Of course, the challenge of turning intent into action remains a gap that must be bridged with thorough and effective guidance.
Typically, we advise that companies should seek to embrace critical standards. ISO 42001, for example, can serve as a central guiding framework: it deals specifically with AI and is designed to help organisations leverage AI within their businesses safely and sustainably, ensuring compliance with all necessary data security, information security, and ethical requirements.
Of course, this is easier said than done. For organisations unsure where to start, it may make sense to use software platforms that provide essential guidance for implementing standards like ISO 42001, significantly accelerating the secure adoption of AI and the necessary improvements to information and data security.
Every organisation's skillset, context, and requirements will differ, yet the goal across all enterprises must be the same.
By proactively aligning with modern security guidance and embracing relevant technologies, firms can defend themselves against evolving threats, providing assurance to partners, customers, and regulators whilst setting themselves up for future-proofed operations and financial success.
Sam Peters is Chief Product Officer at ISMS.online
Image: Zinkevych