Using AI To Defend Against AI-Enhanced BEC Scams
AI is a game-changing weapon in criminals’ BEC attack arsenal.
AI-generated business email compromise (BEC) deception is becoming increasingly sophisticated. Despite awareness efforts, employees are being out-manoeuvred by these attacks: businesses lost a staggering $50.8 billion to BEC scams between 2013 and 2022.
The use of AI makes it far harder for employees to identify malicious emails. Previously, human scammers made noticeable typos and grammatical mistakes that served as red flags. Now, AI-generated BEC emails are flawlessly written, slipping past basic spam and phishing filters to reach unsuspecting employees' inboxes.
These AI tools accurately grasp the context and can customise emails to appear completely authentic. For example, they can craft text that closely mirrors a CEO's writing style or matches a business partner's tone of voice. Furthermore, AI tools can churn out such emails at scale, while constantly learning and adapting to make the output better and more accurate.
Cyber attackers are using AI to sift through company data, such as executives' names and roles, current and confidential projects, social media accounts, photographs, and recent news. This enables them to create highly relevant and convincing emails that seem legitimate to busy employees.
Why Do Conventional Security Measures Fall Short?
Today's traditional email security solutions combat conventional threats by scanning for specific keywords, known malicious domains, or typical phishing indicators. These defences are primarily designed to block known threats.
The challenge with BEC attacks is that they aren’t threats in the traditional sense. They are emails generated from a compromised, but ‘real’ email account. For example, using stolen credentials, a bad actor logs into a Finance Director’s email account to deceitfully instruct a colleague to urgently clear a partner’s invoice or make a money transfer to a third party.
Also, these AI-enhanced BEC attacks are dynamic and continually adapting. By the time a new attack pattern is recognised, attackers have already shifted tactics.
Using AI To Counter AI-Driven BEC Attacks
The good news is that if AI has made BEC attacks more potent, its use is equally powerful in helping to neutralise such assaults. Embedding AI as part of a layered security approach is the most effective way of successfully counteracting the technology’s weaponisation by bad actors.
Organisations must use AI to routinely detect and thwart social engineering, phishing, and ransomware attacks by identifying unusual patterns or activities across their network and email infrastructure. Given the continual onslaught of such attacks, it is humanly impossible to monitor them manually 24/7. AI, on the other hand, can be trained to consume and analyse copious amounts of data with increasing accuracy. Many advanced security solutions already use AI to protect against zero-day threats, building vast databases of known malware and analysing email traffic to identify suspicious content.
Link isolation is one AI-driven technique, where the technology isolates suspicious links and inspects them in a safe environment to prevent users from accidentally reaching malicious sites. Likewise, AI can be deployed to open suspicious email attachments in a secure “sandbox” environment for analysis before allowing the email into the organisational network.
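Link isolation is commonly implemented by rewriting URLs in inbound mail so that any click routes through an inspection proxy before reaching the original destination. The sketch below illustrates the rewriting step only; the proxy address `isolate.example.com` and the URL scheme are invented for the example and do not represent any vendor's actual service.

```python
import re
from urllib.parse import quote

# Hypothetical isolation proxy; a real product would use its own
# rewriting service, typically with signed, tamper-proof URLs.
ISOLATION_PROXY = "https://isolate.example.com/inspect?url="

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def rewrite_links(email_body: str) -> str:
    """Replace every URL in the message with a link that routes the
    click through the isolation proxy for inspection first."""
    return URL_PATTERN.sub(
        lambda m: ISOLATION_PROXY + quote(m.group(0), safe=""),
        email_body,
    )

body = "Please review the invoice at http://pay-portal.example/inv01 today."
print(rewrite_links(body))
```

Because the original destination only ever loads inside the proxy's safe environment, a user who clicks a malicious link never connects to it directly.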
This kind of active threat identification and blocking is essential: preventing end users' email accounts from being compromised in the first place removes the foothold that BEC attacks rely on.
However, given the relentlessness of bad actors and the sophistication of their attacks, email accounts will occasionally be compromised. In such situations, AI-driven behavioural analysis is potentially the most effective way yet to thwart the resulting BEC attempts.
In the earlier scenario of the Finance Director's compromised email account, the AI tool compares current activity against the individual's typical behaviour to look for tell-tale signs. Where is the executive logging in from? What machine is being used? What time do they typically log on, and is there a discrepancy? Is the writing style or tone of voice different? Does the Finance Director routinely make such urgent requests? Has the company previously made transfers of this size to the account in question? And so on.
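As a rough illustration of how those signals might be combined, the sketch below scores an event against a stored baseline profile for the account. The signals, weights, and example values are invented for this illustration; a production system would learn the profile and weighting from historical data rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    usual_countries: set        # countries the account normally logs in from
    usual_devices: set          # known machine identifiers
    usual_login_hours: range    # e.g. 08:00 to 18:00
    max_prior_transfer: float   # largest transfer previously approved

def anomaly_score(baseline: Baseline, country: str, device: str,
                  hour: int, transfer_amount: float) -> int:
    """Sum weighted deviations from the account's normal behaviour.
    The integer weights are invented for illustration only."""
    score = 0
    if country not in baseline.usual_countries:
        score += 4              # unfamiliar login location
    if device not in baseline.usual_devices:
        score += 2              # unrecognised machine
    if hour not in baseline.usual_login_hours:
        score += 1              # odd time of day
    if transfer_amount > baseline.max_prior_transfer:
        score += 3              # unusually large payment request
    return score

fd = Baseline({"GB"}, {"laptop-fd-01"}, range(8, 18), 25_000.0)
# A 2am transfer request from a new country and device scores the maximum:
print(anomaly_score(fd, "RU", "unknown", 2, 90_000.0))  # 10
```

An event scoring above a chosen threshold would be held back and flagged for human review rather than delivered or actioned automatically.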
By deploying AI routinely for mapping email usage and behavioural patterns, organisations can identify which emails are suspicious, flagging them for further investigation.
AI Delivers Analysis At Scale
With the high volume of email traffic and numerous tactics cybercriminals deploy, AI provides the power to speed up and scale real-time analysis to help stop all manner of email-related cyber-attacks aimed at end users.
This, supported by a highly security-aware and vigilant workforce, is the best defence. No single technology or solution can ever be foolproof: threats arrive from multiple angles and across multiple vectors, so defence needs both a strong technology foundation and human vigilance to quash the attacks.
Jack Garnsey is Subject Matter Expert – Email Security, VIPRE Security Group