AI-Based Phishing Attacks Demand A Multi-Pronged Response
The sophistication of phishing is growing by leaps and bounds. By using AI to evade detection, cybercriminals now craft emails free of grammatical mistakes, generic greetings, poor-quality company logos and images, and other tell-tale clumsiness - all the things that previously served as phishing indicators.
AI Is Changing The Face Of Phishing
The phishing onslaught will continue in 2025 as criminals further evolve their deception techniques. We can expect phishing to take the following key forms this year.
Live phishing is becoming more prevalent. Criminals are using AI to craft messages tied to specific, relevant events, industry developments, and locations, making them appear genuine and trustworthy.
Chat-based phishing attacks are growing. Advances in natural language models are empowering cybercriminals to design sophisticated phishing strategies built around AI chatbots. These chatbots can dynamically craft personalised, uniquely adapted interactions, making the communication nuanced and challenging to identify as they seamlessly adjust their responses to victims' reactions in real time.
Deepfakes are rapidly becoming easier to create. Attackers are producing exceedingly convincing, difficult-to-detect videos, and audio deepfakes have become highly sophisticated, with some systems able to clone a voice from just a few seconds of sample audio. Delivered as short clips, this audio can easily fool listeners, manipulating victims into disclosing confidential information, approving high-value commercial transactions, and more.
Hyper-personalisation is becoming commonplace. Drawing on advanced data collection and analytics, criminals are using AI to create hyper-personalised phishing attacks - communications that reference victims' recent behaviours, shopping patterns, social media engagement, and so forth. Such customised, deceptive messages pose growing challenges for detection.
Fake social media accounts generated with AI are also growing. These accounts mimic real users, and criminals use them to engage with potential victims over extended periods of time to garner trust.
Malicious websites and links designed with the help of AI are another technique growing in popularity with cybercriminals. They are practically impossible to tell apart from legitimate ones and bypass traditional detection methods. Because detection tools fail to flag these malicious sites, attackers are achieving significant success rates.
Perhaps the most difficult form to identify yet is AI-driven dynamic phishing, where cybercriminals employ real-time monitoring and machine learning to modify their tactics based on victims' responses. Their bots might show hesitation, delay replies, or express doubt to imitate the individual being impersonated, analysing interaction patterns and adjusting strategy along the way to maximise success rates.
The Challenge For Enterprises
Phishing is a social engineering attack designed to deceive and manipulate victims. With AI, criminals' capacity for deception has grown manifold, rendering traditional approaches ineffective. First and foremost, the strongest line of defence - the human - is falling prey to scammers, as traditional phishing-awareness training is proving inadequate. Likewise, traditional email security programmes - those built into the most commonly used email platforms - are unable to separate AI-driven phishing attempts from legitimate emails.
Fundamentally, customary approaches to combating AI-driven phishing attacks are insufficient.
Relying on outdated email security measures and defensive tools that simply react to threats after detection - data loss prevention alerts and spam filtering solutions, for example - is futile. These conventional detection methods lack the advanced, dynamic capabilities that AI-powered phishing attacks demand.
Mitigating Strategies For Enterprises
Today, a multi-pronged strategy for mitigating and combating AI-powered phishing attacks is needed.
First off, security awareness and training continue to play an instrumental role. Employees need to be aware of the latest AI threats, the various forms of coercion that can result in the leakage of sensitive business information, and the signs to look out for in today's world of AI-led deception. This heightened awareness is best imparted through training that mirrors real-life attacks; theoretical knowledge can no longer be the mainstay of cybersecurity programmes.
Technology solutions, too, remain crucial. Adding data loss awareness tools to the repertoire (in addition to traditional data loss prevention tools) is useful, as they alert employees before they act on a potentially risky email, deepfake, chat, or other interaction.
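As an illustration of how such a pre-action alert might work, the Python sketch below scores an inbound message with a few simple heuristics (an unrecognised sender domain, urgency language, payment requests, links to raw IP addresses) and surfaces the reasons to the user before they reply or click. The rules, thresholds, and domain list are illustrative assumptions, not a description of any particular vendor's product.

```python
import re
from dataclasses import dataclass

# Illustrative list of domains the organisation actually uses (assumption).
TRUSTED_DOMAINS = {"example.com", "example-corp.com"}

URGENCY_WORDS = {"urgent", "immediately", "final notice", "act now"}
PAYMENT_WORDS = {"wire transfer", "invoice", "bank details", "gift card"}


@dataclass
class RiskAlert:
    score: int
    reasons: list


def assess_message(sender: str, subject: str, body: str) -> RiskAlert:
    """Return a simple risk score so the client can warn the user before they act."""
    reasons = []
    score = 0

    # Flag senders outside the domains the organisation normally deals with.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1
        reasons.append(f"External or unrecognised sender domain: {domain}")

    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        score += 1
        reasons.append("Urgency language detected")
    if any(word in text for word in PAYMENT_WORDS):
        score += 2
        reasons.append("Payment or banking request detected")
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
        reasons.append("Link points to a raw IP address")

    return RiskAlert(score=score, reasons=reasons)


if __name__ == "__main__":
    alert = assess_message(
        sender="accounts@exarnple.com",  # lookalike domain
        subject="URGENT: settle this invoice immediately",
        body="Please send the wire transfer today: http://192.168.0.12/pay",
    )
    if alert.score >= 2:
        print("Warning shown to user before they act:")
        for reason in alert.reasons:
            print(" -", reason)
```

The point of the sketch is the timing: the warning appears before the employee takes action, rather than after a filter has already let the message through.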
Adopting the right email threat protection solution is important. Advanced tools offer capabilities such as attachment sandboxing, dynamic link analysis, and remote browser isolation to help contain the spread of malware and viruses often carried in phishing emails, links, and attachments.
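To make the idea of dynamic link analysis concrete, the sketch below shows one common pattern: rewriting every URL in an inbound email so that clicks pass through a gateway that can re-check the destination at click time rather than only at delivery. The gateway address and the toy reputation check are hypothetical placeholders, not any specific product's behaviour.

```python
import re
from urllib.parse import quote, urlparse

# Hypothetical click-time gateway operated by the email security service (assumption).
GATEWAY = "https://linkcheck.example.net/redirect?url="

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")


def rewrite_links(body: str) -> str:
    """Replace every URL with a gateway URL so the destination is re-evaluated on click."""
    return URL_PATTERN.sub(lambda m: GATEWAY + quote(m.group(0), safe=""), body)


def looks_suspicious(url: str) -> bool:
    """Toy click-time check: flag raw IP hosts and very long hostnames (illustrative rules only)."""
    host = urlparse(url).hostname or ""
    is_raw_ip = bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host))
    return is_raw_ip or len(host) > 50


if __name__ == "__main__":
    email_body = "Your parcel is waiting: http://203.0.113.7/track?id=42"
    print(rewrite_links(email_body))
    # At click time the gateway would run checks like looks_suspicious()
    # against the decoded destination before letting the browser proceed.
    print("Block click:", looks_suspicious("http://203.0.113.7/track?id=42"))
```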
If AI is proving an effective technology for unleashing criminal activity, it should equally be leveraged to defend against those attacks. It can be used very effectively to detect and mitigate sophisticated threats, alongside other layers of security such as continuous monitoring, multi-factor authentication, and independent verification.
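As a simple illustration of AI on the defensive side, the sketch below trains a small text classifier to estimate how likely an email is to be phishing. It assumes scikit-learn is installed, and the handful of training examples is made up purely for demonstration; a real deployment would need a large, representative corpus, richer features, and continuous retraining.

```python
# Requires scikit-learn (pip install scikit-learn); the tiny dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples: 1 = phishing, 0 = legitimate (made up for this sketch).
emails = [
    "Urgent: your account is locked, verify your password now",
    "Invoice attached, please confirm the wire transfer details today",
    "Your parcel could not be delivered, click here to reschedule",
    "Minutes from yesterday's project meeting are attached",
    "Lunch menu for the office canteen this week",
    "Reminder: quarterly report review scheduled for Friday",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

new_email = "Please verify your password immediately to avoid account suspension"
probability = model.predict_proba([new_email])[0][1]
print(f"Estimated phishing probability: {probability:.2f}")
```

In practice such a model would sit alongside, not replace, the other layers mentioned above: continuous monitoring, multi-factor authentication, and independent verification of unusual requests.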
Last but not least, a zero-trust philosophy must underpin every cybersecurity strategy. Trust no one, assume the legitimacy of no communication, and verify every single interaction.
This kind of layered, holistic approach to security offers the best chance of defence in an environment where criminals are weaponising every technology and deploying every trick in the book to deceive, manipulate, and attack for monetary gain and business disruption.
Oliver Paterson is Director of Product Management at VIPRE Security Group