Deepfakes Are Making Business Email Compromise Worse
Business Email Compromise (BEC) attacks have long been a favourite of bad actors looking to breach enterprise networks through their people. The idea is simple: impersonate an individual in a position of power via fake emails to trick employees into sending money or information. This longstanding attack vector has targeted companies for more than a decade.
As employees have become savvier to this kind of attack, bad actors have upped their game with a new weapon in the arsenal: artificial intelligence (AI)-based deepfake phishing.
A deepfake is a simulation of a real, known person’s voice and/or image. Deepfakes can be effective where other social engineering attacks would fail. Even those well-coached to be suspicious of inbound emails may not consider the same risks when the communication appears to come from a trustworthy source. After all, they may not be aware that what appears to be a sound bite from a trusted colleague, or even a video snippet, may not be genuine.
As deepfake technology becomes more widespread, these types of attacks will become increasingly frequent in 2023, and the cybersecurity implications are serious. For example, a top issue for identity experts is the AI chatbot ChatGPT and its potential, in combination with AI-driven voice synthesis, to mimic legitimate voices and produce ever more believable fake identities. Generative AI is still in its infancy, but even at this stage it has brought upheaval to businesses and organisations, from academia to government. There can be no doubt that the technology will advance, and bad actors will take advantage of it.
How Do Deepfakes Impersonate Legitimate Identities?
To understand why deepfakes are so effective, it’s crucial to treat identity as the new security perimeter. This perimeter, no longer built of office walls and protected by on-premises hardware, now comprises every human and machine allowed access to the enterprise network, wherever they may be. In the new remote-working reality, that could be anywhere. This paradigm allows far more flexibility, but with flexibility come potential weaknesses: when people are not communicating face to face in a shared physical space, bad actors exploit the distributed model with sophisticated impersonations to gain undue access.
Let’s take the most ubiquitous forms of remote corporate communication. Virtually every single business relies upon email and video conferencing as fundamental forms of communication, and reliance on these has only grown in the era of hybrid work.
Cybercriminals are aware of this reliance and have developed tactics to exploit the trust carried by channels where identity was traditionally not in doubt. By infiltrating these historically trusted modes of communication, they turn their ingrained status in the enterprise to malicious ends.
The term “deepfake” comes from the underlying technology, “deep learning,” a form of AI. Deepfake technology allows users to create startlingly accurate impersonations of others. In the news we see examples of deepfakes pertaining to celebrities or politicians, but anyone can be a target. For example, a Binance PR executive claimed that cybercriminals created a fake AI hologram of his image to scam cryptocurrency projects via Zoom video calls.
How do these attacks work? Bad actors build autoencoders, a kind of advanced neural network, which scan videos and voice files, collecting images and recordings of individuals to learn their distinguishing characteristics and attributes. They then assemble these ingredients into images, voice recordings, and videos that appear extremely faithful to reality. These deepfakes are then deployed as part of social engineering scams, where the attacker uses them to impersonate an individual.
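The learn-then-reconstruct cycle described above can be illustrated with a toy example. The sketch below trains a minimal linear autoencoder in numpy on random stand-in feature vectors. It is purely illustrative: real deepfake systems train deep convolutional autoencoders on thousands of face images and voice samples, and the dimensions and data here are assumptions for the demo.

```python
import numpy as np

# Toy stand-in for extracted face/voice feature vectors (assumed 8-dimensional).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

# Encoder compresses 8 features into a 3-dimensional latent code that captures
# the subject's "distinguishing characteristics"; the decoder reconstructs them.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

lr = 0.05
for _ in range(1000):
    Z = X @ W_enc                    # encode
    err = (Z @ W_dec - X) / len(X)   # decode and compare with the original
    g_dec = Z.T @ err                # gradient of mean squared error (decoder)
    g_enc = X.T @ (err @ W_dec.T)    # gradient of mean squared error (encoder)
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction error after training: {mse:.3f}")
```

In a face-swap deepfake pipeline, two such networks share one encoder: a decoder trained on the target’s face is fed latent codes extracted from the source video, producing frames that carry the target’s likeness.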
The Origins: How Phishing Made Us Distrust Identities
While deepfake attacks rely on relatively new technology, such impersonations are far from new. Phishing, of course, is one of the original and longest-standing scams of the internet age, and the U.S. Federal Bureau of Investigation coined the term “Business Email Compromise” (BEC) to describe a specific form of spear-phishing attack. In a BEC attack, the attacker impersonates a legitimate person inside the organisation or its network to dupe the recipient into delivering funds to an unauthorized account or individual. This is what BEC attacks have in common with their modern cousin, deepfakes: the essential part of BEC is faking the identity of a trusted party to con an unsuspecting employee. The rest is simply adapting that basic social engineering strategy to the latest platforms in use. Deepfakes build on this initial idea, but there have been, and will be, many other vectors.
When the FBI coined the term, email was the main avenue for the attack. Since then, similar campaigns have used text messages, voice messages, chat platforms like Slack, and now video conferencing platforms like Skype and Zoom. Because these attacks are not email-based, they are arguably not technically BEC, but they are the next generation of the same basic strategy.
Unbreakable Cryptographic Identities
BEC is an extremely successful attack vector: the FBI estimates these attacks have cost a combined $43 billion in recent years. Deepfakes only add to the trust problem, further dissolving the boundaries of reliable identity and tricking recipients into trusting communications they shouldn’t. The only solution is a sure-fire way to authenticate, confirm, and secure identities, one which does not rely on human intuition.
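The principle of machine-verifiable identity can be sketched in a few lines. The example below uses the widely available Python `cryptography` library with Ed25519 signatures; it is a minimal illustration of cryptographic authentication, not a description of any particular vendor’s product, and the message contents are invented for the demo.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The executive keeps the signing key; only the public key is distributed.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A payment instruction, signed at the source.
message = b"Transfer approved: invoice 4471"
signature = private_key.sign(message)

# The recipient verifies against the known public key, not against how
# convincing the sender looks or sounds on a call.
public_key.verify(signature, message)  # raises InvalidSignature if forged
print("signature valid: sender holds the private key")

# A deepfaked or altered instruction cannot produce a matching signature.
try:
    public_key.verify(signature, b"Transfer approved: invoice 9999")
except InvalidSignature:
    print("signature invalid: message was not signed by this identity")
```

The point is that verification is mechanical: no amount of voice or video realism lets an impersonator produce a signature that matches a key they do not hold.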
Fortunately, digital certificates are a well-proven strategy for authenticating identity in modern business environments. These certificates are foundational for modern defense-in-depth strategies such as Zero Trust Network Access and Software-defined Perimeter.
By employing Certificate Lifecycle Management (CLM) platforms to automate the deployment, monitoring, and renewal of these certificates, enterprises will be well placed to restore trust, despite bad actors’ best efforts and their increasingly advanced attack tools.
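As a small illustration of what automated lifecycle monitoring involves, the sketch below creates a short-lived self-signed certificate with the Python `cryptography` library and flags it for renewal as it nears expiry. The 30-day threshold and the certificate details are assumptions for the demo; commercial CLM platforms do this at fleet scale, across discovery, issuance, renewal, and revocation.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Demo stand-in for a certificate issued by the enterprise CA: self-signed,
# valid for only 20 days. The hostname is hypothetical.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "build-server.example.internal")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=20))
    .sign(key, hashes.SHA256())
)

RENEWAL_THRESHOLD_DAYS = 30  # assumed policy: renew with a month to spare

def needs_renewal(certificate: x509.Certificate) -> bool:
    """Flag certificates whose remaining validity is below the threshold."""
    remaining = certificate.not_valid_after - datetime.datetime.utcnow()
    return remaining < datetime.timedelta(days=RENEWAL_THRESHOLD_DAYS)

print("renew now" if needs_renewal(cert) else "still healthy")
```

An automated platform runs this kind of check continuously across every certificate in the estate, so no machine identity silently lapses into an unverifiable state.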
Tim Callan is Chief Experience Officer at Sectigo