Europol Warning Of The Growing AI Cyber Threat
Europol and the United Nations (UN) have released an alarming report detailing how cyber criminals are maliciously targeting and abusing Artificial Intelligence (AI) technology to conduct cyber attacks. The report predicts that AI will become increasingly popular among cyber criminals, who are beginning to use it to target their victims and to maximise their hacking operations.
Cyber criminals are not only looking for ways to use AI tools in attacks, but also methods via which to compromise or sabotage existing AI systems, like those used in image and voice recognition and malware detection.
Compiled with help from Trend Micro, the Malicious Uses and Abuses of Artificial Intelligence report predicts that AI will in future be used as both an attack vector and an attack surface. AI-supported ransomware attacks could feature clever targeting and evasion, with self-propagation at high speed to cripple target networks before they have had a chance to respond.
The report also warned that, while deepfakes are the most talked-about malicious use of AI, there are many other use cases that could be under development.
These include Machine Learning or AI systems designed to produce highly convincing and customised social engineering content at scale, or perhaps to automatically identify the high-value systems and data in a compromised network that should be exfiltrated.
By finding blind spots in detection methods, AI algorithms can also highlight where attackers can hide, safe from discovery.
The report highlights multiple areas where industry and law enforcement can come together to pre-empt these risks. These include the development of AI as a crime-fighting tool and new ways to build resilience into existing AI systems to mitigate the threat of sabotage. The report says: “using AI to improve and optimise the effectiveness of criminal operations can be applied to any other scam as well, such as regular email phishing... ML, in particular, is already being applied to improve the success rates of any corporate endeavor from sales to marketing.”
As an example, the report describes a phishing operation targeted at banks that adds a small tracking tag to emails or to embedded phishing links. When the potential victim receives the email, the scammer knows whether the receiver has seen it and whether the link has been clicked. The scammer also learns whether any personal information has been entered on the phishing page, along with the quality of that information.
By correlating all this data, the scammer can form a clear picture of which kinds of emails are more successful for each bank. Using these methods, criminals learn which email databases are likely to deliver good success rates, versus databases that have been reused repeatedly and no longer produce good results for the hackers. A minimal sketch of that feedback loop is given below.
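The report does not publish any such tooling; the following is a minimal, hypothetical sketch of the kind of correlation it describes, written in Python. Each outgoing email carries a unique tag, and events reported back against that tag (opened, clicked, data submitted) are aggregated per email template to estimate "success" rates. All names, templates and data here are illustrative assumptions, not taken from the report, and the same aggregation is what legitimate email marketing A/B analysis does.

```python
# Hypothetical sketch: aggregating tagged email events into per-template rates.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    tag: str          # unique identifier embedded in the email / link
    template: str     # which email wording was sent
    source_list: str  # which address database the recipient came from
    stage: str        # "opened", "clicked" or "submitted"

def success_rates(sent: dict[str, int], events: list[Event]) -> dict[str, dict[str, float]]:
    """Return, for each template, the fraction of sent emails reaching each stage.

    `sent` maps template name -> number of emails sent with that template.
    """
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for ev in events:
        counts[ev.template][ev.stage] += 1

    rates: dict[str, dict[str, float]] = {}
    for template, total in sent.items():
        stage_counts = counts.get(template, {})
        rates[template] = {
            stage: stage_counts.get(stage, 0) / total
            for stage in ("opened", "clicked", "submitted")
        }
    return rates

if __name__ == "__main__":
    # Illustrative numbers only.
    sent = {"invoice_reminder": 1000, "account_alert": 1000}
    events = [
        Event("a1", "invoice_reminder", "list_fresh", "opened"),
        Event("a1", "invoice_reminder", "list_fresh", "clicked"),
        Event("b7", "account_alert", "list_reused", "opened"),
    ]
    print(success_rates(sent, events))
```

Grouping the same counts by `source_list` instead of `template` would show which address databases still respond and which have been burned through repeated reuse, which is exactly the distinction the report warns criminals can now automate.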
Europol:   Trend Micro:   Oodaloop:   Infosecurity Magazine: