The Dark Side Of AI
Everyone is talking about ChatGPT, short for Chat Generative Pre-trained Transformer, the free chatbot based on Artificial Intelligence created by OpenAI, the AI research organisation that promotes the development of friendly AI - intelligences capable of contributing to the good of humanity.
By accessing the OpenAI website, you can converse with a "virtual person", an artificial intelligence programmed to answer almost any question thanks to a sophisticated machine learning model. But what risks does this chatbot entail?
ChatGPT has already attracted many cyber criminals, whose first move has been to create near-identical copies of the site and app. Once users download these lookalikes from app stores and install them on their phones, the criminals can use them to spread malicious content.
The most serious problem, however, is that ChatGPT, driven by specific and artfully constructed queries, becomes the perfect tool for an attacker to create spear phishing attacks. These are hyper-personalised attacks, calibrated on the information that users unwittingly share on their social accounts and through daily browsing on PCs and mobile devices. In this way, cyber criminals use AI to build deceptive content tailored specifically to the person they are targeting.
To counter this growing and increasingly insidious phenomenon, ERMES, the leading Italian cybersecurity firm, is developing an AI-based countermeasure. According to ERMES, users will increasingly rely on ChatGPT and other third-party services and enabling technologies based on AI.
The ERMES tool lets them use these services safely by applying filters that prevent users from sharing sensitive information such as email addresses and passwords.
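ERMES has not published the internals of its filter, but a minimal sketch of the idea - scanning an outbound prompt for sensitive patterns and redacting them before they reach the chatbot - might look like the following Python snippet. The patterns and the redact_prompt helper are illustrative assumptions, not the vendor's implementation.

import re

# Illustrative patterns only; a real data-loss-prevention filter
# would use far more extensive rules than these two.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "password": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything resembling an email address or a password
    assignment before the prompt leaves the user's browser."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_prompt("Reply to alice@example.com, password: hunter2"))
# -> Reply to [REDACTED EMAIL], [REDACTED PASSWORD]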
"Chat GPT is the perfect tool which, in the hands of an attacker, helps him carry out what, in the cyber world, are called "spear phishing" attacks. These are personalised attacks, calibrated on the information that users share, without realising it, on their social accounts and through daily browsing on PCs and mobiles. In this way, cybercriminals use AI to build deceptive content, created ad hoc for the person they are addressing." says Lorenzo Asuni, Chief Marketing Officer at Ermes
Three Main Risk Factors Of Using ChatGPT
1. The number one scam is the proliferation of phishing sites that exploit the hype around ChatGPT, already numbering in the hundreds in recent weeks alone. Recognising them is not easy: they use similar domains, look almost identical to the genuine web pages or apps, and often claim non-existent integrations, creating duplicates of the service that steal the credentials of those who register (a simple domain-similarity check is sketched after this list).
2. Spear phishing attacks become easier and more scalable with the rapid production of high-quality, highly targeted Business Email Compromise (BEC) campaigns, SMS messages (smishing) or advertising that delivers malware (malvertising), aimed at various types of fraud, including economic scams and the theft of personal data and credentials.
3. The inadvertent sharing of sensitive company information, driven by the continuous demand for content, answers and analysis. How does this happen? For example, a user pastes a message into the chatbot with a simple "reply to this email" request, forgetting to strip out the recipient's or sender's address, which exposes the names of customers and other business partners.
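As a purely illustrative complement to point 1 above - a common defensive heuristic, not something the article's sources describe - lookalike domains can be flagged when their names sit within a small edit distance of the legitimate one. The threshold and candidate domains below are assumptions made up for the example.

from difflib import SequenceMatcher

LEGITIMATE = "chat.openai.com"

def similarity(domain: str) -> float:
    """Similarity ratio in [0, 1]; values near 1 suggest a lookalike."""
    return SequenceMatcher(None, domain.lower(), LEGITIMATE).ratio()

def looks_like_clone(domain: str, threshold: float = 0.8) -> bool:
    # A near-match that is not the real domain is suspicious.
    return domain.lower() != LEGITIMATE and similarity(domain) >= threshold

for candidate in ["chat.openai.com", "chat-openai.com", "chatgpt-app.net"]:
    print(candidate, looks_like_clone(candidate))
# -> False (the real site), True (one character swapped), False (too different)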
Business Email Compromise
A practical example demonstrates the risk to business email users. ChatGPT responds excellently to almost any content query, which becomes particularly dangerous when it is used as part of a BEC attack. With BEC, attackers use a template to generate a deceptive email that prompts the recipient to provide the attacker with sensitive information.
With the help of ChatGPT, hackers can customise each communication, potentially generating unique content for every AI-written email, making these attacks more difficult to recognise and detect.
Likewise, writing emails or building a copy of a phishing site becomes easier, without the typos or odd formatting that today are often the critical clues for distinguishing these attacks from legitimate mail. Most alarming of all, the attacker can append prompt instructions such as "make the email urgent" or "write an email with a high probability of the recipient clicking the link", and so on.
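Because flawless wording removes those classic tells, defenders increasingly fall back on content heuristics. As a hedged toy example - the cue list and the urgency_score helper are assumptions, not a production detector - an incoming email can be scored for exactly the urgency and click-pressure cues such prompts inject.

# Toy heuristic: score an email body for urgency and click-pressure cues.
URGENCY_CUES = ["urgent", "immediately", "act now", "within 24 hours",
                "account suspended", "verify your", "click the link"]

def urgency_score(body: str) -> int:
    """Count how many pressure cues appear; higher means more suspicious."""
    text = body.lower()
    return sum(1 for cue in URGENCY_CUES if cue in text)

email = ("URGENT: your account suspended. "
         "Verify your details immediately - click the link below.")
print(urgency_score(email))  # -> 5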
"As regards the risks correlated to the use of tools such as ChatGPT, we can consider the extreme ease with which sensitive information and data of the company are shared today, in many cases without realising it, during requests made to these conversational engines... as phishing campaigns are underway as they use the hype around ChatGPT to clone its appearance or potential integrations and steal important data or user credentials." Lorenzo Asuni said.