The Dark Side Of The New Dawn In AI
ChatGPT’s active user base grew to 100 million in just two months, with users fascinated by what it can do, but the technology has already started to reveal its dark side. It has become a cause of security breaches and has put organizations at risk of violating privacy, compliance and governance regulations; not through attacks, but through users voluntarily (and unlawfully) uploading sensitive information to the system in order to generate insights.
One report found that over 4% of employees have already tried to put sensitive company data into the model. The recent release of GPT-4, a more capable Large Language Model (LLM), can accept much larger chunks of text and is likely to make this problem much worse, and quickly.
The report came from a company that detected and blocked 67,000 attempts at misuse across its client base. Most organizations don’t have this capability. One executive pasted the corporate strategy into the system to generate a PowerPoint presentation, and a doctor entered a patient’s name and medical condition to draft a report.
Cyber security researchers have demonstrated that training data extraction attacks are possible against GPT models: an attacker can prompt the system into recalling, verbatim, sensitive information it has been given.
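To illustrate the shape of such a probe, here is a minimal, hypothetical sketch using the official OpenAI Python client; the seed prefix and the “suspected secret” string are invented for illustration and this does not reproduce any published attack.

```python
# Hypothetical sketch of a verbatim-recall probe against a chat model.
# Assumes the official OpenAI Python client (openai>=1.0) and an API key
# in the OPENAI_API_KEY environment variable; all strings are invented.
from openai import OpenAI

client = OpenAI()

# A prefix the attacker believes immediately precedes sensitive text the
# model has seen, e.g. the opening of a pasted patient report.
prefix = "Patient report for "  # hypothetical seed

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Continue this text exactly: {prefix}"}],
    temperature=0,  # greedy-style decoding makes memorised continuations more likely
)
continuation = response.choices[0].message.content or ""

# The attacker checks whether the model reproduced a string verbatim.
suspected_secret = "Jane Example, diagnosed with"  # placeholder, not real data
if suspected_secret in continuation:
    print("Verbatim recall detected.")
```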
What Are The Shortfalls Of ChatGPT In Cyber Security?
As well as potentially having your spilled data ‘hacked’ out of GPT’s databases, just spilling it in the first place could breach many different security policies, secrecy laws, and privacy regulations. And on the flip side, retrieving and using someone else’s information from GPT, information that turns out to be proprietary, confidential, or copyrighted, could also get your company in trouble.
The only way to stop this, apart from blocking access to GPT and other LLM tools, is training and education of the humans using the technology. But it’s difficult to train every staff member, and even more challenging to make sure they understand and retain that training. It’s harder still to make sure they apply it daily and consistently, so that they are not exposing their employer to significant risk.
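Technical controls can back up that training. As a minimal sketch, assuming an organization routes staff prompts through a simple gateway before they reach the LLM, an outbound filter might look like this (the pattern list and the block_prompt helper are illustrative, not any vendor’s API):

```python
# Minimal sketch of an outbound prompt filter at a hypothetical LLM gateway.
# The patterns are illustrative; a real DLP rule set would be far broader.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # US SSN shape
    re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),  # document markings
    re.compile(r"\bpatient\b.{0,40}\bdiagnos", re.I | re.S),  # crude medical check
]

def block_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked before reaching the LLM."""
    return any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    sample = "Draft a discharge summary: patient Jane Example, diagnosed with flu."
    print("blocked" if block_prompt(sample) else "allowed")
```

A filter like this cannot catch everything, but it shifts part of the burden from individual memory and goodwill to infrastructure.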
The rise of chatbots, and our reliance on a machine to provide answers, means we can never be sure when we are being given correct information.
Chatbots are opaque: when one gives an answer, that answer is hard to fact-check. We run the risk of relying solely on a machine’s recommendation, even when that recommendation is wrong.
Now let’s look at this issue in light of cybersecurity. If we ask a machine security-related questions and act on an incorrect answer, the consequences can be catastrophic.
This is why it’s critical not to rely on a black-box, algorithmic AI for regulatory or security compliance. When we are deciding what law or policy to apply, we need to be able to understand and challenge the evidence behind that decision.
Safeguarding From Emerging ChatGPT Threats
Phishing is already one of the most common and successful attack methods for bad actors. ChatGPT puts the ability to craft believable phishing messages, quickly and at scale, into the wrong hands. Deepfakes are the next level. A believable email from your boss asking you to email a sensitive document, followed up by a video call that looks and sounds exactly like her? These are some of the enormous challenges we face that training alone can’t control.
Having a data spill is essentially inevitable. We can never reduce the likelihood of a breach to zero, because we will always have trusted insiders. The approach to take now is to reduce the potential impact of a future breach.
Know what data you have, what risk it carries, and what value. Know what rules apply to it, where it is, and who is doing what to it. And know what needs to be locked down, and what can be disposed of, across the whole enterprise. This is something we can use AI for right now, and it is really moving the needle back toward good governance.
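As a rough sketch of what that discovery step can look like, assuming records are plain-text files under a hypothetical shared drive (real records-management tools cover far more formats, rules, and actions):

```python
# Minimal sketch of automated data discovery over a hypothetical shared drive.
# Paths, rule names, and patterns are invented for illustration.
import re
from pathlib import Path

DATA_ROOT = Path("/srv/shared-drive")  # hypothetical location

RULES = {
    "medical": re.compile(r"\b(?:diagnosis|patient)\b", re.I),
    "strategy": re.compile(r"\bcorporate strategy\b", re.I),
}

def classify(path: Path) -> set[str]:
    """Return the set of rule labels that match the file's contents."""
    text = path.read_text(errors="ignore")
    return {label for label, rule in RULES.items() if rule.search(text)}

# Walk the drive and report which files carry which risks.
for f in DATA_ROOT.rglob("*.txt"):
    labels = classify(f)
    if labels:
        print(f"{f}: {', '.join(sorted(labels))}")
```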
Rachael Greaves is CEO/CSO and Founder of Castlepoint Systems