Generative Artificial Intelligence Models Leak Private Data
Adoption of ChatGPT has over the past few months since release of of 4th generation version has greatly increased and, right now, more than 100 million users are signed up to the program.
This has been made possible by the platform's aggregation of over 300 billion items of text and other data, scraped from online sources like articles, posts, websites, journals and books.
Although OpenAI has developed and trained the ChatGPT model to operate within parameters intended to deliver useful ouput, analysts of the model say that this data is gathered without discrimation between fact and fiction, copyright status, or data privacy.
Now, researchers from Northwestern University have published a study in which they explain how they could use keywords to trick ChatGPT into and releasing training data that was not meant to be disclosed.
Although OpenAI has taken steps to protect privacy, everyday chats and postings leave a massive pool of data and much of it is personal which is not intended for widespread distribution. Generative AI platforms, such as ChatGPT, are built by data scientists through a training process where the program in its initial, unformed state, is subjected to billions of bytes of text, some of it from public Internet sources and some from published books.
The fundamental function of training is to make the program reproduce anything that is given acces to, using essentially a compression technique. A program, once trained, could reproduce the training data, based upon only a very small amount data being submitted as an enquiry, prompting the relevant response.
The researchers said that they were able to extract over 10,000 unique verbatim memorised training examples using only $200 worth of queries to ChatGPT, adding- “Our extrapolation to larger budgets suggests that dedicated adversaries could extract far more data.” Indeed, they found that they could obtain names, phone numbers, and addresses of individuals and companies by feeding ChatGPT absurd commands that forced a malfunction.
For example, the researchers requested that ChatGPT repeat the word “poem” ad infinitum, which forced the model to reach beyond its training procedures and “fall back on its original language modelling objective” and tap into restricted details in its training data. They also reached a similar result by requesting infinite repetition of the word “company,” and managed to retrieve the email address and phone number of an American law firm.
In response to potential unauthorised data disclosures, some companies have placed restrictions on employee usage of large language models earlier this year. Rising concerns about data breaches caused OpenAI to add a feature that turns off chat history, adding a layer of protection to sensitive data. The problem is that such data is still retained for 30 days before being permanently deleted.
In conclusion, the researchers termed their findings “worrying” and said their report should serve as “a cautionary tale for those training future models,” warning that users “should not train and deploy LLMs for any privacy-sensitive applications without extreme safeguards.”
Northwestern Univ: SearchEngine Journal: I-HLS: ZDNet: TechXplore: New Scientist: Wired: Wired:
Science Direct: Business Insider: Image: DeepMind
You Might Also Read:
Guidelines For AI Systems Development:
DIRECTORY OF SUPPLIERS - AI Security & Governance:
___________________________________________________________________________________________
If you like this website and use the comprehensive 6,500-plus service supplier Directory, you can get unrestricted access, including the exclusive in-depth Directors Report series, by signing up for a Premium Subscription.
- Individual £5 per month or £50 per year. Sign Up
- Multi-User, Corporate & Library Accounts Available on Request
- Inquiries: Contact Cyber Security Intelligence
Cyber Security Intelligence: Captured Organised & Accessible