Grok Faces Prosecution For Misusing AI Training Data
Elon Musk’s X platform (formerly Twitter) is under pressure from data regulators after it emerged that users’ posts are being fed into Artificial Intelligence (AI) systems via a default setting on the app, without their explicit permission.
An X user spotted a setting on the app, enabled by default, that permitted the account holder’s posts to be used for training Grok, the AI chatbot built by Musk’s xAI business. This means X can use posts, interactions with Grok, and Grok’s outputs to train and fine-tune its AI, and users must manually opt out.
Now, the UK and Irish data regulators have contacted X over this apparent attempt to harvest user data without obtaining specific consent.
Under UK GDPR, which is based on the EU data regulation, companies are not allowed to use “pre-ticked boxes” or “any other method of default consent”. The setting, which comes with an already ticked box, states that you “allow your posts as well as your interactions, inputs and results with Grok to be used for training and fine-tuning”.
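To illustrate the regulators’ objection, the sketch below models a pre-ticked box as a setting that is true by default; the field name and settings structure are hypothetical and do not represent X’s actual settings or API. The point is that a flag which is on only because nobody changed it is not the affirmative action GDPR requires.

```python
# Hypothetical sketch only: the field name and settings structure are
# invented for illustration and do not reflect X's real schema or API.

DEFAULT_SETTINGS = {
    # A pre-ticked box is equivalent to a value that is True by default,
    # i.e. one the user never actively chose.
    "allow_ai_training": True,
}

def has_valid_gdpr_consent(settings: dict, user_changed_setting: bool) -> bool:
    """Consent must be a clear affirmative act by the user.

    A flag that is True only because it was the default does not count,
    which is why regulators object to pre-ticked boxes.
    """
    return settings.get("allow_ai_training", False) and user_changed_setting

# The default state fails the test: the flag is on, but the user never acted.
print(has_valid_gdpr_consent(DEFAULT_SETTINGS, user_changed_setting=False))  # False
```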
Data regulators immediately expressed concern about the default setting. In the UK, the Information Commissioner’s Office (ICO) said it was “making enquiries” with X.
The Data Protection Commission (DPC) in the Republic of Ireland, the lead regulator for X across the European Union, said it had already been speaking to Musk’s company about data collection and AI models and was surprised to learn of the default setting.
Large language models are the technology underpinning chatbots such as ChatGPT and Grok and are fed vast amounts of data scraped from the Internet in order to spot patterns in language and build a statistical understanding of it. This ultimately enables chatbots to churn out convincing-looking answers to queries.
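As a rough illustration of that idea, the toy sketch below counts which word follows which in a small body of text and uses those counts to predict a likely next word. Real large language models use neural networks trained on vastly more data, but the principle of learning statistical patterns from text is the same; the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the vast scraped text that real models are trained on.
corpus = "users post text and models learn patterns from text that users post"

# Count how often each word follows each other word (a simple bigram model).
follow_counts = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("users"))  # -> "post"
```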
This approach has met with opposition on multiple fronts, with numerous claims that the process breaches copyright law, as well as data privacy and consumer protection rules.
- Earlier this year, the New York Times newspaper started legal action for copyright infringement against Microsoft and OpenAI over the unauthorised use of millions of pages of its text to train the models behind ChatGPT.
- Now, European privacy advocate NOYB (None of Your Business) has filed nine GDPR complaints against X over the use of personal data from more than 60 million European users to train Grok. NOYB says X did not inform its users that their data was being used to train AI, and that they had not consented to the practice.
Chris Denbigh-White, CSO at Next DLP, commented: “The General Data Protection Regulation (GDPR) was explicitly written with the aim of protecting an individual's privacy and to stop organisations from having free rein over people’s data... However, since the regulations were introduced six years ago, technologies have emerged that present new data protection challenges.
“GenAI, for example, processes and generates huge amounts of data – including personal data – requiring organisations to take a mindful approach to the technology. As with any other software-as-a-service (SaaS) tool, organisations need to act thoughtfully through a framework whereby they understand the data flows and risks.
“There’s no reason AI can’t be compliant with GDPR, but companies need to take the time to get it right... Organisations need to prioritise legality over speed. After all, the backlash over a legal issue is much more significant than any potential complaints over the timeline,” Denbigh-White concluded.
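One practical reading of the point about understanding data flows is to put a checkpoint between internal data and any external GenAI service. The sketch below is a hypothetical illustration, not a Next DLP product or a real API client: it strips obvious personal identifiers from a prompt before the text leaves the organisation.

```python
# Hypothetical data-flow control: redact obvious personal identifiers before
# a prompt is sent to an external GenAI service. Real deployments would use
# more robust PII detection, policy enforcement and logging.

import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_personal_data(prompt: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    prompt = EMAIL_PATTERN.sub("[REDACTED EMAIL]", prompt)
    prompt = PHONE_PATTERN.sub("[REDACTED PHONE]", prompt)
    return prompt

def send_to_genai_service(prompt: str) -> None:
    """Stand-in for a call to an external AI API (not a real client)."""
    print("Sending:", prompt)

send_to_genai_service(
    redact_personal_data(
        "Summarise the ticket from jane.doe@example.com, tel +44 20 7946 0958"
    )
)
```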
ICO.org.uk | Data Protection Commission | X.com | Times of India | Guardian | BeeBom