The Data Privacy Risks Of Generative AI
Many organisations are choosing to limit the use of Generative Artificial Intelligence (GenAI) over data privacy and security concerns, and some firms have now banned its use in the workplace completely. Indeed, 27% of organisations have suspended the use of GenAI amongst their workforce over privacy and data security risks, according to the 2024 Data Privacy Benchmark Study from Cisco.
Most organisations have also placed controls on these tools. Nearly two-thirds (63%) have established limitations on what data can be entered, and 61% have limits on which GenAI tools can be used by employees.
Despite these restrictions, many organisations admitted to entering sensitive data into GenAI applications. This included information about internal processes (62%), employee names or information (45%), non-public information about the company (42%) and customer names or information (38%).
Most respondents (92%) viewed generative AI as a fundamentally different technology with novel challenges and concerns requiring new techniques to manage data and risk.
The biggest concerns cited were that these tools could harm the organisation's legal and intellectual property rights (69%), that the information entered could be shared publicly or with competitors (68%), and that the information returned to the user could be wrong (68%).
Significantly, 91% of security and privacy professionals acknowledged that they need to do more to reassure customers about how their data is used with AI. However, no single trust-building action listed in the study had been adopted by more than 50% of respondents.
- Nearly all (94%) security and privacy professionals said their customers would not buy from their organisation if it did not protect data properly.
- Even more (97%) feel they have a responsibility to use data ethically, and 95% argue the business benefits of privacy investment are greater than the costs.
- The growing connection between data privacy and business benefits has made this area a key boardroom issue. Nearly all (98%) respondents reported one or more privacy metrics to the board, and over half reported three or more.
- The top privacy metrics used were audit results (44%), data breaches (43%), data subject requests (31%) and incident response (29%).
- Respondents were strongly in favor of governments implementing data privacy laws, with 80% believing privacy laws have had a positive impact on their organisation, and just 6% a negative impact.
Consumers are widely concerned about AI use involving their personal data, and yet 91% of organisations recognise they need to do more to reassure their customers that their data is being used only for intended and legitimate purposes in AI. This finding is in line with Cisco's 2023 report, suggesting that little progress has been made.
Cisco: Economic Times: Daniel Lozovsky: Infosecurity Magazine: IndiaTV: Technology Magazine:
Image: Claudio Schwarz