The Problem With Generative AI - Leaky Data
Generative AI has become a crucial tool in the workplace, helping with tasks like summarising meetings, drafting emails, and brainstorming new ideas. It saves time on manual tasks and boosts productivity and efficiency. Many organisations are already starting to see the benefits of using AI.
According to a recent Microsoft survey, 75% of people use AI at work, with 46% having started to use it only in the last six months, highlighting how adoption of AI is growing month after month.
A notable use of AI is in handling large amounts of data. With the rise of the Internet of Things (IoT) and digital transformation, we have more data than ever. Manually processing this data is time-consuming and resource-intensive. AI accelerates this process by recognising patterns and extracting insights from vast data sets.
Benefits and Risks of AI
To get the most from AI, most organisations will need to trust external technology providers to help them implement and manage the technology across the enterprise. While AI has the power to revolutionise the way we work, it also raises important privacy and security concerns. To work well, AI requires large amounts of data, which can include sensitive information, but running this confidential data through AI models puts it at risk of exposure.
Bad actors can bypass the security protections in place to exploit, steal or corrupt that data. Organisations using AI must therefore carefully consider and address these risks.
Confidential Computing as a Solution
Confidential computing offers an innovative solution to these security concerns, enhancing the way we use AI by protecting data during processing.
For years, cloud providers have been able to secure data while it is stored or in transit, but data has been left vulnerable while being processed, as it must be decrypted in order to be analysed. Confidential computing solves this problem and extends protection across the data’s entire lifecycle, closing that vulnerability.
It works by isolating sensitive data in a protected digital space, ensuring that the content, and the techniques used to process it, are encrypted and accessible only to authorised code.
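The idea can be illustrated with a short conceptual sketch. This is a toy Python model, not a real trusted-execution-environment API: the `Enclave` class, the XOR cipher, and the key handling are illustrative assumptions only, standing in for hardware-backed isolation and real cryptography. The point is that plaintext exists only inside the protected boundary, while everything outside it sees ciphertext.

```python
# Conceptual model of confidential computing (NOT a real TEE API).
# A toy XOR cipher stands in for real encryption at rest / in transit.

def encrypt(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR each byte with a single-byte key."""
    return bytes(b ^ key for b in data)

decrypt = encrypt  # XOR is its own inverse


class Enclave:
    """Hypothetical protected space: data is decrypted and processed
    only inside this boundary; callers outside see only ciphertext
    and the computed result."""

    def __init__(self, key: int):
        self._key = key  # in real hardware, keys never leave the enclave

    def process(self, ciphertext: bytes, fn):
        plaintext = decrypt(ciphertext, self._key)  # decrypted inside only
        return fn(plaintext)                        # plaintext never leaves


key = 0x5A
record = b"sensitive customer record"
stored = encrypt(record, key)      # protected at rest and in transit
assert stored != record            # outside observers see only ciphertext

enclave = Enclave(key)
result = enclave.process(stored, len)  # computed on plaintext, inside
assert result == len(record)
```

Without the enclave, the data would have to be decrypted in ordinary memory to be processed, which is exactly the exposure window confidential computing is designed to remove.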
We’re already seeing many companies turn to confidential computing: the global market is projected to grow from USD 14.14 billion in 2024 to USD 208.06 billion by 2032, according to a study from Fortune Business Insights.
In the context of AI, confidential computing allows an organisation to feed data into a generative AI model and have the model process it securely, without risk of leaks. Sensitive information is protected from bad actors and potential data leaks, so organisations retain control over their data while using AI.
The Future of Confidential Computing
The main challenge with confidential computing is cost. It requires new systems and a skilled workforce capable of managing the complexity of implementation. Organisations should weigh these costs against the advantages of securing data. Today, given the expense involved, confidential computing will be most relevant for critical sectors like government, healthcare, and IT.
As the technology advances, companies might collaborate to use confidential computing within a trusted network. In this scenario, each company would effectively ‘rent’ a space for their processing to take place, in a way that limits them to viewing only their own data and not the information of other companies within the network. This approach would reduce costs and enhance security, since company data would be safeguarded, much like a bank protects individual finances.
Confidential computing has the potential to bring significant benefits to many sectors, particularly those handling sensitive information. With more employees leveraging generative AI to help with complex tasks, confidential computing could be the solution to these privacy concerns, allowing safe AI usage without fear of data leaks or breaches of data protection law.
The ongoing evolution of this technology promises even more sophisticated uses of AI applications in the coming years.
Samuel Tourbot is Head of Cloud Communications at Alcatel-Lucent Enterprise
Image: Ideogram