Leveraging Data Privacy For Artificial Intelligence
Every time you use a navigation app to get from A to B, use dictation to convert speech to text, or unlock your phone using face ID, you’re relying on Artificial Intelligence (AI). Organisations across industries are also relying on, and investing in, AI to improve customer service, increase efficiency, empower employees and much more.
At the end of last year, Gartner forecast that worldwide AI software revenue would total $62.5 billion in 2022, an increase of 21.3% from 2021. At the moment, two of the main uses for AI are pattern recognition and automating repetitive tasks. These are both areas where AI outperforms human intelligence.
However, pattern recognition can result in biased outcomes, and when tasks are automated without sufficient testing, machines make mistakes too, only much faster and more consistently than humans do.
Perhaps the best-known example of the issues with pattern recognition is the case of Amazon, which implemented a recruitment engine intended to help it screen the huge number of applications it received. The software was trained to recognise patterns in past hiring decisions that it could compare with CVs to find the candidates most likely to succeed at interview. However, because the hiring it had learned from was for male-dominated positions, Amazon discovered that the tool had essentially ‘taught itself’ that the company was looking for a man. Amazon first tried to correct the algorithm, but ultimately disbanded the development team.
Why Data Privacy Is The Key To Unlocking The Potential Of AI
Data privacy rules require organisations to check that systems work as intended, communicate clearly and ensure all processing is lawful. The General Data Protection Regulation (GDPR) sets out rules for automated decision-making that has significant consequences for individuals, as well as rights for individuals, such as the right to be informed about how their personal data is processed. Other GDPR rules, such as the need to establish a lawful basis for all processing activities, also affect AI.
Jurisdictions such as the EU, the US and Brazil have chosen to introduce legislation to govern AI. The EU has proposed an Artificial Intelligence Act that imposes conditions on medium- and high-risk AI systems; the US has proposed an AI Bill of Rights that describes five principles for responsible AI; and Brazil has passed an AI Bill that sets out goals and principles for developing AI in the country. Specific uses, such as those in healthcare, may also be subject to rules under existing laws. So far, the UK has chosen not to follow suit, although this could change.
AI In The Public Sector
In the UK, AI has been named as one of four Grand Challenges and is supported by an AI Sector Deal estimated to be worth up to £950m. The government has set up three bodies – the AI Council, the Office for AI and the Centre for Data Ethics and Innovation – to facilitate the adoption of AI by both private and public sector organisations.
Public sector use of AI is particularly important for a number of reasons.
- First, governments typically address challenges with wide social impact. The potential for both beneficial and harmful outcomes is heightened because of the large-scale processing involved and, in many cases, the lack of alternatives. Many parts of the public sector deal with vulnerable individuals who are particularly susceptible to harm should things go wrong.
- Second, private sector organisations look to the public sector to set examples for how to get things right.
- Third, public sector AI-driven services will be the first AI that many people encounter, and their experience of them will affect their perception of AI more widely.
Used well, AI has the potential to transform public services by facilitating socially beneficial outcomes in cost-efficient ways. This is particularly important in today’s economic climate, when the public sector is being asked to find savings and individuals are under cost-of-living pressures that are unprecedented in recent times.
Building Public Trust In AI
The potential social benefits from wider use of AI mean that it is essential that individuals can trust public and private sector organisations to process their personal data fairly and safely.
They need to believe that AI-driven outcomes will be at least as fair as ones decided by humans, and that the process of interacting with the AI will be at least as easy as interacting with another person.
Data privacy offers a set of tools that AI developers can use, alongside data privacy experts, to identify the risks and concerns affecting potential users, address them, and ensure that people feel confident interacting with the technology. This is as much a commercial imperative as a regulatory requirement, and these kinds of risk controls should be seen as just as intrinsic a part of system design requirements as any operational processing activity.
Camilla Winlo is Head of Data Privacy at Gemserv