Leveraging Data Privacy For Artificial Intelligence

Every time you use a navigation app to get from A to B, use dictation to convert speech to text, or unlock your phone with face ID, you're relying on Artificial Intelligence (AI). Organisations across industries are also relying on, and investing in, AI to improve customer service, increase efficiency, empower employees and much more.

At the end of last year, Gartner forecast that worldwide AI software revenue will total $62.5 billion in 2022, an increase of 21.3% from 2021. At the moment, two of the main uses for AI are pattern recognition and automating repetitive tasks. These are both areas where AI outperforms human intelligence.

However, pattern recognition can produce biased outcomes, and when tasks are automated without enough testing, machines make mistakes too - only much faster and more consistently than humans.

Perhaps the best-known example of the problems with pattern recognition is the case of Amazon, which implemented a recruitment engine intended to help it screen the huge number of applications it received. The software was trained to recognise patterns in past hiring decisions that it could compare with CVs to find the candidates most likely to succeed at interview. However, because much of that history involved hiring for male-dominated positions, the tool had essentially 'taught itself' that Amazon was looking for a man. Amazon first tried to correct the algorithm, but ultimately disbanded the development team.

Why Data Privacy Is The Key To Unlocking The Potential Of AI

Data privacy rules require organisations to check that systems work as intended, communicate clearly and ensure all processing is lawful. The General Data Protection Regulation (GDPR) sets out rules for automated decision-making that has significant consequences for individuals, as well as rights for individuals, such as the right to be informed about how their personal data is processed. Other GDPR requirements, such as the need to establish a lawful basis for all processing activities, also affect AI.

Jurisdictions such as the EU, US and Brazil have chosen to introduce legislation to govern AI. The EU has proposed an Artificial Intelligence Act that imposes conditions on medium- and high-risk AI systems; the US has proposed an AI Bill of Rights that describes five principles for responsible AI; and Brazil has passed an AI Bill that sets out goals and principles for developing AI in the country. Specific uses, such as those in healthcare, may also be subject to rules under existing laws. So far, the UK has chosen not to legislate specifically for AI, although this could change.

AI In The Public Sector

In the UK, AI has been named as one of four Grand Challenges and is supported by an AI Sector Deal estimated to be worth up to £950m. The government has set up three bodies - the AI Council, the Office for AI and the Centre for Data Ethics and Innovation - to facilitate the adoption of AI by both private and public sector organisations.

Public sector use of AI is particularly important for a number of reasons.

  • First, governments typically address challenges with wide social impact. The potential for both beneficial and harmful outcomes is heightened because of the large-scale processing involved and, in many cases, the lack of alternatives. Many parts of the public sector deal with vulnerable individuals who are particularly susceptible to harm should things go wrong.
  • Second, private sector organisations look to the public sector to set examples for how to get things right.
  • Third, AI-driven public services will be the first AI that many people encounter, and their experience of those services will shape their perception of AI more widely.

Used well, AI has the potential to transform public services by facilitating socially beneficial outcomes in cost-efficient ways. This is particularly important in today's economic climate, when the public sector is being asked to find savings and individuals are under cost of living pressures that are unprecedented in recent times.

Building Public Trust In AI

The potential social benefits from wider use of AI mean that it is essential that individuals can trust public and private sector organisations to process their personal data fairly and safely.

They need to believe that AI-driven outcomes will be at least as fair as ones decided by humans, and that the process of interacting with the AI will be at least as easy as interacting with another person. 

Data privacy offers a set of tools that AI developers can use, together with data privacy experts, to identify the risks and concerns affecting potential users, address them, and ensure that people feel confident interacting with the technology. This is as much a commercial imperative as a regulatory requirement, and these kinds of risk controls should be seen as just as intrinsic a part of system design requirements as any operational processing activity.

Camilla Winlo is Head of Data Privacy at Gemserv

