Leveraging Data Privacy For Artificial Intelligence

Every time you use a navigation app to get from A to B, use dictation to convert speech to text, or unlock your phone using face ID, you’re relying on Artificial Intelligence (AI). Organisations across industries are also relying on, and investing in, AI to improve customer service, increase efficiency, empower employees and much more.

At the end of last year, Gartner forecast that worldwide AI software revenue would total $62.5 billion in 2022, an increase of 21.3% from 2021. At the moment, two of the main uses for AI are pattern recognition and automating repetitive tasks. These are both areas where AI outperforms human intelligence.

However, pattern recognition can produce biased outcomes, and when tasks are automated without enough testing, machines make mistakes too - but much faster and more consistently than humans.

Perhaps the best-known example of the issues with pattern recognition is the case of Amazon, which implemented a recruitment engine intended to help it screen the huge number of applications it received. The software was trained to recognise patterns in past hiring decisions and compare them with incoming CVs to find the candidates most likely to succeed at interview. However, because the historical data came from male-dominated positions, the tool had essentially ‘taught itself’ that Amazon was looking for a man. Amazon first tried to correct the algorithm, but ultimately disbanded the development team.
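
To see how this can happen, here is a minimal, hypothetical sketch (not Amazon's actual system): a simple CV-screening model trained on historically skewed hiring decisions will pick up gendered terms as ranking signals. The CVs, labels and scikit-learn pipeline below are illustrative assumptions.

```python
# Hypothetical illustration of how a CV-screening model can learn historical bias.
# Assumes scikit-learn is installed; the data is synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" hiring decisions in which CVs mentioning "women's"
# (e.g. "women's chess club captain") were disproportionately rejected.
cvs = [
    "software engineer chess club captain",           # hired
    "data scientist rowing team captain",             # hired
    "software engineer women's chess club captain",   # rejected
    "data scientist women's rowing team captain",     # rejected
]
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)

model = LogisticRegression()
model.fit(X, hired)

# The learned coefficient for the token "women" comes out negative:
# the model has encoded the historical bias as a screening signal.
idx = vectoriser.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Removing the offending token does not solve the problem, because the same bias can resurface through proxy features - one reason such systems are so hard to ‘correct’ after the fact.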

Why Data Privacy Is The Key To Unlocking The Potential Of AI

Data privacy rules require organisations to check that systems work as intended, communicate clearly and ensure all processing is lawful. The General Data Protection Regulation (GDPR) sets out rules for automated decision-making that has significant consequences for individuals, as well as rights for individuals, such as the right to be informed about how their personal data is processed. Other GDPR rules, such as the need to establish a lawful basis for all processing activities, also affect AI.

Jurisdictions such as the EU, US and Brazil have chosen to set out bills to govern AI. The EU has proposed an Artificial Intelligence Act that imposes conditions on medium- and high-risk AI systems; the US has proposed an AI Bill of Rights that describes five principles for responsible AI; and Brazil has passed an AI Bill that sets out goals and principles for developing AI in the country. Specific uses, such as those in healthcare, may also be covered by existing laws. So far, the UK has chosen not to introduce AI-specific legislation, although this could change.

AI In The Public Sector

In the UK, AI has been named as one of four Grand Challenges and is supported by an AI Sector Deal estimated to be worth up to £950m. The government has set up three bodies - the AI Council, the Office for AI and the Centre for Data Ethics and Innovation - to facilitate the adoption of AI by both private and public sector organisations.

Public sector use of AI is particularly important for a number of reasons.

  • First, governments typically address challenges with wide social impact. The potential for both beneficial and harmful outcomes is heightened because of the large-scale processing involved and, in many cases, the lack of alternatives. Many parts of the public sector deal with vulnerable individuals who are particularly susceptible to harm should things go wrong.
  • Second, private sector organisations look to the public sector to set examples for how to get things right.
  • Third, public sector AI-driven services will be the first AI that many people encounter, and their experience of them will affect their perception of AI more widely.

Used well, AI has the potential to transform public services by facilitating socially beneficial outcomes in cost-efficient ways. This is particularly important in today’s economic climate, when the public sector is being asked to find savings and individuals are under cost-of-living pressures that are unprecedented in recent times.

Building Public Trust In AI

The potential social benefits from wider use of AI mean that it is essential that individuals can trust public and private sector organisations to process their personal data fairly and safely.

They need to believe that AI-driven outcomes will be at least as fair as ones decided by humans, and that the process of interacting with the AI will be at least as easy as interacting with another person. 

Data privacy offers a set of tools that AI developers can use with data privacy experts to identify the risks and concerns affecting potential users, address them, and ensure that people feel confident interacting with the technology. This is as much a commercial imperative as a regulatory requirement, and these kinds of risk controls should be seen as just as intrinsic a part of system design as any operational processing activity.

Camilla Winlo is Head of Data Privacy at Gemserv

You Might Also Read: 

Super Intelligent Machines Need An Off Switch:

 
