Leveraging Data Privacy For Artificial Intelligence

Every time you use a navigation app to get from A to B, use dictation to convert speech to text, or unlock your phone with face ID, you’re relying on Artificial Intelligence (AI). Organisations across industries are also relying on, and investing in, AI to improve customer service, increase efficiency, empower employees and much more. 

At the end of last year, Gartner forecast that worldwide AI software revenue will total $62.5 billion in 2022, an increase of 21.3% from 2021. At the moment, two of the main uses for AI are pattern recognition and automating repetitive tasks. These are both areas where AI outperforms human intelligence.

However, pattern recognition can result in biased outcomes, and when tasks are automated without enough testing, machines make mistakes too - only much faster and more consistently than humans.

Perhaps the best-known example of the problems with pattern recognition is the case of Amazon, which implemented a recruitment engine intended to help it screen the huge number of applications it received. The software was trained to recognise patterns in past hiring decisions and compare them with CVs to find the candidates most likely to succeed at interview. However, because the roles it was trained on were male-dominated, Amazon discovered that the tool had essentially ‘taught itself’ that the company was looking for a man. Amazon first tried to correct the algorithm, but ultimately disbanded the development team. 
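To illustrate how this kind of bias can arise, consider the hypothetical sketch below (a toy model, not Amazon's actual system). When historical hiring data is dominated by successful male candidates, a screening model can learn to penalise any CV feature that correlates with gender, even though that feature says nothing about ability.

```python
# Hypothetical illustration of how a screening model can learn bias from
# historical hiring data. A toy sketch only - not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two CV features the model is allowed to see:
#   years_experience - genuinely relevant to the hiring decision
#   womens_club      - a proxy for gender (e.g. "women's chess club" on the CV),
#                      irrelevant to ability but correlated with past outcomes
years_experience = rng.normal(5, 2, n)
is_female = rng.random(n) < 0.5
womens_club = (is_female & (rng.random(n) < 0.6)).astype(float)

# Historical labels: past recruiters weighted experience, but the
# male-dominated intake means female candidates were hired less often.
hired = (years_experience + rng.normal(0, 1, n) - 2.0 * is_female) > 4.5

X = np.column_stack([years_experience, womens_club])
model = LogisticRegression().fit(X, hired)

# The coefficient on the gender proxy comes out strongly negative: the model
# has effectively "taught itself" to penalise CVs that mention a women's club.
print(dict(zip(["years_experience", "womens_club"], model.coef_[0].round(2))))
```

The point of the sketch is that the model never sees gender directly; it simply reproduces the pattern embedded in the historical decisions it was trained on.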

Why Data Privacy Is The Key To Unlocking The Potential Of AI

Data privacy rules require organisations to check that systems work as intended, communicate clearly and ensure all processing is lawful. The General Data Protection Regulation (GDPR) sets out rules for automated decision-making that has significant consequences for individuals, as well as rights for individuals, such as the right to be informed about how their personal data is processed. Other GDPR rules, such as the need to establish a lawful basis for all processing activities, also affect AI. 

Jurisdictions such as the EU, US and Brazil have chosen to set out bills to govern AI. The EU has proposed an Artificial Intelligence Act that imposes conditions on medium- and high-risk AI systems; the US has proposed an AI Bill of Rights that describes five principles for responsible AI; and Brazil has passed an AI Bill that sets out goals and principles for developing AI in the country. Specific uses, such as those in healthcare, may also be covered by existing laws. So far, the UK has chosen not to introduce AI-specific legislation, although this could change.

AI In The Public Sector

In the UK, AI has been named as one of four Grand Challenges and is supported by an AI Sector Deal estimated to be worth up to £950m. The government has set up three bodies – the AI Council, the Office for AI and the Centre for Data Ethics and Innovation – to facilitate the adoption of AI by both private and public sector organisations.

Public sector use of AI is particularly important for a number of reasons.

  • First, governments typically address challenges with wide social impact. The potential for both beneficial and harmful outcomes is heightened because of the large-scale processing involved and, in many cases, the lack of alternatives. Many parts of the public sector also deal with vulnerable individuals, who are particularly susceptible to harm should things go wrong.
  • Second, private sector organisations look to the public sector to set examples for how to get things right.
  • Third, public sector AI-driven services will be the first AI that many people encounter, and their experience of those services will affect their perception of AI more widely. 

Used well, AI has the potential to transform public services by delivering socially beneficial outcomes in cost-efficient ways. This is particularly important in today’s economic climate, when the public sector is being asked to find savings and individuals are under cost-of-living pressures that are unprecedented in recent times. 

Building Public Trust In AI

The potential social benefits of wider use of AI mean that it is essential that individuals can trust public and private sector organisations to process their personal data fairly and safely.

They need to believe that AI-driven outcomes will be at least as fair as ones decided by humans, and that the process of interacting with the AI will be at least as easy as interacting with another person. 

Data privacy offers a set of tools that AI developers can use, together with data privacy experts, to identify the risks and concerns affecting potential users, address them, and ensure that people feel confident interacting with the technology. This is as much a commercial imperative as a regulatory requirement, and these risk controls should be seen as just as intrinsic to system design as any operational processing activity. 

Camilla Winlo is Head of Data Privacy at Gemserv
