The Human Factor Is Essential To Eliminating Bias in Artificial Intelligence

It is not enough to open the ‘black box’ of machine learning. Direct human evaluation is the only way to ensure biases are not perpetuated through AI.

More and more technology and digital services are built upon, and driven by, AI and machine learning. But as we are beginning to see, these programmes replicate the biases fed into them, notably biases around gender. It is therefore imperative that the machine learning process is managed from input to output – data, algorithms, models, training, testing and predictions – to ensure that this bias is not perpetuated.

Bahar Gholipour links this bias to AI’s so-called ‘black box’ problem — our inability to see inside an algorithm and therefore to understand how it arrives at a decision. He claims that ‘left unsolved, it can devastate our societies by ensuring that historical discrimination, which many have worked hard to leave behind, is hard-coded into our future.’

Technological expertise is not enough to scrutinize, monitor and safeguard each stage of the machine learning process. The experience and perspective of people of all ages and all walks of life are needed to identify both obvious and subliminal social and linguistic biases, and to make recommendations for adjustments that build accuracy and trust. Even more important than having an opportunity to evaluate gender bias in the ‘black box’ is having the freedom to correct the biases discovered.

The first step is to open the ‘black box’. Users are increasingly demanding that AI be honest, fair, transparent, accountable and human-centric. But proprietary interests and security issues have too often precluded transparency. However, positive initiatives are now being developed to accelerate open-sourcing code and create transparency standards. AI Now, a nonprofit at New York University advocating for algorithmic fairness, has a simple principle worth following: ‘When it comes to services for people, if designers can’t explain an algorithm’s decision, you shouldn’t be able to use it.’

A number of public and private organizations are now beginning to take this seriously. Google AI has several projects that push the business world, and society, to consider the biases in AI, including GlassBox, Active Question Answering and its PAIR (People + AI Research) initiative, which add manual restrictions to machine learning systems to make their outputs more accurate and understandable.

The US Defense Advanced Research Projects Agency is also funding a big effort called XAI (Explainable AI) to make systems controlled by artificial intelligence more accountable to their users.

Microsoft CEO Satya Nadella has also gone on the record defending the need for ‘algorithmic accountability’ so that humans can undo any unintended harm.

But laudable as these efforts are, opening the box and establishing regulations and policies to ensure transparency is of little value until a human agent examines what’s inside to evaluate whether the data is fair and unbiased. Automated natural language processing alone cannot do it, because language is historically biased – not just basic vocabulary, but associations between words, and relationships between words and images.
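To see what biased ‘associations between words’ look like in practice, consider a minimal sketch of how such bias surfaces in word embeddings, the numerical word representations most language-processing systems rely on. The vectors below are invented purely for illustration (they are not drawn from any real model), but published studies of embeddings trained on real text corpora have found skews of exactly this shape:

```python
import math

def cosine(u, v):
    """Cosine similarity: how strongly two word vectors are associated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional embeddings, invented for illustration only.
vectors = {
    "he":     [0.9, 0.1, 0.0],
    "she":    [0.1, 0.9, 0.0],
    "doctor": [0.8, 0.3, 0.5],
    "nurse":  [0.2, 0.8, 0.5],
}

# An automated pipeline sees only these numbers. The gendered skew
# below is the kind of association a human auditor would flag.
print(cosine(vectors["doctor"], vectors["he"]) >
      cosine(vectors["doctor"], vectors["she"]))   # → True
print(cosine(vectors["nurse"], vectors["she"]) >
      cosine(vectors["nurse"], vectors["he"]))     # → True
```

Nothing in the arithmetic is ‘wrong’ – the model has faithfully learned the associations present in its training text. That is precisely why a human reviewer, not another algorithm, is needed to judge which associations are acceptable.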

Semantics matter. Casey Miller and Kate Swift, two women who in 1980 wrote The Handbook of Nonsexist Writing – the first handbook of its kind – dedicated their lives to promoting gender equity in language. That was almost 40 years ago and, while technology has advanced exponentially in that time period, we've made little progress removing gender bias from our lexicon.

The challenge for AI is in programming a changing vocabulary into a binary numerical system. Human intervention is necessary to adjudicate the bias in the programmer, the context and the language itself. But gender bias is not just in the algorithms. It lies within the outcomes – predictions and recommendations – powered by the algorithms.

Common stereotypes are even being reinforced by AI's virtual assistants: those tasked with addressing simple questions (e.g. Apple’s Siri and Amazon’s Alexa) have female voices, while more sophisticated problem-solving bots (e.g. IBM’s Watson and Salesforce’s Einstein) have male names.

Gender bias is further exacerbated by the paucity of women working in the field. AI Now’s 2017 report identifies the lack of women and ethnic minorities working in AI as a foundational problem, one that is most likely having a material impact on AI systems and shaping their effects on society.

Human agents must question each stage of the process, and every question requires the perspective of a diverse, cross-disciplinary team – representing both the public and private sectors, and inclusive of race, gender, culture, education, age and socioeconomic status – to audit and monitor the system and what it generates. They don't need to know the answers – just how to ask the questions.

In some ways, 21st century machine learning needs to circle back to the ancient Socratic method of learning, based on asking and answering questions to stimulate critical thinking, draw out ideas and challenge underlying presumptions. Developers should understand that this scrutiny and reformulation helps them clean identified biases from their training data, run ongoing simulations based on empirical evidence and fine-tune their algorithms accordingly. This human audit would strengthen the reliability and accountability of AI and, ultimately, people’s trust in it.

By Elizabeth Isele, Associate Fellow, Global Economy and Finance, Royal Institute of International Affairs

Chatham House
