Computers Say ‘No’ But AI’s Decisions Must Be Fair & Transparent

American teachers have prevailed in a lawsuit against their school district over a computer program that assessed their performance.

The system rated teachers in Houston by comparing their students’ test scores against state averages. Those with high ratings won praise and even bonuses. Those who fared poorly faced the sack.

The program did not please everyone. Some teachers felt that the system marked them down without good reason, but they had no way of checking whether the program was fair or faulty: the company that built the software, the SAS Institute, regarded its algorithm as a trade secret and would not disclose its workings.

The teachers took their case to court and a federal judge ruled that use of the EVAAS (Educational Value Added Assessment System) program may violate their civil rights. In settling the case, the school district paid the teachers’ legal fees and agreed to stop using the software.

The law has treated others differently. When Wisconsin police arrested Eric Loomis in 2013 for driving a car used in a shooting, he was handed a hefty prison term in part because a computer algorithm known as Compas judged him to be at high risk of re-offending. Loomis challenged the sentence because he was unable to inspect the program. His argument was rejected by the Wisconsin Supreme Court.

The arrival of artificial intelligence has raised concerns over computerised decisions to a new high. Powerful AIs are proliferating in society, through banks, legal firms and businesses, into the National Health Service and government. It is not their popularity that is problematic; it is whether they are fair and can be held to account.

Researchers have documented a long list of AIs that make bad decisions either because of coding mistakes or biases ingrained in the data they trained on.

Bad AIs have flagged the innocent as terrorists, sent sick patients home from hospital, cost people their jobs and driving licences, had people removed from the electoral register, and chased the wrong men for child support bills. They have discriminated on the basis of names, addresses, gender and skin colour.

Bad intentions are not needed to make bad AI. A company might use an AI to search CVs for good job applicants after training it on information about people who rose to the top of the firm. If the culture at the business is healthy, the AI might well spot promising candidates, but if not, it might suggest people for interview who think nothing of trampling on their colleagues for a promotion. 

Opening the black box
How to make AIs fair, accountable and transparent is now one of the most crucial areas of AI research. Most AIs are made by private companies that do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust. That may not matter if the AI is recommending the next series of Game of Thrones. But the stakes are higher if the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.

Recently, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare and education to ban black box AIs because their decisions cannot be explained.
“We can’t accept systems in high stakes domains that aren’t accountable to the public,” said Kate Crawford, a co-founder of the institute. The report said AIs should pass pre-release trials and be monitored “in the wild” so that biases and other faults are swiftly corrected.

Tech firms know that coming regulations and public pressure may demand AIs that can explain their decisions, but developers want to understand them too. 

Klaus-Robert Müller, professor of machine learning at the Technical University of Berlin, has trained an AI to diagnose breast cancer using a variety of medical data. It is not good enough for the AI simply to spit out a diagnosis, he says. “It’s absolutely mandatory for the individual patient to know what the heck is going on.”

To understand how their AI reaches its decisions, Müller and his team developed an inspection program known as Layerwise Relevance Propagation, or LRP. It takes an AI’s decision and works backwards through the program’s neural network to reveal which inputs contributed most to the outcome.
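The core idea can be illustrated in a few lines of code. Below is a minimal NumPy sketch of a basic relevance-propagation rule (the LRP-epsilon variant) on a toy two-layer network; the weights, shapes and input are invented for illustration and bear no relation to Müller’s medical system.

```python
import numpy as np

# Toy two-layer ReLU network. All weights, shapes and inputs here
# are made up for illustration; this is not the actual diagnostic AI.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # 4 input features -> 6 hidden units
W2 = rng.normal(size=(6, 1))   # 6 hidden units -> 1 output score

def forward(x):
    """Run the network, keeping activations for the backward pass."""
    a1 = np.maximum(0, x @ W1)      # ReLU hidden layer
    score = a1 @ W2                 # linear output score
    return [x, a1], score

def lrp(activations, weights, relevance, eps=1e-6):
    """Redistribute the output score backwards, layer by layer."""
    for a, W in zip(reversed(activations), reversed(weights)):
        z = a @ W                                  # forward contributions
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser, avoids /0
        s = relevance / z                          # normalised relevance
        relevance = a * (s @ W.T)                  # share out to inputs
    return relevance

x = np.array([0.5, -1.2, 0.3, 2.0])          # one example
activations, score = forward(x)
print(lrp(activations, [W1, W2], score))     # per-feature relevance
```

The output assigns each input feature a share of the final score, which is how an inspection tool of this kind can point to the measurements that drove a particular diagnosis.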

A different approach, developed by Dr Sandra Wachter, a researcher at Oxford University, does not try to expose the full inner workings of an AI. Instead, her “counterfactual explanations” work out what it would have taken to change the AI’s decision. Suppose an AI turns down a mortgage applicant. A counterfactual explanation might reveal that the loan was denied because the person’s income was £30,000, but would have been approved had it been £45,000. That allows the decision to be challenged and tells the applicant what needs to change to get the loan.
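The mechanics are easy to sketch. The toy example below searches for the smallest income increase that flips a made-up loan model’s rejection; the model, its coefficients and its threshold are all invented for illustration and stand in for whatever black box a real lender might use.

```python
# Toy counterfactual search. The scoring rule below is invented;
# the technique treats the model as query-only, never opening it up.
def loan_model(income, debt):
    """Made-up black-box decision: approve if the score clears 1.4."""
    return 0.00004 * income - 0.00008 * debt >= 1.4

def counterfactual_income(income, debt, step=500, limit=200_000):
    """Find the smallest income increase that flips a rejection."""
    if loan_model(income, debt):
        return income                    # already approved
    for candidate in range(int(income), limit, step):
        if loan_model(candidate, debt):
            return candidate
    return None                          # nothing found in range

income, debt = 30_000, 5_000
needed = counterfactual_income(income, debt)
print(f"Denied at £{income:,}; approved from £{needed:,}.")
```

Under these made-up numbers the search reproduces the article’s example: denied at £30,000, approved from £45,000.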

For some researchers, the time to start regulating AI has arrived. “We have seen too many slip-ups, and AI is too powerful not to have government be part of the solution,” said Craig Fagan, policy director at Tim Berners-Lee’s Web Foundation. “It’s asking companies to take on a lot of responsibility to manage such rapid economic, political and social transformation and not have some government oversight.” 

Along with Luciano Floridi and Brent Mittelstadt at the Oxford Internet Institute, Wachter has called for a European AI watchdog to police the technology. The body would need powers to send independent investigators into organisations to scrutinise their AIs and extract meaningful explanations. 

To keep people safe, AIs could be certified for use in critical arenas such as medicine, criminal justice and driverless cars. “If we’re deploying them in critical infrastructure, we need to be sure they meet safety standards,” Wachter said.
“We need transparency as far as it is achievable, but above all we need to have a mechanism to redress whatever goes wrong, some kind of ombudsman,” said Floridi. “It’s only the government that can do that.”

Guardian

