Computer Says No
The rise of artificial intelligence (AI) has led to an explosion in the number of algorithms used by employers, banks, police forces and others, but these systems can, and do, make bad decisions that seriously affect people’s lives.
An artificial intelligence watchdog should be set up to make sure people are not discriminated against by the automated computer systems making important decisions about their lives, say experts.
But because technology companies are so secretive about how their algorithms work, to prevent rivals from copying them, they rarely disclose detailed information about how their AIs reach particular decisions.
In a new report, Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, call for a trusted third party body that can investigate AI decisions for people who believe they have been discriminated against.
“What we’d like to see is a trusted third party, perhaps a regulatory or supervisory body, that would have the power to scrutinise and audit algorithms, so they could go in and see whether the system is actually transparent and fair,” said Wachter.
It is not a new problem. Back in the 1980s, an algorithm used to sift student applications at St George’s Hospital Medical School in London was found to discriminate against women and people with non-European-looking names.
More recently, a veteran American Airlines pilot described how he had been detained at airports on 80 separate occasions after an algorithm repeatedly confused him with an IRA leader. Others who have fallen foul of AI errors have lost their jobs, had driving licences revoked, been kicked off the electoral register or been mistakenly chased for child support bills.
People who find themselves on the wrong end of a flawed AI can challenge the decision under national laws, but the report finds that the current laws protecting them are no longer effective enough.
An automated system designed to catch fathers who had fallen behind on child support payments targeted hundreds of innocent men in Los Angeles, who had to pay up or prove their innocence. One man, Walter Vollmer, was sent a bill for more than $200,000. His wife thought he had been leading a secret life and became suicidal.
More than 1,000 people a week are mistakenly flagged as terrorists by algorithms used at airports. Last year, a 22-year-old Asian DJ was denied a New Zealand passport because the automated system that processed his photograph decided his eyes were closed. He was not too put out, though. “It was a robot, no hard feelings. I got my passport renewed in the end,” he told Reuters.