Can We Stop Algorithms Telling Lies?

Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm you need to provide historical data as well as a definition of success.
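
To make that concrete, here is a minimal sketch of “historical data plus a definition of success”, using scikit-learn’s LogisticRegression as one stand-in choice of model. The applicant records, column meanings and repayment labels are entirely made up for illustration.

```python
# A minimal sketch of "historical data + a definition of success".
# All column names, numbers and labels here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Historical data: each row describes a past applicant.
X_history = [
    [650, 2, 0],   # [credit_score, years_employed, missed_payments]
    [720, 5, 0],
    [580, 1, 3],
    [690, 4, 1],
]
# The "definition of success": did the applicant repay the loan?
y_success = [1, 1, 0, 1]

model = LogisticRegression().fit(X_history, y_success)

# The trained model now scores new people based on historical patterns.
print(model.predict_proba([[610, 3, 2]])[0][1])  # estimated repayment probability
```

The point is not the particular model but the recipe: the data encodes the past, and the label encodes whatever the operator has decided to call success.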

We’ve seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for such an algorithm is a predictable market move, and it watches for patterns that have historically happened just before that move.
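
As a toy illustration of that pattern-hunting, the sketch below counts, in an invented price series, how often each short run of up/down moves was followed by a rise. The price data and the two-move window are arbitrary choices for illustration; a real trading system uses vastly richer inputs, but the logic of “find patterns that preceded the move you want” is the same.

```python
# Toy sketch: which short patterns of past moves have historically preceded a rise?
from collections import Counter

prices = [100, 101, 100, 102, 103, 102, 104, 103, 105, 106]  # invented series
moves = ["up" if b > a else "down" for a, b in zip(prices, prices[1:])]

# Count what followed each 2-move pattern in the historical data.
followed_by_up = Counter()
seen = Counter()
for i in range(len(moves) - 2):
    pattern = (moves[i], moves[i + 1])
    seen[pattern] += 1
    if moves[i + 2] == "up":
        followed_by_up[pattern] += 1

# "Success" for this algorithm: the latest pattern has usually preceded a rise.
latest = (moves[-2], moves[-1])
if seen[latest]:
    print(f"P(next move up | {latest}) = {followed_by_up[latest] / seen[latest]:.2f}")
```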
Since 2008, we’ve heard less about algorithms in finance, and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people, profiling their online behaviour, location, or answers to questionnaires, and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.

The recent proliferation of big data models has gone largely unnoticed by the average person, but it’s safe to say that most of the important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system.

Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

Consider a four-layer hierarchy of bad algorithms. At the top are the unintentional problems that reflect cultural biases. For example, when Harvard professor Latanya Sweeney found that Google searches for names perceived to be black generated ads associated with criminal activity, we can assume that no Google engineer was writing racist code.

In fact, the ads were trained to be bad by previous users of Google search, who were more likely to click on a criminal-records ad when they searched for a black-sounding name. Another example: the Google image search results for “unprofessional hair”, which returned almost exclusively black women, were similarly trained by the people posting or clicking on search results over time.
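
The sketch below shows how that feedback loop can arise with no biased rule anywhere in the code: a toy ad server that simply ranks ads by historical click-through rate per query keeps serving whatever past users clicked. The queries, ad names and click counts are invented for illustration.

```python
# Toy ad server: rank ads purely by observed click-through rate per query.
# No one writes a biased rule; the bias arrives through which ads were clicked.
from collections import defaultdict

clicks = defaultdict(int)       # (query, ad) -> clicks
impressions = defaultdict(int)  # (query, ad) -> times shown

def record(query, ad, clicked):
    impressions[(query, ad)] += 1
    if clicked:
        clicks[(query, ad)] += 1

def best_ad(query, ads):
    # Pick the ad with the highest observed click-through rate for this query.
    def ctr(ad):
        shown = impressions[(query, ad)]
        return clicks[(query, ad)] / shown if shown else 0.0
    return max(ads, key=ctr)

# If historical users clicked one ad more often for one group of queries,
# the "neutral" ranking rule learns to keep serving that ad for that group.
record("query_a", "arrest_records_ad", clicked=True)
record("query_a", "neutral_ad", clicked=False)
print(best_ad("query_a", ["arrest_records_ad", "neutral_ad"]))
```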

One layer down we come to algorithms that go bad through neglect. These would include scheduling programs that prevent people who work minimum wage jobs from leading decent lives. The algorithms treat them like cogs in a machine, sending them to work at different times of the day and on different days each week, preventing them from having regular childcare, a second job, or going to night school. They are brutally efficient, hugely scaled, and largely legal, collecting pennies on the backs of workers.
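
A crude sketch of that kind of scheduler is below: it fills forecast demand at minimum staffing and is indifferent to whether anyone’s hours repeat from week to week. The demand figures and worker names are invented; real systems are far more elaborate, but the neglect lies in what is left out of the objective, not in what is put in.

```python
# Toy scheduler: cover forecast demand with no regard for schedule consistency.
import itertools

workers = ["ana", "bo", "cy"]
# Forecast staff needed per (day, shift); the optimiser only sees these numbers.
demand = {("mon", "morning"): 1, ("mon", "night"): 2,
          ("tue", "morning"): 2, ("tue", "night"): 1}

schedule = {w: [] for w in workers}
rota = itertools.cycle(workers)  # hand shifts out round-robin to whoever is next

for slot, needed in demand.items():
    for _ in range(needed):
        schedule[next(rota)].append(slot)

# Each worker ends up with a different, unpredictable mix of days and times:
# efficient against the forecast, unworkable for childcare or a second job.
for worker, slots in schedule.items():
    print(worker, slots)
```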

Or consider Google’s system for automatically tagging photos. It had a consistent problem whereby black people were being labelled gorillas. This represents neglect of a different nature, namely quality assessment of the product itself: they didn’t check that it worked on a wide variety of test cases before releasing the code.

The third layer consists of nasty but legal algorithms. For example, there were Facebook executives in Australia showing advertisers ways to find and target vulnerable teenagers. Awful but probably not explicitly illegal.

Indeed, online advertising in general can be seen as a spectrum: at one end, the wealthy are presented with luxury goods to buy; at the other, the poor and desperate are preyed upon by online payday lenders. Algorithms charge people more for car insurance if they don’t seem likely to comparison shop, and Uber just halted an algorithm it was using to predict how low an offer of pay could be, thereby reinforcing the gender pay gap.

Finally, there’s the bottom layer, which consists of intentionally nefarious and sometimes outright illegal algorithms. There are hundreds of private companies, including dozens in the UK, that offer mass surveillance tools. They are marketed as a way of locating terrorists or criminals, but they can be used to target and root out citizen activists. And because they collect massive amounts of data, predictive algorithms and scoring systems are used to separate the signal from the noise.

The legality of this industry is under debate, but a recent undercover operation by journalists at Al Jazeera exposed the relative ease with which middlemen representing repressive regimes in Iran and South Sudan were able to buy such systems. Observers have also criticised China’s social credit scoring system. Called “Sesame Credit”, it’s billed as mostly a credit score, but it may also function as a way of keeping tabs on an individual’s political opinions, and as a way of nudging people towards compliance.

Closer to home, there’s Uber’s “Greyball”, an algorithm invented specifically to avoid detection when the taxi service was operating illegally in a city. It used data to predict which riders were violating Uber’s terms of service, or which riders were undercover government officials. Telltale signs that Greyball picked up included opening the app multiple times in a single day and using a credit card tied to a police union.
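
Reduced to a toy rule, the kind of telltale-sign check described in the reporting might look like the sketch below. The threshold and field names are invented, the real system reportedly combined many more signals, and the “ghost cars” behaviour reflects how the tool was described in press accounts rather than any published code.

```python
def looks_like_enforcement(rider):
    # Telltale signs from the reporting, reduced to a toy rule: heavy use of
    # the app in one day, or a payment card tied to a police credit union.
    heavy_app_use = rider.get("app_opens_today", 0) > 10   # threshold is invented
    police_union_card = bool(rider.get("card_tied_to_police_union"))
    return heavy_app_use or police_union_card

def request_ride(rider, real_cars_nearby):
    # Flagged riders are "greyballed": reportedly shown a fake view of the app
    # in which cars appear but a ride never actually arrives.
    if looks_like_enforcement(rider):
        return []
    return real_cars_nearby

print(request_ride({"app_opens_today": 14}, ["car_1", "car_2"]))  # [] -> greyballed
print(request_ride({"app_opens_today": 2}, ["car_1", "car_2"]))   # normal service
```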

The most famous malicious and illegal algorithm we’ve discovered so far is the one used by Volkswagen in 11 million vehicles worldwide to deceive emissions tests, and in particular to hide the fact that the vehicles were emitting nitrogen oxides at up to 35 times the levels permitted by law. And although it seemed like simply a devious device, this qualifies as an algorithm as well. It was trained to identify and predict testing conditions versus road conditions, and to function differently depending on that result. And, like Greyball, it was designed to deceive.
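
Schematically, the logic described above amounts to a conditional like the sketch below: infer from sensor readings whether the car is on a test rig and switch emissions behaviour accordingly. The specific signals and thresholds shown are illustrative guesses, not the actual defeat-device code.

```python
# Schematic sketch of a defeat-device-style conditional; signals are illustrative.
def on_emissions_test(sensors):
    # A dynamometer test has the wheels turning while the steering wheel stays
    # still, and it follows a known, scripted speed profile.
    return (sensors["wheel_speed_kmh"] > 0
            and abs(sensors["steering_angle_deg"]) < 1.0
            and sensors["matches_standard_test_cycle"])

def emissions_mode(sensors):
    # Behave well when being watched, differently on the road.
    return "full_nox_controls" if on_emissions_test(sensors) else "reduced_controls"

print(emissions_mode({"wheel_speed_kmh": 50,
                      "steering_angle_deg": 0.2,
                      "matches_standard_test_cycle": True}))   # full_nox_controls
print(emissions_mode({"wheel_speed_kmh": 50,
                      "steering_angle_deg": 12.0,
                      "matches_standard_test_cycle": False}))  # reduced_controls
```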

So, what can we learn from the current, mature world of car makers in the context of illegal software?

First, similar types of software are being deployed by other car manufacturers that turn off emissions controls in certain settings. In other words, this was not a situation in which there was only one bad actor, but rather a standard operating procedure.

Next, the VW cheating started in 2009, which means it went undetected for five years. What else has been going on for five years? This line of thinking makes us start looking around, wondering which companies are currently hoodwinking regulators, evading privacy laws, or committing algorithmic fraud with impunity.

To put it another way: we’re all expecting cars to be self-driving within a few years, or a couple of decades at most. When that happens, can we expect international agreements on what the embedded self-driving car ethics will look like? Or will pedestrians be at the mercy of the car manufacturers to decide what happens in the case of an unexpected pothole? If we get rules, will they differ by country, or even by the country of the manufacturer?

If this sounds confusing for something as easy to observe as car crashes, imagine what’s going on under the hood, in the relatively obscure world of complex “deep learning” models.

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently.

When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around.

Source: The Guardian
