Can We Stop Algorithms Telling Lies?

Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm you need to provide two things: historical data and a definition of success.
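
To make that concrete, here is a minimal sketch of what “training an algorithm” means in practice: historical examples plus a definition of success. The loan-repayment framing, the feature names and the numbers are invented for illustration; any off-the-shelf classifier would do.

```python
# A minimal sketch of "training an algorithm": historical data plus a
# definition of success. The loan-repayment framing and the feature values
# are illustrative assumptions, not taken from the article.
from sklearn.linear_model import LogisticRegression

# Historical data: each row describes a past applicant
# [income in £k, years at current address, number of missed payments]
historical_features = [
    [32, 5, 0],
    [18, 1, 3],
    [55, 9, 0],
    [24, 2, 4],
]

# The "definition of success": did the applicant repay their loan? (1 = yes)
repaid = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(historical_features, repaid)  # learn patterns from the past

# The trained model now scores people it has never seen
new_applicant = [[40, 3, 1]]
print(model.predict_proba(new_applicant)[0][1])  # predicted probability of repayment
```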

We’ve seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for that algorithm is a predictable market move, and the algorithm is vigilant for patterns that have historically happened just before that move.
Since 2008, we’ve heard less about algorithms in finance and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people, profiling their online behaviour, their location, or their answers to questionnaires, and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.

The recent proliferation of big data models has gone largely unnoticed by the average person, but it’s safe to say that most of the important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system.

Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

Think of bad algorithms as a four-layer hierarchy. At the top are the unintentional problems that reflect cultural biases. For example, when Harvard professor Latanya Sweeney found that Google searches for names perceived to be black generated ads associated with criminal activity, we can assume that no Google engineer was writing racist code.

In fact, the ads were trained to be bad by previous users of Google search, who were more likely to click on a criminal-records ad when they searched for a black-sounding name. Another example: the Google image search result for “unprofessional hair”, which returned almost exclusively pictures of black women, was similarly trained by the people posting or clicking on search results over time.
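
That feedback loop is easy to reproduce. Here is a rough sketch of click-through-rate ad selection: it is not Google’s actual system, and the queries and ad names are placeholders, but it shows how past clicks alone can decide which ad gets served next.

```python
# A simplified sketch of how click feedback can bake bias into ad selection.
# This illustrates the feedback-loop mechanism only; the data, queries and
# ad names are invented, and this is not any real ad-serving system.
from collections import defaultdict

# Running counts of impressions and clicks per (query, ad) pair
impressions = defaultdict(int)
clicks = defaultdict(int)

def record_click(query, ad, clicked):
    """Update the history every time an ad is shown."""
    impressions[(query, ad)] += 1
    if clicked:
        clicks[(query, ad)] += 1

def choose_ad(query, candidate_ads):
    """Serve the ad with the highest observed click-through rate for this query."""
    def ctr(ad):
        shown = impressions[(query, ad)]
        return clicks[(query, ad)] / shown if shown else 0.5  # optimistic prior for unseen ads
    return max(candidate_ads, key=ctr)

# Simulated history: users clicked ad_b more often for this query, so the
# system keeps serving ad_b, regardless of whether those clicks were fair.
record_click("some name", "ad_a", clicked=False)
record_click("some name", "ad_b", clicked=True)
print(choose_ad("some name", ["ad_a", "ad_b"]))  # -> ad_b
```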

One layer down we come to algorithms that go bad through neglect. These would include scheduling programs that prevent people who work minimum wage jobs from leading decent lives. The algorithms treat them like cogs in a machine, sending them to work at different times of the day and on different days each week, preventing them from having regular childcare, a second job, or going to night school. They are brutally efficient, hugely scaled, and largely legal, collecting pennies on the backs of workers.

Or consider Google’s system for automatically tagging photos. It had a consistent problem whereby black people were being labelled gorillas. This represents neglect of a different nature, namely quality assessment of the product itself: they didn’t check that it worked on a wide variety of test cases before releasing the code.

The third layer consists of nasty but legal algorithms. For example, Facebook executives in Australia showed advertisers ways to find and target vulnerable teenagers. Awful, but probably not explicitly illegal.

Indeed, online advertising in general can be seen as a spectrum: at one end, the wealthy are presented with luxury goods to buy; at the other, the poor and desperate are preyed upon by online payday lenders. Algorithms charge people more for car insurance if they don’t seem likely to comparison shop, and Uber just halted an algorithm it was using to predict how low an offer of pay could be, which had the effect of reinforcing the gender pay gap.

Finally, there’s the bottom layer, which consists of intentionally nefarious and sometimes outright illegal algorithms. There are hundreds of private companies, including dozens in the UK, that offer mass surveillance tools. These are marketed as a way of locating terrorists or criminals, but they can also be used to target and root out citizen activists. And because they collect massive amounts of data, predictive algorithms and scoring systems are used to separate the signal from the noise.

How much of this industry operates illegally is still under debate, but a recent undercover operation by journalists at Al Jazeera exposed the relative ease with which middlemen representing repressive regimes in Iran and South Sudan were able to buy such systems. Observers have also criticised China’s social credit scoring system, “Sesame Credit”. It’s billed as being mostly a credit score, but it may also function as a way of keeping tabs on an individual’s political opinions, and as a way of nudging people towards compliance.

Closer to home, there’s Uber’s “Greyball”, an algorithm invented specifically to avoid detection when the taxi service was operating illegally in a city. It used data to predict which riders were violating Uber’s terms of service, or which riders were undercover government officials. Telltale signs that Greyball picked up on included opening the app multiple times in a single day and using a credit card tied to a police union.
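
To show how little machinery that takes, here is a rough sketch of that kind of rule-based flagging. The Rider type, the fields, the thresholds and the flagging rule are all invented for illustration; this is not Uber’s actual code.

```python
# A hedged sketch of rule-based rider flagging in the spirit of the Greyball
# description above. Every field, threshold and signal here is an invented
# illustration, not Uber's real implementation.
from dataclasses import dataclass

@dataclass
class Rider:
    app_opens_today: int   # how many times the rider opened the app today
    card_issuer: str       # issuer of the credit card on the account

def looks_like_enforcement(rider: Rider) -> bool:
    """Combine telltale signs into a crude suspicion flag."""
    signals = [
        rider.app_opens_today > 10,             # repeatedly opening and closing the app
        "police" in rider.card_issuer.lower(),  # card tied to a police organisation
    ]
    # Flag the rider if every signal fires; flagged riders would then be
    # shown a fake version of the app with no cars available.
    return all(signals)

rider = Rider(app_opens_today=14, card_issuer="Police Credit Union")
print(looks_like_enforcement(rider))  # True
```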

The most famous malicious and illegal algorithm we’ve discovered so far is the one used by Volkswagen in 11 million vehicles worldwide to cheat emissions tests, and in particular to hide the fact that the vehicles were emitting nitrogen oxides at up to 35 times the levels permitted by law. And although it seemed like simply a devious device, this qualifies as an algorithm as well. It was trained to identify and predict testing conditions versus road conditions, and to behave differently depending on the result. And, like Greyball, it was designed to deceive.
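
The core of that logic is depressingly small. Here is a hedged sketch of a defeat device: the signals and thresholds are illustrative guesses, not Volkswagen’s real implementation, but they show how a car can infer that it is on a test rig and change its behaviour accordingly.

```python
# A hedged sketch of defeat-device logic: infer whether the car is being
# tested, then switch emissions behaviour. The signals and thresholds are
# illustrative assumptions, not Volkswagen's actual code.
def on_test_rig(speed_kmh: float, steering_angle_deg: float, minutes_running: float) -> bool:
    """Dyno tests spin the wheels at speed while the steering wheel never moves."""
    return speed_kmh > 0 and abs(steering_angle_deg) < 1.0 and minutes_running < 30

def emissions_mode(speed_kmh: float, steering_angle_deg: float, minutes_running: float) -> str:
    if on_test_rig(speed_kmh, steering_angle_deg, minutes_running):
        return "full NOx treatment"   # clean enough to pass the test
    return "reduced NOx treatment"    # better performance, illegal emissions on the road

print(emissions_mode(50, 0.0, 10))   # looks like a lab test -> full NOx treatment
print(emissions_mode(50, 12.0, 45))  # ordinary driving      -> reduced NOx treatment
```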

So, what can we learn from the current, mature world of car makers in the context of illegal software?

First, other car manufacturers are deploying similar types of software that turn off emissions controls in certain settings. In other words, this was not a situation in which there was only one bad actor, but rather a standard operating procedure.

Next, the VW cheating started in 2009, which means it went undetected for five years. What else has been going on for five years? This line of thinking makes us start looking around, wondering which companies are currently hoodwinking regulators, evading privacy laws, or committing algorithmic fraud with impunity.

Put it another way: we’re all expecting cars to be self-driving within a few years, or a couple of decades at most. When that happens, can we expect international agreements on what the embedded self-driving car ethics will look like? Or will pedestrians be at the mercy of whatever the car manufacturers decide should happen in the case of an unexpected pothole? If we get rules, will they differ by country, or even by the country of the manufacturer?

If this sounds confusing for something as easy to observe as car crashes, imagine what’s going on under the hood, in the relatively obscure world of complex “deep learning” models.

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently.

When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around.

Guardian:
