Artificial Intelligence Is A Fascist's Dream

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralise power and target certain populations.

That was the warning from Microsoft Research’s Kate Crawford in her recent SXSW session, titled Dark Days: AI and the Rise of Fascism. Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

All of these movements have shared characteristics, including the desire to centralise power, track populations, demonise outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of that playbook, she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.

She described a controversial piece of research from Shanghai Jiao Tong University in China, whose authors claimed to have developed a system that could predict criminality based on someone’s facial features.

The machine was trained on Chinese government ID photos, analyzing the faces of criminals and non-criminals to identify predictive features. The researchers claimed it was free from bias.

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research, it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hands on.”
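
The dynamic Crawford describes can be made concrete with a toy simulation. The sketch below is purely illustrative, with every number invented, and assumes Python with NumPy and scikit-learn: facial appearance has no causal link to offending, but the historical conviction labels were skewed against unusual-looking faces, so a classifier trained on those labels duly “discovers” that faces predict criminality.

```python
# Illustrative sketch with invented data: a classifier trained on labels that
# were produced by biased human decisions simply learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# How "atypical" each face looks; in this simulation it has no causal
# relationship with actual offending.
atypicality = rng.normal(size=n)
offended = rng.random(n) < 0.10

# Historical "criminal" labels: biased decision-makers were more likely to
# convict people with unusual faces, so the labels mix behaviour with bias.
p_convicted = np.where(offended, 0.5, 0.02) * (1 + 0.8 * (atypicality > 1))
convicted = rng.random(n) < p_convicted

# A model trained on those labels treats facial atypicality as "predictive".
model = LogisticRegression().fit(atypicality.reshape(-1, 1), convicted)
print("learned coefficient on facial atypicality:", model.coef_[0][0])
# A clearly positive coefficient reflects the bias baked into the labels,
# not any real relationship between faces and behaviour.
```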

Crawford then outlined the “nasty history” of people using facial features to “justify the unjustifiable”. The principles of phrenology, a pseudoscience that developed across Europe and the US in the 19th century, were used as part of the justification of both slavery and the Nazi persecution of Jews.

With AI, this type of discrimination can be masked in a black box of algorithms, as appears to be the case with Faception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white-looking women with trendy haircuts are “brand promoters”.

Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid.

Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,” Crawford said, mentioning research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was done in 2013, and AI has made huge leaps since then.
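
The Cambridge approach was, at heart, fairly simple: reduce a huge user-by-Like matrix to a handful of components and fit a linear model on top of them. The sketch below follows that general shape on entirely synthetic data, so the trait, the Like affinities and the resulting accuracy are placeholders rather than the study’s actual data or code, and it assumes scikit-learn is installed.

```python
# Minimal sketch of the Likes-to-traits idea on synthetic data: reduce a
# sparse user-by-Like matrix with truncated SVD, then fit logistic regression.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users, n_likes = 4_000, 1_000

# Hypothetical binary trait (e.g. one of two religious affiliations).
trait = rng.random(n_users) < 0.5

# Each page has some affinity with the trait, so Like patterns carry signal.
affinity = rng.normal(size=n_likes)
logits = -3.0 + 0.5 * np.outer(np.where(trait, 1.0, -1.0), affinity)
likes = rng.random((n_users, n_likes)) < 1.0 / (1.0 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    likes.astype(float), trait, test_size=0.25, random_state=0)

# Dimensionality reduction followed by a simple linear classifier.
svd = TruncatedSVD(n_components=100, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(svd.fit_transform(X_train), y_train)
preds = clf.predict(svd.transform(X_test))
print(f"held-out accuracy on the synthetic trait: {accuracy_score(y_test, preds):.2f}")
```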

Crawford was also concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program: the predictive policing did not reduce crime, but it did increase the harassment of people in “hotspot” areas. Earlier this year the justice department concluded that Chicago’s police had for years regularly used “unlawful force”, and that black and Hispanic neighborhoods were most affected.
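
Part of the problem is a feedback loop: the system sends officers to wherever past incidents were recorded, and incidents are mostly recorded wherever officers are sent. The toy simulation below is not a description of Chicago’s actual system, just an illustration of that loop under invented numbers, assuming NumPy is available.

```python
# Illustrative feedback-loop sketch (invented numbers, not Chicago's system):
# two areas have identical underlying crime rates, but the one with more
# historical records is flagged as the "hotspot" and receives most patrols,
# which is also where most future incidents then get recorded.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.1                       # same underlying rate in both areas
recorded = np.array([60.0, 40.0])     # skewed historical records
patrols_per_day = 100

for day in range(365):
    hotspot = np.argmax(recorded)             # area with most recorded incidents
    patrols = np.full(2, 0.2 * patrols_per_day)
    patrols[hotspot] = 0.8 * patrols_per_day  # hotspot gets the bulk of patrols
    # Incidents are only recorded where officers are present to observe them.
    recorded += rng.poisson(patrols * true_rate)

print("recorded incidents per area after a year:", recorded)
# The initial skew never corrects itself: the system keeps "confirming" the
# hotspot it helped create, even though the two true rates are identical.
```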

Another worry was the manipulation of political beliefs and the shifting of voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks that what the firm promises, using thousands of data points on people to work out how to manipulate their views, will be possible “in the next few years”.

“This is a fascist’s dream,” she said. “Power without accountability.”

Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants. “It’s the most powerful engine of mass deportation this country has ever seen,” she said.

But what do you do if the system has got something wrong? What if it has incorrect data?

Crawford argues that we have to make these AI systems more transparent and accountable. “The ocean of data is so big. We have to map their complex subterranean and unintended effects.”

Crawford has founded AI Now, a research community focused on the social impacts of artificial intelligence, to do just this.

“We want to make these systems as ethical as possible and free from unseen biases.”

Guardian
