Frankenstein’s Paperclips

As doomsday scenarios go, it does not sound terribly frightening. The “paperclip maximiser” is a thought experiment proposed by Nick Bostrom, a philosopher at Oxford University.

Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. 

Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr. Bostrom argues, but poses an existential threat to humanity.

The idea of machines that turn on their creators is not new, going back to Mary Shelley’s “Frankenstein” (1818) and earlier; nor is the concept of an AI undergoing an “intelligence explosion” through repeated self-improvement, which was first suggested in 1965. But recent progress in AI has caused renewed concern, and Mr. Bostrom has become the best-known proponent of the dangers of advanced AI or, as he prefers to call it, “superintelligence”, which is also the title of his bestselling book.

His interest in AI grew out of his analysis of existential threats to humanity. Unlike a pandemic, an asteroid strike or a supervolcano, the emergence of superintelligence is something that mankind has some control over. Mr. Bostrom’s book prompted Elon Musk to declare that AI is “potentially more dangerous than nukes”.

Worries about its safety have also been expressed by Stephen Hawking, a physicist, and Lord Rees, a former head of the Royal Society, Britain’s foremost scientific body. All three of them, and many others in the AI community, signed an open letter calling for research to ensure that AI systems are “robust and beneficial”—i.e., do not turn evil. Few would disagree that AI needs to be developed in ways that benefit humanity, but agreement on how to go about it is harder to reach.

Mr. Musk thinks openness is the key. In December 2015 he co-founded OpenAI, a new research institute with more than $1 billion in funding that will carry out AI research and make all its results public. “We think AI is going to have a massive effect on the future of civilisation, and we’re trying to take the set of actions that will steer that to a good future,” he says. In his view, AI should be as widely distributed as possible. Rogue AIs in science fiction, such as HAL 9000 in “2001: A Space Odyssey” and Skynet in the “Terminator” films, are big, centralised machines, which is what makes them so dangerous when they turn evil. A more distributed approach will ensure that the benefits of AI are available to everyone, and the consequences less severe if an AI goes bad, Mr. Musk argues.

Not everyone agrees with this. Some claim that Mr. Musk’s real worry is market concentration—a Facebook or Google monopoly in AI, say—though he dismisses such concerns as “petty”. 

For the time being, Google, Facebook and other firms are making much of their AI source code and research freely available in any case. And Mr. Bostrom is not sure that making AI technology as widely available as possible is necessarily a good thing. In a recent paper he notes that the existence of multiple AIs “does not guarantee that they will act in the interests of humans or remain under human control”, and that proliferation could make the technology harder to control and regulate.

Fears about AIs going rogue are not widely shared by people at the cutting edge of AI research. “A lot of the alarmism comes from people not working directly at the coal face, so they think a lot about more science-fiction scenarios,” says Demis Hassabis of DeepMind. “I don’t think it’s helpful when you use very emotive terms, because it creates hysteria.” Mr. Hassabis considers the paperclip scenario to be “unrealistic”, but thinks Mr. Bostrom is right to highlight the question of AI motivation. 

How to specify the right goals and values for AIs, and how to ensure that they remain stable over time, are interesting research questions, he says. (DeepMind has just published a paper with Mr. Bostrom’s Future of Humanity Institute about adding “off switches” to AI systems.) A meeting of AI experts held in 2009 in Asilomar, California, also concluded that AI safety was a matter for research, but not immediate concern. The meeting’s venue was significant, because biologists met there in 1975 to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.

Sci-fi scenarios

Mr. Bostrom responds that several AI researchers do in fact share his concerns, but stresses that he merely wishes to highlight the potential risks posed by AI; he is not claiming that it is dangerous now. For his part, Andrew Ng of Baidu says worrying about super-intelligent AIs today “is like worrying about overpopulation on Mars when we have not even set foot on the planet yet”, a subtle dig at Mr. Musk. (When he is not worrying about AIs, Mr. Musk is trying to establish a colony on Mars, as an insurance policy against human life being wiped out on Earth.)

AI scares people, says Marc Andreessen, a venture capitalist, because it combines two deep-seated fears: the Luddite worry that machines will take all the jobs, and the Frankenstein scenario that AIs will “wake up” and do unintended things. Both “keep popping up over and over again”. And decades of science fiction have made it a more tangible fear than, say, climate change, which poses a much greater threat.

AI researchers point to several technical reasons why fear of AI is overblown, at least in its current form. First, intelligence is not the same as sentience or consciousness, says Mr. Ng, though all three concepts are commonly elided. The idea that machines will “one day wake up and change their minds about what they will do” is just not realistic, says Francesca Rossi, who works on the ethics of AI at IBM. 

Second, an “intelligence explosion” is considered unlikely, because it would require an AI to build each new version of itself in less time than the previous one, even as its intelligence grows. Yet most computing problems, even much simpler ones than designing an AI, take much longer as you scale them up.
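
One way to see the force of this objection is a back-of-the-envelope calculation. The scaling law below is a purely hypothetical assumption chosen for illustration, not something any researcher quoted here has proposed: suppose the time needed to design an AI grows as a power of its capability, and each generation multiplies capability by a fixed factor.

    % Hypothetical assumptions, for illustration only:
    %   T(s) = c * s^alpha  -- time to design an AI of capability s, with alpha > 0
    %   s_n  = k^n * s_0    -- capability grows by a factor k > 1 each generation
    \[
      t_n = T(s_n) = c\,(k^{n} s_0)^{\alpha}
      \quad\Longrightarrow\quad
      \sum_{n=0}^{\infty} t_n = c\,s_0^{\alpha}\sum_{n=0}^{\infty} k^{\alpha n} = \infty .
    \]
    % Each generation takes longer than the last, so the total time diverges. A true
    % "explosion" (infinitely many generations in finite time) would need the opposite:
    % design time shrinking roughly geometrically even as capability grows.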

Third, although machines can learn from their past experiences or environments, they are not learning all the time. A self-driving car, for example, is not constantly retraining itself on each journey. Instead, deep-learning systems have a training phase in which neural-network parameters are adjusted to build a computational model that can perform a particular task, a number-crunching process that may take several days. 

The resulting model is then deployed in a live system, where it can run using much less computing horsepower, allowing deep-learning models to be used in cars, drones, apps and other products. But those cars, drones and so on do not learn in the wild. Instead, the data they gather while out on a mission are sent back and used to improve the model, which then has to be redeployed. So an individual system cannot learn bad behaviour in a particular environment and “go rogue”, because it is not actually learning at the time.
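
The lifecycle described above can be sketched in a few lines of code. This is a minimal illustration of the train-then-deploy pattern, assuming a toy one-parameter model and made-up numbers; real systems fit deep networks offline over days, then ship a frozen copy of the parameters to cars, drones and apps.

    def train(samples, steps=1000, lr=0.01):
        """Offline training phase: fit y = w * x by gradient descent."""
        w = 0.0
        for _ in range(steps):
            for x, y in samples:
                w -= lr * 2 * (w * x - y) * x   # gradient step on squared error
        return w                                # the frozen model parameter

    def deploy(w, x):
        """Live system: uses the frozen parameter and does not learn on the road."""
        return w * x

    road_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # gathered on earlier missions
    w = train(road_data)                               # number-crunching in the data centre
    prediction = deploy(w, 4.0)                        # inference only; w never changes here

    road_data.append((4.0, 8.1))                       # new field data is logged, sent back...
    w = train(road_data)                               # ...and used to retrain offline
    # the improved model must then be redeployed; no individual car "learns" on its own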

The black-box problem

Amid worries about rogue AIs, there is a risk that nearer-term ethical and regulatory concerns about AI technologies are being overlooked. Facial-recognition systems based on deep learning could make surveillance systems far more powerful, for example. Google’s FaceNet can determine with 99.6% accuracy whether two pictures show the same person (humans score around 98%). Facebook’s DeepFace is almost as good. When the social-network giant recently launched an app called Moments, which automatically gathers together photos of the same person, it had to disable some of its facial-recognition features in Europe to avoid violating Irish privacy laws.
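
For a rough sense of how such verification systems work: a deep network maps each face to a short list of numbers (an “embedding”), and two photos are judged to show the same person when their embeddings lie close together. The sketch below is a hypothetical illustration with invented numbers, not Google’s or Facebook’s code.

    import math

    # Made-up embeddings; a real system computes roughly 128 numbers per face by
    # running the image through a trained deep network.
    EMBEDDINGS = {
        "alice_passport.jpg": [0.11, 0.80, 0.32],
        "alice_party.jpg":    [0.14, 0.77, 0.35],   # same person, similar numbers
        "bob_profile.jpg":    [0.90, 0.05, 0.61],   # different person, distant numbers
    }

    def same_person(photo_a, photo_b, threshold=0.5):
        a, b = EMBEDDINGS[photo_a], EMBEDDINGS[photo_b]
        distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return distance < threshold                 # close embeddings => same person

    print(same_person("alice_passport.jpg", "alice_party.jpg"))   # True
    print(same_person("alice_passport.jpg", "bob_profile.jpg"))   # False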

In Russia, meanwhile, there has been a recent outcry over an app called FindFace, which lets users take photos of strangers and then determines their identity from profile pictures on social networks. The app’s creators say it is merely a way to make contact with people glimpsed on the street or in a bar. Russian police have started using it to identify suspects and witnesses. The risk is clear: the end of public anonymity. Gigapixel images of a large crowd, taken from hundreds of metres away, can be analysed to find out who went on a march or protest, even years later. In effect, deep learning has made it impossible to attend a public gathering without leaving a record, unless you are prepared to wear a mask. (A Japanese firm has just started selling Privacy Visor, a funny-looking set of goggles designed to thwart facial-recognition systems.)

Deep learning, with its ability to spot patterns and find clusters of similar examples, has obvious potential to fight crime—and allow authoritarian governments to spy on their citizens. Chinese authorities are analysing people’s social-media profiles to assess who might be a dissident, says Patrick Lin, a specialist in the ethics of AI at Stanford Law School. In America, meanwhile, police in Fresno, California, have been testing a system called “Beware” that works out how dangerous a suspect is likely to be, based on an analysis of police files, property records and social-media posts. 

Another system, called COMPAS, provides guidance when sentencing criminals by predicting how likely they are to reoffend. Such systems, which are sure to be powered by deep learning soon if they are not already, challenge “basic notions about due process”, says Mr. Lin.

A related concern is that as machine-learning systems are embedded into more and more business processes, they could unwittingly discriminate against particular groups of people. In one infamous example, Google had to apologise when the automatic tagging system in its Photos app labelled black people as “gorillas”. COMPAS has been accused of discriminating against black people.

AI technology “is already touching people’s lives, so it’s important that it does not incorporate biases”, says Richard Socher of MetaMind. Nobody sets out to make a system racist, he says, but “if it trains on terrible data it will make terrible predictions.” Increasingly it is not just intellectual work, but also moral thinking and decision-making, says Mr. Lin, that is being done “by what are in effect black boxes”.
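
Mr. Socher’s point can be made concrete with a toy example. The figures below are invented and the “model” is just a majority vote per group, but the mechanism is the one he describes: a system trained on skewed historical decisions learns the skew and repeats it.

    from collections import Counter

    # Invented historical decisions, skewed against group "B"
    history = [("A", "approve")] * 90 + [("A", "deny")] * 10 \
            + [("B", "approve")] * 40 + [("B", "deny")] * 60

    def learn_rule(records):
        """Learn the most common past outcome for each group."""
        votes = {}
        for group, outcome in records:
            votes.setdefault(group, Counter())[outcome] += 1
        return {group: counts.most_common(1)[0][0] for group, counts in votes.items()}

    model = learn_rule(history)
    print(model)   # {'A': 'approve', 'B': 'deny'}: the bias in the data becomes the rule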

Fortunately, there are ways to look inside these black boxes and determine how they reach their conclusions. An image-processing neural network, for example, can be made to highlight the regions of an input image which most influenced its decision. And many researchers are working on varieties of a technique called “rule extraction” which allows neural networks to explain their reasoning, in effect. The field in which this problem has received most attention is undoubtedly that of self-driving cars.
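
As an illustration of the first of those techniques, here is a minimal saliency-map sketch, assuming PyTorch; the tiny untrained network and the random image are placeholders rather than any system mentioned above. The gradient of the network’s top score with respect to each pixel shows which regions most influenced the decision.

    import torch
    import torch.nn as nn

    model = nn.Sequential(                    # stand-in for a trained image classifier
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    model.eval()

    image = torch.rand(1, 3, 64, 64, requires_grad=True)   # placeholder input image
    score = model(image)[0].max()             # score of the highest-ranked class
    score.backward()                          # gradient of that score w.r.t. each pixel

    # Pixels with large gradient magnitude influenced the decision most.
    saliency = image.grad.abs().max(dim=1)[0] # one heat-map value per pixel location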

Such vehicles raise other ethical issues, too, particularly when it comes to how they should behave in emergencies. For example, should a self-driving car risk injuring its occupants to avoid hitting a child who steps out in front of it? Such questions are no longer theoretical. Issues such as who is responsible in an accident, how much testing is required and how to set standards need to be discussed now, says Mr. Hassabis. 

Mr. Ng comes at the question from a different angle, suggesting that AI researchers have a moral imperative to build self-driving cars as quickly as possible in order to save lives: most of the 3,000 people who die in car accidents every day are victims of driver error. But even if self-driving cars are much safer, says Daniel Susskind, an economist at Oxford University, attitudes will have to change. People seem to tolerate road deaths caused by humans, but hold machines to much higher standards. “We compare machines to perfection, not to humans doing the same tasks,” he says.

Killer app

Many people are worried about the military use of AI, in particular in autonomous weapons that make life-and-death decisions without human intervention. Yoshua Bengio of the University of Montreal says he would like an “outright ban” on the military use of AI. Life-and-death decisions should be made by humans, he says, not machines—not least because machines cannot be held to account afterwards. 

Mr. Hassabis agrees. When Google acquired his firm, he insisted on a guarantee that its technology would not be used for military purposes. He and Mr. Bengio have both signed an open letter calling for a ban on “offensive autonomous weapons”. (Ronald Arkin of the Georgia Institute of Technology, by contrast, argues that AI-powered military robots might in fact be ethically superior to human soldiers; they would not rape, pillage or make poor judgments under stress.)

Another of Mr. Hassabis’s ideas, since borrowed by other AI firms, was to establish an ethics board at DeepMind, including some independent observers (though the company has been criticised for refusing to name the board’s members). Even if AI firms disagree with the alarmists, it makes sense for them to demonstrate that there are at least some things they think are worth worrying about, and to get involved in regulation before it is imposed from outside. 

But AI seems unlikely to end up with its own regulatory agency along the lines of America’s Federal Aviation Administration or Food and Drug Administration, because it can be applied to so many fields. It seems most likely that AI will require existing laws to be updated, rather than entirely new laws to be passed.

The most famous rules governing the behaviour of AI systems are of course the “Three Laws of Robotics” from Isaac Asimov’s robot stories. What made the stories interesting was that the robots went wrong in unexpected ways, because the laws simply do not work in practice. It will soon be time to agree on laws that do.

Economist
