Humans Should Ban Artificially Intelligent Weapons


Unlike self-aware computer networks, self-driving cars tricked out with machine guns are possible right now — as are any number of AI-augmented weapons far deadlier than their human-aimed counterparts. 
    
Unfortunately, much of the recent outcry against artificial-intelligence weapons has been confused, conjuring robot takeovers of mankind. This scenario is implausible in the near term, but AI weapons actually do present a danger not posed by conventional, human-controlled weapons, and there is good reason to ban them.

We’ve already seen a glimpse of the future of artificial intelligence in Google’s self-driving cars. Now imagine that some fiendish crime syndicate were to steal such a car, strap a gun to the top, and reprogram it to shoot people. That’s an AI weapon.

The potential of these weapons has not escaped the imaginations of governments. This year we saw the US Navy’s announcement of plans to develop autonomous-drone weapons, as well as the announcement of both the South Korean Super aEgis II automatic turret and the Russian Platform-M automatic combat machine.
But governments aren’t the only players making AI weapons. Imagine a GoPro-bearing quadcopter drone, the kind of thing anyone can buy. Now imagine a simple piece of software that allows it to fly automatically. The same nefarious crime syndicate that could weaponise a driverless car is just inches away from attaching a gun to a drone and programming it to kill people in a crowded public place.

This is the immediate danger with AI weapons: They are easily converted into indiscriminate death machines, far more dangerous than the same weapons with a human at the helm.

Stephen Hawking and Max Tegmark, alongside Elon Musk and many others, have all signed a Future of Life petition to ban AI weapons, hosted by the institution that received a $10 million donation from Mr. Musk in January. This followed a UN meeting on ‘killer robots’ in April that did not lead to any lasting policy decisions. The letter accompanying the Future of Life petition argues that the danger of AI weapons is immediate, requiring action within the next few years to avoid disaster. Unfortunately, it doesn’t explain what sorts of AI weapons are on the immediate horizon.
Many have expressed concerns about apocalyptic Terminator-like scenarios, in which robots develop the human-like ability to interact with the world all by themselves and attempt to conquer it. For example, physicist and Astronomer Royal Sir Martin Rees warned of catastrophic scenarios like “dumb robots going rogue or a network that develops a mind of its own.” His Cambridge colleague and philosopher Huw Price has voiced a similar concern that humans may not survive when intelligence “escapes the constraints of biology.” Together the two helped create the Centre for the Study of Existential Risk at the University of Cambridge to help avoid such dramatic threats to human existence.
These scenarios are certainly worth studying. However, they are far less plausible and far less immediate than the AI-weapons danger on the horizon now.

How close are we to developing human-like artificial intelligence? By almost all standards, the answer is: not very close. The University of Reading chatbot ‘Eugene Goostman’ was reported by many media outlets to be truly intelligent because it managed to fool a few humans into thinking it was a real 13-year-old boy. However, the chatbot turned out to be miles away from real human-like intelligence, as computer scientist Scott Aaronson demonstrated by destroying Eugene with his first question, “Which is bigger, a shoebox or Mt Everest?” After the chatbot completely flubbed that answer, and then stumbled on “How many legs does a camel have?”, the emperor was revealed to be without clothes.
In spite of all this, we, the authors of this article, have both signed the Future of Life petition against AI weapons. Here’s why: Unlike self-aware computer networks, self-driving cars with machine guns are possible right now. The problem with such AI weapons is not that they are on the verge of taking over the world. The problem is that they are trivially easy to reprogram, allowing anyone to create an efficient and indiscriminate killing machine at an incredibly low cost. The machines themselves aren’t what’s scary. It’s what any two-bit hacker can do with them on a relatively modest budget.
Imagine an up-and-coming despot who would like to eliminate opposition, armed with a database of citizens’ political allegiances, addresses and photos. Yesterday’s despot would have needed an army of soldiers to accomplish this task, and those soldiers could be fooled, bribed, or made to lose their cool and shoot the wrong people.

The despots of tomorrow will just buy a few thousand automated gun drones. Thanks to Moore’s Law, which describes the exponential increase in computing power per dollar since the invention of the transistor, a drone with reasonable AI will one day become as affordable as an AK-47. Three or four sympathetic software engineers could reprogram the drones to patrol near the dissidents’ houses and workplaces and shoot them on sight. The drones would make fewer mistakes, they wouldn’t be swayed by bribes or sob stories, and above all, they’d work much more efficiently than human soldiers, allowing the ambitious despot to mop up the detractors before the international community can marshal a response.
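The cost argument above rests on simple exponential arithmetic. As a rough illustration (the starting price, doubling period, and time spans here are hypothetical assumptions, not figures from this article), here is what a fixed amount of drone-grade compute costs if computing power per dollar doubles every two years:

```python
def projected_cost(initial_cost, years, doubling_period=2.0):
    """Cost of a fixed amount of computing power after `years`,
    assuming compute per dollar doubles every `doubling_period` years."""
    return initial_cost / (2 ** (years / doubling_period))

# Hypothetical: a capable autonomous drone costs $50,000 today.
for years in (0, 8, 16):
    cost = projected_cost(50_000, years)
    print(f"In {years:2d} years: ${cost:,.0f}")
```

Under these assumed numbers, the price falls from $50,000 to roughly $3,100 after eight years and to a few hundred dollars after sixteen, which is the sense in which such a weapon could approach the affordability of an AK-47.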
Because of the massive increase in efficiency brought about by automation, AI weapons will lower the barrier to entry for deranged individuals looking to perpetrate such atrocities. What was once the sole domain of dictators in control of an entire army will be brought within reach of moderately wealthy individuals.
Manufacturers and governments interested in developing such weapons may claim that they can engineer proper safeguards to ensure that they cannot be reprogrammed or hacked. Such claims should be greeted with skepticism. ATMs, Blu-ray disc players, and even cars speeding down the highway have all recently been compromised in spite of their advertised security. History demonstrates that a computing device tends to eventually yield to a motivated hacker’s attempts to repurpose it. AI weapons are unlikely to be an exception.

International treaties going back to 1925 have banned the use of chemical and biological weapons in warfare. The use of hollow-point bullets was banned even earlier, in 1899. The reasoning is that such weapons create extreme and unnecessary suffering. They are especially prone to civilian casualties, such as when people inhale poison gas, or when doctors are injured in attempting to remove a hollow-point bullet. All of these weapons are prone to generate indiscriminate suffering and death, and so they are banned.

Is there a class of AI machines that is equally worthy of a ban? The answer, unequivocally, is yes. If an AI machine can be cheaply and easily converted into an effective and indiscriminate mass killing device, then there should be an international convention against it. Such machines are not unlike radioactive metals. They can be used for reasonable purposes. But we must carefully control them because they can be easily converted into devastating weapons. The difference is that repurposing an AI machine for destructive purposes will be far easier than repurposing a nuclear reactor.
We should ban AI weapons not because they are all inherently immoral. We should ban them because humans will transform AI weapons into hideous bloodthirsty monsters using mods and hacks easily found online. A simple piece of code will transform many AI weapons into killing machines capable of the worst excesses of chemical weapons, biological weapons, and hollow-point bullets.

Banning certain kinds of artificial intelligence requires grappling with a number of philosophical questions. Would an AI weapons ban have prohibited the US Strategic Defense Initiative, popularly known as the Star Wars missile defense? Cars can be used as weapons, so does the petition propose to ban Google’s self-driving cars, or the self-driving cars being deployed in cities around the UK? What counts as intelligence, and what counts as a weapon?

These are difficult and important questions. However, they do not need to be answered before we agree to formulate a convention to control AI weapons. The limits of what’s acceptable must be seriously considered by the international community, with the advice of scientists, philosophers, and computer engineers. The U.S. Department of Defense already prohibits fully autonomous weapons in some sense. It is time to refine and expand that prohibition to an international level.

Of course, no international ban will completely stop the spread of AI weapons. But this is no reason to scrap the ban. If we as a community think there is reason to ban chemical weapons, biological weapons, and hollow-point bullets, then there is reason to ban AI weapons too.

DefenseOne: http://bit.ly/1K1GFq8
