Managing The Rise Of The Killer Robots

The machines rise, subjugating humanity. It’s a science fiction trope that’s almost as old as machines themselves. The doomsday scenarios spun around this theme are so outlandish – like The Matrix, in which human-created artificial intelligence plugs humans into a simulated reality to harvest energy from their bodies – that it’s difficult to take them seriously as threats.

Meanwhile, artificially intelligent systems continue to develop apace. Self-driving cars are beginning to share our roads; pocket-sized devices respond to our queries and manage our schedules in real time; algorithms beat us at Go; robots become better at getting up when they fall over. It’s obvious how developing these technologies will benefit humanity. But, then – don’t all the dystopian sci-fi stories start out this way?

Any discussion about the dystopian potential of AI risks gravitating towards one of two extremes. One is overly credulous scare-mongering. Of course, Siri isn’t about to transmogrify into murderous HAL from 2001: A Space Odyssey. But the other extreme is equally dangerous – complacency that we don’t need to think about these issues, because humanity-threatening AI is decades or more away.

It is true that the artificial “superintelligence” beloved of sci-fi may be many decades in the future, if it is possible at all. However, a recent survey of leading AI researchers by TechEmergence found a wide variety of concerns about the security dangers of AI in a much more realistic, 20-year timeframe – including financial system meltdown as algorithms interact unexpectedly, and the potential for AI to help malicious actors optimize biotechnological weapons.

These examples show how, alongside technological progress on many fronts, the Fourth Industrial Revolution is promising a rapid and massive democratization of the capacity to wreak havoc on a very large scale. On the dark side of the “deep web”, where information is hidden from search engines, destructive tools across a range of emerging technologies already exist for sale, from 3D-printed weapons to fissile material and equipment for genetic engineering in home laboratories. In each case, AI exacerbates the potential for harm.

Consider another possibility mentioned in the TechEmergence survey. If we combine a gun, a quadcopter drone, a high-resolution camera, and a facial recognition algorithm that wouldn’t need to be much more advanced than the current best in class, we could in theory make a machine that can be programmed to fly over crowds, seeking particular faces and assassinating targets on sight.

Such a device would require no superintelligence. It is conceivable using current, “narrow” AI that cannot yet make the kind of creative leaps of understanding across distinct domains that humans can. When “artificial general intelligence”, or AGI, is developed – as seems likely, sooner or later – it will significantly increase both the potential benefits of AI and, in the words of Jeff Goodell, its security risks, “forcing a new kind of accounting with the technological genie”.

But not enough thinking is being done about the weaponizable potential of AI. “Navigating the future of technological possibilities is a hazardous venture,” Wendell Wallach observes. “It begins with learning to ask the right questions—questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.”

Non-proliferation challenges

Prominent scholars including Stuart Russell have issued a call for action to avoid “potential pitfalls” in the development of AI, backed by leading technologists including Elon Musk, Demis Hassabis, Steve Wozniak and Bill Gates. One high-profile pitfall could be “lethal autonomous weapons systems” (LAWS) – or, more colloquially, “killer robots”. Technological advances in robotics and the digital transformation of security have already changed the fundamental paradigm of warfare. According to Christopher Coker, “21st-century technology is changing our understanding of war in deeply disturbing ways.” Fully developed LAWS are likely to transform modern warfare as dramatically as gunpowder and nuclear arms.

The U.N. Human Rights Council has called for a moratorium on the further development of LAWS, while other activist groups and campaigns have advocated a full ban, drawing an analogy with chemical and biological weapons, which the international community considers beyond the pale. For the third year in a row, United Nations member states met last month to debate the call for a ban, and how to ensure that any further development of LAWS stays within international humanitarian law. However, when groundbreaking weapons technology is no longer confined to a few large militaries, non-proliferation efforts become much more difficult.

The debate is complicated by the fact that definitions remain mired in confusion. Platforms, such as drones, are commonly confused with the weapons that can be loaded onto them. The idea of systems being asked to execute narrowly defined tasks, such as identifying and eliminating armoured vehicles moving in a specific geographical area, is not always distinguished from the idea of systems being given discretionary scope to interpret more general missions, such as “win the war”.

Some argue for always keeping a “human in the loop” to exercise “meaningful human control”. This would preclude fully autonomous systems, which might view the “man” and the “law” as nothing more than obstacles to completing their task. However, limiting systems to semi-autonomy or “sliding autonomy” – in which the human is brought in only in unusual circumstances – also has flaws. Experience from testing self-driving cars suggests that humans struggle to stay alert and lose situational awareness when supervising a system that usually runs in automated mode.

DeepMind, a company acquired by Google in 2014, recently published a paper with the Future of Humanity Institute at Oxford describing an AI “off-switch”, or “big red button”. The paper outlines a “framework” to allow a “human operator” to safely interrupt an AI. According to one of the authors, “our framework allows the human supervisor to temporarily take control of the agent and make it believe it chooses to shut down itself”. However, the paper also clearly states that “it is unclear if all algorithms can be easily made safely interruptible”. Add to that, it is hard to predict when humans will need to start pressing the “big red button” on self-learning machines.
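To make the “big red button” idea more concrete, the Python sketch below shows one way an interrupt could be layered over a simple learning agent: a human-triggered override forces a hypothetical “stand down” action, while the tabular Q-learning update remains off-policy – one of the conditions the DeepMind/FHI paper associates with agents that do not learn to resist interruption. The agent, the SAFE_ACTION constant and the env_step callable are illustrative assumptions, not the paper’s own implementation.

```python
# Toy sketch of a human-operated "big red button" over a learning agent.
# Illustrative only -- not the DeepMind/FHI framework itself.
import random


class QLearner:
    """Minimal tabular Q-learning agent (hypothetical environment)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def act(self, state):
        # Epsilon-greedy choice over the learned action values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[state][a])

    def update(self, s, a, r, s_next):
        # Off-policy (Q-learning) update: it learns about the greedy policy
        # regardless of which action was actually executed, so being overridden
        # by the button does not, in itself, teach the agent to avoid the button.
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])


SAFE_ACTION = 0  # hypothetical "stand down" action forced by the operator


def step(agent, state, env_step, button_pressed):
    """Run one interaction step, letting a human interrupt override the agent."""
    intended = agent.act(state)
    executed = SAFE_ACTION if button_pressed else intended  # the big red button
    reward, next_state = env_step(state, executed)  # env_step is assumed to exist
    agent.update(state, executed, reward, next_state)
    return next_state
```

The paper’s own caveat, quoted above, still applies: whether a given learning algorithm can be made safely interruptible in this sense has to be established case by case.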

An additional concern is that any weapons system with a degree of autonomy could be spoofed and the programmed objectives corrupted remotely by a purpose-engineered virus like the Stuxnet worm.

It may already be too late to arrest the development of LAWS: Peter Singer and August Cole, the authors of Ghost Fleet, say “the ship may already have sailed – without the need of a crew”. Depending on the definition of autonomy, an argument can be made that such systems are already in use – Israel’s Harpy drone being the clearest-cut case.

There is little current support from governments for a full ban on LAWS. One reason is that the technologies needed to develop more advanced LAWS are likely to become widely available in time – and if it would be impractical to prevent a terrorist group like ISIS from developing killer robots, then states may want to ensure they understand the technology themselves. Another reason is cost-effectiveness: personnel costs often account for the bulk of a defence budget, and LAWS – especially “swarms”, in which many small robots perform tasks simultaneously – could potentially cut costs drastically.

Others even make a moral case in favour of state actors developing killer robots. They could reduce the number of soldiers being killed, or returning traumatized from battle. And suppose an algorithm is developed that performs better than the most highly trained soldiers at coolly making snap decisions in the heat of battle, distinguishing civilians from combatants, and opening fire only on the latter: would humanitarian considerations not oblige military leaders to take the error-prone humans out of the equation? In the words of Cummings, How and Williams, “partnering human and computer abilities could greatly enhance planning tasks in a chaotic environment.”

The counterpoint is that politicians might be more ready to start wars when they are sending robots than humans into battle – and the technology, once developed, is likely sooner or later to be used by those with scant regard for humanitarianism.

As artificial intelligence becomes more capable, similar questions will arise in more and more contexts, many of which are difficult even to imagine: How might the new capability conceivably be weaponized? Is that desirable? If not, how can its development be controlled or – more likely – monitored? What norms about its use could and should be established? And how could any restrictions be enforced? It is already difficult to enforce restrictions on developing physical weapons – and the challenge becomes even starker when the “weapon” is software. Rod Beckstrom calls this the “Conundrum of Artificial Intelligence”.

Unfortunately, there is a growing gap between those developing AI and those who should be party to such a conversation. Public sector decision-makers typically have little understanding of the complexity of the technological possibilities being created in myriad start-ups around the world. Meanwhile, the technologists themselves often struggle to internalize the “dark side” of technologies they view as life-enhancing, and the consequent need to govern against misuse.

Dual-use innovations

For obvious reasons, militaries do not reveal all their work on weaponizing artificial intelligence. However, Russia recently unveiled its “Iron Man” humanoid military robot, aiming to minimize the risk to soldiers in dangerous situations. The US and Chinese militaries, among others, are also investing heavily in AI and robotics. The US “third offset” strategy explicitly aims to keep the country ahead in the technology game.

The geopolitical dimension to the third offset strategy indicates an incipient AI arms race. As US Deputy Defence Secretary Work put it: “our adversaries are pursuing enhanced human operation and it scares the crap out of us, frankly”.

An AI arms race would be unlikely to be as stable as the Cold War stand-off involving mutually-assured destruction. A common concern among AI researchers in the recent TechEmergence survey was the difficulty of predicting what happens when artificial intelligences engage with each other.

In contrast to the Cold War paradigm of military-sponsored cutting-edge research eventually spawning private sector applications, militaries are not necessarily at the cutting edge. Potentially weaponizable, “dual use” AI is increasingly being developed first in the private sector. For example, quadcopter development is driven by commercial aims such as package deliveries. Facial recognition algorithms have a broad array of private sector as well as public security applications, such as recognizing when valued customers enter a store. According to Mary Cummings, the prominent robotics professor and former fighter pilot, “I guarantee you, Google and Amazon will soon have much more surveillance capability with drones than the military”. She asks, “What happens when our governments are looking to corporations to provide them with the latest defence technology?”

The robotics race is causing a massive brain drain from militaries into the commercial world, with the most talented minds drawn towards the rewards on offer in the private sector. Google’s AI budget would be the envy of any military, and it can leverage its commercial activities to further its research – for example, launching a photo storage service that will help refine its facial recognition software.

The significance of the private sector taking the lead is enormous: when technologies can be bought off-the-shelf, AI is potentially weaponizable by any non-state actor. Sooner or later, it will become trivially easy for organized criminal gangs or terrorist groups to construct devices such as assassination drones. Indeed, it is likely that given time, any AI capability that can be weaponized will be weaponized.

As AI develops, early attempts to weaponize it are likely to be buggy and prone to misfiring. But another implication of the brain drain from the military to private sector is a reduction in capacity to test and verify the effectiveness of technology, to a degree that would instil confidence in battle situations. Legitimate actors may not want to send a technology that is considered only 80 percent ready into the battlefield.

Rogue actors, though, are unlikely to care about compliance or a bit of collateral damage. A terrorist organisation such as ISIS might be only too willing to use an 80 percent-ready AI weapon, with devastating results.

Towards superintelligence

Looking further into the future, the question that most fascinates sci-fi storytellers is what happens when an artificial general intelligence works out how to improve itself. Even today, the deep neural networks running narrow AI applications cannot be fully understood by the engineers who program them. A “superintelligence” could act in ways that defy human comprehension. Part of the challenge is verification. According to Alan Winfield, “current verification approaches typically assume that the system being verified will never change its behaviour, but a system that learns does – by definition – change its behaviour, so any verification is likely to be rendered invalid after the system has learned.” Verification is further complicated by what he calls the “black box problem”: artificial neural networks (ANNs), together with the large sets of data that underpin them, determine how such algorithms make decisions and learn. An ANN is “trained” on these data sets – but exactly how it arrives at its decisions is not clear.
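A small, hypothetical illustration of Winfield’s point is sketched below in Python, using numpy and scikit-learn’s SGDClassifier: a property of a simple online-learning classifier is “verified” once, the model then keeps learning from a shifted data stream, and the same check is run again and may no longer pass. The data, the model and the “safe input” are invented for the sketch; they stand in for whatever behavioural property a real verification regime would certify.

```python
# Toy illustration of why verifying a system that keeps learning is fragile.
# All data, the model, and the "verified" property are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# The behavioural property we pretend to verify: this particular input
# must always be classified as class 0 ("do not act").
safe_input = np.array([[0.0, 0.0]])

# Initial training data, under which the property holds.
X0 = rng.normal(1.5, 1.0, size=(500, 2))
y0 = (X0[:, 0] + X0[:, 1] > 3.0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X0, y0, classes=[0, 1])
print("verified at deployment:", model.predict(safe_input)[0] == 0)

# The deployed system continues to learn online from a drifting data stream
# in which inputs near the origin are now labelled class 1.
X1 = rng.normal(0.0, 0.5, size=(5000, 2))
y1 = np.ones(len(X1), dtype=int)
model.partial_fit(X1, y1)

# Re-running the same check: the previously verified behaviour may be gone.
print("still verified after learning:", model.predict(safe_input)[0] == 0)
```

Formal verification tools face a more sophisticated version of the same problem: the artefact they certified is no longer the artefact that exists after further learning.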

Dystopian stories like The Matrix envisage such superintelligent machines developing their own goals. But perhaps the more likely threat comes from another kind of story altogether. Advanced AGI may doom humanity not because it pursues its own goals, but because we fail to foresee some implication in the goals we set for it.

Scholars in the machine ethics community are increasingly thinking through these kinds of fundamental questions pertaining to AI. How do we instil human values in an artificial general intelligence, to forestall misunderstandings about what we want and curtail our own biases? What are human values, anyway? As Christopher Coker puts it in his book ‘Future War’, will machines gradually “come to be seen not as replacements for human beings, but as extensions of our own humanity”?

Such discussions may still be academic rather than urgent concerns for policy-makers. But, as Stephen Hawking, Max Tegmark, Stuart Russell and Frank Wilczek stated in 2014, failing to take them seriously could be “potentially our worst mistake ever”. It is now possible to foresee a continuum of AI development, from current narrow AI to possible superintelligence – and a structure is urgently needed to address the current security risks and keep abreast of them as AI develops.

The way forward

There is a need for a new, global platform to monitor, consider, and make recommendations about the implications of emerging technologies in general, and AI more specifically, for international security. Such a platform would have two imperatives.

The first is to build a multi-stakeholder platform that brings the private and public sectors and decision-makers into the dialogue. The purpose is to enable and encourage greater transparency about the capabilities of new inventions, even when weaponization could not be further from an innovator’s mind.

The second is to find ways of moving beyond traditional, intergovernmental rule-making approaches to more creative regulation of new technologies. This will involve countries fundamentally rethinking their positions and expectations about non-proliferation efforts and disarmament processes, and considering practical measures to strengthen global, regional, and national norms.

Countries will always view any discussion on proliferation through a lens of their national security interests. But increasingly the security of all nations is interconnected. As technological progress democratizes the ability to inflict large-scale damage far beyond the historically important handful of major state actors, time-honoured tools to prevent escalation of disputes – treaties, conventions, international organizations, game-theoretic concepts of deterrence – become less and less relevant.

Many AI applications have life-enhancing potential, so holding back the technology’s development is undesirable and possibly unworkable. This speaks to the need for a more connected and coordinated multistakeholder effort to create norms, protocols, and mechanisms for the oversight and governance of AI.

Gary Marchant and Wendell Wallach argue that emerging technologies, including AI, are better overseen by soft governance – industry initiatives, laboratory standards, testing and certification regimes, insurance policies – than hard governance, such as laws and regulations. This is an argument that holds promise in the overall non-proliferation discussion. Legal and regulatory regimes are typically slow-moving, while technological change is rapid; national, while innovation crosses borders; and stove-piped, while the biggest dangers often occur at the intersections of technologies.

Diplomats in the disarmament space have likewise argued that there is limited value in over-institutionalising discussions on non-proliferation – this area of policy making needs to be agile by design. However, soft governance mechanisms are difficult to enforce, so it will still be necessary to put hard laws and regulatory bodies in place to forestall serious harms. In addition to the UN process referred to above, several joint and creative track-II multistakeholder governance initiatives for AI have begun. Other track-I processes are generally still lagging behind with a few exceptions. The World Economic Forum, as stated by Professor Schwab, its founder, will continue to use its platform to encourage dialogue and bring stakeholders together.

A new approach to the oversight and governance of AI would map the interests of relevant stakeholders as well as existing efforts to develop a shared concept for mitigating the security implications of AI. It would also enable strategies that reach beyond “headline technologies” such as killer robots, and look at the potentially destabilizing security effects of advanced AI capabilities on unemployment and inequality. It would identify champions for a spirit of collaboration. It would work to debunk myths about AI, and identify gaps and blind spots. It would build a repository of knowledge and practices. It would further public and policy literacy on AI-related issues.

Transferable lessons from other processes and initiatives should be explored in greater depth where relevant. In the non-proliferation space, the Chemical Weapons Convention, agreed in the 1990s, faced analogous issues of aligning business integrity with the need to test, verify and create a system for self-declaration of potentially relevant breakthroughs. Emerging multi-stakeholder regimes around the governance of cyberspace and climate change also offer insights, as do current discussions on relevant issues in driverless cars and aviation industry automation.

Above all, there is a need to recognize that humanity stands at an inflection point, with innovations in AI outpacing evolution in norms, protocols and governance mechanisms. A new and revived non-proliferation debate and architecture are needed to nurture holistic understanding of human relations with machines and automated systems, and influence the future trajectories and applications of AI and emerging technologies in general – making sure the outlandish, dystopian futures remain firmly in the realm of fiction.

Anja Kaspersen is Head of International Security, World Economic Forum, Geneva
