Artificial Intelligence, Robotics & All Tomorrow's Wars

In August, a group of experts on robotics and artificial intelligence released an open letter to the UN Convention on Certain Conventional Weapons.

The well-publicised letter called on the convention “to find a way to protect us all from” the dangers of autonomous weapons systems, and drew attention to a lack of international regulation on autonomous weapons.

These are often understood as weapons that “once activated, can select and engage targets without further human intervention”.

In 2013 the convention added autonomous weapons to the list of weapons it might consider restricting or outlawing.

But parties to the convention remain far from agreement on how to define “lethal autonomous weapons systems” or “appropriate human control of autonomous weapons”, a necessary precursor to further discussions on the topic or to a pre-emptive ban of the sort advocated by the Campaign to Stop Killer Robots.

In December of last year, the convention established a Group of Governmental Experts, with a mandate to discuss lethal autonomous weapons systems, but the group’s first meeting has been postponed twice for budgetary reasons. It is now scheduled for next month.

Deliberative processes that might examine autonomous weapons from the perspective of the laws of war, that is, processes that could result in new regulations, are notoriously sluggish.

Meanwhile, autonomous weapons technology is developing apace. Nations such as the United States, China, Russia, South Korea, and the United Kingdom continue to develop autonomous weapons and related dual-use technologies, meaning that deployment of these weapons could become a fait accompli before any pre-emptive ban can be negotiated.

The current debate over autonomous weapons exhibits two important shortcomings:

First, though it is important to examine autonomous weapons from the legal and regulatory perspective, doing so can fail to capture the reality that autonomous weapons, and the practices associated with their development and deployment, can alter norms themselves.

For example, practices surrounding autonomous weapons can produce new understandings, outside and beyond international law, of when and how using force is appropriate. The unrestricted submarine warfare of World War II undermined agreed-upon norms about the conduct of war; other such examples are not hard to find.

Second, when observers discuss autonomous weapons’ game-changing potential in international relations and security policy, they often overemphasise the technologically sophisticated autonomous weapons of the future. (This tendency is shaped by popular culture’s “Terminator” vision of humanoid monsters and is affected by the lack of a consensus definition of “autonomous weapons” or “autonomy.”)

Overemphasising technologically sophisticated weapons seems to result in a belief that the international community should just wait to see whether “killer robots” indeed become reality.

However, no matter how important advanced artificial intelligence will be for future weapons systems, it is “stupid” autonomous weapons that require attention now.

To sort out these problems, it is helpful to contrast autonomy with mere automation. Drawing on definitions from basic robotics, automated machines can be said to run according to fixed and pre-programmed sequences of action.

Autonomous systems, meanwhile, are defined by their ability to adapt: An autonomous device's “actions are determined by its sensory inputs, rather than where it is in a preprogrammed sequence.” This level of autonomy is easy to achieve; one need only think of robotic vacuum cleaners. But where weapons are concerned, even this level of autonomy contests the idea of appropriate human control. And importantly, unlike the humanoid killer robots of possible future scenarios, this level of autonomy already exists.

Higher threshold. In August the UK Ministry of Defence released a “Joint Doctrine Publication” on unmanned aircraft systems. The document provided definitions of and distinctions between “automated” and “autonomous” systems, and in so doing departed from the generally accepted understanding of these terms.

It characterised autonomous weapons narrowly as sophisticated systems “capable of understanding higher-level intent and direction [and] capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”

Systems that fall under this definition are not yet operational—but the definition raises the possibility that autonomous weapons emerging in the future may not even be regarded as such.

That is, the UK Ministry of Defence raised the threshold at which weapons systems are considered autonomous.

Correspondingly, it offered an expanded definition of “automated system,” declaring that “an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome.” This expanded definition moves “automation” into territory reserved for “autonomy” in the basic-robotics definition cited above.

Clearly, these new definitions present new challenges to the possible regulation and restriction of autonomous weapons. For example, what of the United Kingdom’s own declaration that it “does not possess armed autonomous aircraft systems and it has no intention to develop them”?

The UK government insists that “the operation of UK weapons will always be under human control,” in the sense that “a person [will be] involved in setting appropriate parameters.”

But when the distinction between automation and autonomy is blurred, Britain's declaration represents only a weak moratorium on the development of weapons systems with autonomous qualities. And would a programmer developing an algorithm for the operation of autonomous weapons represent “human control” over the setting of “appropriate parameters”?

“Better soldiers.” Given all this, debate about autonomous weapons should much more thoroughly take into account the ways in which existing norms may be affected by currently available autonomous weapons. Discussing how norms can regulate autonomous weapons is important, but a reversal is also in order—it’s time to investigate how weapons impact norms.

An important category of norms to consider in this context is “procedural norms.” These norms, which apply in confined organizational settings such as militaries, provide standards for appropriate ways of doing things.

They are based on specific objectives and expectations that are often associated with efficiency and effectiveness. Where weapons are concerned, greater levels of technical autonomy produce improvements in reaction time, systemic reliability, endurance, or precision (in contrast to unmanned and remote-controlled aerial vehicles, which do not necessarily deliver such improvements).

Because autonomous weapons confer advantages where procedural norms are concerned, their deployment is more likely. That is, autonomous weapons provide procedural incentives to remove human decision-making from the moments immediately preceding the use of force.

It is sometimes presumed that autonomous weapons will demonstrate ethical superiority over humans. Any such superiority is still hypothetical, but autonomous weapons might lack potentially problematic emotions such as fear, anger, or vengefulness.

Presumed ethical superiority leads to further procedural arguments for constructing autonomous weapons as “better soldiers” that will outperform humans morally and in terms of compliance with international humanitarian law.

If this argument becomes more dominant, the widespread development and deployment of autonomous weapons will become more likely, further escalating the possibility that procedural norms will affect the public and legal norms that underlie international law and notions of legitimacy.

The US military’s pervasive and accelerating deployment of drones, and drones’ centrality in US security policy, show that practices indeed shape norms. Drones have become “preferred” security instruments due to specific rationales based on procedural norms.

Autonomous weapons’ versatility, the dual-use character of their main features, and the technological rivalry among major powers qualify them as very important instruments, and this makes their regulation more difficult. Whenever procedural norms prevail over legal and ethical norms, the latter category, unfortunately, is likely to yield or adapt.

To be sure, some types of autonomous weapons might be banned in the future. But practices now being established regarding autonomous weapons are already setting standards about the future use of force.

This trend should be monitored much more closely, regardless of whether the Convention on Certain Conventional Weapons, governments, and nongovernmental organisations find common ground in their struggle to define what autonomous weapons are in the first place.

The Bulletin
