One Ethicist’s Compromise To Stop Killer Robots

Wendell Wallach is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics

A Yale bioethicist says the United Nations won’t stop killer robots, but the United States could start right now setting a few rules limiting their development and use.
    
The United Nations’ effort to ban killer robots will fail, but there are three important steps the United States can take to help slow the rise of lethal autonomous weapons systems, one of the most prominent voices in the robotics debate said this week.

Pentagon officials insist they don’t want to allow an autonomous weapon to kill people without a human in the loop, but greater levels of autonomy and artificial intelligence are making their way into more and more pieces of military technology, such as recognizing targets, piloting drones, and driving supply trucks. Defense Department leaders advocate for robotic intelligence and autonomy as threat-reducing (and cost-saving) measures key to securing the United States’ technological advantage over adversaries in the coming decades (the so-called ‘third offset’ strategy). Defense Secretary Ash Carter, Deputy Defense Secretary Bob Work, and former Defense Secretary Chuck Hagel have all talked up the importance of artificial intelligence to the military’s future plans.

“We know we’re going to have to go somewhere with automation,” Air Force Brig. Gen. John Rauch, director of ISR (intelligence, surveillance, and reconnaissance) Capabilities for the Air Force, said at a breakfast in Washington sponsored by AFCEA, a technology industry association.

Rauch was referring to the rapidly growing demands on human image analysts in the Air Force, especially as additional small drones enter service in the years ahead. “It’s: ‘What level of automation is allowed?’ And then when you start talking about munitions, it becomes a whole other situation.” The Air Force will release a flight plan for small unmanned aerial systems, or UASs, in the next four months, Lt. Gen. Robert Otto, deputy chief of staff for Intelligence, Surveillance and Reconnaissance, said at the meeting.

UN officials working on international rules for conventional weapons, however, have been meeting since 2014 in response to calls from scientists, academics, human rights activists, and NGOs to ban autonomous weapons systems, with another meeting scheduled for April.

Yale bioethicist Wendell Wallach doused that notion. “It’s going to be hard to put an arms control agreement in place for robotics. The difference between an autonomous weapons system and non-autonomous may be just a difference of a line of code,” he told a packed room on Sunday at the annual meeting of the American Association for the Advancement of Science, the Lollapalooza of general science, a meeting that draws brilliant brains from across the globe to make major announcements about new discoveries.

The rapid rise of artificial intelligence is cause for both optimism and concern among Defense Department leaders. Deputy Defense Secretary Work, at a Center for a New American Security event in December (for which Defense One was a media partner), said, “We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield and he said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.’”

Wallach implied that the coming autonomous arms race won’t produce some single malevolent, evil weapon. The future will instead bring us something far scarier: robot tools sharing the battle space with humans that are very stupid, poorly designed, but incredibly well armed.

“The vast preponderance of people in the US military … are very concerned about having robots on battlefields that can respond in milliseconds when the human response time is roughly 250 milliseconds, at least. So we are going down a pretty dangerous road if we allow this to be an arms race between lethal autonomous weapons,” he said.

Here’s what Wallach proposed.

1. An executive order from the president proclaiming that lethal autonomous weapons constitute a violation of existing international humanitarian law. The “existing” part is key: it means no one has to draft a new provision. The United States should draw a moral line and call machines that make life-and-death decisions mala in se, things that are bad in and of themselves, because they are unpredictable and can’t be fully controlled, and because their use would make attribution and the affixing of responsibility for wrongful death difficult, if not impossible. The executive order would stress that designing lethal robotic systems makes the designer responsible for the actions the robot takes.

2. Create an oversight and governance coordinating committee for AI. “The committee should be mandated to favor soft governance solutions (industry standards, professional codes of conduct, etc.) over laws and regulatory agencies in forging adaptive solutions to recognized risks and dangers,” Wallach told Defense One. He said the committee might include a mix of government, military, and business leaders, as well as public representatives, but its ultimate makeup should be part of a larger discussion.

3. Direct 10 percent of artificial intelligence funding to studying, shaping, managing, and helping people adapt to the “societal impacts of intelligent machines.” That doesn’t just mean dedicating part of your research project money to figuring out how not to make your robot deadly. It also means figuring out how the artificial intelligence you are designing could throw people out of work, and how to help them adapt to that.

Another idea that Wallach has floated in the past is embedding theorists and ethicists on AI design teams, where they could help build machines capable of what Wallach calls “operational morality”: a scaled-down, robot-friendly form of moral reasoning, free of the philosophical underpinnings and grey-matter clutter that beset human discussions of good and bad. Operational morality would help the robot recognize when it is in “a morally significant situation.” In his book A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control, Wallach describes it thus: “Engineers already program values and moral behavior into computers and robots…the engineers determine in advance the various situations robots will encounter within tightly constrained contexts. They then program the system to act appropriately in each situation.”

In those situations where the machine is encountering an unexpected event, the problem becomes more difficult, but not impossible.
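Wallach’s description specifies no particular implementation, but it lends itself to a simple illustration. The Python sketch below is one hypothetical reading of it: a fixed table maps pre-enumerated situations to pre-approved responses, and anything outside those tightly constrained contexts is treated as a morally significant unexpected event and referred to a human operator. All situation names and responses here are illustrative assumptions, not drawn from any real system.

```python
# A minimal sketch of "operational morality" as Wallach describes it:
# engineers enumerate in advance the situations a system may encounter
# within a tightly constrained context and pre-program a response for each.
# Every situation name and response below is a hypothetical illustration.

from enum import Enum, auto


class Response(Enum):
    PROCEED = auto()
    HOLD_FIRE = auto()
    ESCALATE_TO_HUMAN = auto()  # keep a human in the loop


# The engineers' pre-approved mapping of recognized situations to actions.
OPERATIONAL_RULES = {
    "target_confirmed_hostile_no_civilians": Response.PROCEED,
    "civilians_detected_near_target": Response.HOLD_FIRE,
    "target_identity_uncertain": Response.ESCALATE_TO_HUMAN,
}


def decide(situation: str) -> Response:
    """Return the pre-programmed response for a recognized situation.

    Anything outside the enumerated contexts is, by design, treated as
    a morally significant unexpected event and handed to a human.
    """
    return OPERATIONAL_RULES.get(situation, Response.ESCALATE_TO_HUMAN)


if __name__ == "__main__":
    print(decide("civilians_detected_near_target"))      # Response.HOLD_FIRE
    print(decide("sensor_reading_never_seen_before"))    # Response.ESCALATE_TO_HUMAN
```

The design choice worth noting is the default: rather than guessing at an unrecognized situation, the table falls through to human escalation, which is where the hard part of Wallach’s proposal begins rather than ends.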

Of course, if you talk to members of the military leadership, they say the 2012 directive that then-Deputy Defense Secretary Ash Carter signed forbids automating the kill decision, and they aren’t interested in changing it. When it comes to offensive weapons like armed drones, a human must be the ultimate decision-maker.

Physicist Mark Gubrud points out that a directive is different from a moratorium. He argues that the Pentagon is still racing toward very dangerous and very autonomous weapons.

Wallach acknowledged Defense Department ambivalence and the importance of the 2012 directive. In the end, he said, it was up to the United States to take a clearer leadership role in keeping death bots off of the battlefield.

“If that’s really how the secretary of defense feels,” he said, “then let’s have the United States stop sitting on the sidelines, put a foot forward, and put this international principle in place.”

DefenseOne: http://bit.ly/1LYrdch
