AI-Driven Warfare Using Robots
Humans will always make the final decision on whether armed robots can shoot, the US Department of Defense said earlier this year. However, our relationship with military robots goes back much further than that statement might suggest. That's because when people say "robot", they can mean any technology with some form of "autonomous" element that allows it to perform a task without direct human intervention.
These technologies have existed for a very long time. During World War II, the proximity fuse was developed to explode artillery shells at a predetermined distance from their target. This made the shells far more effective than they would otherwise have been by augmenting human decision making and, in some cases, taking the human out of the loop completely.
So the question is not so much whether we should use autonomous weapon systems in battle, but what form human intervention should take.
Wallops Island, a remote, marshy spit of land along the eastern shore of Virginia near a famed national refuge for wild horses, is mostly known as a launch site for government and private rockets. But it also makes for a perfect, quiet spot to test a revolutionary weapons technology.
If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts.
The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach. The secretive effort, part of a US Marine Corps program called Sea Mob, was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm.
It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.
Lethal, largely autonomous weaponry isn’t entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input.
So far, US military officials haven’t given machines full control, and they say there are no firm plans to do so. Many officers, schooled for years in the importance of controlling the battlefield, remain deeply skeptical about handing such authority to a robot.
Critics, both inside and outside of the military, worry about not being able to predict or understand decisions made by artificially intelligent machines, about computer instructions that are badly written or hacked, and about machines somehow straying outside the parameters created by their inventors. Some also argue that allowing weapons and robots to decide to kill violates the ethical and legal norms that have governed the use of force on the battlefield since the horrors of World War II.
But if the drawbacks of using artificially intelligent war machines are obvious, so are the advantages. Humans generally take about a quarter of a second to react to something we see: think of a batter deciding whether to swing at a baseball pitch. But the machines we've created have now surpassed us, at least in processing speed.
Earlier this year, for example, researchers at Nanyang Technological University, in Singapore, focused a computer network on a data set of 1.2 million images; the computer then tried to identify all the pictured objects in just 90 seconds, or 0.000075 seconds per image. The outcome wasn't perfect, or even close: At that incredible speed, the system identified objects correctly only 58 percent of the time, a rate that would be catastrophic on a battlefield.
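To make the speed gap concrete, here is a minimal sketch that checks the arithmetic behind the figures cited above (1.2 million images in 90 seconds, versus a roughly quarter-second human visual reaction time). The variable names are illustrative, not from the original study.

```python
# Figures as cited in the text above.
num_images = 1_200_000      # images in the benchmark data set
total_seconds = 90          # time the system took to process them all
human_reaction_s = 0.25     # typical human visual reaction time

# Time the machine spent per image.
per_image_s = total_seconds / num_images
print(f"Time per image: {per_image_s:.6f} s")   # 0.000075 s, matching the article

# How many times faster than a human's reaction this is.
speedup = human_reaction_s / per_image_s
print(f"Speedup over human reaction: {speedup:,.0f}x")
```

Even with its 58 percent accuracy, the system was looking at each image thousands of times faster than a human could react to a single one, which is the trade-off the article goes on to describe.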
Nevertheless, the fact that machines can act, and react, much more quickly than humans can is becoming more relevant as the pace of war speeds up.
In the next decade, missiles will fly near the Earth at several miles per second, too fast for humans to make crucial defensive decisions on their own. Drones will attack in self-directed swarms, and specialised computers will assault one another at the speed of light.
Humans might create the weapons and give them initial instructions, but after that, many military officials predict, they’ll only be in the way.
So far, new weapons systems are being designed so that a human must still approve any use of lethal force, but only minor modifications would be needed to allow them to act without human input.
Pentagon rules, put in place during the Obama administration, don't prohibit giving computers the authority to make lethal decisions; they only require more careful review of such designs by senior officials.
Consequently, the US military services have begun the thorny, existential work of discussing how and when and under what circumstances they will let machines and robots decide to kill.
Sources: Public Integrity, DefenseOne, Cosmos