AI And The Future Of Warfare
What happens when Artificial Intelligence (AI) creates a new war strategy that the human mind cannot comprehend? That question was recently discussed at the US War College, where the central issue was how AI could change military command structures.
“I’m not talking about killer robots,” said Prof. Andrew Hill, the War College’s Chair of Strategic Leadership.
The Pentagon would like AI to support human analysis, not replace it. The harder questions, however, are about how humans should deal with robotic and AI machine intelligence and the instructions it produces.
This process has already begun with sonar and radar software. The Aegis missile defense system aboard a number of US Navy warships advises its crews on which targets to engage and which weapons to use against them. Aegis is not AI, but it is an example of multifaceted automation that will become far more common as AI and other technologies develop.
Even if the US doesn’t let AI systems take over firing decisions, other countries probably will, as AI becomes part of everything from ground-based robotic attack machines to drones.
The US military is testing predictive algorithms that warn mechanics to replace failing components before they break, cognitive electronic warfare systems that can jam enemy radar, and airspace management systems that converge strike fighters, helicopters, and artillery shells on the same target without fratricidal collisions.
Future “decision aids” will probably automate staff work, turning a commander’s general plan of attack into detailed timetables.
An AI can already print out a mathematical proof showing, with impeccable logic, that its solution is the best possible given the available information. But no human being, not even the AI’s own programmers, possesses the math skills, mental focus, or sheer stamina to double-check hundreds of pages of complex equations.
Creating artificial intelligence that lays out its reasoning in terms human users can understand is the goal of a DARPA project, and the Intelligence Community has already had some success developing analytical software whose conclusions human analysts can follow. But that requirement does rule out many cutting-edge machine learning techniques.
The whole point of AI is to think of things we humans can’t. Asking AI to restrict its reasoning to what we can understand is a bit like asking Einstein to prove the theory of relativity using only addition, subtraction and a box of crayons. Consider, for example, how Amazon organizes its warehouses. While humans would put paper towels in one aisle, ketchup in another, and laptop computers in a third, Amazon’s algorithms instruct the human workers to put incoming deliveries on whatever empty shelf space is nearby: here, towels next to ketchup next to laptops; there, more ketchup, two copies of 50 Shades of Grey, and children’s toys.
As each customer’s order comes in, the computer calculates the most efficient route through the warehouse to pick up that specific combination of items. No human mind could keep track of the different items scattered randomly about the shelves, but the computer can, and it tells the humans where to go. Counter-intuitive as it is, random stow actually saves Amazon time and money compared to a warehousing scheme a human could understand.
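To make the idea concrete, here is a toy sketch of random stow and pick-route planning in Python. The item names, the shelf grid, and the greedy nearest-neighbour routing are illustrative assumptions; Amazon’s actual systems are proprietary and far more sophisticated.

```python
# Toy sketch of "random stow" plus pick-route planning.
# Item names, shelf coordinates, and the nearest-neighbour heuristic are
# illustrative assumptions, not Amazon's actual (proprietary) algorithms.
import math
import random

# Random stow: each incoming item goes on whatever free shelf is available,
# so unrelated products end up side by side.
shelves = [(x, y) for x in range(10) for y in range(10)]
random.shuffle(shelves)
items = ["paper towels", "ketchup", "laptop", "novel", "toy"]
location = {item: shelves[i] for i, item in enumerate(items)}

def pick_route(order, start=(0, 0)):
    """Greedy nearest-neighbour route through the shelves holding an order."""
    remaining = set(order)
    here, route = start, []
    while remaining:
        nxt = min(remaining, key=lambda it: math.dist(here, location[it]))
        route.append(nxt)
        here = location[nxt]
        remaining.remove(nxt)
    return route

print(pick_route(["laptop", "ketchup", "toy"]))
```

Even this crude version captures the trade-off: the shelf layout looks senseless to a person, but the software can always compute a short path through it.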
In fact, AI frequently comes up with effective strategies that no human would conceive of and, in many cases, that no human could execute. Deep Blue beat Garry Kasparov at chess with moves so unexpected that he initially accused it of cheating by secretly getting advice from a human grandmaster. There was no cheating, however: it was simply the algorithm.
If you reject an AI’s plans because you can’t understand them, you’re ruling out a host of potential strategies that, while deeply weird, might work. That means you’re likely to be outmaneuvered by an opponent who does trust his AI and its “crazy enough to work” ideas.
As one participant put it: At what point do you give up on trying to understand the alien mind of the AI and just “hit the I-believe button”?
The New Principles of War
If you do let the AI take the lead, several conference participants argued, you need to redefine or even abandon some of the traditional “principles of war” taught in military academies. These rules distill centuries of experience: mass your forces at the decisive point, surprise the enemy when possible, aim for a single and clearly defined objective, keep plans simple to survive miscommunication and the chaos of battle, have a single commander for all forces in the operation, and so on.
To start with, the principle of simplicity begins to fade if you’re letting your AI make plans too complex for you to comprehend. As long as there are human soldiers on the battlefield, the specific orders the AI gives them have to be simple enough to understand. Robotic soldiers, however, including drones and other unmanned systems, can remember and execute complex orders without error, so the more machines that fight, the more simplicity becomes obsolete.
The principle of the objective mutates too, for much the same reason. Getting a group of humans to work together requires a single, clear vision of victory they can all understand. Algorithms, however, optimize complex utility functions, for example: how many enemy forces can we destroy while minimizing friendly casualties, civilian casualties, and collateral damage to infrastructure?
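In code, such an objective might look like the weighted score below. This is a minimal sketch; the variable names and weights are purely illustrative assumptions, not any real targeting model.

```python
# Minimal sketch of a multi-term utility function an algorithm might optimize.
# All names and weights are illustrative assumptions, not a real targeting model.
def mission_utility(enemy_losses, friendly_casualties,
                    civilian_casualties, infrastructure_damage,
                    w_enemy=1.0, w_friendly=5.0, w_civilian=10.0, w_infra=2.0):
    """Higher is better: reward effect on the enemy, penalise every kind of cost."""
    return (w_enemy * enemy_losses
            - w_friendly * friendly_casualties
            - w_civilian * civilian_casualties
            - w_infra * infrastructure_damage)

# A planning algorithm would search over candidate plans for the highest score.
plans = [
    {"enemy_losses": 40, "friendly_casualties": 2,
     "civilian_casualties": 1, "infrastructure_damage": 3},
    {"enemy_losses": 55, "friendly_casualties": 6,
     "civilian_casualties": 4, "infrastructure_damage": 8},
]
best = max(plans, key=lambda p: mission_utility(**p))
```

The point is not the particular weights but that a machine can balance many competing terms at once, whereas a group of humans needs one clear statement of the objective.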
Finally, and perhaps most painfully for military professionals, what becomes of the hallowed principle of unity of command? Even if a single human being has the final authority to approve or disapprove the plans the AI proposes, is that officer really in command if he isn’t capable of understanding those plans? Is the AI in charge?
The conference here didn’t come up with a decisive answer, but these are the questions that military leaders will need to keep asking and, eventually, answer.