Securing AI In Military Systems
Artificial Intelligence (AI) is a pivotal emerging technology that will alter warfare in the years to come. The exact impact and trajectory of any technology are hard to predict, but some analysts argue that AI will enable military transformations comparable to the invention of electricity and the airplane.
In the future, both military and commercial robots will incorporate AI that could make them capable of undertaking tasks and missions on their own.
There is an important debate amongst military experts about whether robots should be allowed to execute some missions if human life could be at stake.
The AI software in autonomous, self-governing battlefield systems can be highly vulnerable to cyber attacks. Researchers are now developing techniques to make these systems' Machine Learning (ML) algorithms, which make decisions and adjust the machines on the battlefield, more secure. The research project, led by Purdue University, is part of the US Army Research Laboratory's Army Artificial Intelligence Institute.
The prototype system will be called SCRAMBLE, short for “SeCure Real-time Decision-Making for the AutonoMous BattLefield.” Army researchers will be evaluating SCRAMBLE at the Army Research Laboratory’s autonomous battlefield test bed to ensure that the ML algorithms can be feasibly deployed and avoid cognitive overload for combatants using these machines.
There are several points in an autonomous operation where a hacker might attempt to compromise an ML algorithm.
Before an autonomous machine is even deployed on a battlefield, an adversary could manipulate the process that technicians use to feed data into its algorithms and train them offline. SCRAMBLE would close these hackable loopholes in three ways.
- The first is through "robust adversarial" machine learning algorithms that can operate with uncertain, incomplete, or maliciously manipulated data sources (a simplified illustration of this idea follows the list below).
- Second, the prototype will include a set of "interpretable" machine learning algorithms aimed at increasing a combatant's trust in an autonomous machine while interacting with it.
- The third strategy will be a secure, distributed execution of these various machine learning algorithms on multiple platforms in an autonomous operation.
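To make the first strategy concrete, the sketch below trains a toy classifier on labels that a simulated attacker has partly flipped, then compares ordinary training with a simple trimmed-loss heuristic that limits the influence of the highest-loss (and possibly poisoned) samples. This is only a minimal illustration of learning from maliciously manipulated data, not SCRAMBLE's actual algorithm; the synthetic data, the 15% poisoning rate, and the 20% trimming fraction are all hypothetical choices.

```python
# Illustrative sketch only: toy logistic regression trained on partially
# poisoned labels, with a crude "trimmed-loss" heuristic that skips the
# highest-loss samples each epoch. Not SCRAMBLE's method; all parameters
# here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian blobs.
n = 400
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
               rng.normal(+1.5, 1.0, (n // 2, 2))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

# Simulate a poisoning attack: flip 15% of the training labels.
poison_idx = rng.choice(n, size=int(0.15 * n), replace=False)
y_poisoned = y.copy()
y_poisoned[poison_idx] = 1.0 - y_poisoned[poison_idx]


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train(X, y, trim_fraction=0.0, epochs=200, lr=0.1):
    """Gradient-descent logistic regression.

    If trim_fraction > 0, the samples with the largest per-example loss are
    excluded from each gradient step, limiting the influence of (possibly
    poisoned) outliers.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    keep = int(len(y) * (1.0 - trim_fraction))
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        idx = np.argsort(losses)[:keep]          # drop the highest-loss samples
        grad = p[idx] - y[idx]
        w -= lr * (X[idx].T @ grad) / keep
        b -= lr * grad.mean()
    return w, b


def accuracy(w, b, X, y):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())


# Train on the poisoned labels, evaluate against the clean ones.
w_std, b_std = train(X, y_poisoned, trim_fraction=0.0)
w_rob, b_rob = train(X, y_poisoned, trim_fraction=0.20)
print("standard training, accuracy on clean labels:", accuracy(w_std, b_std, X, y))
print("trimmed-loss training, accuracy on clean labels:", accuracy(w_rob, b_rob, X, y))
```

The trimming step is only a stand-in for the far more sophisticated robust-learning methods a programme like SCRAMBLE would require, but it shows the basic idea of reducing an attacker's leverage over the training data.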
The research objective is to make all of these algorithms secure even though they are distributed across an entire domain, according to researchers at Purdue University. The US military is already integrating AI systems into combat via a controversial initiative called Project Maven, which uses AI algorithms to identify targets in Iraq and Syria.
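On the distributed-execution point, one generic safeguard (not attributed to the Purdue team) is for each platform to verify the integrity of any model it receives before loading it. The sketch below uses a shared-key HMAC purely as an illustration; the key handling, model format, and function names are all hypothetical.

```python
# Hypothetical illustration only: attach a message authentication code (MAC)
# to a serialized model so a receiving platform can detect tampering before
# loading it. Not SCRAMBLE's mechanism; key and format are stand-ins.
import hashlib
import hmac
import json

SHARED_KEY = b"replace-with-securely-provisioned-key"  # hypothetical key


def sign_model(model_params: dict) -> dict:
    """Serialize model parameters and attach an HMAC-SHA256 tag."""
    payload = json.dumps(model_params, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}


def load_model(package: dict) -> dict:
    """Verify the tag before deserializing; refuse tampered models."""
    payload = package["payload"].encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, package["tag"]):
        raise ValueError("model integrity check failed -- refusing to load")
    return json.loads(payload)


# One platform signs its model update; another verifies it on receipt.
package = sign_model({"weights": [0.12, -0.48, 0.93], "bias": 0.05})
print(load_model(package))

# A tampered payload is rejected.
package["payload"] = package["payload"].replace("0.12", "9.99")
try:
    load_model(package)
except ValueError as err:
    print(err)
```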
The AI revolution and its accompanying technologies are transforming geopolitical competition. Because the development of AI, machine learning, and autonomous systems relies on factors such as data, workforces, computing power, and semiconductors, disparities in how well different countries harness these technologies may prove decisive for military advantage.
Sources: US Army / Carnegie Endowment / Chatham House / US Congress / I-HLS / Modern War Institute