DARPA Wants AI To Reveal Adversaries' True Intentions
From eastern Europe to southern Iraq, the US military faces an ancient but newly urgent problem: adversaries who pretend to be someone they’re not.
A new program from the Defense Advanced Research Projects Agency seeks to apply artificial intelligence to detect and understand the sneaky tactics adversaries use to create chaos, undermine governments, spread foreign influence and sow discord.
This activity, hostile action that falls short of, but often precedes, violence, is sometimes referred to as gray zone warfare, the ‘zone’ being a liminal state between peace and war. The actors who operate in it are difficult to identify and their aims hard to predict, by design.
“We’re looking at the problem from two perspectives: Trying to determine what the adversary is trying to do, his intent; and once we understand that or have a better understanding of it, then identify how he’s going to carry out his plans — what the timing will be, and what actors will be used,” said DARPA program manager Fotis Barlos.
Dubbed COMPASS, the new program will “leverage advanced artificial intelligence technologies, game theory, and modeling and estimation to both identify stimuli that yield the most information about an adversary’s intentions, and provide decision makers high-fidelity intelligence on how to respond, with positive and negative tradeoffs for each course of action,” according to DARPA.
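DARPA has not published COMPASS’s algorithms, but “identify stimuli that yield the most information” is the language of Bayesian experimental design: score each candidate action by how much, on average, the adversary’s response to it would shrink your uncertainty about his intent. A minimal Python sketch of that idea follows; the intents, stimuli, and probabilities are invented for illustration and come from nothing in the program itself.

```python
import math

# Hypothetical prior over adversary intents (illustrative numbers only).
prior = {"probe_defenses": 0.5, "influence_ops": 0.3, "prep_invasion": 0.2}

# Assumed P(adversary escalates | intent, stimulus) for two candidate stimuli.
likelihood = {
    "naval_exercise":   {"probe_defenses": 0.8, "influence_ops": 0.3, "prep_invasion": 0.9},
    "sanctions_threat": {"probe_defenses": 0.4, "influence_ops": 0.7, "prep_invasion": 0.2},
}

def entropy(dist):
    """Shannon entropy in bits: our current uncertainty about intent."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(p_escalate, escalated):
    """Bayes' rule after observing whether the adversary escalated."""
    unnorm = {h: prior[h] * (p_escalate[h] if escalated else 1 - p_escalate[h])
              for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

def expected_information_gain(p_escalate):
    """Average entropy reduction over both possible responses."""
    h0, gain = entropy(prior), 0.0
    for escalated in (True, False):
        p_obs = sum(prior[h] * (p_escalate[h] if escalated else 1 - p_escalate[h])
                    for h in prior)
        if p_obs > 0:
            gain += p_obs * (h0 - entropy(posterior(p_escalate, escalated)))
    return gain

# Rank candidate stimuli by how much each is expected to reveal.
for stimulus, p_escalate in likelihood.items():
    print(stimulus, round(expected_information_gain(p_escalate), 3))
```

The highest-scoring stimulus is the one whose plausible responses most sharply separate the competing hypotheses, which is one concrete reading of “stimuli that yield the most information.”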
Teaching software to understand and interpret human intention, a task sometimes called “plan recognition,” has been a subject of scholarship since at least a 1978 paper by Rutgers University researchers who sought to understand whether computer programs might be able to anticipate human intentions within rule-based environments like chess.
Since then, the science of plan recognition has advanced as quickly as computers and the internet have spread, and the three are intimately linked: networked machines generate the behavioral data that prediction models learn from.
From Amazon to Google to Facebook, the world’s top tech companies are pouring money into probabilistic modeling of user behavior, as part of a constant race to keep from losing users to sites that can better predict what they want.
A user’s every click, “like,” and even period of inactivity adds to the companies’ almost unimaginably large data sets, and new machine learning and statistical techniques (especially those involving Bayesian reasoning) make it easier than ever to use that information to predict what a given user will do next on a given site.
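The article doesn’t name the companies’ models, but the simplest version of “predict the next click” is a smoothed transition model: count what users did after each action, add a Bayesian prior so unseen transitions keep nonzero probability, and read off a posterior prediction. A toy sketch with invented clickstream data:

```python
from collections import Counter, defaultdict

# Toy clickstream: each session is a sequence of page types (invented data).
sessions = [
    ["home", "search", "product", "cart"],
    ["home", "product", "product", "cart"],
    ["home", "search", "product", "search"],
]

ACTIONS = ["home", "search", "product", "cart"]
ALPHA = 1.0  # symmetric Dirichlet prior: every transition starts plausible

# Count observed action-to-action transitions.
counts = defaultdict(Counter)
for session in sessions:
    for prev, nxt in zip(session, session[1:]):
        counts[prev][nxt] += 1

def next_action_dist(prev):
    """Posterior predictive P(next action | previous action)."""
    total = sum(counts[prev].values()) + ALPHA * len(ACTIONS)
    return {a: (counts[prev][a] + ALPHA) / total for a in ACTIONS}

# Predict what a user sitting on a product page does next.
dist = next_action_dist("product")
print(max(dist, key=dist.get), dist)  # "cart" wins on this toy data
```

Real systems layer far richer signals (timing, content, social graph) on top, but the core move is the same: turn logged behavior into a probability distribution over what happens next.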
Among these tools is Google’s Activity Recognition library, which helps Android app developers imbue their software with a better sense of what the user is doing.
But inferring a user’s next Amazon purchase (based on data that user has volunteered about previous choices, likes, etc.) is altogether different from predicting how an adversary intends to engage in political or unconventional warfare. So the COMPASS program seeks to use video, text, and other pieces of intelligence that are a lot harder to get than shopping-cart data.
The program aligns well with the needs of the Special Operations Forces community in particular. Gen. Raymond “Tony” Thomas, the head of US Special Operations Command, has said that he’s interested in deploying forces to places before there’s a war to fight. Thomas has discussed his desire to apply artificial intelligence, including neural nets and deep learning techniques, to get “left of bang.”
Unlike in shopping, the analytical tricks that apply to one gray-zone adversary won’t work on another. “History has shown that no two [unconventional warfare] situations or solutions are identical, thus rendering cookie-cutter responses not only meaningless but also often counterproductive,” wrote Gen. Joseph Votel, who leads US Central Command, in his seminal 2016 treatise on gray zone warfare.
As practiced by Amazon and others within the domain of online shopping, “plan recognition” at scale is very cookie-cutter. If COMPASS succeeds, it will have to apply game theory and big data to behavior prediction in ways that Silicon Valley has never attempted.
It will have to do so repeatedly, in the face of varied and constantly morphing adversaries looking to keep as much of their activity hidden as possible.
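As a flavor of what mixing game theory into behavior prediction means at its most elementary, the sketch below runs fictitious play on an invented two-player, zero-sum “attention game”: each side repeatedly best-responds to the other’s observed mix of moves, and the empirical strategies converge toward equilibrium. COMPASS’s actual machinery is unpublished; every payoff here is made up.

```python
# Fictitious play on a tiny zero-sum game (payoffs invented for illustration).
# Rows: where the defender focuses collection; columns: the adversary's tactic.
# Each entry is the payoff to the defender.
PAYOFF = [
    [ 1.0, -0.5],  # watch cyber:  catches cyber ops, misses ground proxies
    [-0.5,  1.0],  # watch ground: the reverse
]

def best_row(col_freq):
    """Defender's best response to the adversary's empirical frequencies."""
    values = [sum(PAYOFF[r][c] * col_freq[c] for c in range(2)) for r in range(2)]
    return max(range(2), key=lambda r: values[r])

def best_col(row_freq):
    """Adversary's best response: minimize the defender's payoff."""
    values = [sum(PAYOFF[r][c] * row_freq[r] for r in range(2)) for c in range(2)]
    return min(range(2), key=lambda c: values[c])

row_counts, col_counts = [1, 1], [1, 1]  # start from uniform beliefs
for _ in range(10000):
    row_freq = [n / sum(row_counts) for n in row_counts]
    col_freq = [n / sum(col_counts) for n in col_counts]
    row_counts[best_row(col_freq)] += 1
    col_counts[best_col(row_freq)] += 1

# For this symmetric game both sides converge toward a 50/50 mix.
print([round(n / sum(row_counts), 2) for n in row_counts])
```

Against a real adversary the payoff matrix itself is unknown and shifting, which is precisely why the program pairs its game theory with modeling and estimation.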