AI Could Transform Submarine Warfare
A new program seeks to apply artificial intelligence to ocean data and thereby help submarine operators understand where their adversaries are, what they’re doing, and what they can see.
Even today’s best sonar technology doesn’t give a sub captain a very good sense of the battlespace, says Jules Jaffe, a research oceanographer at the Scripps Institution of Oceanography at the University of California San Diego, who is working on the US Navy program.
There’s a lot of potentially valuable data that towed sonar doesn’t capture, because it collects only one type of data and only at one point. If you could collect and properly analyze sound and wave data from other points in the ocean, you could develop a much better sense of what an adversary is doing.
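One way to see why multiple listening points matter: differences in when the same sound arrives at separated hydrophones constrain where the source can be. The sketch below is a toy illustration of that idea, not anything from the ONR program; the sensor positions, grid, and sound speed are invented for the example.

```python
# Toy illustration: locating a sound source from arrival-time
# differences at several listening points (all numbers invented).
import math

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

# Four fixed hydrophone positions (metres) and an unknown source.
hydrophones = [(0.0, 0.0), (4000.0, 0.0), (0.0, 4000.0), (4000.0, 4000.0)]
true_source = (2500.0, 1200.0)

def travel_time(src, sensor):
    return math.dist(src, sensor) / SOUND_SPEED

# "Measured" time differences of arrival, relative to the first sensor.
arrivals = [travel_time(true_source, h) for h in hydrophones]
measured_tdoa = [t - arrivals[0] for t in arrivals]

def tdoa_mismatch(candidate):
    """How badly a candidate position disagrees with the measurements."""
    times = [travel_time(candidate, h) for h in hydrophones]
    predicted = [t - times[0] for t in times]
    return sum((p - m) ** 2 for p, m in zip(predicted, measured_tdoa))

# Brute-force search over a 100 m grid for the best-fitting position.
best = min(
    ((x, y) for x in range(0, 4001, 100) for y in range(0, 4001, 100)),
    key=tdoa_mismatch,
)
print(best)  # (2500, 1200), the true source position
```

A single towed array gives only one of these measurement points; adding more points is what collapses the ambiguity.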
In 2018, the US Office of Naval Research, or ONR, asked for white papers exploring “analytic techniques linking physical oceanographic variability with acoustic propagation, including field efforts to collect relevant data sets,” and “analysis of large oceanographic and acoustic data sets, including the development and use of artificial intelligence and machine learning techniques.”
ONR has selected more than 30 projects to fund under the Task Force Ocean research program, which an official said would total more than $60 million spread over the next three years. “These will have various start dates, but all should be underway by October 2019,” an ONR spokesperson said in an email.
Now the office plans to fund about 15 research groups under a new program that largely aims to develop better ocean-sound sensors. It also funds Jaffe’s work, which aims to turn all that new undersea-sound-propagation data into 3-D pictures that submarine crews could use to immediately see where they are in relation to other objects and figure out how they might move to get a better picture or hide their activities.
“I visualise more of a three-dimensional environment where the submariner could understand where they are, geologically [in terms of ocean topography], geographically, what the radiated noise levels were…which is something they (the US Navy) are very concerned about because they don’t want their adversary to know they can hear them,” Jaffe told Defense One.
Jaffe is also looking to build an AI engine that can do better than human submarine captains and crews. And he’ll train it on data that comes from the submarine captains and crews themselves.
“If I know this adversary is located along a certain bearing angle, then where might I go to optimise my ability to localize them? Those are decisions that submariners are making all the time,” he says. “They’re really playing a game. They want to find the bad guy without the bad guy knowing that we know where they are.”
If you can take that data and apply reinforcement learning, essentially showing the software lots of examples of submarine captains executing missions, you could train an AI that would outperform a human crew on some of these decisions.
“We can watch them while they are making these decisions and then the reinforcement algorithm will learn what they’re doing in order to minimise ambiguities and understand what the results are,” he says.
It’s similar to the way researchers at DeepMind used reinforcement learning to train an algorithm to play the notoriously difficult game of Go better than the world’s best human players. Sub-hunting, in theory, is just a wetter version of a similar game.
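Training on recorded human decisions, as Jaffe describes, is in machine-learning terms close to learning from demonstrations (often called behavioural cloning, a common companion to reinforcement learning). A minimal toy sketch of that loop, in which the “captain’s rule,” the one-dimensional search lane, and all names are invented for illustration:

```python
# Hedged sketch of learning from demonstrations: record an expert's
# (situation, action) pairs, then fit a policy that imitates them.
# The toy "steering" task below is invented, not from the Navy program.
from collections import Counter

def expert_action(state):
    """Toy stand-in for a captain's decision: steer toward the contact."""
    own, contact = state  # positions along a 1-D search lane
    if contact > own:
        return "steer_east"
    if contact < own:
        return "steer_west"
    return "hold"

# 1. Collect demonstrations by watching the "expert" act.
demos = [((own, contact), expert_action((own, contact)))
         for own in range(10) for contact in range(10)]

# 2. Fit the simplest possible imitator: the majority action for each
#    coarse feature of the situation (sign of the bearing offset).
def feature(state):
    own, contact = state
    return (contact > own) - (contact < own)  # -1, 0, or +1

counts = {}
for state, action in demos:
    counts.setdefault(feature(state), Counter())[action] += 1
policy = {f: c.most_common(1)[0][0] for f, c in counts.items()}

# 3. The cloned policy now reproduces the expert's choices.
print(policy[feature((2, 7))])  # steer_east, same as the expert
```

A real system would replace the lookup table with a learned model and the toy rule with logged crew decisions, but the collect-then-imitate structure is the same.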