Drone Operators’ Jobs Are Safe From Robots…For Now
A typical drone combat air patrol, or CAP, is a lot more manpower-intensive than the term “unmanned aerial vehicle” would suggest. In fact, as many as 150 people—from repairmen to image analysts—play some sort of role in every drone flight that takes place over Iraq and Syria. It’s a problem created by technology. Unfortunately, it’s not a problem that technology is going to solve any time soon, according to Steven K. Rogers, the senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory.
Rogers is leading efforts to reduce the amount of manpower needed to fly those combat air patrols by moving the state of technology forward. “To give you an idea of the state of the art in this space,” he told a group at the GEOINT Symposium on Monday in downtown Washington, D.C., “I have young airmen, analysts; they are ordered: ‘You stare at that screen. You call out anything you see. If you need to turn your eyes for any reason, if you need to sneeze, you need to ask permission, because someone else has to come put their eyes on that screen.’ The point I’m driving home to you is that the state of the art in our business is people… The diversity of these tasks means that quite often, we throw together ad hoc combinations of sensors and people and resources to find information we need.”
The Pentagon is putting a huge emphasis on autonomy for tasks like intelligence, surveillance and reconnaissance. When former Defense Secretary Chuck Hagel announced the Defense Innovation Initiative last November, he named robotics and autonomous systems as keys to military innovation. The initiative is part of the so-called “offset strategy,” a bid to develop new silver-bullet technologies to secure military dominance for decades to come.
Says Rogers: “Every place through those documents you see autonomy, autonomy, autonomy.”
His message to the brass is this: manage your expectations. Extra computational power and “slight improvements in algorithms won’t solve the autonomy problems,” he says.
What’s so hard about autonomy? It’s not that machines can’t see, and it’s not that they can’t think. What they lack is imagination, an ability that is central even to tasks like target recognition, which would seem not to require it.
“Imagined representation is key to autonomy,” says Rogers.
In essence, that refers to the ability to fill in gaps in data and rapidly construct new mental models of external situations. It requires a very high level of mental adaptability, even when there seems to be more than enough data at hand.
“To do autonomy, I have to be able to handle when I don’t have all the information that I need, or I don’t have the right mental model. That’s what we have to push on to achieve autonomy,” he says. “Figuring out what’s going to happen next, it’s not sensor data populated, it’s an imagined representation.”
Artificial intelligence agents can “figure out what’s going to happen next” only in extremely limited domains: ones they’ve experienced directly or learned about through the structured data they’ve been fed. You can program a machine to anticipate every possible chess move, but not to understand how a human will feel about losing at chess to a machine.
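To make the contrast concrete, here is a minimal sketch (my illustration, not anything from the article) of exhaustive lookahead in a closed domain, using tic-tac-toe rather than chess to keep the game tree small. The machine “anticipates every possible move” only because the rules of the game enumerate them; anything outside those rules simply does not exist for it.

```python
# Minimal minimax sketch: in a closed domain such as tic-tac-toe, a machine
# can "anticipate every possible move" by enumerating the full game tree.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by exhaustive lookahead: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # draw
    scores = (minimax(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X') for i in moves)
    return max(scores) if player == 'X' else min(scores)

# From an empty board, perfect play by both sides ends in a draw:
print(minimax(' ' * 9, 'X'))  # -> 0
```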
Humans, by contrast, are constantly constructing new models of the world, recognizing patterns on the basis of lived experience. In fact, everything you think will happen is a projection based in part on something that has already happened to you. PalmPilot creator Jeff Hawkins dubs this the “memory-prediction framework.”
“The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence,” Hawkins writes in On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, his seminal book on the automation of human thinking.
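A toy illustration of that idea (my sketch, assuming a crude sequence memory, not Hawkins’s actual hierarchical temporal memory): treat the “model of the world” as a record of which event followed each observed context, and treat prediction as recall.

```python
from collections import Counter, defaultdict

class SequenceMemory:
    """Toy memory-prediction sketch: predict the future by recalling the past."""

    def __init__(self, context_len=2):
        self.context_len = context_len
        self.memory = defaultdict(Counter)  # context -> counts of what came next

    def learn(self, events):
        """Store every (context, next event) pair from lived experience."""
        k = self.context_len
        for i in range(len(events) - k):
            self.memory[tuple(events[i:i + k])][events[i + k]] += 1

    def predict(self, recent):
        """Project forward whatever most often followed this context before."""
        seen = self.memory.get(tuple(recent[-self.context_len:]))
        return seen.most_common(1)[0][0] if seen else None

mem = SequenceMemory()
mem.learn(["truck arrives", "crates unloaded", "truck departs",
           "truck arrives", "crates unloaded", "truck departs"])
print(mem.predict(["truck arrives", "crates unloaded"]))  # -> "truck departs"
```

The limitation is exactly the one Rogers describes: the memory can only project forward patterns it has already lived through, and it returns nothing at all for a context it has never seen.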
Intelligence, then, is the ability to know what data means even at the moment it is being collected, so that the intelligent agent can change the way the data is used; it is knowing through anticipating. Within the human brain, that processing happens immediately and constantly thanks, in part, to the neocortex, the evolutionarily newer outer layer of the brain that emerged in our early mammalian ancestors. It’s the neocortex that allows humans to fill in holes in sensed data with material from memory in order to make an imagined representation of a future event, to complete a pattern, to predict. This is precisely the challenge that autonomy for intelligence collection represents.
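In miniature, that gap-filling might look like the following hypothetical sketch (the event names and stored patterns are made up for illustration): a hole in the sensed data is completed from whichever remembered pattern best matches the intact observations.

```python
MEMORY = [  # patterns remembered from past observation (invented examples)
    ("convoy forms", "convoy moves", "convoy disperses"),
    ("crowd gathers", "speech begins", "crowd disperses"),
]

def complete(observed):
    """Fill None gaps in `observed` from the best-matching stored pattern."""
    def matches(pattern):
        return sum(o == p for o, p in zip(observed, pattern) if o is not None)
    best = max(MEMORY, key=matches)
    return [o if o is not None else p for o, p in zip(observed, best)]

# A sensor gap (None) gets an imagined value drawn from memory:
print(complete(["convoy forms", None, "convoy disperses"]))
# -> ['convoy forms', 'convoy moves', 'convoy disperses']
```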
Without some sort of major technological breakthrough, full autonomy for intelligence collection (replacing human eyes on that drone feed, for example) will be impossible, says Rogers. Science is scoring scattered victories. “Tracking? We’re here and there, depending on the environment. That’s going to keep improving. But target recognition? I have job security. We’ve thrown billions of dollars at that and we don’t have it yet.”
Defense One: http://bit.ly/1GChRPf