Are We Really Safe From Self-Aware Robots?
End-of-mankind predictions about artificial intelligence, which have issued from some of today’s most impressive human intellects, including Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak and other notables, have generally sounded overly alarmist to me, showing a bit more fear of the unknown than I would expect from such eminences, especially the scientists. But that was before I saw reports on the self-aware robot.
The reports, including a recent piece in New Scientist (http://bit.ly/iCubNS), tell of a breakthrough in artificial intelligence: a robot was able to solve a puzzle that required it to recognize its own voice and to work out the implications of that recognition. (Shorthand version: Three robots were told that two of them had been silenced and that they needed to determine which one had not been. All three robots tried saying “I don’t know,” but only one could vocalize. Once that robot heard the sound of its own voice saying “I don’t know,” it changed its answer and declared that it was the one that had not been silenced.)
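The inference itself is simple enough to sketch in a few lines of code. Here is a toy Python illustration of the puzzle’s logic; the class and names are invented for this column, not the researchers’ actual software:

```python
# Toy sketch of the self-recognition puzzle (hypothetical code, not the
# actual experiment): three robots, two "silenced," each trying to
# answer the question "Which of us can still speak?"

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced
        self.answer = "I don't know"

    def try_to_speak(self, phrase):
        # A silenced robot produces no sound; a working one does.
        return None if self.silenced else phrase

    def hear(self, sound, phrase):
        # Hearing its own voice say the phrase is the new evidence
        # that lets the robot revise its earlier answer.
        if sound == phrase:
            self.answer = "Sorry, I know now. I was not silenced."

robots = [Robot("R1", silenced=True),
          Robot("R2", silenced=True),
          Robot("R3", silenced=False)]

for robot in robots:
    sound = robot.try_to_speak("I don't know")
    robot.hear(sound, "I don't know")
    print(robot.name + ": " + robot.answer)
# Only R3 hears itself speak, so only R3 updates its answer.
```

The logic is trivial once written down; what impressed the researchers is that the robot connected the sound it heard to its own act of speaking, without the linkage being spelled out for it.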
What’s noteworthy is that this same test had been given to these same robots many times before, and this was the first time that one of these self-learning robots figured it out.
The classic argument against a robot takeover of the world is that while computers can go haywire (think of the Windows operating system on almost any given day), so can humans. That’s undeniable, but society has established extensive checks and balances that limit how much damage any one person can do. The military has a chain of command, and killers on a shooting spree are eventually stopped, either by the police or by bystanders. Consider 9/11: although terrorists flying planes into buildings was unexpected, as soon as the nature of the attack became apparent, all US aircraft were grounded.
But our reliance on computers to assist us, and even to take control, keeps increasing, and today machine intelligence is integral to military weapons systems, nuclear power plants, traffic signals, wireless-equipped cars, aircraft and more. One of our greatest fears now is that terrorists will gain control of such key computer systems. But an even greater threat might be that the machines themselves gain the upper hand through artificial intelligence and wrest control from us.
It’s become something of a classic science-fiction storyline: the systems calculate that they need to take a different path than the one we humans have envisioned. Consider this passage from that New Scientist story: “The test also shines light on what it means for humans to be conscious. What robots can never have, which humans have, is phenomenological consciousness: ‘the first-hand experience of conscious thought,’ as Justin Hart of the University of British Columbia in Vancouver, Canada, puts it. It represents the subtle difference between actually experiencing a sunrise and merely having visual cortex neurons firing in a way that represents a sunrise. Without it, robots are mere ‘philosophical zombies,’ capable of emulating consciousness but never truly possessing it.”
I suppose that passage was intended to comfort human readers, suggesting that consciousness will always keep humans one big step ahead of computers. But another way to look at it is that these systems will eventually be able to think any thoughts humans can, but without our moral compass. So the machines, confronted by a starving population and an agricultural system that is maxed out, might conclude that a sharp population reduction is the solution, and that the nuclear power plants within their control offer a way to achieve it.
You can forget Isaac Asimov’s Three Laws of Robotics, the first of which forbids a robot to injure a human being. The United Nations has already attempted to set rules for battlefield robots that can decide on their own when it’s a good idea to kill people.
There is a subtle line that shouldn’t be crossed with artificial intelligence. Making Siri smarter so that she understands questions better and delivers more germane answers is welcome. But what about letting her decide to delete apps that are never used or add some that your history suggests you’d like? What if she sees from your calendar that you’re on a critical deadline this afternoon and decides to prevent you from launching distracting games when you should be working?
Engineers are not the best at setting limits. They are much better suited, both in temperament and in intellectual curiosity, to seeing how far they can push limits. That’s admirable, except when the results move from C-3PO to HAL 9000 to Star Trek: TNG’s Lore.
When superior engineering truly engineers something superior — superior to the engineers — can disasters imagined in science fiction become science fact?
Computerworld: http://bit.ly/1NrmcMm