Machine 2 Machine: Where will that leave us?
Over 1,000 high-profile artificial intelligence experts and leading researchers have recently signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.
The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.
The letter states: “AI technology has reached a point where the deployment of autonomous weapons is, practically if not legally, feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.
Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.
We are witnessing the growth of the Internet of Things (IoT), in which machines increasingly communicate directly with each other without a human intermediary. Will the machines come to prefer their own company? And where would that leave us?
Of course the very idea of machines “preferring” their own company is absurd in terms of today’s artificial intelligence (AI), but look at it this way: machine to machine (M2M) communications can take place at machine speeds so, before any human has had time to “get a word in edgeways”, a population of machines could in theory complete a conversation of sufficient importance and complexity to initiate global economic meltdown.
Last year Stephen Hawking and a group of leading scientists said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.” The sort of risk that caught the public imagination following this statement was of an emergent super-intelligence that would arise to dominate and reduce humankind to slave status.
If that sounds far-fetched, consider those early AI projects that showed how surprisingly intelligent behaviour can emerge from populations of very simple elements obeying simple rules of interaction. While one or two ants will tend to wander around aimlessly, once the number increases beyond a certain threshold the ant colony as a whole begins to behave in an adaptive way that suggests remarkable survival “intelligence”.
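The same point can be made with a toy simulation. The sketch below is purely illustrative (it is not a model of real ant behaviour, and all names and parameters are invented for the example): each agent follows one trivial rule, copying whatever most of a few randomly sampled peers are doing, yet the population reliably coordinates on a single choice that no individual agent was aiming for.

```python
import random

# Illustrative sketch only: simple agents, one simple rule each.
# Each round, every agent copies the majority choice among a handful
# of randomly sampled peers. No agent "knows" about consensus, yet
# the population as a whole converges on a single collective choice.

def run(n_agents=100, samples=5, rounds=200, seed=0):
    rng = random.Random(seed)
    # Start from a random mix of two choices, "A" and "B"
    pop = [rng.choice("AB") for _ in range(n_agents)]
    for _ in range(rounds):
        # Synchronous update: each agent adopts the majority of 5 peers
        pop = [max("AB", key=rng.sample(pop, samples).count)
               for _ in range(n_agents)]
        if len(set(pop)) == 1:  # full consensus reached
            break
    return pop

pop = run()
```

Run repeatedly with different seeds and the population almost always locks onto one choice within a few dozen rounds: coordinated, population-level behaviour emerging from rules that say nothing about coordination.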
The small possibility that some malevolent intelligence could emerge from the growing network of smart household gadgets makes great science fiction, but it obscures a far more immediate danger: that unexpected, unpredictable and “irrational” consequences might emerge from adding billions of relatively simple devices to our already complex Internet. The financial markets gave us a glimpse of what might happen in 2010, when high frequency trading (HFT) systems contributed to the “Flash Crash”. Each HFT system was following its own set of rules and they were inter-communicating via the markets at M2M speeds to form what was – in terms of future IoT scenarios – a relatively tiny “Intranet of Things”. But the results still came as a shock to the financial markets.
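The feedback loop behind such an event can be caricatured in a few lines. The following is a deliberately crude sketch (the rule, the parameters and the circuit-breaker threshold are all invented for illustration, not drawn from any real trading system): each bot's rule is individually sensible, but at machine speed the bots' sales become each other's sell signals.

```python
# Toy model, illustrative only: rule-following trading bots interacting
# at machine speed. Each bot sells when the price is falling; every sale
# pushes the price down further, so one small shock cascades until a
# circuit breaker halts trading -- a cartoon of the 2010 Flash Crash.

def simulate(n_bots=50, shock=-0.5, impact=0.02, ticks=100, halt_drop=0.10):
    start = price = 100.0
    last = price
    price += shock          # a small initial disturbance
    history = [price]
    for _ in range(ticks):
        falling = price < last
        last = price
        if falling:
            # every momentum bot sells; each sale moves the price down
            price -= n_bots * impact
        history.append(price)
        if price <= start * (1 - halt_drop):
            break           # circuit breaker: trading halted
    return history

h = simulate()
```

A half-point shock becomes a ten-percent collapse in a handful of ticks, with no bot doing anything other than following its own simple rule.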
Already there are many more devices than humans connected to the Internet. IDC estimates that the number of devices capable of being connected is approaching 200 billion, and that around 20 billion of them are already connected. So the danger is not so much the impact of any particular connection as the possibility of unpredicted responses or vulnerabilities emerging out of sheer complexity.
There is also another even more immediate danger. The very idea of intelligence suggests some ability to learn new behaviours: so what if the wrong people provide the teaching? The 2013 holiday season saw a smart, Internet-connected fridge sending out spam as part of a junk mail campaign that had hijacked more than 100,000 connected devices. The funny side was the idea that a smart fridge might turn criminal; the nasty truth was that a device created to perform a simple, useful task could be recruited into a criminal gang.
Those HFT systems were highly sophisticated; the intelligence in a smart fridge is very limited. Call such devices “naïve” and one can understand how easily they can be lured into a life of crime. Whereas each new computer added to the Internet comes with some degree of malware protection already built into its operating system, things like smoke detectors, security alarms and utility meters come from a different culture. Traditionally such devices were either autonomous units or, if connected at all, sat on a closed, dedicated network.
What is especially disturbing about the IoT is not just its vulnerability but also that so many of its components have a direct, physical function. It is very inconvenient when a computer virus causes your PC to crash and lose your latest documents, but at least no-one is physically hurt. But if an attack on the IoT were to prevent a fire alarm from being triggered, cause a life-sustaining medical system to fail, disrupt air traffic control, or the brakes to fail on a connected vehicle – then lives and property would be endangered as a direct result of the attack.
This escalates the possibilities for serious criminal activity and opens new doors to terrorists and cyber war between nations.
This was the sort of attack seen in 2010 when the Stuxnet worm closed down Iran’s Natanz nuclear facility: not by simply closing down a thousand centrifuges but by physically damaging them in a manner that would take weeks to repair.
The Internet of Things does indeed present a new challenge. But the networking industry has been gearing up to address this sort of challenge for more than three decades.
Guardian: http://bit.ly/1E6gFUq