Super Intelligent Machines Need An Off Switch
The development of computer systems that can evaluate visual data with greater accuracy than humans has been one of the most challenging areas of research in computer science for the last four decades.
Now, with recent advances in machine learning and artificial intelligence, global businesses from a wide range of industries are achieving unprecedented benefits from implementing the latest computer technology.
In a recent study, researchers from Germany’s Max Planck Institute for Human Development say they have shown that an artificial intelligence in the category known as “superintelligent” would be impossible for humans to control, even with the help of competing software. “We are fascinated by machines that can control cars, compose symphonies, or defeat people at chess, Go, or Jeopardy! While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI... Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide,” says the Institute.
A leading expert on AI, Prof. Stuart Russell of the University of California at Berkeley, suggests a way forward for human control over super-powerful Artificial Intelligence. He advocates abandoning the current “standard model” of AI and proposes instead a new model based on three principles - chief among them the idea that machines should know that they don’t know what humans’ true objectives are.
Russell makes reference to the pioneering computer scientist and cryptographer Alan Turing, who introduced many of the core ideas of what became the academic discipline of AI. Russell quotes Turing’s warning: “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” The thought was echoed by one of Turing’s wartime colleagues at Bletchley Park, the mathematician I. J. Good, who said “The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Russell suggests that machines designed according to the new model would be deferential to humans, cautious and minimally invasive in their behaviour and, most importantly, willing to be switched off.
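That last point can be made concrete with a toy calculation. The sketch below is illustrative only and is not taken from Russell’s work as reported here; the Gaussian belief and the numbers are invented for the example. It compares three options for a machine that is uncertain how much the human actually values its next action: act immediately, defer and leave the off switch usable, or shut itself down.

import random

random.seed(0)

# The machine's belief about the human's true utility U for its proposed
# action: it does not know U, so it holds a distribution over it
# (a standard normal here, purely for illustration).
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]
n = len(belief)

# Option 1: act immediately, bypassing the human -> expected value E[U].
act_now = sum(belief) / n

# Option 2: defer and keep the off switch usable; the human stops the action
# whenever U < 0 -> expected value E[max(U, 0)].
defer = sum(max(u, 0.0) for u in belief) / n

# Option 3: switch itself off -> value 0.
switch_off = 0.0

print(f"act immediately : {act_now:+.3f}")
print(f"defer to human  : {defer:+.3f}")
print(f"switch off      : {switch_off:+.3f}")

# Deferring dominates: it keeps the upside of acting while letting the human
# veto the downside, so the machine has no reason to disable the switch.

As long as the machine remains uncertain about the human’s objective, deferring is never worse than the alternatives; once it believes it knows the objective exactly, that incentive disappears, which is why uncertainty about human objectives is built into Russell’s model.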
AI researchers build machines, give them certain specific objectives and judge them to be more or less intelligent by their success in achieving those objectives. But, according to Russell, “when we start moving out of the lab and into the real world, we find that we are unable to specify these objectives completely and correctly. In fact, defining the other objectives of self-driving cars, such as how to balance speed, passenger safety, sheep safety, legality, comfort, politeness, has turned out to be extraordinarily difficult.”
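A hypothetical fragment like the one below shows why. It scores two invented driving plans under a naive weighted-sum objective; every plan name, criterion value and weight is made up for illustration. A small, apparently innocuous change to the weights flips which plan the car prefers, so whoever picks the weights is quietly making the value judgements Russell is talking about.

# Two candidate driving plans, scored on hand-picked criteria (all values invented).
plans = {
    "fast_but_pushy":  {"speed": 0.9, "safety": 0.6, "legality": 0.7, "comfort": 0.4},
    "slow_and_polite": {"speed": 0.4, "safety": 0.9, "legality": 1.0, "comfort": 0.9},
}

def score(plan, weights):
    # Naive "standard model" objective: a fixed weighted sum of the criteria.
    return sum(weights[k] * plan[k] for k in weights)

weightings = [
    {"speed": 0.50, "safety": 0.25, "legality": 0.15, "comfort": 0.10},  # designer's first guess
    {"speed": 0.30, "safety": 0.40, "legality": 0.20, "comfort": 0.10},  # a small revision
]

for weights in weightings:
    best = max(plans, key=lambda name: score(plans[name], weights))
    print(weights, "->", best)

# The first weighting prefers "fast_but_pushy", the second "slow_and_polite":
# the car's behaviour is an artefact of weights that nobody can claim are
# "completely and correctly" specified.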
That doesn’t seem to deter giant technology corporations, including IBM and Alphabet, from developing increasingly capable machines and installing them ubiquitously at critical points in human society.
This is the dystopian future that Russell fears if his discipline continues on its current path and succeeds in creating super-intelligent machines.
But for anyone who thinks that living in a world dominated by super-intelligent machines is still a distant prospect, we already live in such a world. The AIs in question are called corporations. They are definitely super-intelligent, in that the collective IQ of the humans they employ dwarfs that of ordinary people and of government institutions.
They have immense wealth and resources, their lifespans greatly exceed those of mere humans, and they exist to achieve one overriding objective - to increase and thereby maximise shareholder value. In order to achieve that, they will relentlessly do whatever it takes, regardless of ethical considerations or collateral damage to society, democracy or the planet.
As the renowned science fiction novelist William Gibson has observed, the future’s already here, it’s just not evenly distributed.
Max Planck Institute:   Prof. Stuart Russell:   TechRegister:   BBC / Reith Lectures:   Chooch AI:   Popular Mechanics:   Eminetra:   Guardian: