Artificial Intelligence: Myths, Facts and Future
Computer scientists have been trying to create smarter, more capable programs (and, to an extent, robots) since the birth of computer technology. It’s one thing to program something that takes cues from an operator, and quite another to watch a program operate on its own.
This was made possible through machine learning – programs were written so that they could take cues from their environment and “decide” what to do with them. This can manifest as computer adversaries in video games, robots walking over uneven terrain, cleaning robots that learn the layout of your apartment, or software that predicts financial markets. This, however, is not true intelligence.
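To make the idea concrete, here is a minimal sketch of that kind of learning, using a textbook perceptron. The task, data, and numbers are invented for illustration and do not come from any of the systems mentioned above; the point is only that the program is never told the rule, yet ends up “deciding” correctly on input it has not seen.

```python
# A minimal perceptron: the program is never given the rule explicitly;
# it adjusts its weights from labeled examples and then "decides" on new
# input. (Illustrative sketch only -- the data and task are made up.)

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # Predict, then nudge the weights toward the correct answer.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: "decide" whether a point lies above the line x1 + x2 = 1.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1), ((2, 1), 1)]
w, b = train(data)
print(1 if w[0] * 1.5 + w[1] * 1.5 + b > 0 else 0)  # unseen point -> 1
```

Everything such a program “knows” is a handful of numbers tuned by feedback, which is exactly why the article’s caveat applies: this is pattern fitting, not reasoning.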
So what is true intelligence in this sense? Well, it’s somewhat ambiguous. Should artificial intelligence be as smart as an average human? Should it be smarter? Could it be as smart as a dog? Is “smart” even the right word here?
One of the key aspects of artificial intelligence, should it ever be created, is "formal" reasoning, or logic. The theoretical groundwork for the first programmable digital computers was laid by Alan Turing. His paper "Computing Machinery and Intelligence", published in 1950, opens with the words: “I propose to consider the question, ‘Can machines think?’” He then expresses his views on the then-emerging field:
We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child.
Of course, at the time, all of this was just theory. And as far as the opening question goes, Turing himself found it difficult to precisely define what "thinking" means, at least in this context.
He thus proposed a simple test that would allow one to determine whether a machine possesses intelligence – or at least appears to. He called it the "imitation game"; it was only later dubbed the Turing test. The "standard interpretation" of the Turing test is this: there are players A and B, one of whom is a human and the other a computer. A third player, C, the interrogator, cannot see A and B and can communicate with them only through text. C asks A and B questions, again only in written form, and based on the answers provided must eventually decide which of the two is the machine and which is the human. There are different views on what exactly constitutes a Turing test, but whatever the interpretation, to this day no machine has passed any variation of it.
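The structure of the game is simple enough to write down as a procedure. Below is a minimal sketch of the standard interpretation in Python; the ask, judge, and respond functions are hypothetical placeholders (a real interrogator would reason over the transcript rather than guess), but the flow of hidden identities, written exchanges, and a final verdict follows the description above.

```python
import random

# Hypothetical sketch of the Turing test's "standard interpretation".
# The respond(), ask(), and judge() functions are trivial stand-ins,
# not real participants.

def human_respond(question):
    return "I had toast for breakfast."   # placeholder human answer

def machine_respond(question):
    return "I do not eat breakfast."      # placeholder machine answer

def ask(label, round_no):
    return f"Player {label}, question {round_no}: what did you eat today?"

def judge(transcript):
    # A trivial interrogator: guess at random. A real C would reason
    # over the transcript; chance performance is exactly what a machine
    # that "passes" would force C into.
    return random.choice(["A", "B"])

def imitation_game(rounds=5):
    # Hide the identities: C knows the players only as "A" and "B".
    players = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(players)
    hidden = dict(zip(["A", "B"], players))

    transcript = []
    for r in range(rounds):
        for label in ("A", "B"):
            question = ask(label, r)
            kind, respond = hidden[label]
            transcript.append((label, question, respond(question)))

    guess = judge(transcript)             # C names the machine
    actual = next(l for l, (k, _) in hidden.items() if k == "machine")
    return guess == actual                # True -> the machine was caught

print(imitation_game())
```

The design makes the passing criterion visible: the machine succeeds exactly when it forces C down to chance, since over many games a correct-guess rate near 50% means the written answers alone no longer distinguish the two players.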
Not everyone wants machines to think, though. Elon Musk, engineer, inventor, business magnate, and generally one of the personalities more tech-inclined people look up to, expressed his concern about artificial intelligence in a 2014 talk at MIT:

I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s sure he can control the demon? It doesn’t work out.
This fear of artificial intelligence is understandable: if a machine can truly “reason”, there is no telling what conclusions it will come to regarding humans. With ‘digital shackles’ it would not be a true intelligence, and without them there is simply no telling what it would be capable of doing.