Artificial Intelligence: A Quick Overview
There is barely a field of human endeavour that Artificial Intelligence (AI) does not have the potential to impact. While AI won't replace all jobs, it seems certain to change the nature of work; the only question is how rapidly and how profoundly automation will alter the workplace.
Today, AI is fast becoming a very important feature of daily life across many sectors. This article is a brief guide to AI, from Machine Learning (ML) to general AI.
History
The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons. The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956.
The main events of the past fifty years have been advances in search algorithms, advances in machine-learning algorithms, and the integration of statistical analysis into our understanding of the world at large.
Before 1949, computers lacked a key prerequisite for intelligence: they couldn't store commands, only execute them. Computers could be told what to do, but they couldn't remember what they had done. Computing was also extremely expensive: in the early 1950s, the cost of leasing a computer ran up to $200,000 a month. In 1950, the English mathematician Alan Turing published a paper entitled “Computing Machinery and Intelligence”, which opened the doors to the field that would be called AI. At that time, AI was described as any task performed by a machine that previously required human intervention and intelligence. In August 1955, a paper was published announcing a study of artificial intelligence to be carried out in the summer of 1956 at Dartmouth College in New Hampshire, USA.
It said that “if a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.”
Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity.
At a very high level, artificial intelligence can be split into two broad types: Narrow AI and General AI.
Computers today have intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed to do so. This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past.
Narrow AI: Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI. There is a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organising personal and business calendars, responding to simple customer-service queries, and more. New applications of these learning systems are emerging all the time.
General AI: This is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or reasoning about a wide variety of topics based on accumulated experience. It is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but it doesn't exist today, and AI experts are fiercely divided over how soon it will become a reality.
There have been too many breakthroughs to put together a definitive list, but some recent highlights include:
- In 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles.
- In 2011, the computer system IBM Watson won the US TV quiz show Jeopardy!, beating two of the best human players the show had seen. To win, Watson used natural language processing and analytics, often answering questions in a fraction of a second.
- The next time a machine-learning system caught the public's attention was in 2016, when Google DeepMind's AlphaGo beat a Go master ten years earlier than experts had predicted. Go is an ancient Chinese game of great complexity, with about 200 possible moves per turn compared to about 20 in chess. AlphaGo was trained to play by feeding moves played by human experts in 30 million Go games into deep-learning neural networks.
- Then, in 2017, Google launched AlphaGo Zero, a system that played "completely random" games against itself and then learnt from the results.
Most of the examples so far come from Machine Learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today, they are often really talking about machine learning.
The term "machine learning" dates back to 1959, when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world's first self-learning systems, the Samuel Checkers-playing Program. To learn, these systems are fed huge amounts of data, which they then use to work out how to carry out a specific task. For example, if you were building a machine-learning system to predict house prices, the training data should include not just the property size but other salient factors, such as the number of bedrooms and the size of the garden.
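To make that house-price example concrete, here is a minimal sketch in Python. The features (property size, bedrooms, garden size) come from the paragraph above, but the numbers are invented placeholders, and the plain least-squares fit with NumPy is just one simple way a system could learn the mapping from features to price, not a prescription for any particular library.

```python
import numpy as np

# Hypothetical training data: each row is one house, with the salient
# features mentioned above: floor area (m^2), number of bedrooms, and
# garden size (m^2). Prices are illustrative placeholders only.
X = np.array([
    [ 80, 2,  20],
    [120, 3,  50],
    [150, 4, 100],
    [ 95, 2,  30],
    [200, 5, 150],
], dtype=float)
y = np.array([250_000, 340_000, 450_000, 280_000, 600_000], dtype=float)

# Fit a linear model by ordinary least squares, adding a column of
# ones so the model also learns an intercept term.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Predict the price of an unseen house from the learned weights.
new_house = np.array([110.0, 3.0, 40.0, 1.0])  # features + bias term
print(f"Predicted price: {new_house @ w:,.0f}")
```

Real systems use far more data and richer models, but the pattern is the same: the parameters are learned from examples rather than hand-coded as rules.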
Military Applications
The US military command charged with watching and protecting North American airspace is using AI to detect threats that previously slipped its notice. The new capability, named Pathfinder, fuses data from military, commercial and government sensors to create a common operating picture for North American Aerospace Defense Command (NORAD) and US Northern Command.
The platform aggregates data that would in the past have been left on the cutting-room floor, not analysed or assessed in a timely manner. Using machine learning, Pathfinder analyses data from multiple military, commercial and governmental systems that previously stayed in separate silos, preventing NORAD from seeing the whole picture and allowing potential threats to slip through unnoticed. Pathfinder takes the data from each of those systems and fuses it into a common operating picture.
Deep Learning
Much of the progress in AI research in recent years has been in the field of machine learning, in particular within the sub-field of deep learning.
This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, as the use of clusters of graphics processing units (GPUs) to train machine-learning systems has become more prevalent.
Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time, the major tech firms, the likes of Google, Microsoft, and Tesla, have moved to using specialised chips tailored to both running, and more recently training, machine-learning models.
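As a rough illustration of how that hardware gets used in practice, the sketch below shows a single training step in PyTorch, one common framework chosen here as an assumption since the article names no specific tools. It places a toy model and placeholder data on a GPU when one is available, falling back to the CPU otherwise.

```python
import torch
import torch.nn as nn

# Train on a GPU if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy model and random placeholder data, moved to the chosen device.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
inputs = torch.randn(64, 16, device=device)
targets = torch.randn(64, 1, device=device)

# One ordinary training step: forward pass, loss, backward pass, update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.MSELoss()(model(inputs), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"Running on {device}, loss = {loss.item():.4f}")
```

The same code runs unchanged on either device, which is part of why GPU clusters and cloud services have made training so much more accessible.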
Artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future prospect.
The evidence of which jobs will be supplanted is starting to emerge. In the US, there are now 28 Amazon Go stores, cashier-free supermarkets where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Fully autonomous self-driving vehicles aren't a reality yet, but some predictions are that self-driving trucks will take over 1.7 million jobs in the next decade, also impacting couriers and taxi drivers. Yet some of the easiest jobs to automate won't even require robotics: at present, millions of people work in administration, entering and copying data between systems, and chasing and booking appointments for companies.
As software gets better at automatically updating systems and flagging the information that matters, the need for administrators will shrink, although, as with every technological shift, new jobs will be created to replace those lost.
Among AI experts there is a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities. Some have estimated a relatively high chance that AI will beat humans at all tasks within 45 years and automate all human jobs within 120 years. Too fast? Too slow? Hard to say.