What is Machine Learning?
Building systems that learn from data can be a better way to solve complex problems than writing explicit instructions by hand, given enough meaningful data to learn from.
You’ve probably encountered the term “machine learning” more than a few times lately. Often used interchangeably with artificial intelligence, machine learning is in fact a subset of AI, both of which can trace their roots to MIT in the late 1950s.
Machine learning is something you probably encounter every day, whether you know it or not. Voice assistants such as Siri and Alexa, the facial recognition used by Facebook and Microsoft, the recommendations served up by Amazon and Netflix, and the technology that keeps self-driving cars from crashing into things are all the result of advances in machine learning.
While still nowhere near as complex as a human brain, systems based on machine learning have achieved some impressive feats, like defeating human challengers at chess, Jeopardy, Go, and Texas Hold ‘em.
Dismissed for decades as overhyped and unrealistic (the infamous “AI winter”), both AI and machine learning have enjoyed a huge resurgence over the last few years, thanks to a number of technological breakthroughs, a massive explosion in cheap computing horsepower, and a bounty of data for machine learning models to chew on.
Self-taught software
So what is machine learning, exactly? Let’s start by noting what it is not: a conventional, hand-coded, human-programmed computing application.
Unlike traditional software, which is great at following instructions but terrible at improvising, machine learning systems essentially code themselves, developing their own instructions by generalizing from examples.
The classic example is image recognition. Show a machine learning system enough photos of dogs (labeled “dogs”), as well as pictures of cats, trees, babies, bananas, or any other object (labeled “not dogs”), and if the system is trained correctly it will eventually get good at identifying canines, without a human being ever telling it what a dog is supposed to look like.
The spam filter in your email program is a good example of machine learning in action. After being exposed to hundreds of millions of spam samples, as well as non-spam email, it has learned to identify the key characteristics of those nasty unwanted messages. It’s not perfect, but it’s usually pretty accurate.
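To make that concrete, here is a minimal sketch of how a spam filter might learn from labeled examples. It assumes the scikit-learn library; the messages and labels are invented purely for illustration, and a real filter would train on millions of examples rather than four.

```python
# A toy spam filter, sketched with scikit-learn (an assumption for illustration,
# not the actual implementation behind any real email service).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free cruise, click now",       # spam
    "Cheap meds, limited time offer",     # spam
    "Meeting moved to 3pm tomorrow",      # not spam
    "Here are the quarterly figures",     # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn raw text into word-count features, then fit a classifier on the labels.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(features, labels)

# The trained model generalizes to a message it has never seen.
new_message = vectorizer.transform(["Claim your free prize today"])
print(model.predict(new_message))  # expected output: [1], i.e. spam
```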
Supervised vs. unsupervised learning
This kind of machine learning is called supervised learning: someone exposes the algorithm to an enormous set of labeled training data, examines its output, and keeps tweaking its settings until it produces the expected results when shown data it has not seen before.
This is analogous to clicking the “not spam” button in your inbox when the filter traps a legitimate message by accident. The more you do that, the more the accuracy of the filter should improve.
The most common supervised learning tasks are classification (predicting which category something belongs to) and regression (predicting a continuous value). Spam detection and image recognition are both classification problems, while predicting stock prices is a classic example of a regression problem.
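The regression side can be sketched just as simply. The following toy example, again assuming scikit-learn and using made-up numbers, fits a line to a handful of observations and then predicts a value it was never shown.

```python
# A toy regression task: learn a continuous relationship from examples.
# The data is synthetic; real stock-price prediction is far harder than this.
from sklearn.linear_model import LinearRegression

X = [[1.0], [2.0], [3.0], [4.0]]   # input feature (e.g., a single numeric signal)
y = [2.1, 3.9, 6.2, 8.1]           # continuous target, roughly y = 2x

model = LinearRegression().fit(X, y)
print(model.predict([[5.0]]))  # expected output: a value close to 10
```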
A second kind of machine learning is called unsupervised learning. This is where the system pores over vast amounts of data to learn what “normal” data looks like, so it can detect anomalies and hidden patterns. Unsupervised machine learning is useful when you don’t really know what you’re looking for, so you can’t train the system to find it.
Unsupervised machine learning systems can identify patterns in vast amounts of data many times faster than humans can, which is why banks use them to flag fraudulent transactions, marketers deploy them to identify customers with similar attributes, and security software employs them to detect hostile activity on a network.
Clustering and association rule learning are two examples of unsupervised learning algorithms. Clustering is the secret sauce behind customer segmentation, for example, while association rule learning is used for recommendation engines.
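Here is a minimal clustering sketch, assuming scikit-learn; the "customer" numbers (annual spend and monthly visits) are invented. Note that no labels are supplied, yet the algorithm still separates the two groups on its own.

```python
# Toy customer segmentation with k-means clustering (unsupervised: no labels given).
from sklearn.cluster import KMeans

customers = [
    [200, 2], [220, 3], [250, 2],        # low spend, infrequent visits
    [1500, 12], [1600, 15], [1400, 11],  # high spend, frequent visits
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
segments = kmeans.fit_predict(customers)
print(segments)  # e.g., [0 0 0 1 1 1] -- two segments discovered from the data
```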
Limitations of machine learning
Because each machine learning system creates its own connections, how a particular one actually works can be a bit of a black box. You can’t always reverse engineer the process to discover why your system can distinguish between a Pekingese and a Persian. As long as it works, it doesn’t really matter.
But a machine learning system is only as good as the data it has been exposed to, the classic example of “garbage in, garbage out.” When poorly trained or exposed to an insufficient data set, a machine learning algorithm can produce results that are not only wrong but discriminatory.
HP got into trouble back in 2009 when facial recognition technology built into the webcam on an HP MediaSmart laptop was unable to detect the faces of African Americans. In June 2015, faulty algorithms in the Google Photos app mislabeled two black Americans as gorillas.
Another dramatic example: Microsoft’s ill-fated Taybot, a March 2016 experiment to see if an AI system could emulate human conversation by learning from tweets. In less than a day, malicious Twitter trolls had turned Tay into a hate-speech-spewing chat bot from hell. Talk about corrupted training data.
A machine learning lexicon
But machine learning is really just the tip of the AI iceberg. Other terms closely associated with machine learning are neural networks, deep learning, and cognitive computing.
Neural network. A computer architecture designed to mimic the structure of neurons in our brains, with each artificial neuron (microcircuit) connecting to other neurons inside the system.
Neural networks are arranged in layers, with neurons in one layer passing data to multiple neurons in the next layer, and so on, until eventually they reach the output layer. This final layer is where the neural network presents its best guesses as to, say, what that dog-shaped object was, along with a confidence score.
There are multiple types of neural networks for solving different types of problems. Networks with large numbers of layers are called “deep neural networks.” Neural nets are some of the most important tools used in machine learning scenarios, but not the only ones.
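To illustrate that layered structure, here is a bare-bones forward pass through a tiny network, using only NumPy. The weights are random placeholders rather than trained values, so the confidence scores it prints are purely illustrative.

```python
# Data flowing through the layers of a tiny neural network (untrained weights).
import numpy as np

def relu(x):
    return np.maximum(0, x)          # simple activation applied between layers

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()               # turns raw scores into confidences summing to 1

rng = np.random.default_rng(0)
x = rng.random(8)                    # 8 input features (e.g., pixel statistics)
w1 = rng.standard_normal((8, 4))     # weights into a 4-neuron hidden layer
w2 = rng.standard_normal((4, 2))     # weights into a 2-neuron output layer

hidden = relu(x @ w1)                # each hidden neuron combines all the inputs
output = softmax(hidden @ w2)        # output layer: ["dog", "not dog"] confidences
print(output)                        # e.g., [0.7 0.3] -- 70% confident it's a dog
```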
Deep learning. This is essentially machine learning on steroids, using multi-layered (deep) neural networks to arrive at decisions based on “imperfect” or incomplete information. The deep learning system DeepStack is what defeated 11 professional poker players in December 2016, by constantly re-computing its strategy after each round of bets.
Cognitive computing. This is the term favored by IBM, creators of Watson, the supercomputer that kicked humanity’s ass at Jeopardy in 2011. The difference between cognitive computing and artificial intelligence, in IBM’s view, is that instead of replacing human intelligence, cognitive computing is designed to augment it—enabling doctors to diagnose illnesses more accurately, financial managers to make smarter recommendations, lawyers to search case-law more quickly, and so on.
This, of course, is an extremely superficial overview. Those who want to dive more deeply into the intricacies of AI and machine learning can start with this semi-wonky tutorial from the University of Washington’s Pedro Domingos, or this series of Medium posts from Adam Geitgey, as well as “What deep learning really means” by InfoWorld’s Martin Heller.
Despite all the hype about AI, it’s not an overstatement to say that machine learning and the technologies associated with it are changing the world as we know it. Best to learn about it now, before the machines become fully self-aware.