From Machine Learning To Machine Reasoning
The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation.
However, the history and evolution of AI is also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technology roadblocks.
There seems to be an indelible pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realisation of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat appear to be as consistent as sea waves on the shore.
This pattern is vexing to technologists and investors because it doesn’t follow the usual technology adoption lifecycle. As Geoffrey Moore popularised in his book “Crossing the Chasm”, technology adoption usually follows a well-defined path.
A technology is developed and finds early interest among innovators and then early adopters. If it can make the leap across the “chasm”, it is adopted by the early majority market, and then it’s off to the races with demand from the late majority and finally the technology laggards.
If it can’t cross the chasm, it ends up in the dustbin of history. What makes AI distinct is that it doesn’t fit this adoption lifecycle at all.
AI isn’t a discrete technology. Rather, it’s a quest… a quest for the intelligent machine. This quest inspires academics and researchers to develop theories of how the brain and intelligence work, and concepts for mimicking those aspects with technology.
AI is a generator of technologies, which individually go through the technology lifecycle. Investors aren’t investing in AI, they’re investing in the output of AI research. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, then new technology implementations are spawned and the cycle of investment renews.
The Need for Understanding
It’s clear that intelligence is like an onion: it has many layers. Once we understand one layer, we find that it explains only a limited amount of what intelligence is about. We discover there’s another layer underneath, and back to our research institutions we go to figure out how it works. In our recent exploration of the intelligence of voice assistants, we’re teasing at one of those next layers: understanding.
That is, knowing what something is (recognising an image among a category of trained concepts, converting audio waveforms into words, identifying patterns in a collection of data, or even playing games at advanced levels) is different from actually understanding what those things are.
This lack of understanding is why we get hilarious results in our Voice Assistant Benchmark, and also why we can’t achieve truly autonomous machine capabilities in a wide range of situations. Without understanding, there’s no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can’t adapt to the constant changes of the real world.
While the description above conveniently skips over the understanding step, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this one will require new research breakthroughs, dramatic increases in compute capabilities, and volumes of data.
What? Don’t we have almost limitless data and boundless computing power? Not quite. Read on.
The Quest for Common Sense: Machine Reasoning
Early in the development of artificial intelligence, researchers realised that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how different things relate to each other.
In 1984, the world’s longest-lived AI project started. The Cyc project is focused on generating a comprehensive “ontology” and knowledge base of common sense, basic concepts and “rules of thumb” about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts are related to each other, and an inference engine that allows systems to reason about facts.
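To make that concrete, here is a minimal sketch in Python, rather than Cyc’s own CycL representation, of how facts might be stored as subject–relation–object triples and how a simple inference rule can derive connections that were never stated explicitly. The entities, relations, and rules are illustrative assumptions, not excerpts from the Cyc knowledge base.

```python
# A toy knowledge graph: facts stored as (subject, relation, object) triples.
# Illustrative only; Cyc uses its own language (CycL) and a far richer
# ontology and inference engine.

facts = {
    ("rain", "causes", "wet_ground"),
    ("wet_ground", "causes", "slippery_surface"),
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
}

def infer(facts):
    """Apply two simple 'rules of thumb' until no new facts appear:
       1. 'is_a' is transitive:  X is_a Y, Y is_a Z    =>  X is_a Z
       2. 'causes' chains:       X causes Y, Y causes Z =>  X causes Z
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (s1, r1, o1) in list(derived):
            for (s2, r2, o2) in list(derived):
                if r1 == r2 and r1 in ("is_a", "causes") and o1 == s2:
                    new_fact = (s1, r1, o2)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

all_facts = infer(facts)
print(("rain", "causes", "slippery_surface") in all_facts)  # True: inferred, never stated
print(("dog", "is_a", "animal") in all_facts)               # True: inferred, never stated
```

Even this toy version hints at the scale problem discussed below: every new entity multiplies the number of connections the inference engine has to consider.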
The main idea behind Cyc and other understanding-building knowledge encodings is the realisation that systems can’t be truly intelligent if they don’t understand what the things they are recognising or classifying actually are. This means we have to dig deeper than machine learning for intelligence.
We need to peel this onion one level deeper and scoop out another tasty parfait layer. We need more than machine learning; we need machine reasoning.
Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and opened up a world of possibilities by giving us the ability to train machines to identify and recognise patterns in data.
However, this power is crippled by the fact that these systems can’t functionally use that information for higher-level ends, or apply learning from one domain to another without human involvement. Even transfer learning is limited in application.
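To illustrate what transfer learning looks like in practice, here is a hedged sketch using the PyTorch and torchvision libraries (a common pattern, not a technique described by any project in this article): a network pre-trained on ImageNet is reused for a new, hypothetical 10-class task by retraining only its final layer. The learned features transfer; any understanding of what the new categories actually are does not.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet: a stand-in for "learning" in one domain.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor so its weights are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Swap in a new final layer for a hypothetical 10-class task; only this layer trains.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for a real dataset.
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images
labels = torch.randint(0, 10, (8,))    # their (made-up) class labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

The reused layers carry over low-level visual features, but nothing in this process tells the model what the ten new categories mean, which is exactly the kind of limitation described above.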
Indeed, we’re rapidly approaching the point where we hit the wall of what machine learning-focused AI can do. To get to the next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that’s going to require research breakthroughs we haven’t yet achieved.
Are we Still Limited by Data and Compute Power?
The fact that the Cyc project holds the distinction of being the longest-lived AI project is a bit of a back-handed compliment. Cyc is long-lived because, after all these decades, the quest for common sense knowledge is still proving elusive.
Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that lets a machine know what you’re talking about, you also need to encode all the inter-relationships between those entities.
There are millions, if not billions, of “things” a machine needs to know. Some of these things are tangible, like “rain”, while others are intangible, such as “thirst”. The work of encoding these relationships is being partially automated, but it still requires humans to verify the accuracy of the connections… because, after all, if machines could do this we would have solved the machine recognition challenge. It’s a bit of a chicken-and-egg problem.
You can’t solve machine recognition without some way to codify the relationships between pieces of information. But you can’t scalably codify all the relationships that machines would need to know without some form of automation.
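As a hedged sketch of what that partially automated pipeline might look like, the Python below imagines candidate relationships coming out of an automated extractor with a confidence score, with nothing entering the knowledge base until a human signs off. The extractor, threshold, and entities are hypothetical placeholders, not a description of how Cyc or any particular project actually works.

```python
from dataclasses import dataclass

@dataclass
class CandidateRelation:
    subject: str       # tangible or intangible entity, e.g. "rain" or "thirst"
    relation: str
    obj: str
    confidence: float  # score from a hypothetical automated extractor
    verified: bool = False

# Imagined output of an automated extraction pass over a pile of text.
candidates = [
    CandidateRelation("rain", "causes", "wet_ground", 0.97),
    CandidateRelation("thirst", "motivates", "drinking", 0.88),
    CandidateRelation("rain", "causes", "thirst", 0.41),  # spurious extraction
]

knowledge_base = []

def human_review(candidate: CandidateRelation) -> bool:
    """Stand-in for a person checking the connection. Here we pretend the
    reviewer rejects anything the extractor was not confident about."""
    return candidate.confidence >= 0.80

for candidate in candidates:
    candidate.verified = human_review(candidate)
    if candidate.verified:
        knowledge_base.append(candidate)

print(f"{len(knowledge_base)} of {len(candidates)} candidate relations accepted")
```

The human in the loop is the bottleneck: scale the number of entities into the millions and the verification step is exactly the part that refuses to automate away.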
Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened the compute load and made data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advances, complicated machine learning models with many dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more complex than machine learning. Reasoning out the complicated relationships between things, and truly understanding those things, might simply be beyond today’s compute and data resources.
Onward Progress
The current wave of interest and investment in AI doesn’t show any signs of slowing or stopping any time soon, but it’s inevitable it will slow at some point for one simple reason: we still don’t understand intelligence and how it works. Despite the amazing work of researchers and technologists, we’re still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness.
At some point we will be faced with the limitations of our assumptions and implementations and we’ll work to peel the onion one more layer and tackle the next step of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount on the quest for artificial intelligence.
If we can apply our research and investment talent to tackling this next layer, we can keep the momentum going with AI research and investment. If not, the pattern of AI will repeat itself, and the current wave will crest. It might not be now or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.