Cognitive Computing: What Can and Can’t Be Done.
Cognitive computing systems draw on vast stores of information to offer the guidance users need, while interacting naturally with the end user.
Next year will mark the 60th anniversary of the Dartmouth Artificial Intelligence (AI) Conference. That conference, which marked the birth of AI research, explored whether machines could simulate any aspect of human intelligence.
Since then, Google has developed a self-driving car, computers can transcribe what you say, and phones have become remarkably good at playing chess.
We’ve come a long way, but now, cognitive computing promises to take us a step further. Ever since IBM’s Watson computer won Jeopardy, researchers have been busy working on the idea that computers can solve the kinds of woolly, messy problems that humans deal with on a daily basis.
Professor Mark Bishop, Director of the Tungsten Centre for Intelligent Data Analytics at Goldsmiths, University of London, sees different definitions of cognitive computing.
The commercial one focuses on solving the ambiguous, uncertain problems that humans have always been good at, and that traditional computers couldn’t handle. Medical diagnosis, for example.
Bishop is an associate editor of the journal Cognitive Computation, which also harbours other definitions. In particular, “biologically-inspired computational accounts of all aspects of natural and artificial cognitive systems”. In short, computer simulations of brains. EPFL’s Blue Brain project uses an IBM Blue Gene supercomputer to do just that, while the EU-funded Human Brain Project is another example.
A no-brainer
These two approaches have different goals. One seeks to create a platform akin to a real human mind, possibly opening the door to exploring things such as consciousness and emotion. The other focuses on real-world tasks without needing a computerised version of a real brain to do them.
That mirrors the divergence in artificial intelligence theory itself. ‘Human-level’ AI was what some envisaged at the original Dartmouth meeting. But many have satisfied themselves with systems that mimic narrowly-defined functions, such as self-driving cars or chess computers.
Perhaps cognitive systems, as commercially defined, inch a little further along the spectrum. They still work in relatively narrowly-defined areas, but they can adapt and learn within those areas, and can handle more complex tasks that require context and complex interaction.
Cognitive systems may not think like people, or feel emotions, but they can discover vast amounts of data, draw decisions from it, and then engage people effectively.
Discovering data
Discovering things about the world around it is a key part of the process for a cognitive system.
“With cognitive computing there is an underlying knowledge model, specifically a semantic model, of the domain and associated cognitive processes, such as decision processes, that are relevant for that domain,” said Tony Sarris, founder of N2Semantics, a consulting firm that works in semantic technologies.
Cognitive systems are able to understand natural language questions and spit out understandable answers because of the taxonomies that they build up around specific knowledge domains.
A cognitive system for tax services wouldn’t be able to answer the same questions as one for medical researchers, for example, because it wouldn’t understand the necessary concepts and how they fit together.

Techniques designed to teach computers about different knowledge domains have been developing for years. “Originally, the grandparents of cognitive computing were manually constructed ontologies created in the late eighties and early nineties,” said Sarris.
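To make that idea concrete, here is a minimal sketch, in Python, of the kind of hand-built domain model Sarris describes: a few concepts joined by typed relations. The Ontology class, the medical facts and the relation names are all invented for illustration; real cognitive systems rely on far larger, machine-curated semantic models.

```python
# Illustrative only: a tiny, hand-built domain model in the spirit of the
# manually constructed ontologies described above. All facts are invented.
from collections import defaultdict

class Ontology:
    """Stores (subject, predicate) -> set of objects."""
    def __init__(self):
        self.relations = defaultdict(set)

    def add(self, subject, predicate, obj):
        self.relations[(subject, predicate)].add(obj)

    def query(self, subject, predicate):
        return self.relations.get((subject, predicate), set())

onto = Ontology()
onto.add("aspirin", "treats", "headache")
onto.add("aspirin", "is_a", "nsaid")
onto.add("nsaid", "contraindicated_with", "stomach ulcer")

# The typed relations let the system connect concepts a keyword search would miss:
print(onto.query("aspirin", "treats"))              # {'headache'}
print(onto.query("nsaid", "contraindicated_with"))  # {'stomach ulcer'}
```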
In the early 2000s, the semantic web movement tried to create open data models using the same concepts. But the real innovations come when machines can react to ontological data, testing relationships and learning from the results, in a form of machine learning.
“Cognitive systems, like their human counterparts, have a major focus on learning, including feedback loops,” said Sarris. “In the latter case, that's usually comparing the results of an action taken, or a decision made, to the desired outcome, and taking into account what worked or what didn't.”
This is why cognitive systems tend to get better as they go along. They create models of the world based on what they try, and what results they get back.
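As a rough illustration of the feedback loop Sarris describes, the toy sketch below keeps a score for a couple of hypothetical options and nudges each score up or down depending on whether the outcome matched the goal. The option names, starting beliefs and learning rate are all made up for the example.

```python
# Illustrative only: a toy feedback loop. Option names, starting beliefs
# and the learning rate are invented for the example.
scores = {"option_a": 0.5, "option_b": 0.5}
LEARNING_RATE = 0.1

def update(action, outcome_matched_goal):
    """Compare the result of an action to the desired outcome and adjust its score."""
    delta = LEARNING_RATE if outcome_matched_goal else -LEARNING_RATE
    scores[action] = min(1.0, max(0.0, scores[action] + delta))

# Each interaction feeds back into the model, so recommendations improve over time.
update("option_a", outcome_matched_goal=True)
update("option_b", outcome_matched_goal=False)
print(scores)  # option_a drifts up, option_b drifts down
```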
Human beings are very good at basic evidence-based learning too, of course, which is why children learn early on not to do basic things that might hurt them. But applying that to complex business situations is difficult. The challenge lies in the sheer volume of information that specialists must consume.
Cognitive systems can help here, by processing far more evidence than a single human being ever could, munching their way through terabytes of structured and unstructured data alike, and putting it in context.
This idea of context is particularly important in cognitive computing, and was one of the key characteristics outlined in a joint definition of the topic drawn up by a working group that included experts from IBM, Microsoft, Oracle, HP, Google and Cognitive Scale.
Context goes far beyond simply relating concepts together in semantic ontologies. It includes a variety of data points ranging from physical location, time, or current task, through to what a user is doing and where they are doing it. Their role – who they are – is another data point that might feed into a cognitive system’s decisions.
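The sort of context record such a system might weigh alongside its knowledge model could be sketched like this; the field names and values are purely illustrative, not taken from any real cognitive computing product.

```python
# Hypothetical sketch of a context record; field names and values are
# illustrative, not taken from any real cognitive computing product.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class QueryContext:
    user_role: str       # who is asking, e.g. an oncologist or a claims adjuster
    location: str        # where they are asking from
    current_task: str    # what they are in the middle of doing
    timestamp: datetime  # when the question was asked

ctx = QueryContext(
    user_role="oncologist",
    location="ward 3",
    current_task="reviewing patient chart",
    timestamp=datetime.now(),
)
# The same question may deserve a different answer for a nurse on a night shift
# than for a researcher at a desk; the context record is what tells them apart.
print(ctx)
```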
Digital decisions
Decisions are a crucial component of the cognitive computing process. Cognitive systems take existing bodies of evidence, which could be anything from actuarial data to patient trials depending on the industry, and use them to make the best decisions in response to questions posed by users.
Currently, cognitive systems still advise people rather than prescribing a final option, according to Big Blue. They may present a variety of options to users, who can then pick from the results. That’s an important point, because when dealing with human-like, complex problems, there may be no ‘right’ answer: there may only be an optimal one.
Confidence scoring and traceability are important factors here. A cognitive system can usually present users with a value representing its confidence in a decision. If the human user needs to understand how that decision was reached, a cognitive computing system may be able to present it with a trail of ‘reasoning’.
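A hedged sketch of how such ranked, traceable answers might be structured follows; the Recommendation class, confidence values and evidence strings are invented for illustration rather than drawn from any real system.

```python
# Invented for illustration: each candidate answer carries the system's own
# confidence estimate plus a trail of the evidence that led to it.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    answer: str
    confidence: float                 # 0.0 - 1.0, the system's own estimate
    evidence_trail: list = field(default_factory=list)

options = [
    Recommendation("Treatment A", 0.82,
                   ["matches 3 similar patient trials", "no contraindications found"]),
    Recommendation("Treatment B", 0.61,
                   ["matches 1 similar trial", "interaction flagged with current medication"]),
]

# The system advises rather than prescribes: it presents ranked options and
# lets the human follow the reasoning before choosing.
for rec in sorted(options, key=lambda r: r.confidence, reverse=True):
    print(f"{rec.answer}  (confidence {rec.confidence:.0%})")
    for step in rec.evidence_trail:
        print(f"  - {step}")
```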
Presentation is everything in cognitive computing. The point is to create systems that people can interact with easily when dealing with complex tasks. Today, people with questions plough through whichever automated system they have available to them, but have to work hard to interpret the results. Type “Will my employer increase my matched pension contribution if I participate in the group health insurance scheme?” and you might find yourself struggling to interpret dozens of different search results, none of which really answers your question.
Computer says 'no'
Natural language processing is an important thread that runs through the entire cognitive computing story, explains James Haight, analyst at Blue Hill Research, a boutique research firm with a focus on emerging technology.
“There has been an amazing acceleration of natural language processing. You can take content and understand what it means, which is the major breakthrough,” said Haight. This applies to content discovery, but also applies to user interaction, he adds, “whether you’re speaking to it or typing to it".
This is why a cognitive system that has pre-read all of the relevant documents may be able to listen to your question and then give you a concrete, understandable answer.
These interactions should also span machines rather than just people, say experts. Computers may talk to each other, and to cloud-based services, to complete jobs that humans may ask them to do.
That can be particularly useful in fulfilling another requirement of cognitive computing: that machines be iterative and stateful. A cognitive system should remember interactions with a user, using the history as a basis for future queries.
Today, we see this in simple personal assistant systems.
“Who is the President of the US?” you may ask Google Now.
“Barack Obama is the President of the United States of America”, comes the reply.
“How old is he?” you continue. “He is 54 years old,” the computer replies. It knows who you asked it about before, and fills in the blanks.
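That statefulness can be sketched with a toy example; the hard-coded fact and naive string matching below merely stand in for real natural language understanding, and are not how Google Now actually works.

```python
# A toy stand-in for real natural language understanding: the session keeps
# the last entity mentioned, so a follow-up pronoun can be resolved against it.
knowledge = {"president of the us": {"name": "Barack Obama", "age": 54}}

class Session:
    def __init__(self):
        self.last_entity = None  # conversational state carried between turns

    def ask(self, question):
        q = question.lower()
        if "president of the us" in q:
            self.last_entity = knowledge["president of the us"]
            return f"{self.last_entity['name']} is the President of the United States."
        if "how old is he" in q and self.last_entity:
            return f"He is {self.last_entity['age']} years old."
        return "I don't know."

s = Session()
print(s.ask("Who is the President of the US?"))
print(s.ask("How old is he?"))  # 'he' resolved from the previous turn
```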
Tomorrow, cognitive computers may extend those iterative interactions into far more complex conversations.
Where can it be used?
Personal assistants such as Google Now and others are one area where these cognitive systems can be easily applied, because the heavy lifting is done via back-end, cloud-based services.
“Where we will see quick adoption is on the low-end incremental improvements in the consumer space or frontline business productivity stuff,” said Haight, adding that he’s looking forward to having an Office-integrated Cortana schedule appointments and handle other tasks automatically behind the scenes.
He also sees opportunities in large, high-end projects where the payoffs could be huge, such as in a hospital, where you could tie patient outcomes to quantifiable savings, for example.
Sarris looks for applications where the threshold for accuracy is relatively low and the opportunity for benefit is relatively high. “Those may be applications where you can be 80 per cent correct and still produce results that are valuable in the sense of saving humans time or effort, or augmenting their skills,” he said.
“In general, cognitive computing systems can be deployed in any area that historically involves the deployment of sophisticated, context sensitive, human reasoning,” said Bishop. “This potentially opens up lots of new jobs to computational automation.”
The automation part might be a hot button for many, though, warns Haight. He sees resistance in mid-range projects for just such reasons. “In the middle ground, there is huge resistance,” he said. “People are afraid of it.”
It’s easy to see the fears. Creepy, soft-spoken HAL-like AI bots coming to steal our jobs? No, thank you very much. But then, the same dialogues have sprung up around most information science developments in history, from robots to PCs.
At least one recent study has questioned that rhetoric, arguing that technology has created more jobs than it destroyed in the last 140 years.
Making it work
Assuming we can overcome those fears, there remain considerable challenges around deployment. You don’t just buy one of these things and plug it in. IBM points to a four-step journey towards cognitive computing. It starts, IBM has said, with charting the course: identifying potential use cases in your organization.
Experimentation is the next step, in which prototypes are tested against those use cases with real users. This part of the process may itself be incremental, argues Sarris. It’ll call for the same cycle of action, discovery, and learning that cognitive systems themselves try to master.
“What we need to put in place is a culture that encourages commercial use and is tolerant of more of a startup-like MVP (Minimum Viable Product) approach,” said Sarris. “These technologies often need some burn-in and iteration, including feedback and refinement by human users, and in some cases by the application itself.”
When a viable use case has been tested, the system must be developed. That involves feeding it data – lots of it, ideally, and perhaps not just from your own systems. The data will have to be massaged by professionals well versed in such things, and with a high level of domain-specific knowledge. As IBM said, they must be trained, rather than programmed.
Finally, you get to deploy the thing, starting with the developed solution as a baseline, and then embarking on a continuous improvement cycle, in which the machine itself learns, and the data feeds become more sophisticated.
Let’s not get ahead of ourselves, though. It’ll take a considerable investment for organizations to get on board with cognitive computing, and they should also manage their expectations.
Bishop believes there are three things humans do which are simply incomputable. The first is true creativity.
The second is understanding. Computers will never truly understand concepts, he argues, basing this on John Searle’s Chinese Room Argument; instead, they display a kind of computational quasi-understanding.
Finally, he doesn’t believe that computers will ever be truly conscious. These three things together form what he calls the humanity gap. Unless a computer can achieve all three, it will always be behind the curve compared with us. “It seems to me that in these areas at least there will always be spaces where humanity can do more than mere computational systems,” he said.
They might at least do a good job of diagnosing your sciatica in the future, though. And their handwriting might be a bit better than your doctor’s, too.