Google Wants To Mimic The Human Brain
Researchers from Google and the University of Toronto have quietly released an academic paper titled “One Model to Learn Them All.” In it, Google proposes a template for building a single machine learning model that can address multiple tasks.
Google calls this MultiModel. The model was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition and object detection. Google found that the machine incrementally learned to do the tasks better with each iteration; machine translation, for example, improved with each pass.
More significantly, Google’s MultiModel improved its accuracy with less training data. That matters because you may not always have all the data needed to train the computer. One of the problems with deep/machine learning is that you have to prime the pump, so to speak, with a ton of information before learning can begin. Here, the model learned with less.
The challenge, the researchers note, is to create a single, unified deep learning model that can solve tasks across multiple domains, because right now each task requires significant data preparation before learning can begin.
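To make the idea concrete, here is a minimal sketch of the shared-trunk, multi-head pattern that underlies this kind of multi-task model: one shared body whose weights are updated by every task, plus a small head per task. This is not Google’s actual MultiModel architecture; the task names, layer sizes and use of PyTorch are all illustrative assumptions.

```python
# Minimal sketch of "one model, many tasks": a shared trunk with
# lightweight task-specific heads. Illustrative only, not MultiModel.
import torch
import torch.nn as nn

class SharedMultiTaskModel(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=256, task_output_dims=None):
        super().__init__()
        # One shared "body" trained by every task, so data from one
        # task can improve performance on the others.
        self.shared = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # A small head per task (e.g. translation, parsing, ...).
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in (task_output_dims or {}).items()
        })

    def forward(self, x, task):
        return self.heads[task](self.shared(x))

model = SharedMultiTaskModel(task_output_dims={"translation": 32, "parsing": 16})
x = torch.randn(4, 128)               # a dummy batch of 4 examples
print(model(x, "translation").shape)  # torch.Size([4, 32])
print(model(x, "parsing").shape)      # torch.Size([4, 16])
```

Because the trunk is shared, gradient updates from data-rich tasks also shape the representation used by data-poor ones, which is one intuition for why such a model can get by with less training data per task.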
IBM and US Air Force Super-Computer Research
IBM and the US Air Force Research Lab have announced plans to build a supercomputer based on IBM’s TrueNorth neuromorphic architecture. Neuromorphic architectures are very-large-scale integration (VLSI) systems containing electronic analog circuits designed to mimic the neurological architectures of the nervous system. The chips mix analog and digital circuitry, so they do more than the usual binary on/off switching of digital processors, again to mimic the complexity of biological cells.
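For a sense of what those circuits compute, here is a textbook leaky integrate-and-fire neuron, the kind of simple spiking model that neuromorphic chips implement in silicon. It illustrates the principle only and is not IBM’s TrueNorth circuit; the threshold, leak and input values are arbitrary.

```python
# A leaky integrate-and-fire (LIF) neuron: it accumulates input,
# leaks charge over time, and fires a spike when a threshold is
# crossed. Textbook illustration, not the TrueNorth circuit.
def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Yield 1 on the steps where the neuron fires, else 0."""
    v = 0.0                      # membrane potential
    for i in input_currents:
        v = leak * v + i         # integrate input, with leak
        if v >= threshold:       # crossing threshold -> spike
            yield 1
            v = 0.0              # reset after firing
        else:
            yield 0

spikes = list(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))
print(spikes)  # [0, 0, 1, 0, 0, 1] -- fires only when enough input accumulates
```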
IBM’s TrueNorth chips first came out in 2014 after several years of research in the DARPA SyNAPSE program. Interest has picked up because people are realising that x86 processors and even FPGAs just are not up to the task of mimicking human cells.
Quite a few organisations are working on neuromorphic designs, including Stanford, the University of Manchester, Intel, Qualcomm, Fujitsu, NEC and IBM.
The new super-computer will consist of 64 million neurons and 16 billion synapses, while using just 10W of wall power (less than a lightbulb). The system will fit in a 4U slot of a standard server rack, and eight such systems would put 512 million neurons in a single rack. A single processor in the system consists of 5.4 billion transistors organised into 4,096 neural cores, creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses.
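Those figures are internally consistent: at 1 million neurons per chip, a 64-million-neuron system implies 64 processors, and 64 chips at 256 million synapses each give roughly the quoted 16 billion synapses. A quick back-of-the-envelope check (the chip count of 64 is inferred from the published per-chip and system totals, not stated outright):

```python
# Sanity-checking the published figures against the per-chip specs.
chips = 64_000_000 // 1_000_000      # 64 chips implied by the neuron counts
neurons = chips * 1_000_000          # 64,000,000  -> "64 million neurons"
synapses = chips * 256_000_000       # 16,384,000,000 -> "~16 billion synapses"
print(f"{chips} chips: {neurons:,} neurons, {synapses:,} synapses")
```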
So, what will they do with it? Well, the Air Force operates military systems that must recognise and categorise data from multiple sources (images, video, audio and text) in real time. Some of those systems are ground-based, but others are installed in aircraft. So, the Air Force would like deep neural learning both on the ground and in the air.
The real advance in these neural processors is that they stay off until they are actually needed, which is how the IBM chip achieves its ridiculously low power draw. That would be welcome in the super-computing world, where those monstrosities use power in the megawatts.
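A small sketch of why that event-driven style saves power: computation happens only for neurons that actually receive a spike, instead of clocking every neuron on every cycle. This is purely illustrative and not TrueNorth’s actual routing scheme; the data structures and function are invented for the example.

```python
# Event-driven update: only neurons touched by a pending spike do any
# work; everything else stays idle. Illustrative, not TrueNorth's scheme.
from collections import defaultdict, deque

def event_driven_step(spike_queue, synapses, potentials, threshold=1.0):
    """Process only the neurons reached by pending spikes."""
    touched = set()
    while spike_queue:
        src = spike_queue.popleft()
        for dst, weight in synapses[src]:
            potentials[dst] += weight   # only these neurons do work
            touched.add(dst)
    # Fire any touched neuron that crossed threshold; the rest stay idle.
    fired = [n for n in touched if potentials[n] >= threshold]
    for n in fired:
        potentials[n] = 0.0             # reset fired neurons
    return fired

synapses = defaultdict(list, {0: [(1, 0.6), (2, 1.2)]})
potentials = defaultdict(float)
print(event_driven_step(deque([0]), synapses, potentials))  # [2] fires; neuron 1 just accumulates
```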
You Might Also Read:
Five Things AI Can Do Better Than Humans:
The Cusp Of Merging Human With Machine:
Machines Versus Human Brains – Who Wins?: