Neuromorphic Computing Changes Machine Learning
Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements. Neuromorphic computing is sometimes referred to as neuromorphic engineering.
Unlike some other future computing technologies, numerous physical realisations of neuromorphic hardware are already under development or even available to the research community. A number of research institutions have spent the past few years searching for new concepts of how computers can process data more effectively in the future.
One of these concepts is neuromorphic computing. Unlike traditional Artificial Intelligence (AI) algorithms, which must be trained on large amounts of data before they become effective, neuromorphic computing systems can learn and adapt as they go.
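As a rough illustration of the difference, the short Python sketch below adapts a single linear "neuron" one observation at a time rather than being pre-trained on a fixed batch of data. It is a generic toy, not code for any particular neuromorphic chip or framework, and the data stream, weights and learning rate are all invented for the example.

```python
# A rough, generic illustration of "learning as it goes": a single linear
# "neuron" adapts after every observation instead of being pre-trained on a
# large batch. All names and values are invented for this example; this is
# not code for any particular neuromorphic chip or framework.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)          # the neuron's adjustable connection strengths
learning_rate = 0.05

def adapt(weights, x, target):
    """Delta-rule style local update from a single observation."""
    error = target - weights @ x
    weights += learning_rate * error * x   # in-place online update

true_w = np.array([0.5, -1.0, 2.0])        # hidden relationship to be learned
for _ in range(2000):                      # data arrives one sample at a time
    x = rng.normal(size=3)
    adapt(weights, x, true_w @ x)

print(np.round(weights, 2))                # approaches [0.5, -1.0, 2.0] with no batch training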
With the machine learning world growing so quickly, German researchers have devised an efficient training method for neuromorphic computers. Florian Marquardt, a scientist at the Max Planck Institute for the Science of Light in Erlangen, Germany, explains: “We have developed the concept of a self-learning physical machine. The core idea is to carry out the training in the form of a physical process, in which the parameters of the machine are optimised by the process itself.”
The model will require external feedback to improve, as in training conventional artificial neural networks, but the self-learning physical machine the researchers propose makes the training much more efficient and saves energy.
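To make the idea concrete, the toy simulation below treats a training loss as a potential-energy landscape and lets a single parameter "relax" downhill under simulated overdamped dynamics, so the optimisation is carried out by the time evolution itself rather than by a separate digital optimiser. This is only a numerical caricature of the general concept, not the researchers' actual scheme, and every quantity in it is made up for illustration.

```python
# Toy illustration only: a parameter is tuned by a system's own time
# evolution rather than by an external digital optimiser. The squared error
# acts as a potential energy V(w), and the parameter relaxes downhill.
import numpy as np

# Invented toy task: fit y = w*x to a few data points.
xs = np.array([0.5, 1.0, 1.5, 2.0])
ys = 3.0 * xs                        # hidden target slope: 3.0

def potential_gradient(w):
    """dV/dw for V(w) = 0.5 * sum((w*x - y)^2); the 'force' is -dV/dw."""
    return np.sum((w * xs - ys) * xs)

w = 0.0                              # initial parameter of the "machine"
dt = 0.05                            # time step of the simulated dynamics
for _ in range(200):
    w += dt * (-potential_gradient(w))   # overdamped motion: dw/dt = -dV/dw

print(round(w, 3))                   # relaxes to ~3.0, the minimum of the loss
```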
“Our method works regardless of which physical process takes place in the self-learning machine, and we do not even need to know the exact process,” explains Marquardt. “However, the process must fulfill a few conditions. Most importantly, it must be reversible, meaning it must be able to run forwards or backwards with a minimum of energy loss.”
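The reversibility condition can be illustrated numerically. The sketch below, again just an assumed toy unrelated to any specific hardware, integrates a simple harmonic oscillator with a time-reversible leapfrog scheme, runs it forwards, flips the momentum and runs it back, and recovers the initial state almost exactly.

```python
# Toy illustration of "reversible dynamics": a harmonic oscillator integrated
# with a time-reversible (leapfrog) scheme can be run forwards and then
# backwards, returning to its starting state with essentially no loss of
# information. This is not the researchers' training scheme.
def leapfrog(q, p, steps, dt=0.01):
    """Time-reversible integration of H = p^2/2 + q^2/2 (force = -q)."""
    for _ in range(steps):
        p -= 0.5 * dt * q      # half kick
        q += dt * p            # drift
        p -= 0.5 * dt * q      # half kick
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, steps=1000)      # run the process forwards
q2, p2 = leapfrog(q1, -p1, steps=1000)     # flip the momentum and run it back

# The initial state is recovered almost exactly: the dynamics are reversible.
print(q2, -p2)   # ~ (1.0, 0.0), matching (q0, p0) up to rounding error
```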
Computer hardware today is based on the von Neumann architecture, in which processing and memory are kept separate, so data must constantly be shuttled between the two; a neuromorphic architecture instead co-locates computation and memory, much as neurons and synapses do. The researchers state in their study that the von Neumann architecture currently used in electronic devices is highly inefficient for most machine-learning applications.
“We hope to be able to present the first self-learning physical machine in three years... We are therefore confident that self-learning physical machines have a strong chance of being used in the further development of AI,” said Marquardt.