Neuromorphic computing has received less fanfare of late than quantum computing, whose mystery has captured public attention and which seems to have attracted more effort (academic, government, and commercial) but whose payoff also seems more distant. Intel’s introduction this week of Pohoiki Beach – an 8-million-neuron neuromorphic system built from 64 Loihi research chips – brings some needed attention back to neuromorphic technology.
The new system will be available to Intel’s roughly 60 neuromorphic ecosystem partners and represents a significant scaling up of its development platform, with more to come: Intel reportedly plans to introduce a 768-chip, 100-million-neuron system (Pohoiki Springs) near the end of 2019.
“Researchers can now efficiently scale up novel neural-inspired algorithms – such as sparse coding, simultaneous localization and mapping (SLAM), and path planning – that can learn and adapt based on data inputs. Pohoiki Beach represents a major milestone in Intel’s neuromorphic research, laying the foundation for Intel Labs to scale the architecture to 100 million neurons later this year,” according to the official announcement.
Neuromorphic or ‘brain-inspired’ computing seeks to mimic the spiking neural network processing approach used by the human brain, as well as the brain’s remarkable power efficiency: emulating what the brain does with about 20 watts requires an exascale system powered by about 30 megawatts. Implementation approaches for neuromorphic computing vary but broadly divide into those using conventional digital circuits (e.g. SpiNNaker) and those trying to actually ‘create’ analog neurons in silicon (e.g. BrainScaleS).
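The building block of the spiking approach is a neuron that integrates incoming current, leaks charge over time, and fires a discrete spike when it crosses a threshold – computing only when events occur, which is where the power savings come from. A minimal sketch of one such leaky integrate-and-fire (LIF) neuron, with illustrative parameter values not drawn from Loihi or any particular chip, might look like this:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step.
    Returns the list of time steps at which the neuron spiked.
    All parameters are illustrative, not hardware-accurate.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in * dt   # leak old charge, integrate new input
        if v >= threshold:         # threshold crossed: emit a spike...
            spikes.append(t)
            v = 0.0                # ...and reset the membrane potential
    return spikes

# A steady sub-threshold input produces a regular spike train.
spikes = simulate_lif([0.3] * 20)
print(spikes)  # → [3, 7, 11, 15, 19]
```

Note that nothing happens between spikes except a cheap local update – a hint at why event-driven silicon implementations of networks of such neurons can be orders of magnitude more power-efficient than dense matrix arithmetic on a CPU or GPU.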
Many observers suggest neuromorphic technology is most likely to be used as adjunct technology for particular workloads. Addison Snell, CEO, Intersect360, noted, “The way high-performance computers are made is changing. Current systems already use heterogeneous processing elements to address a widening array of workloads, including analytics and machine learning. Intel’s advancements with neuromorphic computing open the doors to new possibilities over the horizon.”
Introduced in 2017, Intel’s Loihi neuromorphic chip includes digital circuits that mimic the brain’s basic mechanics.
Here’s a description from Wikichip: “Loihi uses an asynchronous spiking neural network (SNN) to implement adaptive self-modifying event-driven fine-grained parallel computations used to implement learning and inference with high efficiency. The chip is a 128-neuromorphic cores many-core IC fabricated on Intel’s 14 nm process and features a unique programmable microcode learning engine for on-chip SNN training. The chip was formally presented at the 2018 Neuro Inspired Computational Elements (NICE) workshop in Oregon. The chip is named after the Loihi volcano as a play-on-words – Loihi is an emerging Hawaiian submarine volcano that is set to surface one day.”[i]
Intel says Loihi enables users to process information up to 1,000 times faster and 10,000 times more efficiently than CPUs for specialized applications like sparse coding, graph search and constraint-satisfaction problems. In conjunction with announcing the new system, Intel called attention to the ongoing Telluride Neuromorphic Cognition Engineering Workshop where researchers are using Loihi systems – “[P]rojects include providing adaptation capabilities to the AMPRO prosthetic leg, object tracking using emerging event-based cameras, automating a foosball table with neuromorphic sensing and control, learning to control a linear inverted pendulum, and inferring tactile input to the electronic skin of an iCub robot,” according to Intel.
In addition to the work coming out of Telluride, other research partners are already seeing the benefits of Loihi at scale, Intel reported:
- “With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware,” said Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo. “Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.”
- “Loihi allowed us to realize a spiking neural network that imitates the brain’s underlying neural representations and behavior. The SLAM solution emerged as a property of the network’s structure. We benchmarked the Loihi-run network and found it to be equally accurate while consuming 100 times less energy than a widely used CPU-run SLAM method for mobile robots,” professor Konstantinos Michmizos of Rutgers University said while describing his lab’s work on SLAM, to be presented at the International Conference on Intelligent Robots and Systems (IROS) in November.
Intel says scaling from a single Loihi chip to 64 of them was more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning. The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step,” said Mike Davies, director of neuromorphic research at Intel, who is quoted in an IEEE Spectrum report on the new system.
Link to Intel release: https://newsroom.intel.com/news/intels-pohoiki-beach-64-chip-neuromorphic-system-delivers-breakthrough-results-research-tests/#gs.pqlidf
Link to IEEE Spectrum report: https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/intels-neuromorphic-system-hits-8-million-neurons-100-million-coming-by-2020
[i]https://en.wikichip.org/wiki/intel/loihi