Beyond von Neumann, Neuromorphic Computing Steadily Advances

By John Russell

March 21, 2016

Neuromorphic computing – brain-inspired computing – has long been a tantalizing goal. The human brain does with roughly 20 watts what supercomputers do with megawatts. And power consumption isn’t the only difference. Fundamentally, brains ‘think’ differently than von Neumann architecture-based computers. While progress in neuromorphic computing has been intriguing, it has not yet proven very practical.

This week neuromorphic computing takes another step forward with a workshop offered to users from academia, industry and education interested in two European neuromorphic systems that have been years in development and are coming online for broader use: the BrainScaleS system launching at the Kirchhoff Institute for Physics of Heidelberg University, and SpiNNaker, a complementary and similarly sized system at the University of Manchester.

Ramping up BrainScaleS and SpiNNaker is an important milestone, strengthening Europe’s position in hardware development for alternative computing. Both projects are part of the European Human Brain Project, originally funded by the European Commission’s Future and Emerging Technologies (FET) program (2005-2015). The webcast, which will be streamed live on Tuesday, will cover the architectures of both systems and approaches to application development.

BrainScaleS and SpiNNaker take different tacks to modeling neuron activity. One approach is to use traditional analog circuits, like the chips being developed by the BrainScaleS project; analog circuits can be fast and energy efficient. Conversely, SpiNNaker’s architecture closely links a very large number of digital cores (also fast and, in this case, also energy efficient).

BrainScaleS post-processed wafer containing about 20 million plastic synapses.

BrainScaleS’s neuromorphic hardware is based on wafer-scale analog very large scale integration (VLSI). Each 20-cm-diameter silicon wafer contains 384 chips, each of which implements 128,000 synapses and up to 512 spiking neurons[i]. This gives a total of around 200,000 neurons and 49 million synapses per wafer. These VLSI models operate considerably faster than their biological originals, allowing the emulated neural networks to evolve tens of thousands of times quicker than real time. Put another way, a biological day of learning can be compressed into roughly 100 seconds on the machine.
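A quick back-of-the-envelope check reproduces those per-wafer totals from the per-chip figures; the short Python sketch below uses only the numbers quoted above:

```python
# Back-of-the-envelope check of the BrainScaleS per-wafer figures.
chips_per_wafer = 384
neurons_per_chip = 512        # up to 512 spiking neurons per chip
synapses_per_chip = 128_000   # 128,000 synapses per chip

neurons_per_wafer = chips_per_wafer * neurons_per_chip
synapses_per_wafer = chips_per_wafer * synapses_per_chip

print(f"neurons per wafer:  {neurons_per_wafer:,}")   # 196,608  (~200,000)
print(f"synapses per wafer: {synapses_per_wafer:,}")  # 49,152,000 (~49 million)
```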

Leader of the BrainScaleS project, Prof. Dr. Karlheinz Meier (Heidelberg University) explains, “The BrainScaleS system goes beyond the paradigms of a Turing machine and the von Neumann architecture. It is neither executing a sequence of instructions nor is it constructed as a system of physically separated computing and memory units. It is rather a direct, silicon based image of the neuronal networks found in nature, realizing cells, connections and inter-cell communications by means of modern analogue and digital microelectronics.”

Learning – not external programming – is a key guiding principle. Unlike traditional computer architectures, in which a structured program explicitly carries out an ordered sequence of tasks, brains are fundamentally learning machines that turn patterns into programs.

Steve Furber, a professor at the University of Manchester and a co-designer of the ARM chip architecture, leads the SpiNNaker team. SpiNNaker is a contrived acronym derived from Spiking Neural Network Architecture. The machine consists of 57,600 identical 18-core processors, giving it 1,036,800 ARM968 cores in total. The die is fabricated by United Microelectronics Corporation (UMC) on a 130 nm CMOS process. Each System-in-Package (SiP) node has an on-board router to form links with its neighbors, as well as 128 Mbyte off-die SDRAM to hold synaptic weights.

SpiNNaker die.
SpiNNaker, too, is built to mimic the brain’s biological structure and behavior. It will exhibit massive parallelism and resilience to the failure of individual components. With more than one million cores, and one thousand simulated neurons per core, SpiNNaker should be capable of simulating one billion neurons in real time. This equates to a little over one percent of the human brain’s estimated 85 billion neurons.
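The same kind of sanity check works for SpiNNaker’s scale claims; the sketch below uses only the figures quoted above plus the 85-billion-neuron estimate:

```python
# Rough arithmetic behind the SpiNNaker scale claims.
processors = 57_600           # identical 18-core SiP nodes
cores_per_processor = 18
neurons_per_core = 1_000      # simulated neurons per core (quoted target)
human_brain_neurons = 85e9    # estimated neurons in the human brain

total_cores = processors * cores_per_processor    # 1,036,800
total_neurons = total_cores * neurons_per_core    # ~1.04 billion
fraction = total_neurons / human_brain_neurons

print(f"cores:   {total_cores:,}")
print(f"neurons: {total_neurons:,}")
print(f"share of human brain: {fraction:.1%}")    # ~1.2%, "a little over one percent"
```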

Rather than implement one particular algorithm, SpiNNaker will be a platform on which different algorithms can be tested. Various types of neural networks can be designed and run on the machine, thus simulating different kinds of neurons and connectivity patterns.
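In practice, networks for the Human Brain Project’s neuromorphic systems are typically described through the simulator-independent PyNN API, so the same model description can be moved between backends. The following is a minimal, illustrative sketch only; the `pyNN.spiNNaker` backend module, population sizes, and cell parameters are assumptions about a typical software stack, not details from the article:

```python
# Minimal, illustrative PyNN network; backend module and all parameters
# are assumptions, not prescribed by the article.
import pyNN.spiNNaker as sim   # swap in e.g. pyNN.nest for a software run

sim.setup(timestep=1.0)        # simulation timestep in milliseconds

# 100 leaky integrate-and-fire neurons driven by a Poisson spike source
stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
neurons = sim.Population(100, sim.IF_curr_exp(), label="excitatory")

sim.Projection(stimulus, neurons,
               sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)                # one second of biological time

spikes = neurons.get_data("spikes")
sim.end()
```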

Both BrainScaleS and SpiNNaker architectures will be discussed during the Web-based workshop on March 22, scheduled from 3 pm to 6 pm CET. Together, the systems located in Heidelberg and Manchester comprise the “Neuromorphic Computing Platform” of the Human Brain Project.

Much of the early work on both machines will be basic research on self-organization in neural networks. Other potential applications lie, for example, in energy- and time-efficient optimization, broadly similar to the deep learning technology developed by companies like Google and Facebook for analyzing large data volumes on conventional high performance computers.

IBM’s Dharmendra Modha

Europe, of course, is hardly alone in pursuing neuromorphic computing. Most prominent in the U.S. is IBM Research’s TrueNorth chip effort. Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing, wrote an interesting commentary on the TrueNorth project, Introducing a Brain-inspired Computer, that traces the development of von Neumann architecture-based computing and contrasts it with neuromorphic approaches. Though written in 2014, it remains relevant.

The TrueNorth chip, introduced in August 2014, is a neuromorphic CMOS chip consisting of 4,096 hardware cores, each one simulating 256 programmable silicon “neurons,” for a total of just over a million neurons. Each neuron has 256 programmable “synapses” that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). In terms of basic building blocks, its transistor count is 5.4 billion.
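Those building-block numbers are internally consistent, as a short check shows:

```python
# TrueNorth building-block arithmetic as quoted above.
cores = 4_096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # 1,048,576 -> "just over a million"
synapses = neurons * synapses_per_neuron  # 268,435,456 == 2**28

print(f"neurons:  {neurons:,}")
print(f"synapses: {synapses:,} (2**28 = {2**28:,})")
```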

Developed under the DARPA SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project, TrueNorth’s computing power has been characterized as roughly equivalent to the brainpower of a rodent. It also circumvents the von Neumann bottleneck and is very energy efficient, consuming merely 70 milliwatts while capable of 46 billion synaptic operations per second per watt – literally a synaptic supercomputer in your palm.

BrainScaleS, SpiNNaker, and TrueNorth are just three examples of many ongoing neuromorphic computing projects. Turning them into commercial products or more general purpose computing machines remains a challenge.

Indeed, IBM put together a paper on cognitive computing commercialization and its barriers[ii], which calls for “new thinking, not only on the part of programmers and application developers, but also by organizational decision makers who seek to link technological possibilities to market opportunity. While incremental innovation can be achieved on the basis of existing knowledge in well-charted commercial territory, radical innovation entails far greater uncertainty.”

Among the barriers cited were formulating business models and predicting future revenue to calibrate investment, defining strategy and structure to execute, and, finally, overcoming communicative and functional boundaries.

Much of the drive to push neuromorphic computing stems from the ongoing decline of Moore’s law, and this excerpt from a 2014 ACM article[iii] still sums up circumstances today:

As the long-predicted end of Moore’s Law seems ever more imminent, researchers around the globe are seriously evaluating a profoundly different approach to large-scale computing inspired by biological principles. In the traditional von Neumann architecture, a powerful logic core (or several in parallel) operates sequentially on data fetched from memory. In contrast, “neuromorphic” computing distributes both computation and memory among an enormous number of relatively primitive “neurons,” each communicating with hundreds or thousands of other neurons through “synapses.” Ongoing projects are exploring this architecture at a vastly larger scale than ever before, rivaling mammalian nervous systems, and developing programming environments that take advantage of them. Still, the detailed implementation, such as the use of analog circuits, differs between the projects, and it may be several years before their relative merits can be assessed.

Researchers have long recognized the extraordinary energy stinginess of biological computing, most clearly in a visionary 1990 paper by the California Institute of Technology (Caltech)’s Carver Mead that established the term “neuromorphic.” Yet industry’s steady success in scaling traditional technology kept the pressure off.

[i] “Spiking neural networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal. In the context of spiking neural networks, the current activation level (modeled as some differential equation) is normally considered to be the neuron’s state, with incoming spikes pushing this value higher, and then either firing or decaying over time. Various coding methods exist for interpreting the outgoing spike train as a real-value number, either relying on the frequency of spikes, or the timing between spikes, to encode information.” From https://en.wikipedia.org/wiki/Spiking_neural_network.
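To make the footnote’s description concrete – incoming spikes push the membrane potential up, the potential decays over time, and the neuron fires and resets on crossing a threshold – here is a minimal leaky integrate-and-fire sketch; the time constant, threshold, weight, and input spike times are arbitrary illustrative values:

```python
# Minimal leaky integrate-and-fire neuron; all constants are illustrative.
import math

tau_m = 20.0       # membrane time constant (ms)
v_thresh = 1.0     # firing threshold (arbitrary units)
v_reset = 0.0      # potential after a spike
dt = 1.0           # time step (ms)
weight = 0.3       # potential increment per incoming spike

incoming = {5, 8, 11, 13, 40, 42, 44, 46}   # time steps with an input spike

v = 0.0
for t in range(60):
    v *= math.exp(-dt / tau_m)   # passive decay toward rest
    if t in incoming:
        v += weight              # each spike pushes the potential higher
    if v >= v_thresh:
        print(f"t={t} ms: spike!")
        v = v_reset              # reset after firing
```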

[ii] For more on applications, see the IBM paper, “Cognitive Computing Commercialization: Boundary Objects for Communication,” presented at the 3rd International Conference on Integration of Design, Engineering & Management for Innovation (IDEMI’13), Porto, Portugal, September 4-6, 2013, https://dl.dropboxusercontent.com/u/91714474/Papers/023.IDEMI’13_boundary%20objects_3.4.pdf

[iii] Communications of the ACM, “Neuromorphic Computing Gets Ready for the (Really) Big Time,” June 2014, http://cacm.acm.org/magazines/2014/6/175183-neuromorphic-computing-gets-ready-for-the-really-big-time/abstract
