July 07, 2011
Under the category of "Grand Challenge" applications, perhaps none is grander than simulation of the human brain. Reflecting the complexity and scale of the brain with current computer technology is truly a daunting task. But a group of researchers and computer scientists at a number of UK universities are attempting to do just that under a project named SpiNNaker.
SpiNNaker, which stands for Spiking Neural Network architecture, aims to map the brain's functions for the purpose of helping neuroscientists, psychologists and doctors understand brain injuries, diseases and other neurological conditions. The project is being run out of a group at the University of Manchester, which designed the system architecture, and is being funded by a £5m grant from the Engineering and Physical Sciences Research Council (EPSRC). Other elements of the SpiNNaker system are being developed at the universities of Southampton, Cambridge and Sheffield.
For the casual observer, constructing a facsimile of the most complex organ in the human body from digital technology may seem like a natural fit for computers. The view of the brain as a biological processor (and the processor as a digital brain) is well entrenched in popular culture. But the designs are fundamentally different.
Operationally, computers are precise, extremely fast and deterministic; brains are imprecise, slow, and non-deterministic. And, of course, the underlying architectures are completely different. Computers rely on digital electronics, while the brain employs a complex mix of biomolecular structures and processes.
The SpiNNaker design meets the architecture of the brain halfway by going for lots of simple, low-power computing units, in this case, ARM968 processors. The initial Manchester-designed SpiNNaker multi-processor is a custom SoC with 18 of these processors integrated on-chip. (The original spec called for 20 processors per chip.) The multi-processor also incorporates a local bus, called Network-on-Chip or NoC, which links up the individual processors and off-chip memory. Each SpiNNaker node is reported to draw less than one watt of power, while delivering the computational throughput of a typical PC.
The design is purpose-built to simulate the action of spiking neurons. Spiking in this context means when neurons are stimulated above a certain threshold level to generate an event that can be propagated across a neural net. But instead of using neurotransmitters to do this, the computer is just passing data packets around.
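The basic spiking behavior is simple to sketch in code. The following is an illustrative leaky integrate-and-fire model, not SpiNNaker's actual implementation (the real neuron models run as software on the ARM cores, and all names and parameters here are invented):

```python
# Minimal leaky integrate-and-fire neuron sketch (illustrative only).
# On SpiNNaker, crossing the threshold would emit a small data packet
# routed to connected neurons instead of a neurotransmitter release.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Accumulate input current each timestep; when the membrane
    potential crosses the threshold, record a spike event and reset."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(t)                     # propagate the event
            potential = 0.0                      # reset after firing
    return spikes

# A steady sub-threshold input drives periodic spiking:
print(simulate_neuron([0.4] * 10))  # -> [2, 5, 8]
```

The point of the sketch is only that a spike is a discrete event triggered by a threshold, which maps naturally onto packet-switched hardware.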
To be truly useful, the spiking needs to happen in real-time. Fortunately, this is where computer technology shines. Electrical communication is actually more efficient than the biochemical version, so nothing exotic needs to be done in the hardware to make all this magical neural spiking a virtual reality.
And that may happen soon. The design phase of the project is coming to a close and the SpiNNaker team is starting to gather the pieces together. According to a news release this week, SpiNNaker chips were delivered in June (from Taiwan -- presumably TSMC), and have passed their functionality tests. The plan is to build a 50,000-node machine with up to one million ARM processors.
While that seems like a lot, researchers estimate that it will only be enough to represent about one percent of the real deal. A human brain contains around 100 billion neurons linked by on the order of 1,000 trillion connections, and a single ARM processor in the SpiNNaker chip can only handle 1,000 neurons. The good news is that one percent may be enough to answer a lot of questions about the functional operation of the brain.
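The one percent figure falls straight out of the numbers cited above, as a quick back-of-the-envelope check shows:

```python
# Scale check using the figures given in the article.
neurons_per_core = 1_000            # neurons one ARM core can handle
cores = 1_000_000                   # "up to one million ARM processors"
brain_neurons = 100_000_000_000     # ~100 billion neurons in a human brain

simulated = neurons_per_core * cores
print(simulated)                    # 1,000,000,000 -- one billion neurons
print(simulated / brain_neurons)    # 0.01 -- about one percent
```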
Even at one percent, the scale of the machine is probably the trickiest part of the project. With so many processors in the mix, there are bound to be individual failures at fairly regular intervals. To deal with the inevitable, the designers made SpiNNaker fault tolerant at multiple levels. For example, each of the ARM processors can be disabled if they fail at start-up and a chip can remain functional even if "several processors fail." If an entire chip goes south, data can be rerouted to neighboring chips thanks to redundant inter-chip links.
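The rerouting idea can be sketched abstractly. This is a hypothetical illustration only; SpiNNaker's real router handles this in hardware, and the function and link names below are invented:

```python
# Hypothetical sketch of link-level fault tolerance: if the preferred
# inter-chip link is down, fall back to any working redundant link so
# traffic reaches the destination via a neighboring chip.

def route_packet(links, preferred):
    """links maps a link direction to True (up) or False (failed).
    Try the preferred link first, then any working alternative."""
    if links.get(preferred):
        return preferred
    for direction, up in links.items():
        if up:
            return direction      # reroute around the failed link
    raise RuntimeError("no working links")

links = {"north": False, "east": True, "south": True}
print(route_packet(links, "north"))  # north is down, so a fallback is used
```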
The other challenge to scaling out is power, but here is where the ARM architecture pays dividends. The initial system of 50,000 nodes is estimated to draw just 23 kW to 36 kW of power. By supercomputing standards, that's just a pittance. Of course, judged against the 20 watt version in our heads, SpiNNaker has a ways to go.
The power profile suggests that if there are no inherent scaling limitations in the hardware or software, the design could conceivably be used to build a machine that would support a "complete" human brain simulation for just a few megawatts. With improved process technology, that could easily slip into the sub-megawatt level.
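The "few megawatts" figure follows from scaling the one-percent machine linearly by a factor of 100, assuming no scaling overheads:

```python
# Rough power extrapolation from the article's figures (illustrative;
# assumes perfectly linear scaling from the 1% machine).
power_low_w, power_high_w = 23_000, 36_000   # watts for 50,000 nodes (1%)

full_low_mw = power_low_w * 100 / 1e6        # scale 100x, convert to MW
full_high_mw = power_high_w * 100 / 1e6
print(full_low_mw, full_high_mw)             # 2.3 to 3.6 MW
```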
For all that, SpiNNaker isn't designed to simulate higher level cognitive features -- the most interesting function of the brain. Inevitably that will require more complex hardware and software. So even if someone builds a super-sized SpiNNaker, it won't come close to the functionality of the 100 percent organic version anytime soon.
Posted by Michael Feldman - July 07, 2011 @ 7:40 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.