November 18, 2009
Scientists perform cat-scale cortical simulations and map the human brain in effort to build advanced chip technology
PORTLAND, Ore., Nov. 18 -- Today at SC 09, the supercomputing conference, IBM (NYSE:IBM) announced significant progress toward creating a computer system that simulates and emulates the brain's abilities for sensation, perception, action, interaction and cognition, while rivaling the brain's low power and energy consumption and compact size.
The cognitive computing team, led by IBM Research, has achieved significant advances in large-scale cortical simulation and a new algorithm that synthesizes neurological data -- two major milestones that indicate the feasibility of building a cognitive computing chip.
Scientists at IBM Research - Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.
Additionally, in collaboration with researchers from Stanford University, IBM scientists have developed an algorithm that exploits the Blue Gene supercomputing architecture in order to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.
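The end product of such a mapping effort is, in essence, a weighted connectivity matrix over brain regions. As a purely illustrative sketch (the region names and connection strengths below are made up, not derived from the actual algorithm or any imaging data), the "wiring diagram" can be represented like this:

```python
import numpy as np

# Hypothetical miniature of a brain "wiring diagram": the real algorithm
# infers connection strengths between cortical and sub-cortical locations
# from diffusion-weighted MRI; here we fabricate a tiny symmetric matrix
# over four illustrative region labels just to show the data structure.
regions = ["V1", "MT", "LGN", "PFC"]      # illustrative region labels
C = np.array([                            # C[i, j]: connection strength i<->j
    [0.0, 0.8, 0.9, 0.1],
    [0.8, 0.0, 0.3, 0.4],
    [0.9, 0.3, 0.0, 0.2],
    [0.1, 0.4, 0.2, 0.0],
])

# A simple question such a matrix can answer: which region is the most
# strongly connected hub in this toy diagram?
hub = regions[int(C.sum(axis=1).argmax())]
print("strongest hub in this toy diagram:", hub)
```

With real data, the same structure scales to thousands of locations, and graph analyses over it are what "untangling the communication network" means in practice.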
These advancements will provide a unique workbench for exploring the computational dynamics of the brain, and stand to move the team closer to its goal of building a compact, low-power synaptronic chip using nanotechnology and advances in phase change memory and magnetic tunnel junctions. The team's work stands to break the mold of conventional von Neumann computing, in order to meet the system requirements of the instrumented and interconnected world of tomorrow.
As the amount of digital data that we create continues to grow massively and the world becomes more instrumented and interconnected, there is a need for new kinds of computing systems -- imbued with a new intelligence that can spot hard-to-find patterns in vastly varied kinds of data, both digital and sensory; analyze and integrate information real-time in a context-dependent way; and deal with the ambiguity found in complex, real-world environments.
Businesses will simultaneously need to monitor, prioritize, adapt and make rapid decisions based on ever-growing streams of critical data and information. A cognitive computer could quickly and accurately put together the disparate pieces of this complex puzzle, while taking into account context and previous experience, to help business decision makers come to a logical response.
"Learning from the brain is an attractive way to overcome power and density challenges faced in computing today," said Josephine Cheng, IBM Fellow and lab director of IBM Research - Almaden. "As the digital and physical worlds continue to merge and computing becomes more embedded in the fabric of our daily lives, it's imperative that we create a more intelligent computing system that can help us make sense of the vast amount of information that's increasingly available to us, much the way our brains can quickly interpret and act on complex tasks."
To perform the first near real-time cortical simulation of the brain that exceeds the scale of the cat cortex, the team built a cortical simulator that incorporates a number of innovations in computation, memory, and communication as well as sophisticated biological details from neurophysiology and neuroanatomy. This scientific tool, akin to a linear accelerator or an electron microscope, is a critical instrument used to test hypotheses of brain structure, dynamics and function. The simulation was performed using the cortical simulator on Lawrence Livermore National Lab's Dawn Blue Gene/P supercomputer with 147,456 CPUs and 144 terabytes of main memory.
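The core loop of any spiking-network simulator integrates membrane potentials, detects threshold crossings, and propagates spikes through the synaptic weight matrix. The sketch below is a toy leaky integrate-and-fire network, vastly scaled down (100 neurons versus the simulator's 1 billion) and with arbitrary parameters of our own choosing; it is not IBM's simulator, only an illustration of the kind of computation such an instrument performs each time step:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) network -- an illustrative,
# vastly scaled-down stand-in for a cortical simulator's inner loop.
rng = np.random.default_rng(0)

N = 100                        # neurons (the real run: ~1 billion)
W = rng.random((N, N)) * 0.1   # synaptic weights (real run: ~10 trillion synapses)
v = np.zeros(N)                # membrane potentials
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
dt = 1.0                       # 1 ms time step

for step in range(200):
    ext = rng.random(N) * 0.15               # random external drive
    spikes = v >= v_thresh                   # neurons at threshold fire
    v[spikes] = v_reset                      # fired neurons reset
    syn_input = W @ spikes                   # spikes propagate through synapses
    v += dt / tau * (-v) + syn_input + ext   # leaky integration toward rest

print("neurons spiking on final step:", int(spikes.sum()))
```

Even at this toy scale the cost structure is visible: the synaptic propagation (the matrix-vector product) dominates, which is why memory and communication innovations matter at a billion-neuron scale.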
The algorithm, when combined with the cortical simulator, allows scientists to experiment with various mathematical hypotheses of brain function and structure, and of how structure affects function, as they work toward discovering the brain's core computational micro and macro circuits.
After the successful completion of Phase 0, IBM and its university partners were recently awarded $16.1M in additional funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 1 of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative. This phase of research will focus on the components, brain-like architecture and simulations to build a prototype chip. The long-term mission of IBM's cognitive computing initiative is to discover and demonstrate the algorithms of the brain and deliver low-power, compact cognitive computers that approach mammalian-scale intelligence and use significantly less energy than today's computing systems. The world-class team includes researchers from several of IBM's worldwide research labs and scientists from Stanford University, University of Wisconsin-Madison, Cornell University, Columbia University Medical Center and University of California-Merced.
"The goal of the SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains," said DARPA program manager Todd Hylton, Ph.D.
Modern computing is based on a stored program model, which has traditionally been implemented in digital, synchronous, serial, centralized, fast, hardwired, general-purpose circuits with explicit memory addressing that indiscriminately over-write data and impose a dichotomy between computation and data. In stark contrast, cognitive computing -- like the brain -- will use replicated computational units, neurons and synapses that are implemented in mixed-mode analog-digital, asynchronous, parallel, distributed, slow, reconfigurable, specialized and fault-tolerant biological substrates with implicit memory addressing that only update state when information changes, blurring the boundary between computation and data.
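One property from the contrast above, updating state only when information changes rather than overwriting it on every cycle, can be illustrated with a minimal sketch (the function names and input stream are our own, purely for illustration):

```python
# Contrasting the two update disciplines described above: a clocked
# (von Neumann-style) loop overwrites state on every tick, while an
# event-driven update touches state only when an input actually changes.

def clocked_updates(inputs):
    """Write state on every tick, whether or not anything changed."""
    state, writes = None, 0
    for x in inputs:
        state = x
        writes += 1
    return state, writes

def event_driven_updates(inputs):
    """Write state only when the input differs from the stored value."""
    state, writes = None, 0
    for x in inputs:
        if x != state:
            state = x
            writes += 1
    return state, writes

stream = [0, 0, 0, 1, 1, 0, 0, 0]
print(clocked_updates(stream))       # (0, 8) -- eight writes
print(event_driven_updates(stream))  # (0, 3) -- writes only on change
```

Sparse, change-driven activity is also how biological neurons keep power consumption low: most synapses are silent most of the time, so most state is simply left alone.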
For more information about IBM Research, visit www.ibm.com/research.
Technical insight and more details on the SyNAPSE project and recent milestones can also be found on the Cognitive Computing blog at http://modha.org/.