August 18, 2011
The company that brought Jeopardy cyber-champ Watson to the world has come up with some even brainier technology. IBM has unveiled prototypes of microprocessors that are being dubbed "cognitive computing chips." According to IBM, the hardware is built from neural circuits and is designed to mimic the brain's ability to perceive sensory input, understand it, and take action based on that understanding.
To emulate human-level thinking, Watson used clever software on conventional hardware, which could be described as a sort of brute force approach. In contrast, these new chips are designed to behave fundamentally like our own brains, able to process sensory input in a massively parallel fashion, form correlations, learn from experience, and adapt their processing dynamically.
Although the work can be categorized as artificial intelligence, it's actually more general than traditional AI, which tends to focus on individual capabilities, like pattern recognition. In this latest effort, IBM is attempting to integrate all aspects of thinking, including perception and action, as well as cognition. The goal is to build a truly intelligent machine that is able to perform human-level analytics in real time, and with the power and size efficiencies of a biological brain.
The work is being done with funding from the Defense Advanced Research Projects Agency (DARPA) and in conjunction with a number of US universities. The project, which the agency has titled Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, kicked off in 2008, when DARPA anted up $4.9 million to bootstrap the work. In the second phase of the project, the agency chipped in another $16.1 million to IBM and its university partners. Now DARPA is prepared to kick in an additional $21 million for the third phase.
The defense agency's interest in such technology is understandable, given the DoD's increasing reliance on drone aircraft and other types of unmanned vehicles, not to mention its need to analyze the tremendous amounts of intelligence data. In short, the defense department would like nothing better than to replace its legions of soldiers and analysts with computer chips.
IBM, of course, is aiming at a much larger market. The idea is that such a system could be hooked up to vast sensor networks, monitoring the environment, cars, homes, even people. Such a product would span every industry and provide the underpinnings of IBM's so-called Smarter Planet, although in this case it's more like Smarter Planet 2.0.
The hardware design is certainly futuristic. Each cognitive computing prototype chip is built around a "neurosynaptic core," which encompasses computational circuits (the neurons), memory (the synapses), and communication lines (the axons). Although this is accomplished with standard digital circuitry, the architecture is unique. From the IBM press release:
IBM’s overarching cognitive computing architecture is an on-chip network of light-weight cores, creating a single integrated system of hardware and software. This architecture represents a critical shift away from traditional von Neumann computing to a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.
Getting compute, memory and communication integrated together is central to the architecture's brainy behavior. According to Dharmendra Modha, IBM's lead on the project, the tight integration is key to getting the circuitry to behave like biological neurons and synapses, and do so within a very organic power budget. The power consumption of the human brain is estimated to be between 10 and 100 watts.
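To get a feel for that event-driven, integrated style of processing, here is a minimal sketch in Python. IBM has not published the exact neuron model, so this uses a generic leaky integrate-and-fire neuron with a binary synapse crossbar; the 256-neuron count matches the prototypes, but the axon count, leak, and threshold values are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a neurosynaptic core: neurons (compute),
# synapses (memory), and axons (communication) live in one structure,
# and updates are driven only by incoming spike events.
N_NEURONS = 256          # matches the prototype's neuron count
N_AXONS = 256            # assumed input-line count for this sketch

rng = np.random.default_rng(0)
synapses = rng.integers(0, 2, size=(N_AXONS, N_NEURONS))  # memory: binary crossbar
potential = np.zeros(N_NEURONS)                            # compute: membrane state
LEAK, THRESHOLD = 0.9, 5.0                                 # illustrative constants

def tick(spikes_in):
    """One event-driven step: only active axons drive the crossbar."""
    global potential
    active = np.flatnonzero(spikes_in)             # skip silent axons entirely
    potential = potential * LEAK + synapses[active].sum(axis=0)
    fired = potential >= THRESHOLD
    potential[fired] = 0.0                         # reset neurons that spiked
    return fired

spikes = rng.integers(0, 2, size=N_AXONS)
out = tick(spikes)
print(out.sum(), "of", N_NEURONS, "neurons fired")
```

The point of the sketch is structural: because the weights sit right next to the membrane state, there is no von Neumann round-trip to a separate memory for every synaptic event.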
To date, IBM has developed two chip prototypes, both of which have been implemented on 45 nm SOI CMOS at the company's fab in Fishkill, New York. Each design contains 256 neurons, one with 256K programmable synapses and the other with 64K learning synapses. The IBM researchers claim to have used them to demonstrate simple applications like navigation, machine vision, pattern recognition, associative memory and classification.
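Associative memory, one of the application classes IBM says it demonstrated, can be sketched with a classical Hopfield network at the same 256-neuron scale. This is only a stand-in for the capability: IBM's chips use spiking circuits and on-chip learning synapses, not the batch Hebbian rule used here.

```python
import numpy as np

# Illustrative associative memory: a 256-unit Hopfield network storing
# three patterns, then recalling one from a corrupted probe.
N = 256                                       # neuron count of the prototype
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, N))   # three stored memories

# Hebbian learning: synaptic weights accumulate pattern correlations
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)                        # no self-connections

def recall(probe, steps=10):
    """Iterate the network until it settles on a stored memory."""
    state = probe.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

noisy = patterns[0].copy()
noisy[:40] *= -1                              # corrupt 40 of 256 bits
restored = recall(noisy)
print("bits recovered:", int((restored == patterns[0]).sum()), "/", N)
```

With only three memories stored, the network is far below its capacity, so the corrupted probe snaps back to the stored pattern in a step or two.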
IBM has not revealed a timeline for any commercial products. The company's goal is to eventually construct a system with ten billion neurons and a hundred trillion synapses, while consuming a single kilowatt of power. Using future nanoelectronics, the researchers estimate such a machine will take up less than two liters of volume.
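A quick back-of-envelope calculation, using only the numbers stated above, shows how far the target is from the prototypes:

```python
# Scale check against the stated goal: 10 billion neurons,
# 100 trillion synapses, one kilowatt.
neurons_target = 10e9
synapses_target = 100e12
power_target_w = 1_000.0
neurons_proto = 256                           # per prototype chip

cores_needed = neurons_target / neurons_proto
synapses_per_neuron = synapses_target / neurons_target
power_per_synapse_w = power_target_w / synapses_target

print(f"{cores_needed:.1e} prototype-scale cores")       # ~3.9e7
print(f"{synapses_per_neuron:.0f} synapses per neuron")  # 10000
print(f"{power_per_synapse_w:.0e} W per synapse")        # 1e-11
```

In other words, the goal implies tens of millions of prototype-scale cores, roughly 10,000 synapses per neuron (in line with biological estimates), and a power budget of about ten picowatts per synapse.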
The chip prototypes will be described in more detail at the IEEE Custom Integrated Circuits Conference on September 20 in San Jose, California.
Posted by Michael Feldman - August 18, 2011 @ 7:16 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.