April 15, 2010
As we reported last week, researchers at HP Labs have advanced their memristor work to the point where they now believe commercial products may be in the offing in as little as three years. A memristor, or memory resistor, is the fourth fundamental circuit element, with the unique property of maintaining its state when the power is turned off. (Yes, flash memory does this too, but it uses lots of transistors and capacitors to accomplish it.) In any case, HP is touting memristors as a way to build all sorts of nifty digital and analog computing gadgets.
Now this week, the University of Michigan announced that computer engineer Wei Lu has been busy building artificial synapses from memristors. In this case, Lu is exploiting the analog nature of the device to simulate synapse behavior. Potentially, this is a much more straightforward way to construct a thinking machine compared to digitally simulating a brain on a supercomputer. From the press release:
"We are building a computer in the same way that nature builds a brain," said Lu, an assistant professor in the U-M Department of Electrical Engineering and Computer Science. "The idea is to use a completely different paradigm compared to conventional computers. The cat brain sets a realistic goal because it is much simpler than a human brain but still extremely difficult to replicate in complexity and efficiency."
Here we go with the cat brains again. If you'll remember, last November a research team at IBM reported it achieved "the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses." That simulation was accomplished using the Blue Gene/P supercomputer at Lawrence Livermore, but the ultimate goal was to build a more practical version using "synaptronic" chips, phase change memory and magnetic tunnel junctions. I guess there's more than one way to skin a cat brain.
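To get a concrete feel for why a memristor maps so naturally onto a synapse, here's a minimal sketch based on the linear ion-drift model HP published in its 2008 Nature paper. Lu's actual devices differ in their details, and every parameter value below is illustrative rather than taken from his work; the point is simply that each voltage pulse nudges the device's conductance a little, and the new conductance persists between pulses, just like a synaptic weight.

```python
# Minimal sketch of the linear ion-drift memristor model
# (Strukov et al., Nature 2008). Each voltage pulse shifts the
# internal state a little, and the resulting conductance sticks
# around afterward -- analogous to strengthening a synapse.
# All parameter values are illustrative, not from Lu's paper.

R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped / undoped resistance
D = 10e-9                      # m: device thickness
MU_V = 1e-13                   # m^2/(V*s): dopant mobility (illustrative)
DT = 1e-4                      # s: simulation time step

def memristance(w):
    """Resistance as a weighted mix of the doped and undoped regions."""
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

def apply_pulse(w, v, duration):
    """Drive the device at voltage v for `duration` seconds; return new state."""
    for _ in range(int(duration / DT)):
        i = v / memristance(w)            # Ohm's law
        w += MU_V * (R_ON / D) * i * DT   # linear ion drift
        w = min(max(w, 0.0), D)           # state stays physically bounded
    return w

# Repeated positive pulses strengthen the "synapse" (potentiation);
# negative pulses would weaken it (depression).
w = 0.1 * D
for n in range(5):
    w = apply_pulse(w, v=1.0, duration=1e-2)
    print(f"pulse {n + 1}: conductance = {1.0 / memristance(w):.3e} S")
```

Run it and the conductance ratchets upward pulse by pulse, with no clock and no stored program; the device's own physics does the learning.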
For the near term, HP is going to concentrate on exploiting memristors for non-volatile RAM. The researchers there believe they can offer a product with a storage density of about 20 gigabytes per square centimeter by 2013. And that product would also be more robust than conventional flash memory: HP claims memristor memory can handle up to 1,000,000 read/write cycles before degradation, compared to about 100,000 cycles for flash. Furthermore, since the technology can be scaled down to single-digit nanometer geometries, memristors should leave NAND- and NOR-based flash memories in the dust.
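As a quick sanity check on that 20 GB/cm2 figure, here's the back-of-the-envelope arithmetic, assuming one bit per memristor cell in a simple crossbar (my assumption; the announcement doesn't spell out the cell design):

```python
# Back-of-the-envelope check on HP's 20 GB/cm^2 density claim,
# assuming one bit per cell in a 4F^2 crossbar (an assumption,
# not a detail from the announcement).

BYTES_PER_CM2 = 20e9
BITS_PER_CM2 = BYTES_PER_CM2 * 8            # 1.6e11 bits
NM2_PER_CM2 = 1e14                          # (1 cm = 1e7 nm) squared

area_per_bit = NM2_PER_CM2 / BITS_PER_CM2   # 625 nm^2 per cell
pitch = area_per_bit ** 0.5                 # ~25 nm wire pitch
half_pitch = pitch / 2                      # ~12.5 nm F for a 4F^2 cell

print(f"area per bit : {area_per_bit:.0f} nm^2")
print(f"cell pitch   : {pitch:.1f} nm")
print(f"half-pitch F : {half_pitch:.1f} nm (cell area = 4F^2)")
```

That works out to a half-pitch around 12.5 nm, which squares with HP's talk of pushing toward single-digit nanometer geometries.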
HP also discovered that memristors can serve as logic circuits, so now there's talk of using the devices for computation. This could open the door to building processors with logic and very large memories integrated together on the same die. (The best we can do today is marry CPUs and relatively small caches.) In fact, memristor-based processors could be a dream come true for processor-in-memory (PIM) enthusiasts.
PIM has been a kind of Holy Grail for researchers looking to solve the memory wall dilemma. Getting logic and memory within kissing distance of each other on the die is the goal, since that level of proximity delivers substantially greater bandwidth and lower latency for data transfers. USC's DIVA architecture and Caltech's MIND are two early examples of PIM designs based on conventional DRAM technology. The only rub here is that memristors are currently about 10 times slower than DRAM. But considering how far memristors have come in just a few years, the folks at HP Labs might surprise us once again.
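To put the memory wall argument in numbers, here's a roofline-style toy model. The bandwidth and peak figures are my own ballpark assumptions for illustration, not numbers from the article: a streaming kernel like DAXPY performs roughly 2 flops for every 24 bytes it moves, so memory bandwidth caps its performance long before the cores run out of flops.

```python
# Toy roofline model of the memory wall (all numbers are
# illustrative assumptions, not from the article): sustained
# performance of a bandwidth-bound kernel is min(peak,
# bandwidth * arithmetic intensity).

PEAK_GFLOPS = 100.0  # hypothetical chip peak

def sustained_gflops(bandwidth_gbs, flops_per_byte=2.0 / 24.0):
    """DAXPY-like kernel: 2 flops per 24 bytes moved."""
    return min(PEAK_GFLOPS, bandwidth_gbs * flops_per_byte)

for label, bw_gbs in [("off-chip DRAM bus", 25.0),    # ~25 GB/s
                      ("on-die PIM array ", 500.0)]:  # wide, short wires
    print(f"{label}: {sustained_gflops(bw_gbs):6.1f} GFLOPS sustained")
```

With those assumed numbers, moving the memory on-die buys an order of magnitude in sustained throughput for the same cores, which is exactly why the PIM crowd keeps chasing this.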
Posted by Michael Feldman - April 15, 2010 @ 6:41 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.