June 07, 2012
HOUSTON, TX, June 7 -- Researchers from Rice University and UCLA unveiled a new data-encoding scheme this week that slashes more than 30 percent of the energy needed to write data onto new memory cards that use "phase-change memory" (PCM) -- a competitor to flash memory that has big backing from industry heavyweights.
The breakthrough was presented at the IEEE/ACM Design Automation Conference (DAC) in San Francisco by researchers from Rice University's Adaptive Computing and Embedded Systems (ACES) Laboratory.
PCM uses the same type of materials as those used in rewritable CDs and DVDs, and it does the same job as flash memory -- the mainstay technology in USB thumb drives and memory cards for cameras and other devices. IBM and Samsung have each demonstrated PCM breakthroughs in recent months, and PCM is ultimately expected to be faster, cheaper and more energy-efficient than flash.
"We developed an optimization framework that exploits asymmetries in PCM read/write to minimize the number of bit transitions, which in turns yields energy and endurance efficiency," said researcher Azalia Mirhoseini, a Rice graduate student in electrical and computer engineering, who presented the research results at DAC.
In PCM technology, heat-sensitive materials are used to store data as ones and zeros by changing the material resistance. The electronic properties of the material change from low resistance to high resistance when heat is applied to alter the arrangement of atoms from a conducting, crystalline structure to a nonconducting, glassy structure. Writing data on PCM takes a fraction of the time required to write on flash memory, and the process is reversible but asymmetric; creating one state requires a short burst of intense heat, and reversing that state requires more time and less heat.
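The asymmetry described above can be captured in a toy per-bit energy model. The sketch below is purely illustrative: the cost constants are made-up placeholders rather than measured values, and the mapping of logical 0/1 to the crystalline and glassy states is an assumption of this example, not something specified in the article.

```python
# Illustrative asymmetric-cost model for a PCM write.
# SET_COST and RESET_COST are hypothetical placeholders: one state
# change uses a short, intense pulse; the reverse uses a longer,
# gentler one, so the two directions cost different amounts.
SET_COST = 1.0    # assumed cost of a 0 -> 1 transition
RESET_COST = 2.5  # assumed cost of a 1 -> 0 transition

def write_energy(stored: int, new: int, width: int = 8) -> float:
    """Sum the asymmetric costs of only the bits that actually change."""
    energy = 0.0
    for i in range(width):
        old_bit = (stored >> i) & 1
        new_bit = (new >> i) & 1
        if old_bit == 0 and new_bit == 1:
            energy += SET_COST
        elif old_bit == 1 and new_bit == 0:
            energy += RESET_COST
        # unchanged bits cost nothing, which is what the encoder exploits
    return energy
```

Under such a model, two encodings of the same data can have very different write costs depending on what the cell currently holds, which is the opening the Rice encoder exploits.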
The new encoding method is the first to take advantage of these asymmetric physical properties. One key to the encoding scheme is reading the existing data before new data is written. Using a combination of programming approaches, the researchers created an encoder that can scan the "words" -- short sections of bits on the card -- and overwrite only the parts of the words that need to be overwritten.
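The read-before-write idea can be sketched in a few lines. This is a simplified illustration of differential writing in general, not the authors' actual encoder, and the function name is hypothetical.

```python
def differential_write(stored: int, new: int) -> tuple[int, int]:
    """Compare the stored word with the incoming word and report
    which bit positions actually need to be rewritten.

    Returns (mask, transitions): mask has a 1 wherever the two words
    differ; transitions counts those positions, and write energy and
    cell wear scale with that count rather than with word length."""
    diff = stored ^ new                 # 1-bits mark cells that must change
    transitions = bin(diff).count("1")  # number of cells actually rewritten
    return diff, transitions

# Only 2 of 16 bits differ, so only 2 cells would be rewritten.
mask, cost = differential_write(0b1010_0000_1111_0001,
                                0b1010_0100_1111_0000)
```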
"One part of the method is based on dynamic programming, which starts from small codes that we show to be optimal, and then builds upon these small codes to rapidly search for improved, longer codes that minimize the bit transitions," said lead researcher Farinaz Koushanfar, director of Rice's ACES Laboratory and assistant professor of electrical and computer engineering and of computer science at Rice.
The second part of the new method is based on integer-linear programming (ILP), a technique that can find provably optimal solutions. The more complex the problem, the longer ILP takes to solve it, so the team found a shortcut: they used dynamic programming to create a cheat sheet of small codes that could be quickly combined into solutions for longer words.
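For a flavor of what a transition-minimizing code looks like, consider a much simpler, well-known trick in the same spirit (often called "flip" coding): write either the data word or its bitwise complement, whichever differs from the stored word in fewer positions, and record the choice in one flag bit. This is not the paper's DP/ILP-derived code, just a minimal sketch of the idea; the flag bit's own write cost is ignored here for simplicity.

```python
def flip_or_not(stored: int, new: int, width: int = 8) -> tuple[int, int, int]:
    """Encode `new` as itself or its complement, whichever requires
    fewer bit transitions relative to `stored`.

    Returns (codeword, flag, transitions), where flag = 1 means the
    complement was stored. Worst-case transitions drop to width // 2."""
    full = (1 << width) - 1
    direct = bin(stored ^ new).count("1")                 # cost as-is
    flipped = bin(stored ^ (~new & full)).count("1")      # cost if complemented
    if flipped < direct:
        return (~new & full), 1, flipped
    return new, 0, direct
```

Where the flip code picks between just two candidates per word, the Rice approach searches a far larger space of candidate codes offline, which is why precomputing them with DP and ILP pays off.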
Research collaborator Miodrag Potkonjak, professor of computer science at UCLA, said the team's solution to PCM optimization is pragmatic.
"The overhead for ILP is practical because the codes are found only once, during the design phase," Potkonjak said. "The codes are stored for later use during PCM operation."
The researchers also found the new encoding scheme cut more than 40 percent of "memory wear," the exhaustion of memory due to rewrites. Each memory cell can handle a limited number of rewrite cycles before it becomes unusable.
The researchers said the applicability, low overhead and efficiency of the proposed optimization methods were demonstrated with extensive evaluations on benchmark data sets. In addition to PCM, they said, the encoding method is also applicable for other types of bit-accessible memories, including STT-RAM, or spin-transfer torque random-access memory.
The research was funded by the Office of Naval Research, the Army Research Office and the National Science Foundation.
Source: Rice University