August 30, 2011
Richard Murphy, a computer architect at Sandia National Laboratories, recently weighed in on progress toward the goals of the Ubiquitous High Performance Computing (UHPC) program. For those who are not familiar, this initiative from the Defense Advanced Research Projects Agency (DARPA) aims to bring petascale and exascale computing innovations into military use via focused research efforts on everything from power and efficiency to performance to applications.
The program, which got its start last year, challenged scientists to build a petaflop system that consumes no more than 57 kilowatts of electricity, in part so that the military could bring computing power out of large datacenters and into the field for immediate, on-the-spot use. Beyond this practical military use of high-end HPC systems in the field, such efficiency would also deliver massive cost savings and reduced environmental impact.
Bringing power consumption down to the 57-kilowatt challenge level is no simple task; it will require a dramatic, almost unthinkable reduction in electricity use, all while retaining the performance required for military high performance computing applications.
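To get a sense of how aggressive that target is, a quick back-of-the-envelope calculation using the figures above gives the energy efficiency such a machine would need. (The comparison figure is an assumption on our part; the arithmetic uses only the numbers quoted in this article.)

```python
# Efficiency implied by the UHPC challenge: one petaflop in 57 kW.
# Illustrative arithmetic only, using the figures quoted above.

target_flops = 1e15   # one petaflop per second
target_watts = 57e3   # 57 kilowatts

gflops_per_watt = target_flops / target_watts / 1e9
print(f"Required efficiency: {gflops_per_watt:.1f} GFLOPS per watt")
```

That works out to roughly 17.5 GFLOPS per watt, which (by our reckoning) is close to an order of magnitude beyond the most power-efficient supercomputers in operation today.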
Teams working on such initiatives are vying for the chance to win an award to build a supercomputer for DARPA. Coming anywhere close to the power goals will require a dramatic rethinking of how computers are designed, particularly in terms of how memory and processors move data. As Discover Magazine pointed out, “The energy required for this exchange is manageable when the task is small—a processor needs to fetch less data from memory. Supercomputers, however, power through much larger volumes of data—for example, while modeling a merger of two black holes—and their energy can become overwhelming.”
According to Richard Murphy, “it’s all about data movement.” Those in the race to meet DARPA’s challenge are seeking ways to make data movement more efficient via distributed architectures, which clip the distance data travels by adding memory chips to processors. “We move the work to the data rather than move the data to where the computing happens,” Murphy says.
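The asymmetry Murphy describes can be sketched with a toy cost model: a task description is tiny compared with the data it operates on, so shipping the task to each memory node and returning only small partial results moves far fewer bytes than hauling all the data to a central processor. All sizes below are hypothetical illustrations, not figures from any DARPA system.

```python
# Toy cost model for "move the work to the data" (all values assumed).

NODES = 64
DATA_PER_NODE = 10**9   # 1 GB of data resident at each node (assumed)
CODE_SIZE = 10**4       # 10 KB task description shipped out (assumed)
RESULT_SIZE = 10**3     # 1 KB partial result returned per node (assumed)

# Strategy 1: move all the data to a central processor.
bytes_centralized = NODES * DATA_PER_NODE

# Strategy 2: move the work to the data; only code and results travel.
bytes_distributed = NODES * (CODE_SIZE + RESULT_SIZE)

print(f"Traffic reduction: {bytes_centralized // bytes_distributed:,}x")
```

With these (made-up) numbers, moving the work rather than the data cuts interconnect traffic by a factor of tens of thousands, which is the intuition behind the distributed designs discussed here.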
As Eric Smalley wrote today following a discussion with Richard Murphy:
“Sandia National Laboratory’s effort, dubbed X-caliber, will attempt to further limit data shuffling with something called smart memory, a form of data storage with rudimentary processing capabilities. Performing simple calculations without moving data out of memory consumes an order of magnitude less energy than today’s supercomputers.”
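The smart-memory idea can be illustrated with a small sketch: a memory bank with rudimentary local compute performs a reduction in place, so only the final result crosses the bus to the processor. The class and the per-operation energy figures below are hypothetical illustrations of the concept, not Sandia’s actual design.

```python
# Conceptual sketch of "smart memory" (hypothetical class and energy costs).

class SmartMemoryBank:
    READ_ENERGY_PJ = 100  # assumed cost to move one word across the bus
    LOCAL_OP_PJ = 1       # assumed cost of one in-memory operation

    def __init__(self, words):
        self.words = list(words)

    def host_sum(self):
        """Conventional path: every word crosses the bus to the CPU."""
        energy = len(self.words) * self.READ_ENERGY_PJ
        return sum(self.words), energy

    def in_memory_sum(self):
        """Smart-memory path: sum locally, ship only one result word."""
        energy = len(self.words) * self.LOCAL_OP_PJ + self.READ_ENERGY_PJ
        return sum(self.words), energy

bank = SmartMemoryBank(range(1000))
total, e_host = bank.host_sum()
_, e_smart = bank.in_memory_sum()
print(f"sum={total}, energy saved: {e_host // e_smart}x")
```

Even in this crude model, keeping the calculation inside the memory cuts the energy of the reduction by roughly two orders of magnitude, which mirrors the order-of-magnitude savings the quote describes.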
Full story at Discover Magazine