August 30, 2011

UHPC Developments Move DARPA Closer to Goals

Nicole Hemsoth

Richard Murphy, a computer architect at Sandia National Laboratories, recently weighed in on progress toward the goals of the Ubiquitous High Performance Computing (UHPC) program. For those not familiar, this Defense Advanced Research Projects Agency (DARPA) initiative aims to bring petascale and exascale computing innovations into military use through focused research efforts spanning everything from power and efficiency to performance to applications.

The program, which got its start last year, challenges scientists to build a petaflop system that consumes no more than 57 kilowatts of electricity, in part so that the military can bring computing power out of large datacenters and into the field for immediate, on-the-spot use. Beyond this practical ability to field high-end HPC systems on the fly, meeting the target would also deliver substantial gains in computing efficiency, lowering costs and reducing environmental impact.

Bringing power consumption down to the challenge level of 57 kilowatts is no simple task; it will require a dramatic, almost unthinkable reduction in electricity use, all while retaining the performance that military high performance computing applications demand.
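To put that target in perspective, the two figures in the challenge already pin down an energy budget. A minimal back-of-the-envelope sketch in Python, using only the petaflop and 57-kilowatt numbers cited above:

# Back-of-the-envelope check of the UHPC target: a petaflop
# (1e15 floating-point operations per second) within 57 kilowatts.
# Only the two figures stated in the article are used here.

PEAK_FLOPS_PER_SEC = 1e15   # 1 petaflop per second
POWER_WATTS = 57e3          # 57 kilowatts

joules_per_flop = POWER_WATTS / PEAK_FLOPS_PER_SEC        # ~5.7e-11 J
gflops_per_watt = PEAK_FLOPS_PER_SEC / POWER_WATTS / 1e9  # ~17.5

print(f"energy budget per operation: {joules_per_flop * 1e12:.0f} picojoules")
print(f"implied efficiency: {gflops_per_watt:.1f} gigaflops per watt")

That works out to roughly 57 picojoules per operation, or about 17.5 gigaflops per watt, well beyond what the most energy-efficient systems of the day delivered.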

Teams working on such initiatives are vying for the chance to win an award to build a supercomputer for DARPA. Those who come close to the power goals will need to dramatically rethink how computers are designed, particularly in terms of how memory and processors move data. As Discover Magazine pointed out, “The energy required for this exchange is manageable when the task is small—a processor needs to fetch less data from memory. Supercomputers, however, power through much larger volumes of data—for example, while modeling a merger of two black holes—and their energy can become overwhelming.”

According to Murphy, "it's all about data movement." Those in the race to meet DARPA's challenge are seeking ways to make data movement more efficient via distributed architectures, which shorten the distance data travels by adding memory chips directly to processors. "We move the work to the data rather than move the data to where the computing happens," Murphy says.

As Eric Smalley wrote today following a discussion with Richard Murphy:

“Sandia National Laboratory’s effort, dubbed X-caliber, will attempt to further limit data shuffling with something called smart memory, a form of data storage with rudimentary processing capabilities. Performing simple calculations without moving data out of memory consumes an order of magnitude less energy than today’s supercomputers.”
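The idea is easier to see in miniature. The toy Python sketch below contrasts shipping every word of data to a central processor with letting each memory bank reduce its own data locally and return a single result; the SmartMemoryBank class and its methods are hypothetical illustrations of the concept, not the actual X-caliber design.

# Toy illustration of "move the work to the data" versus moving raw data
# to a central processor. Hypothetical classes; only the traffic accounting
# is modeled.

class SmartMemoryBank:
    """A memory bank that can run a simple reduction in place."""
    def __init__(self, values):
        self.values = values            # data resident in this bank

    def read_all(self):
        # Conventional path: every word crosses the memory interface.
        return list(self.values)

    def local_sum(self):
        # "Smart memory" path: reduce in place, ship back one word.
        return sum(self.values)


def total_conventional(banks):
    """Central processor pulls every element, then sums (heavy traffic)."""
    moved_words = 0
    total = 0
    for bank in banks:
        data = bank.read_all()
        moved_words += len(data)
        total += sum(data)
    return total, moved_words


def total_in_memory(banks):
    """Each bank reduces locally; only one partial result per bank moves."""
    partials = [bank.local_sum() for bank in banks]
    return sum(partials), len(partials)


if __name__ == "__main__":
    banks = [SmartMemoryBank(range(i, i + 1_000_000)) for i in range(4)]
    t1, words1 = total_conventional(banks)
    t2, words2 = total_in_memory(banks)
    assert t1 == t2
    print(f"conventional: {words1:,} words moved; in-memory: {words2} words moved")

Both paths produce the same answer, but the in-memory version moves four words instead of four million, which is the kind of saving that makes the energy arithmetic above plausible.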

Full story at Discover Magazine
