September 24, 2010

Supercomputing Energy Use Getting a Bad Rap

Tiffany Trader

In a thought-provoking piece over at ZDNet, Andrew Jones of the Numerical Algorithms Group takes a look at the supercomputing power consumption equation, asking whether its current trajectory is really as untenable as it is often made out to be.

He writes:

There are a range of estimates for the likely power consumption of the first exaflops supercomputers, which are expected at some point between 2018 and 2020. But probably the most accepted estimate is 120MW, as set out in the Darpa Exascale Study edited by Peter Kogge (PDF).

At this figure, the supercomputing community panics and says it is far too much — we must get it down to between 20MW and 60MW, depending who you ask — and we worry even that is too much. But is it?
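To put those power budgets in perspective, here is a rough back-of-the-envelope sketch (not from Jones' article) that converts each figure into the sustained efficiency a one-exaflop machine would need, along with a ballpark annual electricity bill. The round-the-clock utilization and the $0.07/kWh rate are illustrative assumptions only.

```python
# Back-of-the-envelope check of the efficiency implied by the figures above.
# Assumptions (not from the article): the machine runs flat out year-round,
# and electricity costs $0.07 per kWh -- both are illustrative placeholders.

EXAFLOPS = 1e18  # floating-point operations per second

def gflops_per_watt(power_mw: float) -> float:
    """Sustained GFLOPS per watt needed to deliver 1 exaflop/s at a given power draw."""
    watts = power_mw * 1e6
    return (EXAFLOPS / watts) / 1e9

def annual_energy_cost(power_mw: float, dollars_per_kwh: float = 0.07) -> float:
    """Approximate yearly electricity bill, in dollars, for a constant draw."""
    kwh_per_year = power_mw * 1e3 * 24 * 365
    return kwh_per_year * dollars_per_kwh

for mw in (120, 60, 20):
    print(f"{mw:>4} MW -> {gflops_per_watt(mw):5.1f} GFLOPS/W, "
          f"~${annual_energy_cost(mw) / 1e6:.0f}M per year in power")
```

Under those assumptions, the 120MW estimate works out to roughly 8 GFLOPS per watt and an electricity bill in the tens of millions of dollars per year, while the 20MW target demands about 50 GFLOPS per watt.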

What follows is a comparison of today’s largest supercomputers with their closest kin, major scientific research facilities.

In Jones’ opinion:

[T]he largest supercomputers at any time, including the first exaflops, should not be thought of as computers. They are strategic scientific instruments that happen to be built from computer technology. Their usage patterns and scientific impact are closer to major research facilities such as Cern, Iter, or Hubble.

Thought of that way, Jones concludes, the power consumption and other costs of the biggest supercomputers (construction, operation, and so forth) are comparable to those of other major research facilities and not that outrageous.

Jones also tackles the question of whether it makes sense to keep improving and replacing systems every couple of years (as we currently do), or whether it would offer more value to society to collaborate on one mega-supercomputer per decade, pouring ten years of resources into its construction and then relying on that single system for the following ten years. There are, of course, pros and cons to each path. Because supercomputing performance increases exponentially, the first option delivers a greater number of aggregate exaflops per year; on the other hand, the second option saves the resources spent continually rewriting and validating code, and it could give society something like a 2030-era system ten years ahead of schedule.
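The trade-off Jones sketches can be made concrete with a toy model. The numbers below are illustrative assumptions, not figures from the article: frontier performance is taken to double every two years, the rolling-upgrade path fields a new frontier machine every two years, and the pooled decade-long build is assumed to be five times the size of a normal frontier machine but is then kept for the full ten years.

```python
# Toy comparison of the two funding strategies Jones raises.
# Every parameter here is an assumption made for the sake of the arithmetic,
# not a figure from the article.

DOUBLING_PERIOD = 2   # years for frontier performance to double (assumption)
HORIZON = 10          # years considered
POOL_FACTOR = 5.0     # relative size of the one-off pooled machine (assumption)

def frontier(year: float) -> float:
    """Frontier machine performance, in units of the year-0 machine."""
    return 2.0 ** (year / DOUBLING_PERIOD)

# Rolling replacement: a new frontier machine every DOUBLING_PERIOD years,
# each one operated until the next arrives.
rolling = sum(frontier(y) * DOUBLING_PERIOD
              for y in range(0, HORIZON, DOUBLING_PERIOD))

# One pooled build at year 0, kept running for the whole decade.
pooled = POOL_FACTOR * frontier(0) * HORIZON

print(f"rolling replacement: ~{rolling:.0f} machine-years of year-0 capability")
print(f"one pooled build:    ~{pooled:.0f} machine-years of year-0 capability")
```

Under these made-up parameters the rolling path delivers roughly 62 machine-years of year-0 capability against about 50 for the pooled build, which is the "more exaflops per year" effect attributed to the first option; change the pool factor or the doubling period and the balance shifts, which is precisely why the second option at least deserves a hearing.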

Jones is not sold on either path, but wonders why we are so set on the first option without giving the second any real consideration. Check out the full article for a more in-depth treatment of these ideas.

Full story at ZDNet
