Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

February 8, 2008

A Modest Proposal for Petascale Computing

Michael Feldman

In typical forward-thinking California fashion, the folks at Lawrence Berkeley National Laboratory (LBNL) are already looking beyond single petaflop systems, even before a single one has been released into the wild. LBNL researchers have started to explore what a multi-petaflop computer architecture might look like. Even ignoring the challenge of software concurrency, they point out that power and system costs will determine how such machines can be built.

To some extent, these costs are already constraining what can be built in the pre-petaflops era. To date, no one has bought a maximally configured version of any current leading edge supercomputer — for example, an IBM Blue Gene, Cray XT, or NEC SX system — not so much because users couldn’t make good use of the computing muscle, but because the initial cost of the hardware and the power to run them would have been prohibitive.

At last year’s SIAM Conference on Computational Science and Engineering, LBNL researchers Lenny Oliker, John Shalf, and Michael Wehner presented their analysis of what kind of supercomputer would be required for a climate modeling system with kilometer-scale fidelity. They estimated that sustained performance of 10 petaflops would be required for such an application. They then extrapolated the power requirements and hardware costs of a 10 petaflop (peak) computer based on dual-core Opterons and one based on Blue Gene/L PowerPC system-on-a-chip (SoC) technology. The 10 petaflop Opteron-based system was estimated to cost $1.8 billion and require 179 megawatts to operate; the corresponding Blue Gene/L system would cost $2.6 billion and draw 27 megawatts. The system costs are scary enough, but with energy rates at over $50/megawatt-hour and rising, you’d never be able to turn the thing on.
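As a back-of-the-envelope check on what those power draws mean (my own arithmetic, not figures from the presentation itself), the annual electricity bill at $50/megawatt-hour works out like this:

```python
# Rough annual energy cost at $50/MWh for a machine running flat out.
# Illustrative arithmetic only, not from the Oliker/Shalf/Wehner talk.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_energy_cost(megawatts, dollars_per_mwh=50):
    """Yearly electricity cost in dollars for a constant power draw."""
    return megawatts * HOURS_PER_YEAR * dollars_per_mwh

opteron_mw = 179   # 10 PF (peak) dual-core Opteron estimate
bluegene_mw = 27   # 10 PF (peak) Blue Gene/L estimate

print(f"Opteron system:   ${annual_energy_cost(opteron_mw) / 1e6:.1f}M/year")
print(f"Blue Gene system: ${annual_energy_cost(bluegene_mw) / 1e6:.1f}M/year")
```

That's roughly $78 million a year just to power the Opteron-based machine — a sizable fraction of its purchase price every single year before a single simulation runs.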

Since that estimate was made in early 2007, AMD has (sort of) released the quad-core Opterons and IBM has delivered Blue Gene/P. If one were to extrapolate the half petaflop Barcelona-based Ranger supercomputer to 10 petaflops, it would require about 50 megawatts and cost $600 million (although it’s widely assumed that Sun discounted the Ranger price significantly). A 10 petaflop Blue Gene/P system would draw 20 megawatts, at perhaps a similar cost to the Blue Gene/L system.

The Berkeley guys took this into account in 2007, extrapolating that over the next five years or so power and cost efficiencies in processor technologies would increase by a factor of 8 to 16. Such an increase in energy efficiency would at least make the power requirements of a Blue Gene-type system reasonable. But even with a 10X decrease in hardware costs, a $200 million system price tag seems daunting, even considering inflation. (If you’re holding euros you might be in even better shape in five years.) In either case, rising energy costs are likely to offset some of the increased power efficiencies.

Unfortunately, the type of climate model envisioned will require more like 10 petaflops of sustained performance, which means something like 100-200 petaflops of peak performance will actually be needed. So now we’re back to billion dollar systems using tens or hundreds of megawatts.
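The jump from 10 petaflops sustained to 100-200 petaflops peak reflects a sustained-to-peak efficiency of 5 to 10 percent — a typical range for memory-bound scientific codes (the specific efficiency range is my reading of the numbers, not stated in the article). The conversion is simple:

```python
def required_peak_pf(sustained_pf, efficiency):
    """Peak capability needed to deliver a given sustained rate,
    at a given sustained-to-peak efficiency (e.g. 0.05 for 5%)."""
    return sustained_pf / efficiency

# The climate model needs ~10 PF sustained; 5-10% efficiency is a
# plausible range for this class of application (my assumption).
for eff in (0.10, 0.05):
    print(f"{eff:.0%} efficiency -> {required_peak_pf(10, eff):.0f} PF peak needed")
```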

The fundamental problem is that as we move below the 90nm process node, power and die area (and thus cost) are increasing faster than performance. The challenge will become how to get more performance from fewer transistors. One avenue the Berkeley researchers are looking at is the use of embedded processor SoC technology to construct ultra-low power, low-cost systems. A few HPC system vendors have already traveled down this road, namely IBM with their PowerPC SoC for Blue Gene and SiCortex with their MIPS64 SoC-based clusters. By using a larger number of slower and simpler cores, overall performance per watt is greatly increased. As long as the software can scale as well, application performance per watt can be an order of magnitude better than an x86-based system.
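The power argument behind many slow cores is the familiar voltage-frequency scaling relationship: dynamic power goes roughly as V²·f, and supply voltage can be lowered along with clock frequency. A toy model (my own illustration, not from the article) shows why trading one fast core for several slow ones wins on performance per watt:

```python
# Toy model of the many-slow-cores argument. Dynamic power scales
# roughly as V^2 * f, and we assume (simplistically) that supply
# voltage can be reduced in proportion to clock frequency.
def relative_power(freq_fraction, cores):
    """Power of `cores` cores running at `freq_fraction` of full clock,
    relative to a single full-speed core."""
    return cores * (freq_fraction ** 2) * freq_fraction  # V^2 * f

# One fast core vs. two half-speed cores: same aggregate throughput
# (if the software scales), very different power budgets.
fast = relative_power(1.0, 1)   # baseline: 1.0
slow = relative_power(0.5, 2)   # 2 * 0.25 * 0.5 = 0.25
print(f"Two half-speed cores draw {slow / fast:.0%} of the power")
```

Real processors don't scale voltage this cleanly, but the cubic relationship is why simple, slow cores dominate the performance-per-watt rankings — provided the application can use all of them.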

But the Berkeley researchers have something more in mind. Rather than exploiting general-purpose embedded processors like MIPS and PowerPC, they are considering semi-custom ASICs that contain hundreds of cores and achieve much better power-performance efficiencies than more generic solutions.

In general, customized ASICs are very expensive to design and manufacture for anything other than high volume applications — hence the attraction of FPGAs. But the consumer electronics market is changing the rules. In an industry that traditionally looked to the desktop and server space for ideas, embedded computing is now where the action is. With the proliferation of mobile consumer devices, entertainment appliances and GPS gadgets, and with the industry’s obsession with hardware costs and power usage, embedded computing has become a major driver for processor innovation.

One area the Berkeley researchers are looking at is configurable processor technology developed by Tensilica Inc. The company offers a set of tools that system developers can employ to design both the SoC and the processor cores themselves. A real-world implementation of this technology is the 188-core Metro network processor used in Cisco’s CRS-1 terabit router.

For practical reasons, the cores tend to be very simple, far simpler than even a PowerPC or MIPS core. But this is exactly what you want for optimal performance efficiency. One of the most compelling aspects of the Tensilica technology is that the hardware design and the associated software toolchain (compiler, debugger, simulator) are generated in concert, giving developers a reasonable path to system implementation. Even though the resulting SoC will only serve a domain of applications, the extra initial cost may be more than justified when you’re dealing with large numbers of chips and unrelenting power constraints.

The advantages of this approach for petascale systems are evident when you compare the 10 petaflop Opteron-based and Blue Gene-based systems mentioned above with one constructed from configurable processors targeted specifically to climate modeling. The Berkeley guys estimate that a system built with Tensilica technology would only draw 3 megawatts and cost just $75 million. True, it’s not a general-purpose system, but neither is it a one-off machine for a single application (like Japan’s MD-GRAPE machine, for example). With such an obvious cost and power advantage, the tradeoff between general-purpose and special-purpose computing seems like a good deal — again putting aside the software issues.
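Laying the three 10-petaflop estimates side by side makes the scale of the advantage plain (the ratios below are computed from the figures quoted above):

```python
# The three 10 PF system estimates quoted in the article:
# name -> (capital cost in $M, power draw in MW)
systems = {
    "Opteron (dual-core)": (1800, 179),
    "Blue Gene/L":         (2600, 27),
    "Tensilica (custom)":  (75, 3),
}

base_cost, base_power = systems["Tensilica (custom)"]
for name, (cost, power) in systems.items():
    print(f"{name:20s} ${cost:>4}M, {power:>3} MW "
          f"({cost / base_cost:.0f}x the cost, {power / base_power:.0f}x "
          f"the power of the Tensilica design)")
```

By these numbers the Opteron-based machine costs 24 times as much and burns roughly 60 times the power — which is the whole case for treating the supercomputer as an appliance.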

The real paradigm shift is thinking about supercomputers as appliances rather than as general-purpose computers. The LBNL researchers are focused only on petascale-level science applications like climate modeling, fusion simulation research or astrophysics, where hardware and power costs would seem to prevent a scaled up version of current architectures. The real trick though would be to generalize the model for mainstream computing.

A glimpse of how this might take shape was revealed in a recent IBM Research paper that described using the Blue Gene/P supercomputer as a hardware platform for the Internet. The authors of the paper point to Blue Gene’s exceptional compute density, highly efficient use of power, and superior performance per dollar. Regarding the drawbacks of the current infrastructure of the Internet, the authors write:

At present, almost all of the companies operating at web-scale are using clusters of commodity computers, an approach that we postulate is akin to building a power plant from a collection of portable generators. That is, commodity computers were never designed to be efficient at scale, so while each server seems like a low-price part in isolation, the cluster in aggregate is expensive to purchase, power and cool in addition to being failure-prone.

The IBM’ers are certainly talking about a more general-purpose petascale application than the Berkeley researchers, but one aspect is the same: ditch the loosely coupled, commodity-based systems in favor of a tightly coupled, customized architecture that focuses on low power and high throughput. If this is truly the model that emerges for ultra-scale computing, then the whole industry is in for a wild ride.


As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at
