November 20, 2009

Exascale Expectations

by Michael Feldman

During Al Gore’s SC09 keynote speech on Thursday, he correctly observed that “Moore’s Law is not a law of physics, it’s a law of self-fulfilling expectations.” So it is with the unwritten law of supercomputer power, which tells us that system performance will increase about 1,000-fold per decade. If that pace continues, we should see the first exaflop systems show up in the 2018-2019 timeframe.

Because that expectation is so ingrained in the HPC mindset, a subset of the community has already coalesced to make sure we hit the mark. In fact, one of the last sessions of SC09 on Friday morning was a discussion about the road to exascale. Some of the heavy hitters in HPC were on the panel, including Jack Dongarra, Peter Kogge, Marc Snir, and Steve Scott. Intel’s Bill Camp was the moderator.

As you might expect, there was general agreement about the big exascale challenges: software scalability and programming models; memory and storage bandwidth; system resiliency; and power and cooling. All of these issues really stem from the fact that processing horsepower is outrunning the capabilities of all the surrounding technologies: core counts per processor and processor counts per HPC system will keep growing faster than the software and the other system components can keep up. As a result, much of the panel discussion tended to drift into a sort of "the sky is falling" narrative.

Cray CTO Steve Scott, representing the only vendor on the panel, had a somewhat different take. From his perspective, there aren’t any real showstoppers on the way to exaflop computing; there just aren’t any ideal solutions. He predicted that the first machine will arrive in 2017 and will be based on 16nm process technology. Scott estimated each socket will deliver about 8 teraflops, which works out to 125,000 sockets for one exaflop. The entire system will draw 31 MW and span about 10,000 square feet. All doable.
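
Scott's figures hang together on the back of an envelope. A quick sanity check (the teraflop, socket, and megawatt numbers are from his talk; the per-socket power budget is simply derived from them):

```python
# Sanity-checking Scott's exascale estimates. The input figures come from
# the panel; the variable names and derived numbers are mine.
EXAFLOP = 1e18            # target system performance, in FLOPS
flops_per_socket = 8e12   # ~8 teraflops per socket on 16nm silicon

sockets = EXAFLOP / flops_per_socket
print(f"sockets needed: {sockets:,.0f}")    # 125,000 -- matches his estimate

# A 31 MW system budget implies the per-socket power envelope:
system_power_watts = 31e6
print(f"watts per socket: {system_power_watts / sockets:.0f}")  # ~248 W
```

That roughly 250 W per 8-teraflop socket is the energy-efficiency bar the heterogeneous designs discussed below are meant to clear.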

What Cray is counting on though is a shift to heterogeneous processing. “Most of the FLOPS are going to be in the so-called accelerators, whether they’re GPUs or SIMD vector units,” said Scott. “We’ll have some fast threads for performance on serial codes, coupled to large numbers of efficient, low control overhead, more efficient FLOPS. This is absolutely necessary to get both good performance and energy efficiency.”

But, he said, the memory bandwidth per FLOP is going to have to be a lot lower than it is today. The result is that some apps will be left behind performance-wise. Codes that do big matrix multiplies will be fine, but ones that need to do lots of memory references will be “SOL,” according to Scott. In fact, this has already occurred. Many applications are able to use only a small fraction of the potential performance on supercomputers, and that’s been going on for years.
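The divide Scott describes comes down to arithmetic intensity: how many FLOPs a code performs per byte it moves. A rough comparison (illustrative numbers, not from the talk) shows why dense matrix multiply survives a shrinking bytes-per-FLOP budget while streaming codes do not:

```python
# Arithmetic intensity (FLOPs per byte of memory traffic) for two kernels.
# These are idealized counts for illustration only.

def matmul_intensity(n):
    """Dense n x n matrix multiply: work grows as n^3, data as n^2."""
    flops = 2 * n**3              # n^3 multiply-add pairs
    bytes_moved = 3 * n * n * 8   # read A and B, write C, double precision
    return flops / bytes_moved

def stream_triad_intensity():
    """a[i] = b[i] + s * c[i]: 2 FLOPs per 24 bytes (three 8-byte accesses)."""
    return 2 / 24

print(f"1000x1000 matmul: {matmul_intensity(1000):.1f} FLOPs/byte")  # 83.3
print(f"stream triad:     {stream_triad_intensity():.3f} FLOPs/byte")  # 0.083
```

Three orders of magnitude separate the two, so a machine provisioned with little memory bandwidth per FLOP leaves the triad-like codes, in Scott's phrase, "SOL."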

Scott also concedes that some problems, such as reliability, will have to be accounted for in new ways. The commodity parts upon which these systems are based won’t have built-in fault tolerance, since the volume market for these components (i.e., client devices and large datacenter servers) doesn’t require it. But good system design should take care of the problem at the hardware level.

“We can make the systems resilient,” claimed Scott. “I’m not too worried about that. It’s the applications that are the hard part.” He said checkpoint-restart can be used as a temporary solution, but as mean time between failure (MTBF) approaches the time to do a checkpoint, that model breaks down. What will be needed is application-side help that is able to deal with frequent failures. Scott thinks there’s some potential for automatic application resiliency, via the compiler and runtime, but it’s likely that the user application model will have to change to handle full resiliency. Again though, there are no showstoppers.
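The breakdown Scott describes can be made concrete with a toy model. Using Young's classic approximation for the optimal checkpoint interval (the cost and MTBF numbers below are invented for illustration):

```python
import math

# Toy model of checkpoint/restart efficiency. Young's approximation puts the
# optimal checkpoint interval at tau = sqrt(2 * C * M), where C is the time
# to write a checkpoint and M is the mean time between failures. Overhead is
# the checkpoint writes plus the work lost (on average) to each failure.
def useful_fraction(checkpoint_cost, mtbf):
    tau = math.sqrt(2 * checkpoint_cost * mtbf)          # optimal interval
    overhead = checkpoint_cost / tau + tau / (2 * mtbf)  # writes + lost work
    return max(0.0, 1.0 - overhead)

C = 0.25  # hours to write one checkpoint (made-up figure)
for mtbf in (100, 10, 1, 0.5):  # system MTBF in hours
    print(f"MTBF {mtbf:5.1f} h -> useful fraction {useful_fraction(C, mtbf):.2f}")
```

With a 100-hour MTBF the machine does useful work about 93 percent of the time; once the MTBF falls to twice the checkpoint time, useful work drops to zero, which is exactly the regime where the application itself has to help.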

It’s worth noting that average HPC systems follow the same relative performance pace as the top machines. For example, the bottom-ranked supercomputer on the TOP500 list has also increased its Linpack performance about 1,000-fold per decade. That means when the first exaflop system appears, the 500th fastest computer in the world will be 10 petaflops or so. Thus, when the exascale era is inaugurated in eight or nine years, most HPC users will be booting up their first petaflop machines — and will be thrilled to do so.
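
The 1,000-fold-per-decade law is easy to project. Taking 2008, the year Roadrunner broke the petaflop barrier, as a one-petaflop baseline (my choice of anchor, not the article's):

```python
# The "unwritten law" of supercomputing: ~1,000x performance per decade,
# i.e. about 2x per year. Baseline assumption: 1 petaflop in 2008.
base_year, base_flops = 2008, 1e15

for year in (2008, 2013, 2018):
    flops = base_flops * 1000 ** ((year - base_year) / 10)
    print(f"{year}: ~{flops:.0e} FLOPS")
```

The curve lands on one exaflop in 2018, squarely inside the 2018-2019 window the article opens with.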