Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 18, 2011

NVIDIA Offers Exascale Vision at SC11

Michael Feldman

When NVIDIA CEO Jen-Hsun Huang delivered his keynote at SC11 this week, it was easy to forget that a few short years ago, the company and its GPU products had absolutely nothing to do with supercomputing. Today, of course, the technology is a driving force in the HPC ecosystem and is challenging the entrenched interests of chip makers Intel, AMD, and IBM.

Not that GPUs are a huge revenue generator for NVIDIA just yet. Of the company’s $3.3 billion in annual revenue, just $100 million can be attributed to HPC Tesla sales. “We haven’t turned it into a great business yet,” Huang told HPCwire, after Tuesday’s keynote.

NVIDIA’s journey down the HPC path did not begin in the company boardroom, however. According to Huang, the most important day for GPU computing came several years ago, when two doctors from Massachusetts General Hospital in Boston approached NVIDIA with the idea of using its GPUs for computed tomography (CT) image reconstruction to detect breast cancer.

The problem with the hospital’s setup at the time was that an HPC cluster was needed for the compute-intensive rendering of the CT scans. The doctors wanted to shrink this work down onto a workstation, and had heard that a new-fangled GPU feature called programmable shaders might make it possible to tap into the floating point power of graphics processors.

Sure enough, the GPUs worked as expected, and the doctors were able to reduce CT rendering times, improving the whole diagnostic workflow. Although Mass General only bought two graphics cards for its needs at the time, Huang says GPUs are now the de facto rendering accelerator and are in 100 percent of CT scanners today.

The rest, as they say, is history. Today all of the HPC OEMs offer NVIDIA GPU-equipped systems of one sort or another, and system deployments are on the rise. According to IDC, 28 percent of HPC sites were using accelerators in 2010 — predominantly NVIDIA GPUs — from a standing start of zero in 2005.

At the top of the HPC food chain, there are now 35 TOP500 systems with NVIDIA GPUs (twice as many as in June), and three of the top five supercomputers are equipped with them. More are on the way in 2012, with the 20-petaflop Titan system at Oak Ridge National Lab and the 11.5-petaflop Blue Waters super at NCSA.

Most of the popularity of this architecture for HPC rests on the fact that NVIDIA’s GPUs are ubiquitous in the adjacent areas of computing. Today there are 350 million or so CUDA-capable GPUs that have been shipped, the majority of which are in desktops and laptops, and this has attracted over 120 thousand CUDA developers. As a result, CUDA programming is being taught at nearly five hundred universities around the world.

In his SC11 keynote, Huang pointed out that the rise of HPC-style GPU computing has come about because traditional CPUs, especially x86 ones, have become rather inefficient at compute- and data-intensive computation. For example, he said CPUs use 50 times as much energy to schedule instructions, and 20 times as much to move data, as they do to perform the actual calculation.

GPUs, by contrast, are designed to minimize data movement, and although they have poor single-threaded performance because of their simple processing engines, there are many more of them to do the work in parallel. That makes for more efficient computation, assuming the application can be molded into the GPU computing model.

Huang believes the demand for energy-efficient HPC flops will work in NVIDIA’s favor, noting that “supercomputers have become power limited — just like cell phones, just like tablets.” From his perspective, future GPUs will be the platform of choice to power exaflop machines. And although Huang said those supercomputers will be able to perform at that level with just 20 MW, his crystal ball doesn’t have that happening until 2022.
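As a rough sanity check on Huang’s target, those two figures — an exaflop of performance inside a 20 MW power envelope — can be converted into an energy budget per floating point operation (the machine name and numbers below come straight from the figures above; the calculation itself is just a back-of-the-envelope sketch, not from the keynote):

```python
# Back-of-the-envelope: what does a 20 MW exaflop machine imply
# in energy per floating point operation?
power_watts = 20e6   # 20 MW total power budget
flops = 1e18         # 1 exaflop = 10^18 floating point ops per second

# Energy per operation = power / operation rate (joules per flop)
joules_per_flop = power_watts / flops
picojoules_per_flop = joules_per_flop * 1e12

print(picojoules_per_flop)  # -> 20.0 picojoules per flop
```

A budget of roughly 20 picojoules per operation — covering not just the arithmetic but instruction scheduling and data movement — helps explain why Huang frames the exascale problem in terms of where CPUs spend their energy.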

In that timeframe, a second or third generation integrated ARM-GPU processor will be the most likely design. NVIDIA’s “Maxwell” GPU generation, scheduled to make its appearance in the middle of the decade, is slated to be the first NVIDIA platform to integrate their upcoming “Project Denver” ARM CPU, a homegrown design that will become the basis for all of the company’s product lines. From then on, it’s safe to assume that integration will just get tighter. By 2022, it may not make much sense to even refer to these heterogeneous processors as GPUs anymore.

NVIDIA’s early lead in the HPC accelerator business is not insurmountable though. Intel is also positioning itself to be the dominant chip maker of the exascale era, drawing its own line in the sand with a target of 2018 for an Intel-powered exaflop machine. The most likely processor design for such a system will involve Xeon cores integrated with MIC cores on the same chip, although no public plans to that effect have been aired.

AMD has been more equivocal with regard to its exascale aspirations, but the company has certainly been the early mover in heterogeneous CPU-GPU designs with its Fusion APU architecture. Their near-term plans involve putting high-end “Bulldozer” cores into an APU next year as well as adding ECC to their GPU computing line.

There could be other vendors challenging NVIDIA and its competitors for the future of supercomputing as well. Texas Instruments, for example, has just officially launched a floating point DSP with rather impressive performance-per-watt numbers that is being cross-targeted at HPC. Other ARM vendors could get into the act too, especially if the architecture is able to establish itself in the server space with the upcoming 64-bit designs.

The lesson of NVIDIA, pointed out by Huang in his keynote, is that disruptive technologies, like GPU computing, often emerge from new products, like cell phones and tablets, which quickly ramp into volume markets. And although NVIDIA has managed to exploit that phenomenon very effectively for HPC over the last five years, it is unlikely to be the last company to do so. The volume market for the processor of the exascale era may not even exist yet.
