Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

May 4, 2009

NVIDIA Shifts GPU Clusters Into Second Gear

by Michael Feldman

GPU-accelerated clusters are moving quickly from the “kick the tires” stage into production systems, and NVIDIA has positioned itself as the principal driver for this emerging high performance computing segment.

The company’s Tesla S1070 hardware, along with the CUDA computing environment, is starting to deliver real results for commercial HPC workloads. For example, Hess Corporation has a 128-GPU cluster performing seismic processing for the company. The 32 S1070s (4 GPUs per unit) are paired with dual-socket quad-core CPU servers and perform at the level of about 2,000 dual-socket CPU servers on some of the company’s workloads. For Hess, that means it can get the same computing horsepower at 1/20 the price and 1/27 the power consumption.
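The consolidation arithmetic behind those figures can be sanity-checked with the article’s own round numbers (a back-of-envelope sketch, not Hess’s actual accounting):

```python
# Back-of-envelope check of the Hess consolidation figures reported above.
# All numbers come from the article; nothing here is measured data.

gpu_systems = 32               # Tesla S1070 units in the cluster
gpus = gpu_systems * 4         # each S1070 houses 4 GPUs
cpu_servers_replaced = 2000    # dual-socket CPU servers it stands in for

assert gpus == 128             # matches the "128-GPU cluster" above

# Each S1070 is paired with one dual-socket host server, so the GPU
# cluster needs roughly 32 hosts plus 32 Tesla units in the rack.
consolidation = cpu_servers_replaced / gpu_systems
print(f"{gpus} GPUs stand in for {cpu_servers_replaced} CPU servers "
      f"(~{consolidation:.0f}x fewer host nodes)")
```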

Hess is not alone. Brazilian oil company Petrobras has built a 72-GPU Tesla cluster for its seismic codes. Although the company hasn’t released specific performance data, based on preliminary testing, Petrobras expects a 5X to 20X improvement over a CPU-based cluster platform. Chevron and Total SA are also experimenting with GPU acceleration, and although they haven’t divulged what types of systems are being used, NVIDIA products are almost certainly in the mix.

BNP Paribas, a French banking firm, is using a Tesla S1070 to compute equity pricing on the derivatives the company tracks. According to Stéphane Tyc, head of the GECD Quantitative Research group in the company’s Corporate and Investment Banking division, they were able to match the performance of 500 CPU cores with just half a Tesla board (two GPUs). Better yet, the platform delivered a 100-fold increase in computations per watt compared to a CPU-only system. “We were actually surprised to get numbers of that magnitude,” said Tyc. As of March, BNP Paribas had not deployed the system for live trading, but plans are already in place to port more software.

Up until now, all of these GPU-accelerated clusters had to be custom-built. To give GPU cluster users a more “out-of-the-box” experience, NVIDIA has launched its “Tesla GPU Preconfigured Cluster” strategy. Essentially, it’s a set of guidelines for OEMs and system builders of NVIDIA-accelerated clusters, the idea being to make GPU clusters as easy to order and install as their CPU-only counterparts. It parallels NVIDIA’s personal supercomputer workstation program, which the company rolled out in November 2008.

The guidelines consist of a set of hardware and software specs that define a basic GPU cluster configuration. In a nutshell, each cluster has a CPU head node that runs the cluster management software, an InfiniBand switch for node-to-node communication, and four or more GPU-accelerated compute nodes. Each compute node has a CPU server hooked up to a Tesla S1070 via PCI Express. On the software side, a system includes clustering software, MPI, and NVIDIA’s CUDA development tools. Most of this is just standard fare, but the cluster software is typically a Rocks roll for CUDA or something equivalent.
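As a rough illustration of that node layout, here is a Python sketch of the baseline configuration and of the common one-process-per-GPU convention for mapping a node-local MPI rank to a CUDA device index. The function names and the rank-to-device rule are illustrative assumptions, not part of NVIDIA’s published guidelines:

```python
# Illustrative model of the baseline preconfigured cluster described
# above: one CPU head node, an InfiniBand switch, and four or more
# compute nodes, each a CPU server attached to a Tesla S1070 (4 GPUs)
# over PCI Express.  Names and conventions here are assumptions.

GPUS_PER_NODE = 4  # one Tesla S1070 per compute node

def gpu_for_rank(local_rank: int) -> int:
    """Map an MPI process's node-local rank to a CUDA device index.

    With one MPI process per GPU, each process would call something
    like cudaSetDevice(gpu_for_rank(local_rank)) before launching
    kernels.  This modulo rule is a common convention, not a
    requirement of the guidelines.
    """
    return local_rank % GPUS_PER_NODE

def cluster_totals(compute_nodes: int) -> dict:
    """Totals for a preconfigured cluster of the given size."""
    return {
        "head_nodes": 1,
        "compute_nodes": compute_nodes,
        "gpus": compute_nodes * GPUS_PER_NODE,
    }

# The four-node minimum configuration described above yields 16 GPUs.
print(cluster_totals(4))
```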

NVIDIA itself isn’t building any systems. As it did with personal supercomputing, the company has enlisted partner OEMs and distributors to offer GPU-accelerated clusters. The system vendors can add value by selling their own clustering software, tools, services and hardware options. Currently, NVIDIA has signed more than a dozen players, including many of the usual HPC suspects: Cray, Appro, Microway, Penguin Computing, Colfax International, and James River Technical. NVIDIA has also corralled some regional workstation and server distributors to attain a more global reach. In this category are CADNetwork (Germany), E4 (Italy), T-Platforms (Russia), Netweb Technologies (India), and Viglen (UK). The complete list of partners is on NVIDIA’s Web site.

A bare-bones system — a head node and four GPU-accelerated servers — should run about $50,000. That configuration will deliver around 16 (single-precision) teraflops. Larger systems can scale into the hundreds of teraflops and run about $1 million. In this $50K to $1M price range, the systems are aimed at research groups of varying sizes. A low-end 16-GPU machine, for example, could serve a professor and his or her graduate research team, while a 100-GPU system would most likely be shared by multiple research groups across an organization.
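The price/performance arithmetic implied by those figures works out as follows (a sketch using only the article’s numbers; the even per-GPU split of the total is an assumption):

```python
# Rough price/performance arithmetic from the entry-level figures above.
# Assumes the 16 SP teraflops are spread evenly across the 16 GPUs,
# which is an illustrative simplification.

entry_price = 50_000   # head node + four GPU-accelerated servers
entry_tflops = 16      # single-precision, per the article
gpus = 4 * 4           # four Tesla S1070s, 4 GPUs apiece

print(f"~{entry_tflops / gpus:.0f} SP teraflop per GPU")
print(f"~${entry_price / entry_tflops:,.0f} per SP teraflop")
```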

This reflects how multi-teraflop CPU clusters are used today, but in the case of GPUs, the price point is an order of magnitude lower. NVIDIA’s goal is to make this capability available for the hundreds of thousands of researchers who could potentially use this level of computing, but who can’t afford a CPU-based system or don’t have the power or floor space to accommodate such a machine.

Software will continue to be the limiting factor, since many important technical computing codes are just now being ported to the GPU. CUDA-enabled packages like NAMD (NAnoscale Molecular Dynamics) and GROMACS (GROningen MAchine for Chemical Simulations) are well into development and will soon make their way into commercial systems. In the near future, OpenCL should offer another avenue for porting higher-level GPU computing codes. All of this means system builders will increasingly be able to craft turnkey GPU clusters for specific application segments.

If GPU clusters take off, it would be especially welcome news for NVIDIA. Like many chip manufacturers, the company is struggling through the economic downturn. Its revenues declined 16 percent last year, and it recorded its first net loss in a decade. The good news is that in the GPU computing realm, NVIDIA is the clear market leader. And while the company’s HPC offerings are not a volume business, if Tesla GPUs become the accelerator of choice for millions of researchers, that could change.
