NVIDIA Shifts GPU Clusters Into Second Gear

By Michael Feldman

May 4, 2009

GPU-accelerated clusters are moving quickly from the “kick the tires” stage into production systems, and NVIDIA has positioned itself as the principal driver for this emerging high performance computing segment.

The company’s Tesla S1070 hardware, along with the CUDA computing environment, is starting to deliver real results for commercial HPC workloads. For example, Hess Corporation has a 128-GPU cluster performing seismic processing for the company. The 32 S1070s (4 GPUs apiece) are paired with dual-socket quad-core CPU servers and deliver the performance of about 2,000 dual-socket CPU servers on some of the company’s workloads. For Hess, that means it can get the same computing horsepower for 1/20 the price and 1/27 the power consumption.
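The consolidation math behind those figures is simple to check. A back-of-the-envelope sketch (the numbers are those reported for Hess; the variable names are this example’s own):

```python
# Figures from the Hess deployment as reported in the article.
gpus_per_s1070 = 4
s1070_units = 32
total_gpus = s1070_units * gpus_per_s1070            # 128 GPUs

cpu_servers_replaced = 2000
# Each GPU-accelerated node (one CPU server plus one S1070) stands in
# for roughly this many dual-socket CPU servers:
consolidation_ratio = cpu_servers_replaced / s1070_units   # 62.5

print(total_gpus, consolidation_ratio)
```

In other words, each S1070-equipped node does the work of around 60 conventional servers on these seismic codes, which is where the 1/20 price and 1/27 power figures come from.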

Hess is not alone. Brazilian oil company Petrobras has built a 72-GPU Tesla cluster for its seismic codes. Although the company hasn’t released specific performance data, based on preliminary testing, Petrobras expects to see a 5X to 20X improvement compared to a CPU-based cluster platform. Chevron and Total SA are also experimenting with GPU acceleration, and although they haven’t divulged what types of systems are being used, NVIDIA products are almost certainly in the mix.

BNP Paribas, a French banking firm, is using a Tesla S1070 to compute equity pricing on the derivatives the company tracks. According to Stéphane Tyc, head of the company’s Corporate and Investment Banking Division in the GECD Quantitative Research group, they were able to achieve the same performance as 500 CPU cores with just half a Tesla board (two GPUs). Better yet, the platform delivered a 100-fold increase in computations per watt compared to a CPU-only system. “We were actually surprised to get numbers of that magnitude,” said Tyc. As of March, BNP Paribas had not deployed the system for live trading, but there are already plans in place to port more software.

Up until now, all of these GPU-accelerated clusters had to be custom-built. In an effort to deliver a more “out of the box” experience for GPU cluster users, NVIDIA has launched its “Tesla GPU Preconfigured Cluster” strategy. Essentially, it’s a set of guidelines for OEMs and system builders of NVIDIA-accelerated clusters, the idea being to make GPU clusters as easy to order and install as their CPU-only counterparts. It parallels NVIDIA’s personal supercomputer workstation program, which the company rolled out in November 2008.

The guidelines consist of a set of hardware and software specs that define a basic GPU cluster configuration. In a nutshell, each cluster has a CPU head node that runs the cluster management software, an InfiniBand switch for node-to-node communication, and four or more GPU-accelerated compute nodes. Each compute node has a CPU server hooked up to a Tesla S1070 via PCI Express. On the software side, a system includes clustering software, MPI, and NVIDIA’s CUDA development tools. Most of this is just standard fare, but the cluster software is typically a Rocks roll for CUDA or something equivalent.
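As a rough illustration, that reference configuration can be modeled in a few lines. This is a hypothetical sketch; the function and field names are this example’s own invention, not NVIDIA’s actual spec:

```python
# Minimal model of the preconfigured-cluster guidelines: one head node,
# an InfiniBand switch, and four or more compute nodes, each a CPU
# server attached to a Tesla S1070 over PCI Express.
def build_cluster(compute_nodes=4):
    if compute_nodes < 4:
        raise ValueError("guidelines call for four or more compute nodes")
    return {
        "head_node": ["cluster management software", "MPI", "CUDA tools"],
        "interconnect": "InfiniBand switch",
        "compute_nodes": [
            {"server": "dual-socket CPU", "accelerator": "Tesla S1070 (4 GPUs, PCIe)"}
            for _ in range(compute_nodes)
        ],
    }

cluster = build_cluster()
gpu_count = 4 * len(cluster["compute_nodes"])   # 16 GPUs in the base configuration
```

The point of the guidelines is exactly this uniformity: an OEM can vary the node count and trimmings while the overall shape of the system stays fixed.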

NVIDIA itself isn’t building any systems. As the company did with personal supercomputing, it has enlisted partner OEMs and distributors to offer GPU-accelerated clusters. The system vendors can add value by selling their own clustering software, tools, services and hardware options. Currently NVIDIA has signed more than a dozen players, including many of the usual HPC suspects: Cray, Appro, Microway, Penguin Computing, Colfax International, and James River Technical. NVIDIA has also corralled some regional workstation and server distributors to attain a more global reach. In this category we have CADNetwork (Germany), E4 (Italy), T-Platforms (Russia), Netweb Technologies (India), and Viglen (UK). The complete list of partners is on NVIDIA’s Web site.

A bare-bones system — a head node and four GPU-accelerated servers — should run about $50,000. That configuration will deliver around 16 (single-precision) teraflops. Larger systems can scale into hundreds-of-teraflops territory and run up to $1 million. In this $50K to $1M price range, the systems are aimed at research groups of varying sizes. A low-end 16-GPU machine, for example, could serve a professor and his or her graduate research team, while a 100-GPU system would most likely be shared by multiple research groups spread across an organization.
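In rough numbers, the entry-level system works out to a few thousand dollars per single-precision teraflop. A quick sketch using the figures quoted above (variable names are this example’s own):

```python
# Entry-level price/performance: $50,000 buys a head node plus four
# S1070-equipped servers delivering about 16 single-precision teraflops.
base_price_usd = 50_000
base_sp_tflops = 16
base_gpu_nodes = 4

tflops_per_node = base_sp_tflops / base_gpu_nodes    # ~4 TF per S1070 node
usd_per_tflop = base_price_usd / base_sp_tflops      # $3,125 per SP teraflop
```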

This reflects how multi-teraflop CPU clusters are used today, but in the case of GPUs, the price point is an order of magnitude lower. NVIDIA’s goal is to make this capability available for the hundreds of thousands of researchers who could potentially use this level of computing, but who can’t afford a CPU-based system or don’t have the power or floor space to accommodate such a machine.

Software will continue to be the limiting factor, since many important technical computing codes are only now being ported to the GPU. CUDA-enabled packages like NAMD (NAnoscale Molecular Dynamics) and GROMACS (GROningen MAchine for Chemical Simulations) are well into development and will soon make their way into commercial systems. In the near future, OpenCL should offer another avenue for porting higher-level GPU computing codes. All of this means system builders will increasingly be able to craft turnkey GPU clusters for specific application segments.

If GPU clusters take off, it would be especially welcome news for NVIDIA. Like many chip manufacturers, the company is struggling through the economic downturn. Its revenues declined 16 percent last year, and it recorded its first net loss in a decade. The good news is that in the GPU computing realm, NVIDIA is the clear market leader. And while the company’s HPC offerings are not a volume business, if Tesla GPUs become the accelerator of choice for millions of researchers, that could change.

HPCwire