NVIDIA Unleashes Fermi GPU for HPC

By Michael Feldman

November 15, 2009

NVIDIA has announced the first Fermi GPU products here at the Supercomputing Conference (SC09) in Portland, Oregon, where thousands of attendees will get a chance to see the company’s next-generation chip in action. The GPUs will first touch down in NVIDIA’s new Tesla 20-series products aimed at HPC workstations and servers. The company will be demonstrating the new hardware at its booth on the SC09 exhibition floor, starting on Tuesday.

For those of you who somehow missed the big Fermi unveiling in September, NVIDIA’s latest GPU looks and acts much like a vector processor. The new architecture offers double-precision (DP) floating point performance north of 500 gigaflops per chip, ECC memory support, L1 and L2 caches, GDDR5 support, and a raft of new features to make the processor more programmer friendly, including C++ support. In short, Fermi is a true computational GPU, designed to offer a much wider application aperture for HPC, visual computing and data analytics than any previous graphics processor.

This week at SC09, the company announced four Tesla 20 offerings — the C2050 and C2070 for workstations, and the S2050 and S2070 for 1U servers. What follows are the specs the company is quoting today, but since the products won’t hit the streets until next year, NVIDIA cautions that these numbers are “subject to change.”

Unlike the Tesla 10-series, which came standard with 4 GB of on-board memory per GPU, the first 20-series products are offered in two memory configurations. The x2050 models come with 3 GB per GPU (2.625 GB per GPU with ECC enabled), while the x2070 models double that to 6 GB per GPU (5.25 GB per GPU with ECC enabled). Local memory capacity is quite important to these devices, since the new Teslas use the PCI Express bus to transfer data back and forth to the CPU. To avoid that time-consuming data shuffling, it pays to keep the entire data set the GPU is operating on in its local memory.
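To make that concrete, here is a minimal CUDA C sketch of the pattern in question: upload the working set once, iterate on it entirely in GPU memory, and copy results back once at the end. The kernel name, loop count and data size are invented for illustration, not taken from NVIDIA’s materials.

    // Hypothetical sketch: cross the PCI Express bus only twice, rather
    // than once per compute step. scale_kernel is a trivial stand-in
    // for a real simulation kernel.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void scale_kernel(double *data, int n, double factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;                   // 8 MB of doubles; a real
        const size_t bytes = n * sizeof(double); // working set could fill 3 GB

        double *h_data = (double *)malloc(bytes);
        for (int i = 0; i < n; ++i) h_data[i] = 1.0;

        double *d_data;
        cudaMalloc(&d_data, bytes);
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice); // upload once

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        for (int step = 0; step < 1000; ++step) // no PCIe traffic in this loop
            scale_kernel<<<blocks, threads>>>(d_data, n, 1.000001);

        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost); // download once
        printf("sample element: %f\n", h_data[0]);

        cudaFree(d_data);
        free(h_data);
        return 0;
    }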

NVIDIA is planning for volume deployment of the new Teslas starting in May 2010. That’s probably later than the company would have preferred, given that there are plenty of users who would like to get their hands on them today. But with no equivalent technology in the HPC market, NVIDIA can afford to slip and slide a bit with the rollout. Fortunately, developers can get a jump start on their codes today. The CUDA C/C++ 3.0 beta, which incorporates Fermi support, is already available for download on NVIDIA’s Web site.
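One practical note for anyone grabbing the 3.0 beta: Fermi corresponds to CUDA compute capability 2.0, so a sketch like the one above is compiled for the new chip with the sm_20 target (the file name here is just a placeholder):

    nvcc -arch=sm_20 resident.cu -o resident

Built that way, the code can take advantage of Fermi-specific features that the older 1.x targets lack.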

When the new hardware does arrive, it will look much the same as the 10-series boards. As before, the workstation Teslas are populated with a single GPU, but because it’s Fermi technology, they deliver a lot more peak DP horsepower — between 520 and 630 DP gigaflops per chip. That means a Dell or HP workstation, which can house two of these cards, can provide well over a teraflop. NVIDIA quotes typical power draw at 190W, with a maximum of 225W. That’s a significant bump from the peak draw of the current C1060s at 187.8W, but since double precision performance is several times higher on the new parts, performance per watt is much improved.
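Some back-of-the-envelope arithmetic supports that claim. NVIDIA’s published figure for the C1060 is roughly 78 DP gigaflops, which at its 187.8W peak works out to about 0.4 gigaflops per watt. A 520-gigaflop Fermi board at its 225W maximum comes in around 2.3 gigaflops per watt — better than a five-fold improvement, using the vendor’s own numbers.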

The Tesla server boards contain four Fermi GPUs and provide between 2.1 and 2.5 teraflops of DP performance — pretty amazing figures for a 1U box. Again, there’s a power penalty: 900W under a typical load, with a maximum of 1200W. That’s roughly twice the power draw of a typical dual-socket x86 1U server. However, since the fastest x86 server chips churn out roughly 100 peak gigaflops per CPU, a Tesla server is going to be about five times better in the performance-per-watt department.

GPUs have an additional advantage. Compared to graphics memory, CPU memory tends to be much more bandwidth constrained, so it is comparatively more difficult to extract all the theoretical FLOPS from the processor. This is one of the principal reasons that performance on data-intensive apps almost never scales linearly on multicore CPUs. GPU architectures, on the other hand, have always been designed as data throughput processors, so the FLOPS-to-bandwidth ratio is much more favorable.
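A rough comparison (using published specs rather than figures from this announcement) illustrates the point: the C1060 pairs about 78 DP gigaflops with roughly 102 GB/second of memory bandwidth — around 1.3 bytes of bandwidth per flop — while a 3 GHz quad-core x86 pairs roughly 48 DP gigaflops with perhaps 30 GB/second, or about 0.6 bytes per flop. In rough terms, the GPU enjoys twice the bandwidth headroom per unit of compute.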

Compared to a quad-core x86 CPU, application speedups of 10x to 200x are fairly typical on the current-generation 10-series. For example, using the C1060, users have demonstrated a 31x speedup for seismic processing, 83x for certain financial computing applications, and 17x on some molecular dynamics codes. Those numbers are bound to improve further once the Fermi-equipped Teslas are in the field.

Beyond the performance numbers, NVIDIA thinks its best story is really price-performance. But first you have to get past the up-front costs. The new Teslas are not cheap. Suggested retail pricing for the 20-series lineup is as follows: C2050 ($2,499), C2070 ($3,999), S2050 ($12,995) and S2070 ($18,995), which works out to about twice the cost of the current 10-series Teslas: C1060 ($1,299) and S1070 ($8,995). From a capability point of view, though, the Fermi GPUs offer a lot more computational power and application range.

Keep in mind that while the double precision floating point performance for the 20-series parts has improved by a factor of 7 or 8, single precision (SP) performance will get a much more modest bump. Assuming the advertised 2:1 ratio for SP:DP FLOPS, single precision performance will only increase by about 20 percent compared to the Tesla 10s. That’s significant, but it might not be worth the extra cost if your application uses mostly single precision and you don’t require the other dandy capabilities that come with Fermi.
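The arithmetic behind that estimate: the C1060’s widely quoted single precision peak is about 933 gigaflops, and at a 2:1 ratio, a Fermi part delivering 520 to 630 DP gigaflops would offer roughly 1,040 to 1,260 SP gigaflops — an improvement somewhere between 10 and 35 percent, with the midpoint landing near the 20 percent mark.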

The bottom line is that in 2010, $5,000 can buy you a teraflop of hardware. That’s roughly a 10-fold improvement in price-performance compared to an equivalent CPU-based system. Of course, you have to factor in that you need a bunch of CPU hardware to drive the GPUs — at minimum, one CPU core per graphics processor. By NVIDIA’s reckoning, a 17 teraflop HPC cluster that makes maximum use of Tesla 20 hardware would run about $250K, while an equivalent CPU-only cluster would cost about $1 million. But because of reduced power and cooling costs, the GPU-accelerated cluster will rack up additional savings over the lifetime of the system.
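NVIDIA’s figure passes a quick sanity check against its own list prices: eight S2050 units at roughly 2.1 teraflops apiece deliver about 17 teraflops for around $104K, leaving the remaining $146K or so of the $250K budget for host servers, interconnect and storage.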

By offering high performance computing at a fraction of its current cost, NVIDIA is betting that GPU-based HPC will not only become commonplace, but will grow the market. Application deployments that just weren’t economically feasible to do with CPUs should now become quite attractive. When the new Teslas come online next year, this will be an especially important trend to watch.
