NVIDIA Takes GPU Computing to the Next Level

By Michael Feldman

September 29, 2009

GPU Computing 2.0 is upon us. Today at the NVIDIA GPU Technology Conference in San Jose, Calif., company CEO Jen-Hsun Huang unveiled a seriously revamped graphics processor architecture representing the biggest step forward for general-purpose GPU computing since the introduction of CUDA in 2006. The stated goal behind the new architecture is two-fold: to significantly boost GPU computing performance and to expand the application range of the graphics processor.

The new architecture, codenamed “Fermi,” incorporates a number of new features aimed at technical computing, including support for Error Correcting Code (ECC) memory and greatly enhanced double precision (DP) floating point performance. Those additions remove the two major limitations of current GPU architectures for the high performance computing realm, and position the new GPU as a true general-purpose floating point accelerator. Sumit Gupta, senior product manager for NVIDIA’s Tesla GPU Computing Group, characterized the new architecture as “a dramatic step function for GPU computing.” According to him, Fermi will be the basis of all NVIDIA’s GPU offerings (Tesla, GeForce, Quadro, etc.) going forward, although the first products will not hit the streets until sometime next year.

Besides ECC and a big boost in floating point performance, Fermi also more than doubles the number of cores (from 240 to 512), adds L1 and L2 caches, supports the faster GDDR5 memory, and increases memory reach to one terabyte. NVIDIA has also tweaked the hardware to enable greater concurrency and utilization of chip resources. In a nutshell, NVIDIA is making its GPUs a lot more like CPUs, while expanding the floating point capabilities.

First up is the addition of ECC support, a topic we covered earlier this month (not realizing that NVIDIA was just weeks away from officially announcing it). The impetus behind ECC for GPUs is the same as it was for CPUs: to ensure data integrity is maintained throughout the memory hierarchy so that errant bit flips don’t produce erroneous results. Without this level of reliability, GPU computing would have remained a niche play in supercomputing.

In Fermi, ECC will be supported throughout the architecture. All major internal memories are protected, including the register file and the new L1 and L2 caches. For off-chip DRAM, ECC has been cooked into the memory controller interfaces on the GPU. This entailed a significant engineering effort on NVIDIA’s part, requiring a complete redesign of on-chip memory and the memory controller interface logic. With these enhancements, NVIDIA has achieved the same level of memory protection as a CPU running in a server. Gupta says ECC, which has little application for traditional graphics, will only be enabled for the company’s GPU computing products.
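To illustrate the principle (not NVIDIA’s actual implementation, which hasn’t been disclosed), here is a minimal single-error-correcting Hamming(7,4) code in Python — the classic building block behind ECC memory schemes:

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit codeword with 3 parity bits."""
    d = [(nibble >> i) & 1 for i in range(4)]      # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                        # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                        # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                        # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]    # codeword positions 1..7

def hamming74_decode(codeword):
    """Correct any single flipped bit and recover the original 4-bit value."""
    b = list(codeword)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + 2 * s2 + 4 * s3                # 1-based position of the error
    if syndrome:
        b[syndrome - 1] ^= 1                       # repair the bit flip
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)
```

Real ECC DRAM uses a wider code (typically 72 bits protecting 64), but the mechanism — extra parity bits whose syndrome pinpoints a flipped bit so it can be corrected on the fly — is the same.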

To support this error correction feature, future products will use GDDR5 memory, which is the first graphics memory specification that incorporates error detection. (NVIDIA currently uses GDDR3 in its products, while AMD has already made the switch to GDDR5.) A nice side effect of GDDR5 is that it offers more than twice the bandwidth of GDDR3, although the actual performance for products will depend upon the specific memory interface and memory speed. For the Tesla products, it would be reasonable to expect a doubling of memory throughput.
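As a back-of-the-envelope sketch (the bus widths and data rates below are illustrative assumptions, not announced Fermi specs), peak memory bandwidth is just bus width times transfer rate:

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak bandwidth in GB/s: bytes per transfer x billions of transfers/sec."""
    return bus_width_bits / 8 * data_rate_gtps

gddr3 = peak_bandwidth_gbs(512, 1.6)   # hypothetical 512-bit GDDR3 interface
gddr5 = peak_bandwidth_gbs(384, 3.6)   # hypothetical 384-bit GDDR5 interface
# Even on a narrower bus, the faster GDDR5 part comes out well ahead.
```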

Better yet, since Fermi supports 64-bit addressing, memory reach is now a terabyte. Although it’s not yet practical to place that much DRAM on a GPU card, memory capacities will surely exceed the 4 GB per GPU limit in the current Tesla S1070 and C1060 products. For data-constrained applications, the larger memory capacities will lessen the need for repeated data exchanges between the CPU and the GPU, since more of the data can be kept local to the GPU. This should help boost overall performance for many applications, but especially seismic processing, medical imaging, 3D electromagnetic simulation and image searching.

The addition of L1 and L2 caches is an entirely new feature for GPUs, aimed at the irregular data access patterns of many scientific codes. As in a CPU, the caches reduce data access latency and increase throughput, with the overall goal of keeping the working data as close as possible to the computation. Codes that will see a particular benefit from caching include sparse linear algebra and other sparse matrix computations, FEA applications, and ray tracing.

In Fermi, the L1 cache is bundled with shared memory, an internal scratchpad memory that already exists in the current GT200 architecture. But while shared memory is under application control, the L1 cache is managed by the hardware. Fermi provides each 32-core group (or streaming multiprocessor) with 64 KB that is divided between the L1 cache and shared memory. Two configurations are supported: either 48 KB of shared memory and 16 KB of L1, or vice versa. The L2 cache is more straightforward: 768 KB shared across all the GPU cores.
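The split can be sketched in a few lines (Python for illustration only; on real hardware the choice would presumably be made through the CUDA runtime, not application code like this):

```python
def sm_memory_split(prefer_shared):
    """Divide the 64 KB of per-SM on-chip memory between software-managed
    shared memory and the hardware-managed L1 cache, per the two
    configurations Fermi supports."""
    total_kb = 64
    shared_kb = 48 if prefer_shared else 16
    return {"shared_kb": shared_kb, "l1_kb": total_kb - shared_kb}
```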

Another big performance boost comes from the pumped-up double precision support. Gupta says the GT200 architecture has a 1:8 performance ratio of double precision to single precision, which is why the current Tesla products don’t even manage to top 100 DP peak gigaflops per GPU. The new architecture changes this ratio to 1:2, which represents a more natural arrangement (inasmuch as double precision uses twice the number of bits as single precision). Because NVIDIA has also doubled the total core count, DP performance will enjoy an 8-fold increase. By the time the next Tesla products appear, we should be seeing peak DP floating point performance somewhere between 500 gigaflops and 1 teraflop per GPU.
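The arithmetic behind that claim checks out — a quick sanity check using only the figures quoted above:

```python
gt200_cores, fermi_cores = 240, 512
gt200_dp_per_sp, fermi_dp_per_sp = 1 / 8, 1 / 2

# DP gain = (more cores) x (better DP:SP ratio), all else being equal
dp_gain = (fermi_cores / gt200_cores) * (fermi_dp_per_sp / gt200_dp_per_sp)
# (512 / 240) * 4 is roughly 8.5 -- the "8-fold" increase cited above
```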

NVIDIA engineers have also improved floating point accuracy. The previous architecture was IEEE compliant for double precision, but single precision had some non-compliant corner cases. Fermi implements the latest IEEE 754-2008 floating point standard, including a fused multiply-add (FMA) instruction that helps retain precision. According to Gupta, that means the new GPUs will be more precise, floating-point-wise, than even x86 CPUs.
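The precision benefit of FMA is that the product a*b is not rounded before the add; the final result is rounded only once. A small Python demonstration (the fused path is emulated here with exact rational arithmetic, since most languages of this era have no native FMA):

```python
from fractions import Fraction

a = b = 1.0 + 2.0 ** -27
c = -(1.0 + 2.0 ** -26)

# Unfused: a*b rounds to 1 + 2**-26 first, so the tiny 2**-54 term is lost
# and the subsequent add cancels to exactly zero.
unfused = a * b + c

# Fused (emulated): compute a*b + c exactly, then round once at the end.
fused = float(Fraction(a) * Fraction(b) + Fraction(c))   # 2**-54, not zero
```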

The Fermi design adds a number of concurrency features so as to make better use of GPU resources. For example, dual thread scheduling was implemented so that each 32-core streaming multiprocessor can execute two groups of threads simultaneously, in a manner analogous to Intel’s hyper-threading technology for x86 CPUs.

In addition, the GPU’s hardware thread scheduler (HTS) has also been enhanced so that thread context switching is ten times faster than it was before. To take advantage of the quicker switching, the HTS is able to concurrently execute multiple slices of computational work (known in CUDA parlance as “kernels”).

The new capability allows the programmer to offload more of the application to the GPU, since even relatively small pieces of work can be bundled up and shuttled to the GPU en masse without having to worry about the housekeeping and overhead of sending each one separately. And since the HTS takes care of parallelization, more computation can be done in a shorter period of time.
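A loose analogy in plain Python (host-side pseudocode for the idea, not CUDA): submit a batch of small independent work items to a pool at once, instead of dispatching, waiting on, and collecting each one serially:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(tasks):
    """Dispatch a batch of independent callables concurrently; results come
    back in submission order, with one dispatch for the whole batch."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda task: task(), tasks))

small_jobs = [lambda i=i: i * i for i in range(8)]
print(run_batch(small_jobs))   # [0, 1, 4, 9, 16, 25, 36, 49]
```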

Along these same lines, data transfer has been parallelized. Currently, a GPU calculation can overlap a CPU-GPU data transfer. Fermi provides a second DMA engine so two transfers can be overlapped with a computation. For example, one can simultaneously read in data from the CPU for the next computation while the current computation is executing and the data from the previous result is being written back to the CPU.
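That overlap amounts to a three-stage software pipeline. A sketch of the schedule (illustrative only; on real hardware the stages would be DMA transfers and kernel execution, not Python):

```python
def pipeline_schedule(n_chunks):
    """For each timestep, return which chunk is (uploading, computing,
    downloading). Once the pipeline fills, all three proceed at once."""
    steps = []
    for t in range(n_chunks + 2):
        upload   = t     if t < n_chunks          else None
        compute  = t - 1 if 0 <= t - 1 < n_chunks else None
        download = t - 2 if 0 <= t - 2 < n_chunks else None
        steps.append((upload, compute, download))
    return steps

# With 4 chunks, timestep 2 uploads chunk 2, computes chunk 1, and
# downloads chunk 0's result simultaneously -- one job per DMA engine
# plus the compute units, exactly the concurrency described above.
```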

On the software side, they’ve made a number of enhancements to support a more fully-featured programming environment. Most importantly, they’re extending the native C CUDA model to include C++. To do this they’ve added hardware support for features like virtual functions and exception handling. By the time the first Fermi products show up in 2010, CUDA will almost certainly have a native C++ compiler capability.

If all of that seems like a lot of smarts for a single chip, it is. NVIDIA says the new architecture will use a 40 nm process technology and encompass 3 billion transistors, which happens to be more than in any of the upcoming Xeon, Opteron, Itanium, or Power7 CPUs. Power consumption for the various Fermi-based products will be on par with the current offerings, but performance per watt will be much improved.

Announcing a new GPU architecture so far out ahead of actual products is a big departure for NVIDIA, and is yet another example of how GPU computing has brought the company closer to the CPU way of doing business. The company is especially interested in bringing in new players, such as manufacturing and big government supercomputing, which have mostly watched from the sidelines during GPU Computing 1.0. NVIDIA also believes Fermi will deepen its GPU computing penetration across all HPC segments — financial services, life sciences, oil & gas, and so on.

What they’re trying to accomplish, says Gupta, is to prepare ISVs and end users so they can start gearing up their software in advance of the actual hardware. From his perspective, “this is part of us becoming an HPC company.”
