NVIDIA Takes GPU Computing to the Next Level

By Michael Feldman

September 29, 2009

GPU Computing 2.0 is upon us. Today at the NVIDIA GPU Technology Conference in San Jose, Calif., company CEO Jen-Hsun Huang unveiled a seriously revamped graphics processor architecture representing the biggest step forward for general-purpose GPU computing since the introduction of CUDA in 2006. The stated goal behind the new architecture is two-fold: to significantly boost GPU computing performance and to expand the application range of the graphics processor.

The new architecture, codenamed “Fermi,” incorporates a number of new features aimed at technical computing, including support for Error Correcting Code (ECC) memory and greatly enhanced double precision (DP) floating point performance. Those additions remove the two major limitations of current GPU architectures for the high performance computing realm, and position the new GPU as a true general-purpose floating point accelerator. Sumit Gupta, senior product manager for NVIDIA’s Tesla GPU Computing Group, characterized the new architecture as “a dramatic step function for GPU computing.” According to him, Fermi will be the basis of all NVIDIA’s GPU offerings (Tesla, GeForce, Quadro, etc.) going forward, although the first products will not hit the streets until sometime next year.

Besides ECC and a big boost in floating point performance, Fermi also more than doubles the number of cores (from 240 to 512), adds L1 and L2 caches, supports the faster GDDR5 memory, and increases memory reach to one terabyte. NVIDIA has also tweaked the hardware to enable greater concurrency and utilization of chip resources. In a nutshell, NVIDIA is making its GPUs a lot more like CPUs, while expanding the floating point capabilities.

First up is the addition of ECC support, a topic we covered earlier this month (not realizing that NVIDIA was just weeks away from officially announcing it). The impetus behind ECC for GPUs is the same as it was for CPUs: to make sure data integrity is maintained throughout the memory hierarchy so that errant bit flips don’t produce erroneous results. Without this level of reliability, GPU computing would remain a niche play in supercomputing.

In Fermi, ECC will be supported throughout the architecture. All major internal memories are protected, including the register file and the new L1 and L2 caches. For off-chip DRAM, ECC has been cooked into the memory controller interfaces on the GPU. This entailed a significant engineering effort on NVIDIA’s part, requiring a complete redesign of on-chip memory and the memory controller interface logic. With these enhancements, NVIDIA has achieved the same level of memory protection as a CPU running in a server. Gupta says ECC, which has little application for traditional graphics, will only be enabled for the company’s GPU computing products.

To support this error correction feature, future products will use GDDR5 memory, the first graphics memory specification to incorporate error detection. (NVIDIA currently uses GDDR3 in its products, while AMD has already made the switch to GDDR5.) A nice side effect of GDDR5 is that it offers more than twice the bandwidth of GDDR3, although actual performance will depend on the specific memory interface and memory speed. For the Tesla products, it would be reasonable to expect a doubling of memory throughput.

Better yet, since Fermi supports 64-bit addressing, memory reach is now a terabyte. Although it’s not yet practical to place that much DRAM on a GPU card, memory capacities will surely exceed the 4 GB per GPU limit in the current Tesla S1070 and C1060 products. For data-constrained applications, the larger memory capacities will lessen the need for repeated data exchanges between the CPU and the GPU, since more of the data can be kept local to the GPU. This should help boost overall performance for many applications, but especially seismic processing, medical imaging, 3D electromagnetic simulation and image searching.
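
The programming pattern this enables is simple: move the whole dataset onto the device once and keep it there across many kernel launches. Here is a minimal CUDA sketch of that pattern (the relax kernel and problem sizes are hypothetical placeholders):

    #include <cuda_runtime.h>

    // Hypothetical per-iteration kernel: a damped pointwise update.
    __global__ void relax(float *grid, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) grid[i] *= 0.99f;
    }

    void solve(const float *h_grid, float *h_out, int n, int iters) {
        float *d_grid;
        size_t bytes = (size_t)n * sizeof(float);

        // Copy the full working set to the GPU once...
        cudaMalloc(&d_grid, bytes);
        cudaMemcpy(d_grid, h_grid, bytes, cudaMemcpyHostToDevice);

        // ...iterate entirely out of device memory, with no
        // per-iteration CPU-GPU exchanges...
        for (int i = 0; i < iters; i++)
            relax<<<(n + 255) / 256, 256>>>(d_grid, n);

        // ...then copy the result back once at the end.
        cudaMemcpy(h_out, d_grid, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_grid);
    }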

The addition of L1 and L2 caches is an entirely new feature for GPUs. The caches were added to address the irregular data access patterns of many scientific codes. As in a CPU, the caches are there to reduce data access latency and increase throughput, with the overall goal of keeping the working data as close as possible to the computation. Codes that will see a particular benefit from caching include sparse linear algebra and sparse matrix computations, finite element analysis (FEA) applications, and ray tracing.
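
Sparse matrix-vector multiplication is the canonical case. In the compressed sparse row (CSR) sketch below, the reads of the x vector are driven by the column index array and land all over memory; those gathers are exactly the accesses a hardware-managed cache can now absorb:

    // Minimal CSR sparse matrix-vector multiply: y = A*x,
    // one thread per matrix row.
    __global__ void spmv_csr(int rows, const int *rowptr, const int *col,
                             const float *val, const float *x, float *y) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < rows) {
            float sum = 0.0f;
            for (int j = rowptr[row]; j < rowptr[row + 1]; j++)
                sum += val[j] * x[col[j]];  // irregular gather; served
                                            // by L1/L2 on Fermi
            y[row] = sum;
        }
    }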

In Fermi, the L1 cache is bundled with shared memory, an internal scratchpad memory that already exists in the current GT200 architecture. But while shared memory is under application control, the L1 cache is managed by the hardware. Fermi provides each 32-core group (or streaming multiprocessor) with 64 KB of on-chip memory that is divided between the L1 cache and shared memory. Two configurations are supported: either 48 KB of shared memory and 16 KB of L1, or vice versa. The L2 cache is more straightforward: it consists of 768 KB shared across all the GPU cores.
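
The split can be selected per kernel. A minimal sketch, assuming the cudaFuncSetCacheConfig call the CUDA runtime provides for this purpose (the stencil kernel is a hypothetical placeholder):

    #include <cuda_runtime.h>

    // Hypothetical kernel that stages its input tile in shared memory.
    __global__ void stencil(float *out, const float *in, int n) {
        __shared__ float tile[256];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();
        if (i < n) out[i] = 0.5f * tile[threadIdx.x];  // placeholder work
    }

    int main() {
        // A kernel that stages data explicitly can request the
        // 48 KB shared / 16 KB L1 configuration...
        cudaFuncSetCacheConfig(stencil, cudaFuncCachePreferShared);

        // ...while a kernel with irregular accesses would instead ask
        // for 16 KB shared / 48 KB L1:
        //   cudaFuncSetCacheConfig(stencil, cudaFuncCachePreferL1);
        return 0;
    }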

Another big performance boost comes from the pumped-up double precision support. Gupta says the GT200 architecture has a 1:8 ratio of double precision to single precision performance, which is why the current Tesla products don’t even manage to top 100 peak DP gigaflops per GPU. The new architecture changes this ratio to 1:2, a more natural arrangement (inasmuch as double precision uses twice the number of bits as single precision). Because NVIDIA has also doubled the total core count, DP performance will enjoy an 8-fold increase. By the time the next Tesla products appear, we should be seeing peak DP floating point performance somewhere between 500 gigaflops and 1 teraflop per GPU.
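
The arithmetic behind that range is straightforward. Each core retires one fused multiply-add (two flops) per clock in single precision, and double precision now runs at half that rate. Assuming, purely for illustration, a 1.3 GHz shader clock (NVIDIA has not announced product clock speeds):

    \[
    \text{Peak DP} \approx \underbrace{512}_{\text{cores}}
      \times \underbrace{2\,\tfrac{\text{flops}}{\text{clock}}}_{\text{FMA}}
      \times \underbrace{\tfrac{1}{2}}_{\text{DP rate}}
      \times \underbrace{1.3\,\text{GHz}}_{\text{assumed clock}}
      \approx 666\ \text{DP gigaflops}
    \]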

NVIDIA engineers have also improved floating point accuracy. The previous architecture was IEEE compliant for double precision, but there were some corner cases where single precision was not. With Fermi, the latest IEEE 754-2008 floating point standard is now implemented, as is a fused multiply-add (FMA) instruction that helps retain precision. According to Gupta, that means the new GPUs will be more precise, floating-point-wise, than even x86 CPUs.
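
The payoff of FMA is that the multiply and add round once rather than twice. A minimal device-code sketch of the difference, using CUDA's round-to-nearest intrinsics so the compiler doesn't quietly fuse the two-step version:

    __global__ void compare(double a, double b, double c,
                            double *separate, double *fused) {
        // Separate multiply and add: the product is rounded to double
        // before the addition, discarding its low-order bits.
        *separate = __dadd_rn(__dmul_rn(a, b), c);

        // Fused multiply-add: the exact product feeds the addition,
        // and the result is rounded only once.
        *fused = fma(a, b, c);
    }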

The Fermi design adds a number of concurrency features to make better use of GPU resources. For example, dual thread scheduling was implemented so that each 32-core streaming multiprocessor can execute two groups of threads simultaneously, in a manner analogous to Intel’s hyper-threading technology for x86 CPUs.

In addition, the GPU’s hardware thread scheduler (HTS) has been enhanced so that thread context switching is ten times faster than before. To take advantage of the quicker switching, the HTS is able to execute multiple slices of computational work (known in CUDA parlance as “kernels”) concurrently.

The new capability allows the programmer to offload more of the application to the GPU, since even relatively small pieces of work can be bundled up and shuttled to the GPU en masse without having to worry about the housekeeping and overhead of sending each one separately. And since the HTS takes care of parallelization, more computation can be done in a shorter period of time.
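
In CUDA terms, the natural way to exploit this is to launch independent kernels into separate streams and let the scheduler overlap them. A minimal sketch (small_task is a hypothetical stand-in for a modest piece of work):

    #include <cuda_runtime.h>

    __global__ void small_task(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;  // placeholder for a small kernel
    }

    void run_concurrent(float *d_bufs[4], int n) {
        cudaStream_t streams[4];
        for (int s = 0; s < 4; s++) {
            cudaStreamCreate(&streams[s]);
            // Kernels in different streams have no ordering between
            // them, so a Fermi-class scheduler can run them side by side.
            small_task<<<(n + 255) / 256, 256, 0, streams[s]>>>(d_bufs[s], n);
        }
        for (int s = 0; s < 4; s++) {
            cudaStreamSynchronize(streams[s]);
            cudaStreamDestroy(streams[s]);
        }
    }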

Along these same lines, data transfer has been parallelized. Currently, a GPU calculation can overlap a CPU-GPU data transfer. Fermi provides a second DMA engine so two transfers can be overlapped with a computation. For example, one can simultaneously read in data from the CPU for the next computation while the current computation is executing and the data from the previous result is being written back to the CPU.
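
One step of such a pipeline might look like the sketch below, which assumes pinned host buffers (allocated with cudaMallocHost, so the copies can be asynchronous) and three CUDA streams; the process kernel and buffer names are hypothetical:

    #include <cuda_runtime.h>

    __global__ void process(float *buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] += 1.0f;  // placeholder computation
    }

    // Upload chunk i+1, compute on chunk i, and download chunk i-1,
    // all in flight at once on a part with two DMA engines.
    void pipeline_step(cudaStream_t up, cudaStream_t run, cudaStream_t down,
                       float *h_next, float *d_next, float *d_curr,
                       float *h_prev, float *d_prev, int n) {
        cudaMemcpyAsync(d_next, h_next, n * sizeof(float),
                        cudaMemcpyHostToDevice, up);    // copy engine 1
        process<<<(n + 255) / 256, 256, 0, run>>>(d_curr, n);
        cudaMemcpyAsync(h_prev, d_prev, n * sizeof(float),
                        cudaMemcpyDeviceToHost, down);  // copy engine 2
    }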

On the software side, NVIDIA has made a number of enhancements to support a more fully featured programming environment. Most importantly, the company is extending the native C CUDA model to include C++. To do this, it has added hardware support for features like virtual functions and exception handling. By the time the first Fermi products show up in 2010, CUDA will almost certainly have a native C++ compiler capability.
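
As a taste of what that hardware support enables, here is a minimal sketch of device-side virtual dispatch of the kind a Fermi-class GPU could execute (the class names are hypothetical):

    struct Force {
        __device__ virtual float eval(float r) const { return 0.0f; }
    };

    struct InverseSquare : public Force {
        __device__ float eval(float r) const { return 1.0f / (r * r); }
    };

    __global__ void eval_force(float *out, float r) {
        // The object lives on the device, and the call below goes
        // through its vtable -- a pattern earlier GPUs couldn't run.
        InverseSquare f;
        Force *base = &f;
        *out = base->eval(r);
    }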

If all of that seems like a lot of smarts for a single chip, it is. NVIDIA says the new architecture will use a 40 nm process technology and encompass 3 billion transistors, which happens to be more than in any of the upcoming Xeon, Opteron, Itanium, or Power7 CPUs. Power consumption for the various Fermi-based products will be on par with the current offerings, but performance per watt will be much improved.

Announcing a new GPU architecture so far ahead of actual products is a big departure for NVIDIA, and is yet another example of how GPU computing has brought the company closer to the CPU way of doing business. The company is especially interested in bringing in new players, such as manufacturing and big government supercomputing, which have mostly watched from the sidelines during GPU Computing 1.0. NVIDIA also believes Fermi will deepen its GPU computing penetration across all HPC segments — financial services, life sciences, oil & gas, and so on.

What they’re trying to accomplish, says Gupta, is to prepare ISVs and end users so they can start gearing up their software in advance of the actual hardware. From his perspective, “this is part of us becoming an HPC company.”
