GPGPU Looks For Respect

By Michael Feldman

May 25, 2007

If general-purpose computation on GPUs (GPGPU) fails to capture the imagination of the technical computing community, it won't be for lack of trying. Chipmakers AMD/ATI and NVIDIA have invested a lot of resources over the past few years to make this happen. And while AMD has been relatively quiet on the GPGPU front lately, NVIDIA is making rumblings that something bigger is in the works. I recently got a chance to speak with Andy Keane, general manager of NVIDIA's GPU computing group, who is in charge of building out the GPGPU business. He presented the short version of NVIDIA's “State of the GPGPU Message” and also gave me a pretty good sense of NVIDIA's strategy to take GPUs further into the technical computing realm.

If you haven't been keeping up with the GPGPU story, general-purpose computing with GPUs is about adapting graphics devices to a much wider range of applications — mostly ones that need to process huge amounts of floating point data. These types of applications are commonly used in the medical industry, the oil and gas sector, the financial services industry, and in scientific research. The major advantage of using these devices for technical computing is the speedup, which can range anywhere from 2x to 100x compared to a CPU.

While there's been a lot of talk recently about the merging of CPUs and GPUs, NVIDIA considers the two architectures fundamentally different. Keane says the CPU is a device with relatively few execution units, but a lot of attention is directed at keeping those execution units busy. In contrast, graphics devices have virtually unlimited instruction bandwidth, he says. For example, the NVIDIA G80 has 128 multithreaded processing elements (NVIDIA calls them processors, but that gets confusing). Keane says the goal is to fill the device with thousands of threads, with as many as possible executing concurrently. Unlike CPU threads, GPU threads are extremely lightweight — basically just bundles of instructions. A good deal of a G80's transistor budget is devoted to managing the myriad of threads. And unlike the CPU model, the cost of switching GPU threads is essentially free — a single clock cycle is sufficient.

The GPU's main limitation has to do with the kind of information these devices can process efficiently. They require regular data structures like arrays, where all the elements can be processed in a uniformly parallel manner. GPUs are extremely efficient at doing matrix arithmetic and other highly parallel data operations, but are not good at the more mundane computing tasks such as running the operating system or executing serial applications like a word processor. NVIDIA sees the two architectures as complementary, with the CPU devoted to the logic of the algorithm, while the GPU crunches the data-intensive computation part. Programmers need to recognize that the two types of devices require different approaches.

“With CPUs, they worry about the algorithm; with GPUs, they worry about the data,” explains Keane.
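As a concrete, hedged illustration of that split, consider a simple element-wise matrix operation written with CUDA's C extensions. The file name, array sizes and kernel name below are purely illustrative, not taken from any NVIDIA sample; the point is that the host CPU carries the program logic (allocation, data movement, the launch) while each GPU thread handles one data element.

```c
// matrix_add.cu -- illustrative sketch only; sizes and names are hypothetical.
#include <stdio.h>
#include <cuda_runtime.h>

// One GPU thread per matrix element: pure data-parallel arithmetic.
__global__ void matrix_add(const float *a, const float *b, float *c,
                           int rows, int cols)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < rows && col < cols) {
        int idx = row * cols + col;
        c[idx] = a[idx] + b[idx];
    }
}

int main(void)
{
    const int rows = 1024, cols = 1024;          /* illustrative size */
    size_t bytes = rows * cols * sizeof(float);

    /* The CPU side of the algorithm: allocate, initialize, move data,
       launch the kernel, collect the result. */
    float *a = (float *)malloc(bytes), *b = (float *)malloc(bytes),
          *c = (float *)malloc(bytes);
    for (int i = 0; i < rows * cols; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);                          /* 256 threads per block */
    dim3 grid((cols + block.x - 1) / block.x,
              (rows + block.y - 1) / block.y);
    matrix_add<<<grid, block>>>(d_a, d_b, d_c, rows, cols);

    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}
```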

Using GPUs for something other than graphics processing is relatively new. NVIDIA's first attempt at GPGPU was five years ago when the chips first became programmable. Back then, graphics programming languages like OpenGL gave developers access to these graphics engines, but only researchers went to the trouble to learn how to apply the graphics API to general-purpose computing. The model never took off commercially.

So about three years ago when NVIDIA's 8-Series (G8X) architecture was being designed, the engineers went back to the drawing board and devised a much more software-centric graphics processor. Essentially, they gave the device a split personality. In the traditional mode, the GPU behaved like a regular graphics device. But in the new “computing mode” the chip behaved more like a general-purpose computer. While in the latter state, the GPU doesn't even understand triangles or how to draw pixels.

To allow programmers easy access to the computing mode, NVIDIA came up with CUDA (Compute Unified Device Architecture), a C compiler technology that provides an interface to the parallelism in the hardware. A beta version of CUDA is currently available, with a production release planned for June.

CUDA specifies an extension to C that allows the data parallelism to be expressed in a relatively high-level way. The programmer is unaware of the number of individual processing elements in the GPU or any other low-level hardware structures. Therefore, the user code is portable across all CUDA-enabled NVIDIA GPUs, present and future, independent of the number of processing elements. Fundamentally, the CUDA compiler makes the device behave like a general-purpose computer.
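To make the portability point concrete, here is a hedged sketch of how a CUDA launch is typically sized from the problem rather than from the hardware: the grid is computed from the data size, and the code never asks how many processing elements the chip has. The kernel and function names are illustrative, not drawn from NVIDIA's documentation.

```c
// Illustrative launch pattern: the grid is derived from the data size n,
// never from the number of processing elements on the chip.
__global__ void scale(float *data, float factor, int n)
{
    // Each thread computes a global index from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

void scale_on_gpu(float *d_data, float factor, int n)
{
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;

    // The same call runs unmodified on a GPU with 16 processing elements
    // or 128; the hardware maps blocks onto whatever units exist.
    scale<<<blocks, threadsPerBlock>>>(d_data, factor, n);
}
```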

Other hardware changes were also added to make the device more CPU-like. For the G8X product line, NVIDIA included a fast memory (what they call a parallel data cache) to be shared among concurrent threads. A load/store capability was made available for reading and writing to main memory. Some support for IEEE 754 floating point compliance was also added. According to Keane, the current generation of CUDA-capable GPUs already has better IEEE floating point support than the Cell processor and is on par with IBM's PowerPC Altivec architecture.
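A hedged sketch of how that fast on-chip memory is used from CUDA: the threads of a block stage data in __shared__ memory, synchronize, and then cooperate on it — for example, reducing a block's worth of elements to a single partial sum. The block size and names below are illustrative.

```c
// Illustrative use of the on-chip "parallel data cache" (__shared__ memory):
// each block of 256 threads cooperatively reduces 256 elements to one sum.
#define BLOCK_SIZE 256

__global__ void block_sum(const float *in, float *block_results, int n)
{
    __shared__ float cache[BLOCK_SIZE];       // fast memory shared by the block

    int i = blockIdx.x * BLOCK_SIZE + threadIdx.x;
    cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;  // load from device main memory
    __syncthreads();                              // wait until every thread has loaded

    // Tree reduction within the block, entirely in the shared cache.
    for (int stride = BLOCK_SIZE / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)                     // one thread writes the block's result
        block_results[blockIdx.x] = cache[0];
}
```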

A number of early adopters have taken advantage of the NVIDIA GPUs for technical computing. These devices are currently being used to accelerate applications in X-ray tomography (medical image reconstructions), electromagnetic simulations (computing field radiation from cell phones), prestack data visualization (oil & gas drilling) and brain circuitry simulations (sensor research).

In some cases, employing GPUs has allowed users to swap their cluster systems for workstations and achieve better performance as well. The cluster computer used for the tomography application at Massachusetts General Hospital was replaced with a GPU-equipped workstation that could fit in the X-ray lab. The GPU-accelerated image reconstruction time went from 5 hours to 5 minutes. Initial successes like these seem to point the way toward a bigger role for GPUs in commercial high performance computing.

The question that remains in my mind is to what degree GPU manufacturers are willing to differentiate their products specifically for the scientific computing customer. Attributes such as double precision floating point support, enhanced IEEE 754 compliance and low power consumption are not big concerns for NVIDIA's traditional gaming and visualization customers, but are important in technical computing environments. In addition, if GPUs are to be applied across a typical HPC system, like a cluster, they will need to be incorporated into individual server nodes. This requires relationships with a different set of hardware manufacturers, software vendors, and channels than NVIDIA has traditionally dealt with.

Not surprisingly, NVIDIA has been thinking about these issues as well and has apparently come to the conclusion that a separate HPC product line is required. Keane told me that the company is developing a “computing” product alongside its current Quadro and GeForce CUDA-compatible lines. The NVIDIA computing line — as yet unnamed — will be designed specifically for high performance computing applications, and will be targeted to both workstations and servers. The new devices will support double precision math, a basic requirement for many technical computing applications. Double precision support will make its first NVIDIA appearance at the end of Q4. At this point, it's not clear if NVIDIA's first double precision processor will be in a Quadro product or the new HPC offering.

The HPC products (I hesitate to call them GPUs) will have the same underlying technology as the graphics-centric products. This will enable NVIDIA to leverage its intellectual property in the same way that Intel and AMD do, where different implementations of the x86 architecture are applied across a variety of mobile, desktop and server platforms. NVIDIA is not quite ready to talk in-depth about its upcoming HPC products, but I got the sense that the company is pinning a lot of its GPGPU hopes on this next generation of devices. Stay tuned.

—–

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
