GPGPU Looks For Respect

By Michael Feldman

May 25, 2007

If general-purpose computation on GPUs (GPGPU) fails to capture the imagination of the technical computing community, it won't be for lack of trying. Chipmakers AMD/ATI and NVIDIA have invested a lot of resources over the past few years to make this happen. And while AMD has been relatively quiet on the GPGPU front lately, NVIDIA is making rumblings that something bigger is in the works. I recently got a chance to speak with Andy Keane, general manager of NVIDIA's GPU computing group, who is in charge of building out the GPGPU business. He presented the short version of NVIDIA's “State of the GPGPU” message and also gave me a pretty good sense of NVIDIA's strategy to take GPUs further into the technical computing realm.

If you haven't been keeping up with the GPGPU story, general-purpose computing with GPUs is about adapting graphics devices to a much wider range of applications — mostly ones that need to process huge amounts of floating point data. These types of applications are common in the medical industry, the oil and gas sector, the financial services industry, and in scientific research. The major advantage of using these devices for technical computing is the speedup, which can be anywhere from 2x to 100x compared to a CPU.

While there's been a lot of talk recently about the merging of CPUs and GPUs, NVIDIA considers the two architectures fundamentally different. Keane says the CPU is a device with relatively few execution units, but a lot of attention is directed at keeping those execution units busy. In contrast, graphics devices have virtually unlimited instruction bandwidth, he says. For example, the NVIDIA G80 has 128 multithreaded processing elements (NVIDIA calls them processors, but that gets confusing). Keane says the goal is to fill the device with thousands of threads, with as many as possible executing concurrently. Unlike CPU threads, GPU threads are extremely lightweight — basically just bundles of instructions. A good deal of the G80's transistor budget is devoted to managing those myriad threads. And unlike the CPU model, switching GPU threads is essentially free — a single clock cycle is sufficient.

The GPU's main limitation has to do with the kind of information these devices can process efficiently. They require regular data structures like arrays, where all the elements can be processed in a uniformly parallel manner. GPUs are extremely efficient at doing matrix arithmetic and other highly parallel data operations, but are not good at the more mundane computing tasks such as running the operating system or executing serial applications like a word processor. NVIDIA sees the two architectures as complementary, with the CPU devoted to the logic of the algorithm, while the GPU crunches the data-intensive computation part. Programmers need to recognize that the two types of devices require different approaches.

“With CPUs, they worry about the algorithm; with GPUs, they worry about the data,” explains Keane.
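That distinction is easy to see in code. Below is a minimal sketch of my own (the function names are illustrative, and it uses the CUDA C extensions NVIDIA describes later in this article), contrasting the CPU's loop-driven view of an array addition with the GPU's one-thread-per-element view:

    // CPU version: a single thread walks the whole array; the loop is the algorithm.
    void add_arrays_cpu(float *y, const float *x, int n)
    {
        for (int i = 0; i < n; ++i)
            y[i] += x[i];
    }

    // GPU version (CUDA): the loop disappears. Each of the n lightweight threads
    // carries only the handful of instructions needed for its one element.
    __global__ void add_arrays_gpu(float *y, const float *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's element
        if (i < n)
            y[i] += x[i];
    }

The CPU routine spells out how the computation proceeds; the GPU kernel only describes what happens to a single data element, and the hardware takes care of scheduling the thousands of lightweight threads that result.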

Using GPUs for something other than graphics processing is relatively new. NVIDIA's first attempt at GPGPU was five years ago, when the chips first became programmable. Back then, graphics programming interfaces like OpenGL gave developers access to these graphics engines, but only researchers went to the trouble of learning how to apply the graphics API to general-purpose computing. The model never took off commercially.

So about three years ago, when NVIDIA's 8-Series (G8X) architecture was being designed, the engineers went back to the drawing board and devised a much more software-centric graphics processor. Essentially, they gave the device a split personality. In the traditional mode, the GPU behaves like a regular graphics device. But in the new “computing mode” the chip behaves more like a general-purpose computer. In that mode, the GPU doesn't even understand triangles or how to draw pixels.

To allow programmers easy access to the computing mode, NVIDIA came up with CUDA (Compute Unified Device Architecture), a C compiler technology that provides an interface to the parallelism in the hardware. A beta version of CUDA is currently available, with a production release planned for June.

CUDA specifies an extension to C that allows the data parallelism to be expressed in a relatively high-level way. The programmer is unaware of the number of individual processing elements in the GPU or any other low-level hardware structures. Therefore, the user code is portable across all CUDA-enabled NVIDIA GPUs, present and future, independent of the number of processing elements. Fundamentally, the CUDA compiler makes the device behave like a general-purpose computer.
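As a rough, hedged sketch of what that looks like in practice (the kernel name, sizes and variable names below are mine, not NVIDIA's), a CUDA program pairs ordinary host C code with a kernel that is launched across a grid of thread blocks. Nothing in the code refers to the number of processing elements; the grid is sized from the data, and the runtime maps the blocks onto whatever hardware is present:

    #include <cuda_runtime.h>
    #include <stdlib.h>

    // Hypothetical data-parallel kernel: scale every element of an array.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1 << 20;                   // one million elements
        size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes);  // CPU-side buffer
        for (int i = 0; i < n; ++i)
            h_data[i] = (float)i;

        // The CPU handles the logic: allocate device memory, stage the data.
        float *d_data;
        cudaMalloc((void **)&d_data, bytes);
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

        // Launch: the grid size comes from the data size, not from the hardware.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);

        // Copy the results back and clean up.
        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_data);
        free(h_data);
        return 0;
    }

The same source should therefore run unchanged on a GPU with 16 processing elements or 128; only the degree of concurrency changes.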

Other hardware changes were also added to make the device more CPU-like. For the G8X product line, NVIDIA included a fast memory (what they call a parallel data cache) to be shared among concurrent threads. A load/store capability was made available for reading and writing to main memory. Some support for IEEE 754 floating point compliance was also added. According to Keane, the current generation of CUDA-capable GPUs already has better IEEE floating point support than the Cell processor and is on par with IBM's PowerPC Altivec architecture.
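To give a feel for how the parallel data cache is used, here is an illustrative kernel of my own (standard CUDA shared-memory syntax, not NVIDIA-supplied code) in which the threads of a block cooperate through the fast on-chip memory before committing a single result to main memory with an ordinary store:

    // Each block of 256 threads sums its slice of the input cooperatively in the
    // fast on-chip shared memory (the "parallel data cache"), then one thread
    // writes the partial sum back to main memory. Assumes a launch with 256
    // threads per block.
    __global__ void block_sum(const float *in, float *partial, int n)
    {
        __shared__ float cache[256];             // shared by all threads in the block

        int i   = blockIdx.x * blockDim.x + threadIdx.x;
        int tid = threadIdx.x;

        cache[tid] = (i < n) ? in[i] : 0.0f;     // load from main memory
        __syncthreads();                         // wait until the whole block has loaded

        // Tree reduction, entirely within the shared cache.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (tid < stride)
                cache[tid] += cache[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            partial[blockIdx.x] = cache[0];      // store the block's result
    }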

A number of early adopters have taken advantage of the NVIDIA GPUs for technical computing. These devices are currently being used to accelerate applications in X-ray tomography (medical image reconstructions), electromagnetic simulations (computing field radiation from cell phones), prestack data visualization (oil & gas drilling) and brain circuitry simulations (sensor research).

In some cases, employing GPUs has allowed users to replace their cluster systems with workstations and achieve better performance as well. The cluster computer used for the tomography application at Massachusetts General Hospital was replaced with a GPU-equipped workstation that could fit in the X-ray lab. The GPU-accelerated image reconstruction time went from 5 hours to 5 minutes. Initial successes like these seem to point the way toward a bigger role for GPUs in commercial high performance computing.

The question that remains in my mind is to what degree GPU manufacturers are willing to differentiate their products specifically for the scientific computing customer. Attributes such as double precision floating point support, enhanced IEEE 754 compliance and low power consumption are not big concerns to NVIDIA's traditional gaming and visualization customers, but are important in technical computing environments. In addition, if GPUs are to be applied across a typical HPC system, like a cluster, they will need to be incorporated into individual server nodes. This requires relationships with a different set of hardware manufacturers, software vendors, and channels than NVIDIA has traditionally dealt with.

Not surprisingly, NVIDIA has been thinking about these issues as well and has apparently come to the conclusion that a separate HPC product line is required. Keane told me that the company is developing a “computing” product alongside its current Quadro and GeForce CUDA-compatible lines. The NVIDIA computing line — as yet unnamed — will be designed specifically for high performance computing applications, and will be targeted to both workstations and servers. The new devices will support double precision math, a basic requirement for many technical computing applications. Double precision support will make its first NVIDIA appearance at the end of Q4. At this point, it's not clear if NVIDIA's first double precision processor will be in a Quadro product or the new HPC offering.

The HPC products (I hesitate to call them GPUs) will have the same underlying technology as the graphics-centric products. This will enable NVIDIA to leverage its intellectual property in the same way that Intel and AMD do, where different implementations of the x86 architecture are applied across a variety of mobile, desktop and server platforms. NVIDIA is not quite ready to talk in-depth about its upcoming HPC products, but I got the sense that the company is pinning a lot of its GPGPU hopes on this next generation of devices. Stay tuned.

—–

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
