May 25, 2007
If general-purpose computation on GPUs (GPGPU) fails to capture the imagination of the technical computing community, it won't be for lack of trying. Chipmakers AMD/ATI and NVIDIA have invested a lot of resources over the past few years to make this happen. And while AMD has been relatively quiet on the GPGPU front lately, NVIDIA is making rumblings that something bigger is in the works. I recently got a chance to speak with Andy Keane, general manager of NVIDIA's GPU computing group, who is in charge of building out the GPGPU business. He presented the short version of NVIDIA's "State of the GPGPU Message" and also gave me a pretty good sense of NVIDIA's strategy to take GPUs further into the technical computing realm.
If you haven't been keeping up with the GPGPU story, general-purpose computing with GPUs is about adapting graphics devices to a much wider range of applications -- mostly ones that need to process huge amounts of floating point data. These types of applications are commonly used in the medical industry, the oil and gas sector, the financial services industry, and in scientific research. The major advantage of using these devices for technical computing is the speedup that's realized, which can be anywhere from 2x to 100x, when compared to a CPU.
While there's been a lot of talk recently about the merging of CPUs and GPUs, NVIDIA considers the two architectures fundamentally different. Keane says the CPU is a device with relatively few execution units, but a lot of attention is directed at keeping those execution units busy. In contrast, graphics devices have virtually unlimited instruction bandwidth, he says. For example, the NVIDIA G80 has 128 multithreaded processing elements (NVIDIA calls them processors, but that gets confusing). Keane says the goal is to fill the device with thousands of threads, with as many as possible executing concurrently. Unlike CPU threads, GPU threads are extremely lightweight -- basically just bundles of instructions. A good deal of a G80's transistor budget is devoted to managing the myriad of threads. And unlike the CPU model, the cost of switching GPU threads is essentially free -- a single clock cycle is sufficient.
The GPU's main limitation has to do with the kind of information these devices can process efficiently. They require regular data structures like arrays, where all the elements can be processed in a uniformly parallel manner. GPUs are extremely efficient at doing matrix arithmetic and other highly parallel data operations, but are not good at the more mundane computing tasks such as running the operating system or executing serial applications like a word processor. NVIDIA sees the two architectures as complementary, with the CPU devoted to the logic of the algorithm, while the GPU crunches the data-intensive computation part. Programmers need to recognize that the two types of devices require different approaches.
"With CPUs, they worry about the algorithm; with GPUs, they worry about the data," explains Keane.
Using GPUs for something other than graphics processing is relatively new. NVIDIA's first attempt at GPGPU was five years ago when the chips first became programmable. Back then, graphics APIs like OpenGL gave developers access to these graphics engines, but only researchers went to the trouble of learning how to apply a graphics API to general-purpose computing. The model never took off commercially.
So about three years ago when NVIDIA's 8-Series (G8X) architecture was being designed, the engineers went back to the drawing board and devised a much more software-centric graphics processor. Essentially, they gave the device a split personality. In the traditional mode, the GPU behaved like a regular graphics device. But in the new "computing mode" the chip behaved more like a general-purpose computer. While in the latter state, the GPU doesn't even understand triangles or how to draw pixels.
To allow programmers easy access to the computing mode, NVIDIA came up with CUDA (Compute Unified Device Architecture), a C compiler technology that provides an interface to the parallelism in the hardware. A beta version of CUDA is currently available, with a production release planned for June.
CUDA specifies an extension to C that allows the data parallelism to be expressed in a relatively high-level way. The programmer is unaware of the number of individual processing elements in the GPU or any other low-level hardware structures. Therefore, the user code is portable across all CUDA-enabled NVIDIA GPUs, present and future, independent of the number of processing elements. Fundamentally, the CUDA compiler makes the device behave like a general-purpose computer.
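One consequence of that abstraction is that hardware details, while queryable through the runtime, never have to appear in the program itself. The short snippet below (standard CUDA runtime API calls, assembled here purely for illustration) shows the kind of information the driver can report; the kernel launch sketched above needs none of it.

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    // Ask the CUDA runtime about device 0. The earlier sketch never needs
    // this: grid and block sizes are derived from the data, and the runtime
    // schedules the blocks onto however many processing elements exist.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("GPU: %s with %d multiprocessors\n",
           prop.name, prop.multiProcessorCount);
    return 0;
}
```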
Other hardware changes were also added to make the device more CPU-like. For the G8X product line, NVIDIA included a fast memory (what they call a parallel data cache) to be shared among concurrent threads. A load/store capability was made available for reading and writing to main memory. Some support for IEEE 754 floating point compliance was also added. According to Keane, the current generation of CUDA-capable GPUs already has better IEEE floating point support than the Cell processor and is on par with IBM's PowerPC AltiVec architecture.
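Here is a sketch of how that parallel data cache shows up in CUDA code (an illustrative kernel, not from NVIDIA or this article; names are invented). The threads in a block cooperatively load values from main memory into the shared on-chip cache, then reduce them to a partial sum without further trips to main memory.

```c
#define BLOCK_SIZE 256

// Illustrative block-wise sum reduction using the on-chip parallel data
// cache, which CUDA exposes through the __shared__ qualifier. The kernel
// assumes it is launched with BLOCK_SIZE threads per block.
__global__ void block_sum(const float *in, float *partial, int n)
{
    __shared__ float cache[BLOCK_SIZE];     // fast memory shared by the block

    int i   = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;

    // Load phase: each thread reads one value from main (global) memory.
    cache[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                        // wait until the block has loaded

    // Reduction phase: tree-style sum entirely within the shared cache.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    // Store phase: one thread per block writes its partial sum back out.
    if (tid == 0)
        partial[blockIdx.x] = cache[0];
}
```

The load and store phases are examples of the general read/write access to main memory mentioned above, while the middle phase runs entirely out of the shared cache.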
A number of early adopters have taken advantage of the NVIDIA GPUs for technical computing. These devices are currently being used to accelerate applications in X-ray tomography (medical image reconstructions), electromagnetic simulations (computing field radiation from cell phones), prestack data visualization (oil & gas drilling) and brain circuitry simulations (sensor research).
In some cases, employing GPUs has allowed users to trade their cluster systems for workstations and achieve better performance as well. The cluster computer used for the tomography application at Massachusetts General Hospital was replaced with a GPU-equipped workstation that could fit in the X-ray lab. The GPU-accelerated image reconstruction time went from 5 hours to 5 minutes. Initial successes like these seem to point the way toward a bigger role for GPUs in commercial high performance computing.
The question that remains in my mind is to what degree GPU manufacturers are willing to differentiate their products specifically for the scientific computing customer. Attributes such as double precision floating point support, enhanced IEEE 754 compliance and low power consumption are not big concerns to NVIDIA's traditional gaming and visualization customers, but are important in technical computing environments. In addition, if GPUs are to be applied across a typical HPC system, like a cluster, they will need to be incorporated into individual server nodes. This requires relationships with a different set of hardware manufacturers, software vendors, and channels than NVIDIA has traditionally dealt with.
Not surprisingly, NVIDIA has been thinking about these issues as well and has apparently come to the conclusion that a separate HPC product line is required. Keane told me that the company is developing a "computing" product alongside its current Quadro and GeForce CUDA-compatible lines. The NVIDIA computing line -- as yet unnamed -- will be designed specifically for high performance computing applications, and will be targeted to both workstations and servers. The new devices will support double precision math, a basic requirement for many technical computing applications. Double precision support will make its first NVIDIA appearance at the end of Q4. At this point, it's not clear if NVIDIA's first double precision processor will be in a Quadro product or the new HPC offering.
The HPC products (I hesitate to call them GPUs) will have the same underlying technology as the graphics-centric products. This will enable NVIDIA to leverage its intellectual property in the same way that Intel and AMD do, where different implementations of the x86 architecture are applied across a variety of mobile, desktop and server platforms. NVIDIA is not quite ready to talk in-depth about its upcoming HPC products, but I got the sense that the company is pinning a lot of its GPGPU hopes on this next generation of devices. Stay tuned.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - May 24, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.