Interest in general-purpose computation on GPUs (GPGPU) is at an all-time high. If you've been reading this publication for the last several months, you've no doubt noticed we've devoted quite a bit of coverage to the topic since the middle of 2006. The event that triggered this upsurge in interest was AMD's acquisition of ATI in July of 2006, and the subsequent announcement of a product strategy that would bring graphics processors into the mainstream of general-purpose computing. In the fall of 2006, NVIDIA revealed its own GPGPU strategy with its CUDA initiative.
The movement of GPUs toward mainstream computing has been underway for some time. Driven by the broader requirements of visualization and game software in recent years, graphics processors have been shifting toward a more general-purpose architecture; they're becoming more programmable and more CPU-like. Now, with both AMD and NVIDIA spinning a compelling tale of graphics processors as high-performance parallel processing engines, the promise of cheap HPC has never seemed closer. But not everyone is cheerleading.
Ars Technica's Jon Stokes is one of those who is keeping his pompoms at his side. In a recent article he wrote: “Anybody's GPU, whether it's from NVIDIA or AMD/ATI, is a big, hot, power-hungry, beast of a coprocessor that's designed to do one thing extremely well: real-time 3D rendering for games. In fact, we can be even more specific and call a GPU a 'Microsoft DirectX toaster.' These same DirectX toasters also just happen to offer significant speedups vs. a regular microprocessor for certain types of data-parallel workloads that are important in HPC.”
Speaking of NVIDIA specifically, he adds: “They have a floor wax that happens to taste pretty good, so they're trying to use it to break into the food business by marketing it as a dessert topping.”
OK. So Stokes is obviously not a fan. He doesn't reject the notion of general-purpose computing on GPUs outright; he just thinks the proper place for the current crop of GPUs is on the motherboards of gaming enthusiasts, not in the sockets of HPC servers. He brings up some of the downsides of doing HPC with graphics processors, namely high power usage, programming difficulty, vendor lock-in, and poor backward compatibility. (He doesn't even mention the current lack of 64-bit floating-point support.) Most of these factors point to the current immaturity of the GPGPU world.
But the same disadvantages existed in x86 designs before competition, standard software libraries and tools, and advances in processor technology made that architecture suitable for supercomputing. These disadvantages are well understood by both AMD and NVIDIA, and both companies are working to address them.
On the other hand, GPUs have to overcome a hurdle the x86 never faced: their reputation as specialized devices for graphics processing. In this instance, the success of GPUs in the game market cuts both ways. The high-volume chip production that results from the huge demand of the game industry keeps prices low, which offers an incentive to enter the HPC market. But the market pressure to tailor GPUs to visualization applications in some cases pushes the design away from general-purpose computing. It's something of a Catch-22.
Some of this uneasiness is misplaced. All processors, even general-purpose CPUs, devote some silicon to particular types of applications; the SSE instructions on x86, for (coincidentally) stream processing, are one example. Also, the GPU manufacturers will probably end up developing separate lines of GPGPU-oriented offerings as variants of their core graphics devices for gamers. Finding the proper balance between specialized and general-purpose technology will be the key.
There is a continuum of coprocessing specialization that runs from FPGAs, to GPUs and Cell processors, to floating-point coprocessors like ClearSpeed boards. As you go from FPGAs (least specialized) to FP coprocessors (most specialized), prices go up, reflecting smaller volume demand, but the difficulty of programming the devices goes down. Cell processors and GPUs sit somewhere in the middle and may represent a sweet spot for HPC acceleration, offering a high performance/price ratio and relatively easy, or at least attainable, programmability.
The bigger problem for GPUs may be PR. AMD and NVIDIA are going to have to convince system manufacturers and ISVs that graphics processors can be a mainstream technology. The hardest part will be developing a GPGPU software ecosystem around these devices. Game developers and HPC programmers live in different worlds. To get the HPC crowd interested, you have to stop talking about pixel shaders and DirectX and start talking about stream computing.
This is where companies like PeakStream and RapidMind can help. Their software development platforms are designed to hide the GPU's 'gaminess' from the programmer. In fact, the software interfaces in these platforms are such that the developer need not be concerned with the underlying processor hardware at all. At a somewhat lower level, AMD's CTM (“Close To Metal”) open hardware interface and NVIDIA's CUDA technology, with its C compiler, have been introduced to give programmers more direct access to the graphics processors' capabilities. We're just at the beginning of the software side of GPGPU, so it's too early to say what the best programming model is. But everyone agrees that raising the level of software abstraction will help drive GPUs into the mainstream.
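To give a flavor of what this looks like in practice, here is a minimal sketch of a data-parallel computation written in CUDA's C dialect. The example, a simple "saxpy" vector update, is our own illustration rather than vendor sample code, and it assumes a CUDA-capable GPU and toolchain; the point is simply that nothing in it refers to pixels, shaders, or DirectX.

    // A minimal, illustrative CUDA example: a "saxpy" vector update (y = a*x + y)
    // written in plain C. Function and variable names are our own; this is not
    // vendor sample code.
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        // Each GPU thread updates one array element; this is the essence of
        // stream computing.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                    // one million elements
        size_t bytes = n * sizeof(float);

        // Set up the input data on the host (CPU) side.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Allocate GPU memory and copy the data over.
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per element, 256 threads per block.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        // Copy the result back and spot-check it.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);             // expect 4.0

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

Higher-level platforms like PeakStream's and RapidMind's aim to hide even this much device bookkeeping, but the underlying model of many lightweight threads, each working on its own piece of the data, is the same.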
As far as the suitability of graphics hardware for HPC servers goes, the biggest problem will be power usage. Since gamers were never that concerned about an extra 100 watts or so in their machines, energy efficiency was never much of a design issue. But if you want to start putting high-powered GPUs into already overheated server nodes, the devices are going to have to run a lot cooler.
Ars Technica's Stokes has something to say on this topic as well. In an article published this week, he posits that GPUs will have to become less energy-hoggish to penetrate the HPC market. He believes that getting the devices onto 65 nm process technology would be a good way to start. In general, GPUs are a process technology cycle behind CPUs; the current NVIDIA G80 devices are at 90 nm. The GPGPU trend may create the incentive to bring graphics processors onto the same technology cycle as their CPU counterparts. Certainly, as AMD starts creating its CPU/GPU Fusion hybrid processors, that process synchronization will have to occur. If Intel gets into the GPU game, it is almost sure to press its advantage in process technology for its graphics devices. This is just another example of how GPUs are becoming more CPU-like.
But it's not just that GPUs are becoming more like CPUs; it's that the applications are becoming more game-like, that is, more data parallel in nature. Seismic modeling, financial options pricing and computational biology are all examples of workloads that can be greatly accelerated with graphics processors today. The next generation of software designed for increasingly sophisticated pattern recognition, data mining, and data analytics is also going to be rather well-suited to the GPU architecture. If, in five years, all the interesting software requires data parallelism, graphics processors are likely to be the commodity hardware solution. So get those pompoms ready.
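Before you do, it's worth seeing why one of those workloads maps so naturally onto a GPU. The sketch below prices a batch of European call options with the Black-Scholes formula, one option per thread. It's our own illustrative CUDA code, not production finance software, and the function names and parameters are hypothetical; the point is that every option can be computed independently of every other, which is exactly what these chips are built for.

    // Illustrative sketch only: Black-Scholes pricing of European call options,
    // one option per GPU thread. Names and inputs are hypothetical.
    #include <math.h>

    __device__ float cnd(float x)
    {
        // Cumulative standard normal distribution via the complementary error function.
        return 0.5f * erfcf(-x * 0.70710678f);   // 0.70710678 = 1/sqrt(2)
    }

    __global__ void black_scholes_call(int n, const float *spot, const float *strike,
                                       const float *years, float rate, float vol,
                                       float *price)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Each option depends only on its own inputs: a textbook data-parallel job.
        float sd = vol * sqrtf(years[i]);
        float d1 = (logf(spot[i] / strike[i]) + (rate + 0.5f * vol * vol) * years[i]) / sd;
        float d2 = d1 - sd;
        price[i] = spot[i] * cnd(d1) - strike[i] * expf(-rate * years[i]) * cnd(d2);
    }

The kernel would be set up and launched from host code in the same way as the saxpy example earlier; with millions of options in flight, the GPU's wide parallelism does the rest.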
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].