Filling the Gap
The dance between computer hardware and software has been going on for fifty years. In times past, though, the relationship was kept at arm's length. The hardware engineers just cranked out the chips and threw them over the fence to the programmers. With the coming of multicore processors, the hardware/software connection has become more intimate. Chipmakers realize that multicore architectures are going to fundamentally change the software model. So if they want to move product, they have to narrow the gap between the hardware and the applications.
And this is happening. To one degree or another, Intel, IBM, AMD, and NVIDIA are all partnering with ISVs and research organizations, providing early access to hardware, software support and training. The chipmakers have introduced software support, in the form of compilers and processor interface libraries, to help tool developers bring up code on their hardware. IBM offers an SDK and other tools for the Cell processor; AMD has introduced its “Close to Metal” program for GPU programming; and NVIDIA has its CUDA platform for its GPUs. Although I'm not going to talk much about multicore x86 software support in this article, Intel has a wide range of commercial products, software tools, and educational initiatives to help developers wrap their minds around multicoredness.
In high performance computing, the strategy is beginning to pay off. The most recent example of this is how rapidly development environments appeared for the relatively new IBM Cell BE and general-purpose GPU processors. The chips from the fab were barely cool before PeakStream and RapidMind delivered application development platforms for the new accelerator devices. If these products are successful, they will help create an important synergy between the chip vendors and the software developers.
Using GPUs and Cell processors as stream processing accelerators is creating a good deal of excitement in the HPC crowd. Hardly a week goes by without at least one announcement of someone using these processors to speed up their application. Target workloads include 3D visualization, broadcast encoding, medical imaging, multimedia content generation, image and signal processing, financial analysis, seismic analysis, large-scale database transactions and enterprise search. That covers almost any data-intensive application that requires lots of computational muscle. The broad applicability of these multicore accelerators for HPC has attracted the attention of software developers who would love to exploit this relatively cheap source of hardware.
In announcing their platform this week, RapidMind claimed support for the IBM Cell processor and the latest NVIDIA and AMD/ATI GPUs for high performance computing applications. The company says multicore x86 support is not far behind. Our feature article this week talks about how the RapidMind platform is targeting the hardware-agnostic application developer for these emerging architectures.
Academicians are also taking a hard look at the newer multicore accelerators. At the University of Tennessee (UT), Jack Dongarra and the team at the Innovative Computing Laboratory have been working with the IBM Cell processor. At their lab, a PlayStation 3 (PS3) cluster of four systems is being used as a research platform for scientific computing. For around $2400, they have built a system that offers 600 gigaflops (single-precision floating point) of peak performance. Although the PS3 was never designed to be a cluster node for a high performance computing system, its price and ubiquity have attracted HPC folks looking for cheap FLOPS. The UT team is evaluating programming models for the PS3 cluster and is looking at some of the limitations of the architecture for high performance computing.
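That 600 gigaflop figure is easy to sanity-check with a back-of-the-envelope calculation. The sketch below uses the commonly cited PS3 numbers — 6 SPEs available under Linux, 4-wide single-precision SIMD, a fused multiply-add counted as 2 flops, and a 3.2 GHz clock — none of which appear in the article itself, so treat them as assumptions:

```python
# Back-of-the-envelope peak single-precision performance for a 4-node PS3 cluster.
# Assumed hardware figures (not from the article): 6 SPEs usable under Linux,
# 4-wide SP SIMD, fused multiply-add = 2 flops/lane/cycle, 3.2 GHz clock.
SPES_PER_PS3 = 6
SIMD_WIDTH = 4
FLOPS_PER_FMA = 2
CLOCK_HZ = 3.2e9
NODES = 4

per_spe = SIMD_WIDTH * FLOPS_PER_FMA * CLOCK_HZ   # 25.6 GFLOPS per SPE
per_node = SPES_PER_PS3 * per_spe                 # 153.6 GFLOPS per PS3
cluster_peak = NODES * per_node                   # 614.4 GFLOPS for the cluster

print(f"per node: {per_node / 1e9:.1f} GFLOPS")
print(f"cluster:  {cluster_peak / 1e9:.1f} GFLOPS")
```

That works out to roughly 614 gigaflops, which rounds down to the 600 cited above (the PPE's own vector unit would add a bit more on top).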
In the process, the UT researchers have produced a technical report on using the PlayStation 3 as an HPC platform called “A Rough Guide to Scientific Computing On the PlayStation 3” (http://www.netlib.org/utk/people/JackDongarra/PAPERS/scop3.pdf). Less glib than an “IBM Cell Programming For Dummies” but more accessible than your average technical report, the guide should be required reading for developers who are new to technical computing on the Cell processor.
The guide outlines the Cell chip and PS3 hardware capabilities, the system software support available, and how to set up a lab-sized PS3 cluster. It also delves into programming techniques and offers some real-world examples. One of the more useful aspects of the guide is that it discusses a number of commercial and academic software platforms for the Cell architecture. Not meant to be the last word on Cell/PS3 software development, the report manages to give a balanced overview of the technologies currently available. Here's a clip from the introduction:
“As exciting as it may sound, using the PS3 for scientific computing is a bumpy ride. Parallel programming models for multi-core processors are in their infancy, and standardized APIs are not even on the horizon. As a result, presently, only hand-written code fully exploits the hardware capabilities of the CELL processor. [Editor's note: RapidMind would certainly dispute this.] Ultimately, the suitability of the PS3 platform for scientific computing is most heavily impaired by the devastating disproportion between the processing power of the processor and the crippling slowness of the interconnect, explained in detail in section 9.1. Nevertheless, the CELL processor is a revolutionary chip, delivering ground-breaking performance and now available in an affordable package. We hope that this rough guide will make the ride slightly less bumpy.”
The report contains a good discussion of the limitations of the PS3 for scientific computing including the memory bandwidth and capacity, the network interconnect speed, and shortcomings of the floating point implementation. These issues are discussed in more technical detail in a companion report: Limitations of the PlayStation 3 for High Performance Cluster Computing (http://www.netlib.org/utk/people/JackDongarra/PAPERS/ps3-summa-2007.pdf).
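The "devastating disproportion" the guide describes becomes vivid when you put numbers to it. A rough illustration, assuming a per-node single-precision peak of about 153.6 gigaflops and the PS3's gigabit Ethernet NIC at its theoretical maximum of 125 MB/s (both figures are assumptions, not taken from the article):

```python
# Rough compute-to-communication ratio for one PS3 cluster node.
# Assumptions (not from the article): ~153.6 GFLOPS SP peak per node,
# gigabit Ethernet at its theoretical maximum throughput.
PEAK_FLOPS = 153.6e9        # single-precision flops/sec per node
GIGE_BYTES_PER_SEC = 125e6  # 1 Gb/s divided by 8 bits per byte

flops_per_byte = PEAK_FLOPS / GIGE_BYTES_PER_SEC
flops_per_float = flops_per_byte * 4  # 4 bytes per single-precision value

print(f"{flops_per_byte:.0f} flops per byte moved over the wire")
print(f"{flops_per_float:.0f} flops per 4-byte float exchanged")
```

In other words, an algorithm would need to perform on the order of thousands of operations for every word it communicates just to keep the SPEs busy — which is why the report singles out the interconnect as the binding constraint.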
Some of the floating point weaknesses that limit the Cell's use in scientific computing are going to be addressed in future generations of the processor. According to the UT report, IBM is planning to pump up the double-precision performance from 14 to 102 gigaflops in the next implementation — no word on whether IEEE 754 floating point support issues will be addressed as well.
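The jump from 14 to 102 gigaflops is consistent with simply making the double-precision pipeline fully pipelined. Here's a hedged sketch of where the projected number could come from; the 8 SPEs, 2-wide DP SIMD, fused multiply-add, and 3.2 GHz figures are assumptions on my part, not stated in the article:

```python
# Sketch of where a ~102 DP gigaflops figure could come from, assuming a
# future Cell with a fully pipelined double-precision unit.
# Assumed parameters (not from the article): 8 SPEs, 2-wide DP SIMD,
# fused multiply-add = 2 flops/lane/cycle, 3.2 GHz clock.
SPES = 8
DP_SIMD_WIDTH = 2
FLOPS_PER_FMA = 2
CLOCK_HZ = 3.2e9

dp_peak = SPES * DP_SIMD_WIDTH * FLOPS_PER_FMA * CLOCK_HZ
print(f"projected DP peak: {dp_peak / 1e9:.1f} GFLOPS")  # 102.4
```

The current part's roughly 14 gigaflops would then reflect the same pipes stalling for several cycles per double-precision instruction instead of issuing one every cycle.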
GPUs have similar floating point limitations. If NVIDIA and AMD want to penetrate the technical computing market with GPUs, they're going to have to make some decisions about floating point capabilities on these devices. Neither vendor offers any double precision hardware today, and IEEE 754 compliance is still a work in progress. However, NVIDIA's newest G80 device includes some support for rounding modes, overflow and NaN. (For a good discussion of floating point precision issues, read Michael Wolfe's article in this week's Feature section.)
The question here is how far NVIDIA and AMD will evolve their GPU architectures away from their graphics roots in order to support scientific floating point capabilities. The GPU engineers will also have to consider memory error correction and lower power consumption to offer a more robust HPC solution.
The market should be able to figure out how to balance this tension between application requirements and hardware capabilities. Although I've expressed my doubts about the capitalistic approach to cutting-edge supercomputing, that's not the case for commercial HPC. If GPUs and Cell processors were not applicable to industrial HPC applications, companies like PeakStream and RapidMind wouldn't exist, and researchers like Jack Dongarra would probably be working on something else. If the HPC software community figures out how to leverage the current generation of multicore hardware and starts to build a user base, the chipmakers will dance a little closer to the software.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.