Where Are We Today?
The general-purpose GPU (GPGPU or GP²U) computing phenomenon has been gaining momentum over the last three years and has reached the point where it is accepted as an application acceleration technique. Innovative uses of GPUs include computing game physics between frames, linear algebra (e.g., LU decomposition), in situ signal and image processing, database “SELECT” processing, finite element and partial differential equation solvers, and tomography image reconstruction, to name a few. Applications that exploit the GPU's parallelism and vector capabilities continue to appear on the horizon, and capturing that practice and experience was the original intent behind the Supercomputing '06 workshop, “General-Purpose GPU Computing: Practice And Experience”.
More broadly, the GPGPU phenomenon belongs to a larger research and commercial area dubbed heterogeneous multi-core computing. Heterogeneous multi-core computing is the fraternal twin of homogeneous multi-core, the more traditional replicated execution unit/core/multiprocessor approach. Innovation in both of these system categories is being driven by a variety of factors that include physics, “Moore's Gap”, the need for increased operations per watt, the need to decrease total power consumption, and the rapidly diminishing “bag of tricks” in superscalar processor design.
“Moore's Gap” refers to the relatively modest incremental performance gains brought about by the increased number of transistors on current uniprocessor dies despite increases in clock speeds. Today's uniprocessors tend to follow a “90/10” rule, where 90 percent of the processor is passive and 10 percent is doing active work. By contrast, multi-core processors follow the same general rule but with 10 percent passive and 90 percent active when working at full throughput. An added benefit is energy efficiency, since inactive cores can be put into hibernation. Another benefit is improved heat dissipation, where workloads can be balanced across the various cores to evenly distribute the generated heat.
Given the rapid change in the multi-core and GPGPU landscapes, the “General-Purpose GPU Computing: Practice And Experience” workshop became dual-tracked. The first track remained true to the workshop's original intent, with current research, practice and experience in GPGPU; it included presentations from Ian Buck (NVIDIA), Mark Segal (ATI), Dominik Goeddeke (University of Dortmund, Germany), PeakStream and Acceleware. The second track offered insights into the heterogeneous and homogeneous multi-core future, with presentations from IBM, the Los Alamos National Laboratory “Roadrunner” team, and Burton Smith of Microsoft. The desired outcome from this workshop is a new set of ideas and research directions that help evolve today's multi-core ecosystem.
Heterogeneous multi-core computing itself isn't particularly new: systems have existed since the mid-1980s in which a problem's workload is split between a general-purpose processor and one or more specialized, problem-specific processors. Notable historical examples include Floating Point Systems' array processors, the Inmos “Transputer” and the Connection Machine. Today's attached processor systems, besides GPUs, include ClearSpeed's accelerator systems and the Ageia PhysX physics processing unit. In the processor realm, the IBM Cell Broadband Engine (a.k.a. “Cell BE” or simply “Cell”) is the best example of an entirely heterogeneous multi-core processor. The difference today is packaging: these processors are delivered as systems-on-a-chip (SoCs). If IBM's Cell and the AMD/ATI merger in the GPGPU domain are any indication of commercial trends, heterogeneous multi-core SoC integration is very likely to continue.
Heterogeneous Multi-Core Challenges
The challenges facing heterogeneous multi-core software development are considerably more interesting than those faced by homogeneous multi-core. At a very general level, homogeneous multi-core systems don't require much, if any, code modification to make existing software work. Code for these systems often requires refinement and tweaking when performance is not as expected, such as the thundering-herd lock contention that can be experienced on Sun Microsystems' UltraSparc T1 processors. Making spin locks adaptive, as Sun suggests, remedies that particular problem. Obviously, poorly implemented code won't run better on homogeneous multi-core, but it suffices to say that the porting challenges are smaller than those experienced on heterogeneous multi-core systems.
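To illustrate the adaptive idea, here is a minimal, hypothetical sketch in C++ using GCC's __sync builtins (not Sun's library code): the lock spins for a bounded number of iterations, then yields the processor instead of letting every hardware thread hammer the lock word.

#include <sched.h>      // sched_yield()

// Hypothetical adaptive spin lock. 'locked' must be initialized to 0.
struct adaptive_spinlock {
    volatile int locked;   // 0 = free, 1 = held
};

static void adaptive_lock(adaptive_spinlock *l) {
    const int spin_limit = 1000;          // tuning knob, platform-specific
    for (;;) {
        for (int i = 0; i < spin_limit; ++i) {
            // Test-and-test-and-set: read before attempting the atomic swap.
            if (l->locked == 0 &&
                __sync_lock_test_and_set(&l->locked, 1) == 0)
                return;                   // acquired
        }
        sched_yield();                    // back off; let another thread run
    }
}

static void adaptive_unlock(adaptive_spinlock *l) {
    __sync_lock_release(&l->locked);      // store 0 with release semantics
}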
On the other hand, the software ecosystem for heterogeneous multi-core has several stages of evolution to progress through, hopefully making better mistakes along the way. The first evolutionary stage is making existing software work. As Rob Pike stated in Systems Software Research Is Irrelevant, “To be a viable computer system, one must honor a huge list of large, and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X,… A huge amount of work, but if you don't honor the standards you're marginalized.” In the HPC arena, that list includes at least OpenMP, MPI and potentially PVM, as well as toolkits such as LAPACK, LAPACK++, BLAS, FFTW, VSIPL, VSIPL++, etc.
Task-level parallelism and workload partitioning have been, and continue to be, the dominant software development issues for multi-core platforms, heterogeneous and homogeneous alike. These issues are more acute on heterogeneous multi-core, since the specialized processors may have additional constraints. The IBM Cell is a good example: its symbiotic (or synergistic) processor units (SPUs) each have a 256 KB local store that holds all of the SPU's code and data. Consequently, message orchestration becomes another resource management task needed to keep the SPUs executing close to peak throughput. Another interesting feature of the IBM Cell is the SPU register set, which contains 128 128-bit vector registers (“AltiVec on steroids”). Data orchestration and organization is yet another software developer task required to ensure that the SPU's capabilities are used to maximal advantage. In particular, data orchestration devolves into organizing a problem's data so that it is properly aligned within the vector registers while minimizing data shuffle overhead (i.e., data movement or realignment within vector registers). Neither data nor message orchestration is an insurmountable problem, but both require a fair amount of design and forethought to implement properly.
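As a small, hypothetical illustration of the data-orchestration point (names and sizes invented for the example), the structure-of-arrays layout below keeps each field contiguous and 16-byte aligned, so four floats drop straight into a 128-bit vector register, whereas the interleaved array-of-structures layout forces gather and realignment work.

#include <cstddef>

// Array-of-structures: fields are interleaved, so loading four x values
// into one 128-bit register requires gathers/shuffles.
struct ParticleAoS {
    float x, y, z, mass;
};

// Structure-of-arrays: each field is contiguous and 16-byte aligned, so a
// 128-bit vector load picks up four consecutive x values directly.
struct ParticlesSoA {
    static const std::size_t N = 1024;
    float x[N]    __attribute__((aligned(16)));
    float y[N]    __attribute__((aligned(16)));
    float z[N]    __attribute__((aligned(16)));
    float mass[N] __attribute__((aligned(16)));
};

// With the SoA layout, a vectorizing compiler (or hand-written SIMD code)
// can process four elements per iteration with aligned loads.
void scale_x(ParticlesSoA &p, float s) {
    for (std::size_t i = 0; i < ParticlesSoA::N; ++i)
        p.x[i] *= s;                     // candidate for 4-wide SIMD
}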
Improved compiler technology is the second evolutionary stage, where the message and data orchestration burden is shifted from the software developer's shoulders onto the compiler. Progress on this front is being made in a research version of IBM's xlC compiler that implements OpenMP directives and features automatic SIMD vectorization (see Optimizing Compiler For The Cell Processor). It isn't clear whether this compiler will become a commercial product, or what it will cost if and when it does. Consequently, many IBM Cell developers will be stuck with the GNU gcc compiler, which only recently added support for OpenMP directives and does not support automatic SIMD vectorization. gcc does support SIMD vector types and operations, but it has a way to go before it rivals the Cray compilers, which recognize triple-for-loop matrix multiplication and replace the loops with a high-performance library call. Reservoir Labs' R-Stream compiler is a commercial compiler infrastructure that bears mentioning because it targets embedded heterogeneous and homogeneous multi-core systems such as the MIT RAW processor, and can potentially target the IBM Cell. In the open source arena, the Low Level Virtual Machine (LLVM) is a promising compiler optimization infrastructure to which an auto-vectorization pass could be added, with the additional benefit of serving as a code analysis tool.
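For concreteness, the sketch below (illustrative only, not a tuned kernel) shows gcc's vector_size extension expressing 128-bit SIMD arithmetic directly, followed by the naive triple loop that a Cray-class compiler recognizes and replaces wholesale; the arrays are assumed 16-byte aligned and n a multiple of 4.

// GCC extension: a 128-bit vector of four floats; +, * map to SIMD operations.
typedef float v4sf __attribute__((vector_size(16)));

// Explicit SIMD with gcc vector types: y = a*x + y, four floats at a time.
void saxpy_v4(int n, float a, const float *x, float *y) {
    v4sf va = {a, a, a, a};
    for (int i = 0; i < n; i += 4) {
        v4sf vx = *(const v4sf *)(x + i);   // aligned 128-bit load
        v4sf vy = *(v4sf *)(y + i);
        *(v4sf *)(y + i) = va * vx + vy;    // element-wise multiply-add
    }
}

// The classic triple loop: the pattern a smarter compiler can recognize and
// replace with a call to a tuned matrix-multiply library routine.
void matmul(int n, const float *A, const float *B, float *C) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}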
Code analysis tools are compiler technology's “kissing cousins.” A compiler's optimizer and code generator are pattern matchers; code analysis tools can be thought of as compiler back ends that explain why optimizations failed and why sub-optimal code generation occurred (i.e., why patterns failed to match). Code analysis is important to both novice and experienced HPC software developers because languages like C and C++ do not reorder the data placement defined in structures and classes. Code analysis tools can suggest data reorderings that enable the compiler to generate better code, thereby improving overall problem throughput. Another desirable feature in a code analysis tool is catching constructs where a developer attempts to be more clever than the compiler or attempts to predict the compiler's code generation behavior. More often than not, attempting to outwit the compiler requires making a sequence of assumptions that causes the compiler to match a sequence of patterns resulting in sub-optimal code generation. As the “Rules of Optimization” attributed to M. A. Jackson put it, “Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.”
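A concrete example of the data-placement issue (the struct is hypothetical): because C and C++ preserve declaration order, the first layout below pays for padding around its small fields, while the reordering a code analysis tool might suggest packs the same fields more tightly.

#include <cstdint>

// Declaration order is preserved, so this layout wastes space on padding:
// 1 + 7(pad) + 8 + 1 + 3(pad) + 4 = 24 bytes on a typical LP64 ABI.
struct SampleUnordered {
    std::uint8_t  flag;      // 1 byte, then 7 bytes of padding
    double        value;     // 8 bytes, needs 8-byte alignment
    std::uint8_t  tag;       // 1 byte, then 3 bytes of padding
    std::uint32_t count;     // 4 bytes
};

// Largest members first: same fields, but only 16 bytes
// (8 + 4 + 1 + 1 + 2 bytes of trailing padding).
struct SampleReordered {
    double        value;
    std::uint32_t count;
    std::uint8_t  flag;
    std::uint8_t  tag;
};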
A third evolutionary front in the heterogeneous multi-core ecosystem is language development. A significant amount of work has been done on parallelizing algebraic languages (i.e., C, C++ and FORTRAN), such as Ken Kennedy's work at Rice University, Monica Lam's at Stanford and Mary Hall's at USC/ISI, to name but a few, and this existing body of work can be adapted to heterogeneous multi-core. But the problem at the heart of algebraic languages is that parallelism remains developer-directed, of which OpenMP is an example. Embedded languages, originally developed for GPGPU and stream-oriented computation, offer a hybrid approach to identifying task-level parallelism; RapidMind, Inc. and PeakStream are two examples of this approach. The embedded language approach replaces the original C or C++ numerically intensive code with an inline version written in a functional “stream” language that is better suited for expressing the input problem on a GPU or a heterogeneous multi-core processor like the IBM Cell. An API and an on-the-fly code generator translate the inline embedded language to the target GPU or multi-core processor. Thus, functional languages are also poised to make a comeback, above and beyond the current embedded stream processing languages.
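The toy C++ sketch below is purely illustrative of the embedded-language shape and is not the RapidMind or PeakStream API: the numerically intensive loop is replaced by a kernel object handed to a small runtime that could, in principle, generate code for a GPU or the Cell SPUs; here it simply runs on the host CPU.

#include <cstddef>
#include <vector>

// A kernel expressed to the "runtime" rather than written as a raw loop.
// This one computes out[i] = a * x[i] + y[i].
struct SaxpyKernel {
    float a;
    float operator()(float x, float y) const { return a * x + y; }
};

// The "runtime": maps a kernel over whole streams. A production system would
// instead translate the kernel for the accelerator and marshal the data.
template <typename Kernel>
std::vector<float> map2(const Kernel &k,
                        const std::vector<float> &x,
                        const std::vector<float> &y) {
    std::vector<float> out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        out[i] = k(x[i], y[i]);
    return out;
}

int main() {
    std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
    SaxpyKernel k = {3.0f};
    std::vector<float> z = map2(k, x, y);   // replaces the hand-written loop
    return z[0] == 5.0f ? 0 : 1;
}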
Multi-core processors, both homogeneous and heterogeneous, are experiencing a healthy revival commercially and in the research community. The cynic can quickly point out that there's nothing new in computer science these days, merely a rehashing of previous concepts, but that misses the point that multi-core systems are now far more feasible than they were in the past. Multi-core systems are exciting because they are poised to unleash the computing power needed to attack what were once considered hard problems and to remove the simplifying assumptions that once constrained their solutions. What makes the overall multi-core landscape even more interesting is that while some of today's multi-core processors, like the IBM Cell BE, are geared toward high performance computing, others, like the Sun UltraSparc T1 and T2, are geared toward specific application acceleration such as Web services delivery.
General-purpose GPU computing led and continues to lead the heterogeneous multi-core research community. Innovative concepts such as using embedded languages to exploit parallelism and coping with numerical stability, given floating point units that truncate results, originated in GPGPU research. Thus, the ultimate intent embedded in the “General-Purpose GPU Computing: Practice And Experience” workshop is that the continual learning process and application of historical lessons learned will move the combined GPGPU and multi-core ecosystem forward.
1. Pike, R. “Systems Software Research Is Irrelevant”. http://herpolhode.com/rob/utah2000.pdf (2000).
2. Eichenberger, A., et al. “Optimizing Compiler For The Cell Processor”. In Proceedings of the 14th International Conference on Parallel Architectures and Compilation Techniques (2005).