GPGPU Computing and the Heterogeneous Multi-Core Future

By B. Scott Michel

December 1, 2006

Where Are We Today?

The general-purpose GPU (GPGPU or GP²U) computing phenomenon has been gaining momentum over the last three years and has now won acceptance as an application acceleration technique. Innovative uses of GPUs include computing game physics between frames, linear algebra (e.g., LU decomposition), in-situ signal and image processing, database “SELECT” processing, finite element and partial differential equation solvers, and tomography image reconstruction, to name a few. New applications that exploit the GPU's parallelism and vector capabilities continue to appear on the horizon; showcasing them was the original intent behind the Supercomputing '06 workshop, “General-Purpose GPU Computing: Practice And Experience”.

More broadly, the GPGPU phenomenon belongs to a larger research and commercial area dubbed heterogeneous multi-core computing, the fraternal twin of homogeneous multi-core, the more traditional approach of replicating execution units, cores or multiprocessors. Innovation in both system categories is being driven by a variety of factors that include physics, “Moore's Gap”, the need for more operations per watt, the need to decrease total power consumption, and the rapidly diminishing “bag of tricks” in super-scalar processor design.

“Moore's Gap” refers to the relatively modest incremental performance gains brought about by the increased number of transistors on current uniprocessor dies, despite increases in clock speeds. Today's uniprocessors tend to follow a “90/10” rule: 90 percent of the processor is passive while 10 percent does active work. Multi-core processors follow the same general rule, but with 10 percent passive and 90 percent active when working at full throughput. An added benefit is energy efficiency, since inactive cores can be put into hibernation. Another is improved heat dissipation, since workloads can be balanced across the cores to distribute the generated heat evenly.

Given the rapid change in the multi-core and GPGPU landscapes, the “General-Purpose GPU Computing: Practice And Experience” workshop became dual-tracked. The first track remained true to the workshop's original intent, with current research, practice and experience in GPGPU; it featured presentations from Ian Buck (NVIDIA), Mark Segal (ATI), Dominik Goeddeke (University of Dortmund, Germany), PeakStream and Acceleware. The second track offered insights into the heterogeneous and homogeneous multi-core future, with presentations from IBM, the Los Alamos National Laboratory's “Roadrunner” team, and Burton Smith of Microsoft. The desired outcome from this workshop is a new set of ideas and research directions that help evolve today's multi-core ecosystem.

Heterogeneous multi-core computing itself isn't particularly new: since the mid-1980s, systems have split a problem's workload between a general-purpose processor and one or more specialized, problem-specific processors. Notable historical examples include Floating Point Systems' array processors, the Inmos “Transputer” and the Connection Machine. Today's attached processor systems, besides GPUs, include ClearSpeed's accelerator systems and the Ageia PhysX physics processing unit. In the processor realm, the IBM Cell Broadband Engine (a.k.a. “Cell BE” or simply “Cell”) is the best example of an entirely heterogeneous multi-core processor. The difference today is packaging: these processor systems are delivered as systems-on-a-chip (SOC). The heterogeneous multi-core SOC integration trend is very likely to continue if IBM's Cell and the AMD/ATI merger in the GPGPU domain are any indication of commercial trends.

Heterogeneous Multi-Core Challenges

The challenges facing heterogeneous multi-core software development are considerably more interesting than those faced by homogeneous multi-core. At a very general level, homogeneous multi-core systems don't require much, if any, code modification to make existing software work. Code for these systems often requires refinement and tweaking when performance falls short of expectations, such as the thundering-herd hot lock contention that can be experienced on Sun Microsystems' UltraSPARC T1 processors. Making spin locks adaptive, as Sun suggests, remedies the problem. Obviously, poorly implemented code won't run better on homogeneous multi-core, but suffice it to say that the porting challenges are smaller than those on heterogeneous multi-core systems.
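
To make the remedy concrete, below is a minimal C++ sketch of an adaptive spin lock: spin briefly on the assumption that the lock holder is running on another core, then yield the processor rather than join a thundering herd. The spin threshold, the backoff policy and the use of GCC's atomic builtins are illustrative choices, not Sun's actual implementation.

    // A minimal sketch of an adaptive spin lock, assuming GCC's atomic
    // builtins; threshold and backoff policy are illustrative, not Sun's.
    #include <sched.h>

    class AdaptiveSpinLock {
    public:
        AdaptiveSpinLock() : flag_(0) {}

        void lock() {
            int spins = 0;
            // Atomically set flag_ to 1; the old value is 1 while held.
            while (__sync_lock_test_and_set(&flag_, 1)) {
                if (++spins >= kSpinLimit) {
                    sched_yield();  // stop spinning; let the holder run
                    spins = 0;
                }
            }
        }

        void unlock() { __sync_lock_release(&flag_); }  // clear with release

    private:
        static const int kSpinLimit = 1000;  // illustrative tuning knob
        volatile int flag_;
    };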

On the other hand, the software ecosystem for heterogeneous multi-core has several stages of evolution to progress through — and, hopefully, it will learn by making better mistakes along the way. The first evolutionary stage is making existing software work. As Rob Pike stated in Systems Software Research Is Irrelevant[1], “To be a viable computer system, one must honor a huge list of large, and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X,… A huge amount of work, but if you don't honor the standards you're marginalized.” In the HPC arena, that means at least OpenMP, MPI and potentially PVM, as well as toolkits such as LAPACK, LAPACK++, BLAS, FFTW, VSIPL, VSIPL++, etc.

Task-level parallelism and workload partitioning have been and continue to be the dominant software development issues for multi-core platforms, heterogeneous and homogeneous alike. These issues are more acute on heterogeneous multi-core, since the specialized processors may have additional constraints. The IBM Cell is a good example: each of its synergistic processor units (SPUs) has a 256 KB local store that holds all of its code and data. Consequently, message orchestration becomes another resource management task required to keep the SPUs executing close to peak throughput. Another interesting feature of the Cell is the SPU register set, which contains 128 128-bit vector registers (“AltiVec on steroids”). Data orchestration and organization is yet another developer task required to ensure that the SPU's capabilities are used to maximal advantage. In particular, data orchestration devolves into organizing a problem's data so that it is properly aligned within the vector registers while minimizing the data shuffle overhead (i.e., data movement or realignment within vector registers). Neither data nor message orchestration is an insurmountable problem, but both require a degree of design and forethought to implement properly.
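
As a concrete illustration of data orchestration, the C++ sketch below contrasts an array-of-structures layout, which scatters each field across memory and forces shuffles to gather one field into a vector register, with a structure-of-arrays layout that keeps each field contiguous and 16-byte aligned. The field names and array sizes are illustrative, not drawn from any particular Cell codebase.

    // Array-of-structures: each element is 16 bytes, so gathering four
    // consecutive x values into a 128-bit register costs loads plus shuffles.
    struct ParticleAoS {
        float x, y, z, mass;
    };

    // Structure-of-arrays: each field is contiguous and 16-byte aligned,
    // so one vector load fills a register with four useful floats.
    struct ParticlesSoA {
        float x[1024]    __attribute__((aligned(16)));
        float y[1024]    __attribute__((aligned(16)));
        float z[1024]    __attribute__((aligned(16)));
        float mass[1024] __attribute__((aligned(16)));
    };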

Improved compiler technology is the second evolutionary stage, where the message and data orchestration burden shifts from the software developer's shoulders onto the compiler. Progress on this front is being made in a research version of IBM's xlC compiler, which implements OpenMP directives and features automatic SIMD vectorization (see Optimizing Compiler For The Cell Processor[2]). It isn't clear whether this compiler will become a commercial product, or what it will cost if and when it does. Consequently, many IBM Cell developers will be stuck with the GNU gcc compiler, which only recently added support for OpenMP directives and does not support automatic SIMD vectorization. gcc does support SIMD vector types and operations, but it has a way to go before it rivals the Cray compilers, which recognize a triple-for-loop matrix multiplication and replace the loops with a high-performance library function call. Reservoir Labs' R-Stream compiler is a commercial compiler infrastructure that bears mentioning because it targets embedded heterogeneous and homogeneous multi-core systems such as the MIT RAW processor, and could potentially target the IBM Cell. In the open source arena, the Low Level Virtual Machine (LLVM) is a promising compiler optimization infrastructure to which an auto-vectorization pass could be added, with the additional benefit of serving as a code analysis tool.
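
For the curious, here is a minimal sketch of gcc's SIMD vector types and operations mentioned above: arithmetic on a vector_size-attributed type is mapped by the compiler onto the target's vector instructions without hand-written intrinsics. The function name and its size and alignment assumptions are illustrative.

    typedef float v4sf __attribute__((vector_size(16)));  // 4 packed floats

    // y = a*x + y over n floats; this sketch assumes n is a multiple of 4
    // and that both arrays are 16-byte aligned.
    void saxpy(v4sf *y, const v4sf *x, float a, int n) {
        v4sf va = {a, a, a, a};           // broadcast the scalar to all lanes
        for (int i = 0; i < n / 4; ++i)
            y[i] = y[i] + va * x[i];      // one vector multiply-add per step
    }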

Code analysis tools are compiler technology's “kissing cousins.” A compiler's optimizer and code generator are pattern matchers; code analysis tools can be thought of as compiler backends that explain why optimizations failed and sub-optimal code was generated (i.e., why patterns failed to match). Code analysis is important to novice and experienced HPC software developers alike, because compilers for languages like C and C++ do not reorder the data placement defined in structures and classes. Code analysis tools can suggest data reorderings that enable the compiler to generate better code, thereby improving overall problem throughput. Another desirable feature in a code analysis tool is catching constructs where a developer attempts to be more clever than the compiler or to predict the compiler's code generation behavior. More often than not, attempting to outwit the compiler rests on a chain of assumptions that leads the compiler to match a sequence of patterns resulting in sub-optimal code. As the “Rules of Optimization” attributed to M. A. Jackson say, “Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.”
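
A small C++ example shows the kind of data-placement problem such a tool would flag. Because the compiler may not reorder members, declaration order fixes the padding; the sizes below assume a typical LP64 target, and the fields are illustrative.

    struct Padded {      // 24 bytes total
        char   tag;      //  1 byte  + 7 bytes padding to align the double
        double value;    //  8 bytes
        short  id;       //  2 bytes + 6 bytes tail padding
    };

    struct Packed {      // 16 bytes total: widest members first
        double value;    //  8 bytes
        short  id;       //  2 bytes
        char   tag;      //  1 byte  + 5 bytes tail padding
    };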

A third evolutionary front in the heterogeneous multi-core ecosystem is language development. A significant amount of work has been done on parallelizing algebraic languages (e.g., C, C++ and FORTRAN), such as Ken Kennedy's work at Rice University, Monica Lam's at Stanford and Mary Hall's at USC/ISI, to name but a few, and this existing body of work can be adapted to heterogeneous multi-core. But the problem at the heart of algebraic languages is developer-directed parallelism, of which OpenMP is an example. Embedded languages, originally developed for GPGPU and stream-oriented computation, offer a hybrid approach to identifying task-level parallelism; RapidMind, Inc. and PeakStream are two examples of this approach. The embedded language approach replaces the original C or C++ numerically intensive code with an inline version written in a functional “stream” language that is better suited to expressing the input problem on a GPU or a heterogeneous multi-core processor like the IBM Cell. An API and an on-the-fly code generator translate the inline embedded language for the target GPU or multi-core processor. Thus, functional languages are also poised to make a comeback, above and beyond the current embedded stream processing languages.
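
To illustrate the embedded-language pattern (only the pattern; this is not RapidMind's or PeakStream's actual API), the following hypothetical C++ sketch uses operator overloading to record an expression tree at run time. An on-the-fly code generator would then translate the recorded tree into a GPU or SPU kernel instead of executing it on the host element by element.

    #include <memory>
    #include <string>

    class Stream {
        struct Expr;                        // recorded expression-tree node
        typedef std::shared_ptr<Expr> ExprPtr;

        struct Expr {
            std::string op;                 // "+", "*", or a leaf's name
            ExprPtr lhs, rhs;               // empty for leaves
        };

        explicit Stream(ExprPtr e) : expr_(e) {}
        ExprPtr expr_;                      // handed to the code generator

    public:
        // A leaf: a named device array (data transfer omitted here).
        explicit Stream(const std::string &name)
            : expr_(new Expr{name, nullptr, nullptr}) {}

        friend Stream operator+(const Stream &a, const Stream &b) {
            return Stream(ExprPtr(new Expr{"+", a.expr_, b.expr_}));
        }
        friend Stream operator*(const Stream &a, const Stream &b) {
            return Stream(ExprPtr(new Expr{"*", a.expr_, b.expr_}));
        }
    };

    // Usage: with Stream a("a"), x("x"), y("y"), the statement
    //   Stream r = a * x + y;
    // builds a three-node tree that a backend could fuse into a single
    // kernel, rather than running two separate loops on the host.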

Continual Learning…

Multi-core processors, both homogeneous and heterogeneous, are experiencing a healthy revival commercially and in the research community. Unfortunately, the cynic can quickly point out that there's nothing new in computer science these days, merely a rehashing of previous concepts. But this misses the point that multi-core systems are now far more feasible than they were in the past. Multi-core systems are exciting because they are poised to unleash the computing power needed to attack what were once considered hard problems and to remove the simplifying assumptions that once constrained their solutions. What makes the overall multi-core landscape even more interesting is that while some of today's multi-core processors, like the IBM Cell BE, are geared toward high performance computing, others, like the Sun UltraSPARC T1 and T2, are geared toward specific application acceleration such as Web services delivery.

General-purpose GPU computing led and continues to lead the heterogeneous multi-core research community. Innovative concepts such as using embedded languages to exploit parallelism, and coping with numerical stability given floating point units that truncate results, originated in GPGPU research. Thus, the ultimate intent behind the “General-Purpose GPU Computing: Practice And Experience” workshop is that continual learning and the application of historical lessons will move the combined GPGPU and multi-core ecosystem forward.

References

1. Pike, R. “Systems Software Research Is Irrelevant”. http://herpolhode.com/rob/utah2000.pdf (2000).
2. Eichenberger, A., et al. “Optimizing Compiler For The Cell Processor”. In Proceedings of the 14th International Conference on Parallel Architectures and Compilation Techniques (PACT) (2005).
