GPGPU Computing and the Heterogeneous Multi-Core Future

By B. Scott Michel

December 1, 2006

Where Are We Today?

The general-purpose GPU (GPGPU or GP^2U) computing phenomenon has been gaining momentum over the last three years and has now won acceptance as an application acceleration technique. Innovative uses of GPUs include computing game physics between frames, linear algebra (e.g., LU decomposition), in-situ signal and image processing, database “SELECT” processing, finite element and partial differential equation solvers, and tomographic image reconstruction, to name a few. New applications that exploit the GPU's parallelism and vector capabilities continue to appear, which was the original impetus behind the Supercomputing '06 workshop, “General-Purpose GPU Computing: Practice And Experience”.

More broadly, the GPGPU phenomenon belongs to a larger research and commercial area dubbed heterogeneous multi-core computing. Heterogeneous multi-core computing is the fraternal twin of homogeneous multi-core, the more traditional replicated execution-unit/core/multiprocessor approach. Innovation in both system categories is being driven by a variety of factors: physics, “Moore's Gap”, the need for more operations per watt, the need to decrease total power consumption, and the rapidly diminishing “bag of tricks” in superscalar processor design.

“Moore's Gap” refers to the relatively modest incremental performance gains brought about by the increasing number of transistors on current uniprocessor dies, despite increases in clock speed. Today's uniprocessors tend to follow a “90/10” rule: 90 percent of the processor is passive while 10 percent does active work. Multi-core processors follow the same general rule, but inverted: at full throughput, 10 percent is passive and 90 percent is active. An added benefit is energy efficiency, since inactive cores can be put into hibernation. Another is improved heat dissipation, since workloads can be balanced across the cores to distribute the generated heat evenly.

Given the rapid change in the multi-core and GPGPU landscapes, the “General-Purpose GPU Computing: Practice And Experience” workshop became dual-tracked. The first track remained true to the workshop's original intent, covering current research, practice and experience in GPGPU. The GPGPU track included presentations from Ian Buck (NVIDIA), Mark Segal (ATI), Dominik Goeddeke (University of Dortmund, Germany), PeakStream and Acceleware. The second track offered insights into the heterogeneous and homogeneous multi-core future, with presentations from IBM, the Los Alamos National Laboratory “Roadrunner” team, and Burton Smith of Microsoft. The desired outcome from this workshop is a new set of ideas and research directions that help evolve today's multi-core ecosystem.

Heterogeneous multi-core computing itself isn't particularly new: systems that split a problem's workload between a general-purpose processor and one or more specialized, problem-specific processors have been around since the mid-1980s. Notable historical examples include Floating Point Systems' array processors, the Inmos “Transputer” and the Connection Machine. Today's attached processor systems, besides GPUs, include ClearSpeed's accelerator systems and the Ageia PhysX physics processing unit. In the processor realm, the IBM Cell Broadband Engine (a.k.a. “Cell BE” or simply “Cell”) is the best example of an entirely heterogeneous multi-core processor. The difference today is packaging: these processor systems are delivered as systems-on-a-chip (SOC). The heterogeneous multi-core SOC integration trend is very likely to continue, if IBM's Cell and the AMD/ATI merger in the GPGPU domain are any indication of commercial trends.

Heterogeneous Multi-Core Challenges

The challenges facing heterogeneous multi-core software development are considerably more interesting than those faced by homogeneous multi-core. At a very general level, homogeneous multi-core systems don't require much, if any, code modification to make existing software work. Code for these systems often requires refinement and tweaking when performance is not as expected, such as the “thundering herd” hot-lock contention that can occur on Sun Microsystems' UltraSPARC T1 processors. Making spin locks adaptive, as Sun suggests, remedies the problem (see the sketch below). Obviously, poorly implemented code won't run better on homogeneous multi-core, but suffice it to say that the porting challenges are smaller than those on heterogeneous multi-core systems.
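As a rough illustration of what “adaptive” means here, the minimal sketch below (in modern C++, whose std::atomic facilities postdate this article; the class name is invented) spins briefly on the assumption that the lock holder will release soon, then yields the CPU so a herd of waiters stops hammering the hot lock's cache line.

```cpp
#include <atomic>
#include <thread>

// Minimal sketch of an adaptive spin lock: spin a bounded number of
// times (cheap when critical sections are short), then fall back to
// yielding so contending threads stop saturating the lock's cache line.
class AdaptiveSpinLock {
    std::atomic<bool> locked_{false};
public:
    void lock() {
        int spins = 0;
        // exchange() returns the previous value: true means still held.
        while (locked_.exchange(true, std::memory_order_acquire)) {
            if (++spins >= 1000) {          // lock is hot or holder was descheduled:
                std::this_thread::yield();  // stop burning cycles, let the holder run
                spins = 0;
            }
        }
    }
    void unlock() { locked_.store(false, std::memory_order_release); }
};
```

A real adaptive mutex, such as the Solaris implementation, goes further: it spins only while the lock holder is actually running on another processor and blocks outright otherwise.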

On the other hand, the software ecosystem for heterogeneous multi-core has several stages of evolution to progress through — and, hopefully, it will learn by making better mistakes along the way. The first evolutionary stage is making existing software work. As Rob Pike stated in Systems Software Research Is Irrelevant[1], “To be a viable computer system, one must honor a huge list of large, and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X,… A huge amount of work, but if you don't honor the standards you're marginalized.” In the HPC arena, the list includes at least OpenMP and MPI, potentially PVM, and toolkits such as LAPACK, LAPACK++, BLAS, FFTW, VSIPL, VSIPL++, etc.

Task-level parallelism and workload partitioning have been, and continue to be, the dominant software development issues for multi-core platforms, heterogeneous and homogeneous alike. These issues are more acute on heterogeneous multi-core, since the specialized processors may have additional constraints. The IBM Cell is a good example: its synergistic processor units (SPUs) each have a 256KB local store that holds all of their code and data. Consequently, message orchestration becomes another resource management task needed to keep the SPUs executing close to peak throughput. Another interesting feature of the IBM Cell is the SPU register set, which contains 128 128-bit vector registers (“AltiVec on steroids”). Data orchestration and organization is yet another software developer task required to ensure that the SPU's capabilities are used to maximal advantage. In particular, data orchestration boils down to organizing a problem's data so that it is properly aligned within the vector registers while minimizing data shuffle overhead (i.e., data movement or realignment within vector registers); the sketch below illustrates the idea. Neither data nor message orchestration is an insurmountable problem, but both require a degree of design and forethought to implement properly.
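As a small, hypothetical illustration of data orchestration (the layout and names below are invented for the example, not IBM SPU intrinsics): an array-of-structures layout interleaves fields and forces realignment shuffles, while a structure-of-arrays layout keeps each field contiguous and 16-byte aligned, so 128-bit vector loads need no shuffling.

```cpp
#include <cstddef>

// Array-of-structures: one particle per struct. The x coordinates of
// consecutive particles are interleaved with y and z, so vectorizing a
// loop over x alone requires gather/shuffle work.
struct ParticleAoS {
    float x, y, z;
};

// Structure-of-arrays: all x's together, all y's together. Each array
// starts on a 16-byte boundary, so four consecutive floats load straight
// into one 128-bit vector register with no realignment.
struct ParticlesSoA {
    static const std::size_t N = 1024;
    float x[N] __attribute__((aligned(16)));
    float y[N] __attribute__((aligned(16)));
    float z[N] __attribute__((aligned(16)));
};
```

The same reorganization also gives a compiler's auto-vectorizer much more to work with, which leads into the next evolutionary stage.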

Improved compiler technology is the second evolutionary stage, where the message and data orchestration burden is shifted from the software developer's shoulders onto the compiler's. Progress on this front is being made in a research version of IBM's xlC compiler that implements OpenMP directives and features automatic SIMD vectorization (see Optimizing Compiler For The Cell Processor[2]). It isn't clear whether this compiler will become a commercial product, or what it will cost if and when it does. Consequently, many IBM Cell developers will be stuck with the GNU gcc compiler, which only recently added support for OpenMP directives and does not support automatic SIMD vectorization. gcc does support SIMD vector types and operations (see the sketch below), but it has a way to go before it rivals the Cray compilers, which recognize triple-for-loop matrix multiplication and replace the loops with a high performance library function call. Reservoir Labs' R-Stream compiler is a commercial compiler infrastructure that bears mentioning because it targets embedded heterogeneous and homogeneous multi-core systems such as the MIT RAW processor, and can potentially target the IBM Cell. In the open source arena, the Low Level Virtual Machine (LLVM) is a promising compiler optimization infrastructure to which an auto-vectorization pass could be added, with the additional benefit of serving as a code analysis tool.
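For a concrete taste of the SIMD vector types and operations gcc does support, here is a minimal sketch using GCC's vector_size extension; the compiler lowers the elementwise arithmetic to native vector instructions (AltiVec, SSE or SPU) where the target allows. This illustrates the extension itself, not Cell-specific code.

```cpp
#include <cstdio>
#include <cstring>

// GCC vector extension: v4sf is a 16-byte vector of four floats. The
// compiler maps elementwise operators onto native SIMD instructions
// when the target supports them.
typedef float v4sf __attribute__((vector_size(16)));

// Four multiply-adds expressed as ordinary arithmetic on vector values.
static v4sf saxpy4(v4sf a, v4sf x, v4sf y) {
    return a * x + y;
}

int main() {
    v4sf a = {2, 2, 2, 2}, x = {1, 2, 3, 4}, y = {10, 20, 30, 40};
    v4sf r = saxpy4(a, x, y);
    float out[4];
    std::memcpy(out, &r, sizeof out);  // portable way to read the lanes
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```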

Code analysis tools are compiler technology's “kissing cousins.” A compiler's optimizer and code generator are pattern matchers; code analysis tools can be thought of as compiler backends that explain why optimizations failed and where sub-optimal code was generated (i.e., why patterns failed to match). Code analysis is important to both novice and experienced HPC software developers because languages like C and C++ do not reorder the data placement defined in structures and classes. Code analysis tools can suggest data reorderings that enable the compiler to generate better code, thereby improving overall problem throughput. Another desirable feature in a code analysis tool is catching constructs where a developer attempts to be more clever than the compiler, or attempts to predict the compiler's code generation behavior. More often than not, attempting to outwit the compiler rests on a chain of assumptions that leads the compiler to match a sequence of patterns resulting in sub-optimal code. As the “Rules of Optimization” attributed to M. A. Jackson say: “Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.”
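For example, here is the kind of data reordering such a tool might suggest (the struct and field names are invented for illustration): because C and C++ must lay fields out in declaration order, a careless ordering wastes space on alignment padding, while a wide-to-narrow ordering packs the same fields tightly and improves cache behavior.

```cpp
#include <cstdio>

// The compiler may not reorder fields, so it inserts padding to satisfy
// alignment. A code analysis tool can suggest the programmer reorder them.
struct Padded {      // typically 24 bytes on a 64-bit ABI:
    char tag;        //  1 byte + 7 bytes padding (to align the double)
    double value;    //  8 bytes
    char flag;       //  1 byte + 7 bytes tail padding
};

struct Reordered {   // typically 16 bytes: same fields, widest first
    double value;    //  8 bytes
    char tag;        //  1 byte
    char flag;       //  1 byte + 6 bytes tail padding
};

int main() {
    std::printf("Padded: %zu bytes, Reordered: %zu bytes\n",
                sizeof(Padded), sizeof(Reordered));
    return 0;
}
```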

A third evolutionary front in the heterogeneous multi-core ecosystem is language development. A significant amount of work has been done on parallelizing algebraic languages (e.g., C, C++ and FORTRAN), such as Ken Kennedy's work at Rice University, Monica Lam's at Stanford and Mary Hall's at USC/ISI, to name but a few, and this existing body of work can be adapted to heterogeneous multi-core. But the problem at the heart of algebraic languages is that parallelism must be directed by the developer, of which OpenMP's directives are an example. Embedded languages, originally developed for GPGPU and stream-oriented computation, offer a hybrid approach to identifying task-level parallelism; RapidMind, Inc. and PeakStream are two examples of this approach. The embedded language approach replaces the original C or C++ numerically intensive code with an inline version written in a functional “stream” language that is better suited to expressing the input problem on a GPU or a heterogeneous multi-core processor like the IBM Cell. An API and an on-the-fly code generator translate the inline embedded language for the target GPU or multi-core processor (a toy sketch of the style appears below). Thus, functional languages are also poised to make a comeback, above and beyond the current embedded stream processing languages.
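To make the embedded-language idea concrete, here is a deliberately tiny, hypothetical C++ sketch in the spirit of RapidMind or PeakStream (the class and operator names are invented, not either vendor's actual API). Operator overloading lets numerical code read like ordinary C++; a real system would record the captured expression and generate GPU or SPU code on the fly, where this toy simply evaluates eagerly on the host.

```cpp
#include <cstdio>
#include <vector>

// Toy "embedded stream language": Stream wraps an array, and overloaded
// operators express elementwise kernels. A production system would build
// an expression graph from these operators and JIT-compile it for the
// target GPU or Cell SPUs; this sketch evaluates immediately on the host.
class Stream {
public:
    std::vector<float> data;
    explicit Stream(std::size_t n, float v = 0.0f) : data(n, v) {}

    friend Stream operator*(const Stream& a, const Stream& b) {
        Stream r(a.data.size());
        for (std::size_t i = 0; i < r.data.size(); ++i)
            r.data[i] = a.data[i] * b.data[i];   // elementwise multiply
        return r;
    }
    friend Stream operator+(const Stream& a, const Stream& b) {
        Stream r(a.data.size());
        for (std::size_t i = 0; i < r.data.size(); ++i)
            r.data[i] = a.data[i] + b.data[i];   // elementwise add
        return r;
    }
};

int main() {
    Stream a(4, 2.0f), x(4, 3.0f), y(4, 1.0f);
    Stream r = a * x + y;                  // reads as ordinary C++ ...
    std::printf("r[0] = %g\n", r.data[0]); // ... but could target a GPU
    return 0;
}
```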

Continual Learning…

Multi-core processors, both homogeneous and heterogeneous, are experiencing a healthy revival commercially and in the research community. Unfortunately, the cynic can quickly point out that there's nothing new in computer science these days, merely a rehashing of previous concepts. That criticism bypasses the point that multi-core systems are now far more feasible than they were in the past. And multi-core systems are exciting because they are poised to unleash the computing power to attack what were once considered hard problems, and to remove the simplifying assumptions that once constrained their solutions. What makes the overall multi-core landscape even more interesting is that while some of today's multi-core processors, like the IBM Cell BE, are geared toward high performance computing, others, like the Sun UltraSPARC T1 and T2, are geared toward accelerating specific applications such as Web services delivery.

General-purpose GPU computing led, and continues to lead, the heterogeneous multi-core research community. Innovative concepts such as using embedded languages to exploit parallelism, and techniques for coping with numerical stability given floating point units that truncate results, originated in GPGPU research. Thus, the ultimate intent of the “General-Purpose GPU Computing: Practice And Experience” workshop is that continual learning and the application of historical lessons will move the combined GPGPU and multi-core ecosystem forward.

References

1. Pike, R. “Systems Software Research Is Irrelevant”. http://herpolhode.com/rob/utah2000.pdf (2000).
2. Eichenberger, A. E., et al. “Optimizing Compiler for the Cell Processor”. In Proceedings of the 14th International Conference on Parallel Architectures and Compilation Techniques (PACT 2005).
