Book Review: Principles of Parallel Programming

By John West

January 29, 2009

In Principles of Parallel Programming (Addison-Wesley, 2008), Lawrence Snyder and Calvin Lin have created a book that is useful for students and new practitioners alike. The text assumes that readers already have an understanding of basic computer science principles (how to write a correct sequential program, the basics of algorithmic complexity, and the like), and from that foundation provides a thorough treatment of the core principles of parallel programming.

As they go along, Lin and Snyder are careful to link the principles to the current parallel hardware and software landscape, including discussions of timely topics like GPUs, the Cell processor, and the languages being developed as part of DARPA’s High Productivity Computing Systems (HPCS) program. The text builds on computer science fundamentals in a way that keeps readers confident as they progress, while keeping the energy high and the pace brisk. The result is a fresh, practical book that makes for an engaging, even fun, read.

Principles of Parallel Programming starts from the assumption that parallelism is everywhere, and successful programmers — whether they are writing software today for the two cores on a typical desktop computer or the roughly 130,000 cores in Roadrunner — need to understand how to take advantage of this parallelism. The authors take this position one step further and provide tools throughout the text to enable readers to create scalable parallel software, reflecting a core belief that the degree of parallelism in all forms of computing hardware will continue to increase, and that it’s better to modify a code once now than repeatedly over time.

The first three chapters establish the foundations of parallel programming. The authors start off highlighting the complexities that contribute to the current reality that writing correct, scalable parallel applications is substantially harder than writing correct sequential applications. Using the seemingly simple example of parallelizing the summation of a series of numbers, Lin and Snyder are able to demonstrate the difficult decisions programmers must make as they reason about correctness, scalability and performance, laying the groundwork for the more in-depth discussions to come.
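The book’s own examples are written in pseudocode, but the flavor of the summation problem is easy to convey in real code. Here is a minimal sketch of my own (not the authors’) in C with OpenMP: even this one-line loop forces decisions about how to partition the work, how to combine partial results, and whether the answer is reproducible, since floating-point addition is not associative and the parallel sum can differ slightly from the sequential one.

    /* A minimal sketch (mine, not the book's): summing an array in
       parallel with OpenMP. Each thread accumulates a private partial
       sum over its share of the iterations; the reduction clause
       combines the partial sums when the loop ends. */
    #include <stdio.h>

    double parallel_sum(const double *a, int n) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }

    int main(void) {
        enum { N = 1000000 };
        static double a[N];
        for (int i = 0; i < N; i++)
            a[i] = 1.0;
        printf("sum = %f\n", parallel_sum(a, N));   /* expect 1000000.0 */
        return 0;
    }

(Compile with a flag such as gcc’s -fopenmp; without it the pragma is ignored and the loop simply runs sequentially, which is itself a nice property of this style.)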

Something that I very much appreciate about this book is that it is frank and direct — the authors are clearly in favor of parallel computing, but do not make any attempt to disguise the things that make it difficult.

We could solve the portability problem by simply setting a high enough level of abstraction that none of these differences is visible; then a compiler will map the high level specification to the platform. The strategy will make our programs insensitive to the parallel hardware, but it’s not a good idea. Generally, though compilers can perform the mapping, they will usually introduce software layers to implement the abstractions; the added software hides the performance features of the hardware, making it difficult for programmers to know how their code behaves. We cannot divorce ourselves entirely from the underlying hardware if we want high performance.

Here we see that they keep the conversation about the tension between portability and performance honest without becoming bogged down in the minutiae of specific languages or hardware (though we do see details later). Throughout this book the authors crystallize the essence of an argument with just enough context to allow readers to grasp the important first- and second-order effects that drive a decision, without burdening the reader with information he doesn’t need to properly assess the issue.

A brief review of hardware follows that covers all of the major flavors of parallel hardware currently available. The hardware review is a great example of one of the real hallmarks of this book: its timeliness. Discussions about hardware, languages, libraries, compilers, and applications throughout this book focus on what is available in the community today, making this a very practical resource for folks who are already out of school and need to understand the lay of the modern parallel landscape. Examples include Intel and AMD multicore chips, the Sun Fire E25K, the Cell processor, HP’s Cluster Platform 6000, and Blue Gene.

As the authors develop a picture of the performance relationship between algorithms and architectures, they are careful to frame the discussion in terms of principles that are general enough to be broadly applicable but specific enough to be actionable. Rather than mounting an architecture-specific discussion about how to optimize cache hits for various levels of the Intel memory hierarchy, and then repeating that discussion for other major architectures, Lin and Snyder broadly discuss the differences in the major hardware designs and encapsulate all the various ways that programmers can optimize for those platforms in a single rule:

Fast programs tend to maximize the number of local memory references and minimize the number of non-local memory references.
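The rule applies at every level of the memory hierarchy. As a single-node illustration of my own (the book states the principle more generally), the two loops below perform the same computation but differ enormously in locality: C stores matrices row by row, so the first walks memory contiguously while the second strides across rows and misses the cache on nearly every reference.

    /* A sketch (mine, not the book's): identical arithmetic,
       very different memory behavior. */
    #include <stdio.h>

    #define N 2048
    static double m[N][N];

    double sum_row_order(void) {      /* mostly local references */
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i][j];
        return s;
    }

    double sum_column_order(void) {   /* mostly non-local references */
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i][j];
        return s;
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                m[i][j] = 1.0;
        printf("%f %f\n", sum_row_order(), sum_column_order());
        return 0;
    }

On a typical machine the row-order loop runs several times faster, and on a distributed-memory machine the analogous mistake costs network messages instead of cache misses.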

As another example:

Parallel programs are more scalable when they emphasize blocks of computation — typically the larger the block the better — that minimize the inter-thread dependences.
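Revisiting the earlier summation sketch shows what this rule buys. In the hypothetical contrast below (again mine, not the book's), the first version creates an inter-thread dependence on every iteration by funnelling all updates through one shared variable, while the second gives each thread a large contiguous block of work and defers all sharing to a single combining step.

    /* A sketch (mine, not the book's): the same sum with maximal
       and minimal inter-thread dependences. */
    #include <stdio.h>

    double dependent_sum(const double *a, int n) {
        double total = 0.0;
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            #pragma omp critical      /* every iteration synchronizes */
            total += a[i];
        }
        return total;
    }

    double blocked_sum(const double *a, int n) {
        double total = 0.0;
        /* schedule(static) hands each thread one contiguous block;
           threads share nothing until the final reduction. */
        #pragma omp parallel for reduction(+:total) schedule(static)
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    int main(void) {
        enum { N = 1000000 };
        static double a[N];
        for (int i = 0; i < N; i++)
            a[i] = 1.0;
        printf("%f %f\n", dependent_sum(a, N), blocked_sum(a, N));
        return 0;
    }

On a handful of cores the critical-section version can actually run slower than the sequential loop; the blocked version scales.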

This approach is used throughout the book: broadly applicable principles are distilled from high-level (but effective) discussions of several specific example cases.

Lin and Snyder also emphasize good scientific method and sound approaches to programming and testing. For example, in the performance discussions in chapters 3 and 11, much attention is given to what is actually being measured (speedup, relative speedup, scaled speedup, and so on) and to reporting results so that others studying your measurements clearly understand how the numbers were gathered and what they mean. This is sound programming practice that translates into ethical programmer behavior.
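Those distinctions are easy to blur in a results table, so the care the authors ask for is worth spelling out. A small sketch of my own (the timing numbers are hypothetical) shows the difference: true speedup compares against the best sequential program, while relative speedup compares the parallel program against itself on one processor, and the two can tell quite different stories.

    /* A sketch (mine, not the book's) of unambiguous reporting.
       t_seq: best sequential program's time; t1, tp: the parallel
       program's time on 1 and p processors. All values hypothetical. */
    #include <stdio.h>

    void report(double t_seq, double t1, double tp, int p) {
        printf("processors:       %d\n", p);
        printf("speedup:          %.2f   (t_seq / tp)\n", t_seq / tp);
        printf("relative speedup: %.2f   (t1 / tp)\n", t1 / tp);
        printf("efficiency:       %.2f   (speedup / p)\n", (t_seq / tp) / p);
    }

    int main(void) {
        /* a parallel code that is slower than the best sequential
           code on one processor -- a common, and reportable, reality */
        report(10.0, 12.0, 1.8, 8);
        return 0;
    }

Here relative speedup (6.7) flatters the program, while true speedup (5.6) tells the honest story; scaled speedup, which grows the problem with the machine, requires a different experiment entirely.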

Once the foundations are laid, the authors go on to present the major types of parallelism (fixed, unlimited and scalable) along with the major abstractions (reduce, scan, etc.) that contribute to reasoning about, and constructing, effective parallel applications. This general conversation then moves into specific techniques for building scalable parallel codes, with emphasis on creating effective patterns of communication and computation (including allocation of work, batching of work and communication, and data distribution). Throughout this section of the book, the discussion is made more concrete through examples, but in keeping with their style of presenting only enough detail to illuminate a point, the examples are written in pseudocode rather than in any particular language.
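For readers who have not met them, the semantics of the two workhorse abstractions are easy to state in code. The sketch below (mine; the book uses pseudocode) gives the serial definitions for addition: reduce collapses a sequence to a single value, and scan produces every running total along the way. What makes them such useful building blocks is that both have well-known parallel implementations, so a program phrased in terms of them inherits their parallelism.

    /* A sketch (mine, not the book's): serial definitions of reduce
       and (inclusive) scan for addition. */
    #include <stdio.h>

    double reduce_add(const double *a, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += a[i];
        return acc;                    /* a[0] + ... + a[n-1] */
    }

    void scan_add(const double *a, double *out, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            acc += a[i];
            out[i] = acc;              /* out[i] = a[0] + ... + a[i] */
        }
    }

    int main(void) {
        double a[5] = {1, 2, 3, 4, 5}, prefix[5];
        scan_add(a, prefix, 5);
        printf("reduce = %f, last prefix = %f\n",
               reduce_add(a, 5), prefix[4]);   /* both print 15 */
        return 0;
    }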

At this point Snyder and Lin have established a foundation of principles, performance, and the building blocks of scalable applications. The next stage of the book examines the languages available to programmers to translate those concepts into working code. In a series of four chapters, they present the major attributes and performance characteristics of threading approaches (including POSIX Threads, Java, and OpenMP), MPI (and other local view languages like Co-array Fortran), and global view languages like ZPL. This section closes with a chapter that looks at the languages as a group, assessing them in terms of their ability to enable the developer to reason about performance and ensure correctness while creating scalable, portable codes.
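The "local view" label deserves a concrete illustration. In the hedged MPI sketch below (mine, not the book's), no process ever sees global data: each computes a purely local result, and the global sum exists only after the processes explicitly cooperate to build it, which is exactly the mindset shift these chapters teach.

    /* A sketch (mine, not the book's): the local-view style in MPI.
       Each process holds only its own data; combining results is an
       explicit, collective act. Build with mpicc, run with mpirun. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = rank + 1.0;    /* stand-in for a real partial result */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)                /* only the root holds the answer */
            printf("global sum = %f across %d processes\n", global, size);

        MPI_Finalize();
        return 0;
    }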

The book closes with a look at future directions in both languages and hardware, bringing in some of the most recent directions arising from DARPA’s HPCS effort and others, and includes a few notes on useful capstone projects for those who might use the text as part of a class.

Principles of Parallel Programming is an especially valuable resource for up-and-coming HPC practitioners; those still in school or just entering the workplace will find a lot here that will put what they’ve already learned on a solid, intensely practical foundation. The book is honest and engaging, and throughout the authors remain steadfast in their commitment to parallelism while recognizing the (many) challenges and open issues that the community faces.

Principles is also very timely, comparing and contrasting the features of most of the major hardware and software platforms parallel programmers are likely to encounter in their careers today. That timeliness also means these portions of the book will age and lose relevance over time, diminishing part of its charm. I hope the authors are able to refresh the text from time to time to keep it current.

The authors do an outstanding job of not assuming too much knowledge but still not boring the reader to tears with CS fundamentals. There were a few spots containing jargon or references that might trip up an especially inexperienced reader — for example, the impact of “speed of light delays” is discussed with no explanation of what these delays might be, and the authors assume a basic understanding of the analysis of algorithms. But these issues are rare and the text remains accessible.

In a few places I found myself wanting more references than the authors provide. For example, on page 46 the authors talk briefly about the PRAM computer model and mention a wealth of literature available, but don’t provide any references. Surely the reader can go look these up herself, but a few choice references, selected by the authors for their particular relevance or insight, would provide some additional value.

The book focuses on scalability, performance and portability throughout, but the discussion tends to presume that the ultimate goal is the best performance from the machine being used. There is some attention to the degree to which a particular abstraction supports the developer’s ability to reason about correctness and these other qualities, but the productivity or efficiency of the programmer is not a core issue for this book (notwithstanding the degree to which a scalable program with portable performance supports programmer productivity by avoiding future changes to source code). This isn’t necessarily a problem, but it is something to keep in mind.
