Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 5, 2013

Optimizing Performance in Parallel Programming

Tiffany Trader

Parallel computing refers to the simultaneous use of multiple processing elements to solve a computational problem. Large jobs are segmented into smaller parts, which are then solved concurrently. For most of the history of computing, computation was serial: one instruction executed, then the next. Parallel computing arose in response to the limits of frequency scaling and to help mitigate power consumption and heat generation.

Over the last decade or so, parallel computing has grown to become the dominant computing paradigm. The switch from serial to parallel computing – in tandem with the shift from single core to multicore processors – has been key to accommodating the relentless demand for more performant machines.

A recent article at Dr. Dobb's, written by Michael McCool, Arch Robison, and James Reinders, explores the primary concepts of performance theory in parallel programming, guided by two essential laws of parallel performance: Amdahl's Law and Gustafson-Barsis' Law. These laws illustrate the balancing act that is parallel optimization and provide a reference point for strong versus weak scaling.

Optimizing parallel performance involves balancing three interrelated goals: reducing latency, increasing throughput, and reducing CPU power consumption. Because an improvement in one can lead to a worsening in another, it falls to the developer to balance all three to achieve maximum efficiency. Developers can chart the latency of a computational problem as the number of processors increases by tracking "speed-up": the ratio of the runtime before optimization to the runtime afterwards. Making a program run faster on the same workload is the domain of Amdahl's Law (this is also called strong scaling), while running a program in the same amount of time on a larger workload is the basis of Gustafson-Barsis' Law (a demonstration of weak scaling).
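The speed-up ratio, and the related notion of efficiency discussed below, can be computed directly from measured runtimes. A minimal sketch in Python (the timings and processor count here are hypothetical, for illustration only):

```python
# Hypothetical measured runtimes, in seconds.
t_serial = 120.0    # runtime before parallelization
t_parallel = 15.0   # runtime on p processors
p = 16              # number of processing elements

# Speed-up: ratio of runtime before optimization to runtime afterwards.
speedup = t_serial / t_parallel

# Efficiency: speed-up divided by the number of processors used.
efficiency = speedup / p

print(f"speed-up = {speedup:.1f}x, efficiency = {efficiency:.0%}")
```

With these example numbers the program runs 8x faster on 16 processors, an efficiency of 50 percent; perfectly linear scaling would give 16x and 100 percent.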

Ideally, the speed-up from parallelization would be linear – with each doubling of processing elements resulting in a halving of the run-time – yet few algorithms can realize this goal due to the extra coordination steps involved. Generally, there is an almost linear speedup initially, when the number of processors is still low, but this flattens out as more processors are added. Although sublinear speedup is the norm, on occasion a program will demonstrate superlinear speedup, i.e., an efficiency greater than 100 percent. This is most commonly attributed to memory effects: adding processors also adds aggregate cache, so a larger share of the working set fits in fast memory.

“Speedup, not efficiency, is what you see in advertisements for parallel computers, because speedups can be large impressive numbers,” the authors remark. “Efficiencies, except in unusual circumstances, do not exceed 100% and often sound depressingly low. A speedup of 100 sounds better than an efficiency of 10%, even if both are for the same program and same machine with 1000 cores.”

Amdahl’s Law addresses the potential speedup of an algorithm on a parallel platform. Proposed by Gene Amdahl in 1967, the law states that the overall speedup of an optimization is limited by the non-optimized portion of the application’s runtime. From this Amdahl deduced: “the effort expended on achieving high parallel processing rates is wasted unless it is accompanied by achievements in sequential processing rates of very nearly the same magnitude.”
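In its usual form, Amdahl's Law bounds the speedup on n processors by 1 / (s + (1 - s)/n), where s is the fraction of the runtime that remains serial. A short sketch (the 5 percent serial fraction is a hypothetical example) shows how quickly that bound bites:

```python
def amdahl_speedup(serial_fraction, n):
    """Upper bound on speedup with n processors under Amdahl's Law.

    serial_fraction: portion of the runtime that cannot be parallelized.
    """
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With just 5% serial work, the speedup can never exceed 1/0.05 = 20x,
# no matter how many processors are added.
print(f"{amdahl_speedup(0.05, 16):.2f}")    # ~9.14 on 16 processors
print(f"{amdahl_speedup(0.05, 1024):.2f}")  # ~19.64 on 1024 processors
```

Note the diminishing returns: going from 16 to 1024 processors (64x the hardware) barely doubles the speedup, which is the heart of Amdahl's argument.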

Whereas Amdahl’s Law assumes a fixed problem size, Gustafson-Barsis’ Law, put forth by John L. Gustafson in 1988, quantifies speedup if the problem size is allowed to increase. In John Gustafson’s own words: “…speedup should be measured by scaling the problem to the number of processors, not by fixing the problem size.”
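Gustafson's scaled speedup is usually written as S(n) = n - s(n - 1), where s is again the serial fraction of the work. A sketch using the same hypothetical 5 percent serial fraction as above makes the contrast with Amdahl's fixed-size bound concrete:

```python
def gustafson_speedup(serial_fraction, n):
    """Scaled speedup when the problem size grows with n processors
    (Gustafson-Barsis' Law): S(n) = n - s * (n - 1)."""
    return n - serial_fraction * (n - 1)

print(f"{gustafson_speedup(0.05, 16):.2f}")    # 15.25
print(f"{gustafson_speedup(0.05, 1024):.2f}")  # 972.85
```

Because the workload expands with the machine, the scaled speedup keeps growing almost linearly with n instead of saturating at 1/s, which is why weak scaling paints a far more optimistic picture for large systems.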

The article, which is adapted from sections of the authors’ book, “Structured Parallel Programming: Patterns for Efficient Computation,” is recommended for anyone needing a refresher on the basics of parallelism. The authors, all luminaries in the field, present the information and related equations in a precise, clear manner.