Book Review: Parallel Algorithms
Parallel Algorithms by Henri Casanova, Arnaud Legrand, and Yves Robert (CRC Press, 2009) is a text for readers who want to understand the theoretical underpinnings of parallelism from a computer science perspective. As the authors themselves point out, this is not a high performance computing book: it gives no real attention to HPC architectures or practical scientific computing, and you won't leave it a competent parallel programmer ready to implement an application. What you will have are the tools needed to continue on a rigorous research track into the computer science aspects of parallel computing.
The preface describes the text as aimed at graduate students and postgraduate researchers in computer science, and this is dead on. The book is very general and very theoretical, with proofs, theorems, lemmas, complexity analysis, and the whole nine yards. In its quest to maintain generality and build a theoretical framework for understanding research aspects of parallel algorithms, the book's practical discussion of algorithms doesn't get much further than matrix-matrix multiplication and basic stencil computation. Each chapter includes a thorough problem set that extends the topics covered in the chapter; solutions are provided for select problems.
The book is organized into three sections: models, parallel algorithms, and scheduling. The models section begins (chapters 1 and 2) with coverage of the classic theoretical models of parallel computing: PRAM and sorting networks. Chapter 3 covers the models of communication networks needed to reason about the complexity and general effectiveness of algorithms when implemented on specific hardware. It discusses topologies such as cliques, rings, grids, and variants of the torus and hypercube, and it also touches on models for peer-to-peer computing networks.
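To give a flavor of the kind of topology reasoning the models section deals in, here is a minimal sketch (mine, not the book's) of the hypercube's defining structure: in a d-dimensional hypercube, each of the 2**d nodes carries a d-bit label, and two nodes are adjacent exactly when their labels differ in one bit.

```python
def hypercube_neighbors(node, d):
    """Return the d neighbors of `node` in a d-dimensional hypercube.

    Flipping each of the d label bits in turn yields exactly the nodes
    whose labels differ from `node` in one bit position.
    """
    return [node ^ (1 << k) for k in range(d)]

# Node 0 in a 3-cube (labels 0-7) is adjacent to nodes 1, 2, and 4.
print(hypercube_neighbors(0, 3))  # [1, 2, 4]
```

This logarithmic-degree, logarithmic-diameter structure is why the hypercube fares well in the kind of topology comparisons the book develops.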
Chapters 4 and 5 discuss parallel algorithms on rings and grids of processors. The algorithmic discussion is a foundation upon which to develop the theoretical tools for reasoning about the performance and complexity of parallel algorithms in general; these chapters are not meant as implementation guides for application developers. The authors examine matrix-vector and matrix-matrix multiplication as well as basic stencil computations and LU factorization. Basic data distribution patterns (block, cyclic, etc.) are also discussed and analyzed in the context of the communication network models to show how data distribution shapes the theoretical performance of a parallel computation. These chapters put theoretical foundations under some of the rules of thumb we have in HPC, explaining why they work, and they draw some general conclusions about the virtues and vices of the topologies with respect to one another.
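The block and cyclic distributions mentioned above can be summarized in a few lines. This is my own illustrative sketch, not code from the book: it maps row indices to owning processors under each scheme.

```python
def block_owner(i, n, p):
    """Owner of row i when n rows are split into p contiguous blocks."""
    b = -(-n // p)  # block size: ceil(n / p)
    return i // b

def cyclic_owner(i, p):
    """Owner of row i when rows are dealt out round-robin to p processors."""
    return i % p

# 8 rows over 4 processors:
print([block_owner(i, 8, 4) for i in range(8)])  # [0, 0, 1, 1, 2, 2, 3, 3]
print([cyclic_owner(i, 4) for i in range(8)])    # [0, 1, 2, 3, 0, 1, 2, 3]
```

The trade-off the book analyzes follows directly from these mappings: block distributions keep neighboring rows together (good for stencil-style locality), while cyclic distributions spread work evenly as an algorithm such as LU factorization progressively retires leading rows.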
The remaining chapters are about workload management. Chapter 6 addresses load balancing within an application running on a heterogeneous platform, e.g., a cluster with some fast and some slow(er) processors. The chapter builds its fundamental discussion around one-dimensional data distributions, for which there are tractable solutions, and examines those in the context of stencil computations and LU factorization. The authors then address the difficulties of balancing load with two-dimensional data distributions. Chapters 7 and 8 address task graph scheduling algorithms. Chapter 7 covers the fundamentals and provides the definitions and theorems needed to prove characteristics of task graph scheduling approaches. Chapter 8 advances this discussion and addresses scheduling of divisible load applications, throughput optimization for master-worker applications in steady state, scheduling of independent tasks, and loop nest scheduling.
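The one-dimensional heterogeneous load-balancing problem has an intuitive core: give each processor a number of rows proportional to its speed. The sketch below (my own simplification under that assumption, not an algorithm from the book, whose treatment is considerably more refined) uses a largest-remainder rounding to keep the shares integral.

```python
def proportional_rows(n, speeds):
    """Split n rows among processors roughly in proportion to their speeds."""
    total = sum(speeds)
    # Start from the floor of each processor's ideal (fractional) share...
    shares = [int(n * s / total) for s in speeds]
    # ...then hand the leftover rows to the processors whose ideal shares
    # have the largest fractional remainders.
    order = sorted(range(len(speeds)),
                   key=lambda k: n * speeds[k] / total - shares[k],
                   reverse=True)
    for k in order[: n - sum(shares)]:
        shares[k] += 1
    return shares

# 10 rows over processors with relative speeds 1, 2, and 4:
print(proportional_rows(10, [1, 2, 4]))  # [1, 3, 6]
```

Even this toy version hints at why the two-dimensional case is hard: in 2-D, the shares must also tile a matrix into rectangles, which turns a simple proportionality calculation into a genuinely difficult combinatorial problem.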
Parallel Algorithms is a book you study, not a book you read. Those well past their CS finals or long out of the research aspects of computer science may find portions of the discussion inaccessible. But those motivated to work through the text will be rewarded with a solid foundation for the study of parallel algorithms.