January 29, 2009
In Principles of Parallel Programming (Addison-Wesley, 2008), Lawrence Snyder and Calvin Lin have created a book that is useful for students and new practitioners alike. The text assumes that readers already have an understanding of basic computer science principles (how to write a correct sequential program, the basics of algorithmic complexity, and the like), and from that foundation provides a thorough treatment of the core principles of parallel programming.
As they go along, Lin and Snyder are careful to link these principles to the current parallel hardware and software landscape, including discussions of timely topics like GPUs, the Cell processor, and the languages being developed as part of DARPA's High Productivity Computing Systems program. The book builds on computer science fundamentals in a way that keeps readers confident as they progress, while at the same time keeping the energy high and the pace brisk. The result is a fresh, pragmatic book that makes for an engaging, even fun, read.
Principles of Parallel Programming starts from the assumption that parallelism is everywhere, and successful programmers -- whether they are writing software today for the two cores on a typical desktop computer or the 130,000 cores in Roadrunner -- need to understand how to take advantage of this parallelism. The authors take this position one step further and provide tools throughout the text to enable readers to create scalable parallel software, reflecting a core belief that the degree of parallelism in all forms of computing hardware will continue to increase, and that it's better to modify a code once now than repeatedly over time.
The first three chapters establish the foundations of parallel programming. The authors start off by highlighting the complexities behind the current reality that writing correct, scalable parallel applications is substantially harder than writing correct sequential applications. Using the seemingly simple example of parallelizing the summation of a series of numbers, Lin and Snyder demonstrate the difficult decisions programmers must make as they reason about correctness, scalability, and performance, laying the groundwork for the more in-depth discussions to come.
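To see why even this small problem is tricky, consider a minimal OpenMP sketch of my own (not taken from the book): the naive version lets every thread update a shared accumulator and silently produces wrong answers, while the reduction clause gives each thread a private partial sum that is combined safely at the end.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double x[N];
        for (int i = 0; i < N; i++) x[i] = 1.0;

        double racy = 0.0;
        #pragma omp parallel for                      /* WRONG: all threads race on the shared 'racy' */
        for (int i = 0; i < N; i++) racy += x[i];

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)     /* correct: private partial sums, combined at the end */
        for (int i = 0; i < N; i++) sum += x[i];

        printf("racy result = %f (may vary), reduced result = %f\n", racy, sum);
        return 0;
    }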
Something that I very much appreciate about this book is that it is frank and direct -- the authors are clearly in favor of parallel computing, but do not make any attempt to disguise the things that make it difficult.
We could solve the portability problem by simply setting a high enough level of abstraction that none of these differences is visible; then a compiler will map the high level specification to the platform. The strategy will make our programs insensitive to the parallel hardware, but it's not a good idea. Generally, though compilers can perform the mapping, they will usually introduce software layers to implement the abstractions; the added software hides the performance features of the hardware, making it difficult for programmers to know how their code behaves. We cannot divorce ourselves entirely from the underlying hardware if we want high performance.
Here we see that they are able to keep the conversation about the tensions between portability and performance honest without becoming bogged down in the minutiae of specific individual languages or hardware (though we do see details later). Throughout this book the authors are able to crystallize the essence of an argument with just enough context to allow the reader to grasp the important first- and second-order effects that drive a decision, but without burdening the reader with information he doesn't need to properly assess the issue.
A brief review of hardware follows that covers the major flavors of parallel hardware currently available. The hardware review is a great example of one of the real hallmarks of this book: its timeliness. Discussions of hardware, languages, libraries, compilers, and applications throughout the book focus on what is available in the community today, making this a very practical resource for folks who are already out of school and need to understand the lay of the modern parallel landscape. Examples include Intel and AMD multicore chips, the Sun Fire E25K, the Cell processor, HP's Cluster Platform 6000, and Blue Gene.
As the authors develop a picture of the performance relationship between algorithms and architectures, they are careful to frame the discussion in terms of principles that are general enough to be broadly applicable but specific enough to be actionable. Rather than mounting an architecture-specific discussion about how to optimize cache hits for various levels of the Intel memory hierarchy, and then repeating that discussion for other major architectures, Lin and Snyder broadly discuss the differences in the major hardware designs and encapsulate all the various ways that programmers can optimize for those platforms in a single rule:
Fast programs tend to maximize the number of local memory references and minimize the number of non-local memory references.
As another example:
Parallel programs are more scalable when they emphasize blocks of computation -- typically the larger the block the better -- that minimize the inter-thread dependences.
This approach is used throughout the book to distill broadly applicable principles from high-level (but effective) discussions of the specifics of several individual example cases.
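As a rough illustration of both rules (my own sketch, not the authors' code): splitting the work into one large, contiguous block per thread keeps each thread's memory references local and unit-stride, and leaves only a single inter-thread interaction at the end, when the partial results are combined.

    #include <stdio.h>
    #include <omp.h>

    #define N (1 << 20)

    static double x[N];

    int main(void) {
        for (int i = 0; i < N; i++) x[i] = 1.0;

        double sum = 0.0;
        #pragma omp parallel
        {
            int t  = omp_get_thread_num();
            int nt = omp_get_num_threads();
            long lo = (long)N * t / nt;          /* one large, contiguous block per thread */
            long hi = (long)N * (t + 1) / nt;

            double partial = 0.0;                /* private: no inter-thread dependence in the loop */
            for (long i = lo; i < hi; i++)
                partial += x[i];                 /* unit-stride, cache-friendly local references */

            #pragma omp critical                 /* threads interact only once, to combine results */
            sum += partial;
        }
        printf("sum = %f\n", sum);
        return 0;
    }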
Lin and Snyder also emphasize good scientific method and sound approaches to programming and testing. For example, in the performance discussions in chapters 3 and 11, much attention is given to how speedup is measured, what exactly is being measured (speedup, relative speedup, scaled speedup, and so on), and how to report the results so that others studying your measurements clearly understand how those numbers were gathered and what they mean. This is sound practice, and it translates into ethical programmer behavior.
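For readers new to these terms, a small sketch (with made-up numbers, not figures from the book) shows the basic quantities: speedup is the best sequential time divided by the parallel time on P processors, and efficiency is speedup divided by P. Relative speedup instead uses the parallel code on one processor as the baseline.

    #include <stdio.h>

    int main(void) {
        /* made-up timings in seconds; t_serial is the best sequential implementation */
        double t_serial     = 100.0;
        double t_parallel[] = {100.0, 52.0, 27.0, 15.0};
        int    procs[]      = {1, 2, 4, 8};

        for (int i = 0; i < 4; i++) {
            double speedup    = t_serial / t_parallel[i];   /* relative speedup would divide by t_parallel[0] instead */
            double efficiency = speedup / procs[i];
            printf("P = %d: speedup = %.2f, efficiency = %.2f\n", procs[i], speedup, efficiency);
        }
        return 0;
    }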
Once the foundations are laid, the authors go on to present the major types of parallelism (fixed, unlimited and scalable) along with the major abstractions (reduce, scan, etc.) that contribute to reasoning about, and constructing, effective parallel applications. This general conversation then moves into specific techniques for building scalable parallel codes, with emphasis on creating effective patterns of communication and computation (including allocation of work, batching of work and communication, and data distribution). Throughout this section of the book, the discussion is made more concrete through examples, but following their style of presenting only enough detail to illuminate a point, the examples are presented in a pseudocode that eliminates focus on specific languages.
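As a concrete (if sequential) illustration of the two abstractions named above, written by me rather than taken from the book's pseudocode: reduce collapses an array into a single value, while a scan produces all of the running totals; both have well-known tree-structured parallel formulations.

    #include <stdio.h>

    /* reduce: collapse an array into a single value */
    double reduce_sum(const double *x, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i];                    /* a parallel version combines partial sums in a tree */
        return s;
    }

    /* scan (exclusive prefix sum): out[i] holds the sum of x[0..i-1] */
    void exclusive_scan(const double *x, double *out, int n) {
        double running = 0.0;
        for (int i = 0; i < n; i++) {     /* parallelizable with an up-sweep/down-sweep pass */
            out[i] = running;
            running += x[i];
        }
    }

    int main(void) {
        double x[5] = {1, 2, 3, 4, 5}, pre[5];
        exclusive_scan(x, pre, 5);
        printf("reduce = %.0f; scan = %.0f %.0f %.0f %.0f %.0f\n",
               reduce_sum(x, 5), pre[0], pre[1], pre[2], pre[3], pre[4]);
        return 0;
    }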
At this point Snyder and Lin have established a foundation of principles, performance, and the building blocks of scalable applications. The next stage of the book examines the languages available to programmers to translate those concepts into working code. In a series of four chapters, they present the major attributes and performance characteristics of threading approaches (including POSIX Threads, Java threads, and OpenMP), MPI (and other local-view languages like Co-array Fortran), and global-view languages like ZPL. This section closes with a chapter that looks at the languages as a group, assessing them in terms of their ability to enable the developer to reason about performance and ensure correctness while creating scalable, portable codes.
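To make the local-view idea concrete, here is a tiny MPI sketch of mine (not an example from the book): each process sees only its own slice of the data and must combine results explicitly with a collective operation.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)(rank + 1);   /* each rank computes only from the data it owns */
        double total = 0.0;
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum across %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }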
The book closes with a look at future directions in both languages and hardware, bringing in some of the most recent directions arising from DARPA's HPCS effort and others, and includes a few notes on useful capstone projects for those who might use the text as part of a class.
Principles of Parallel Programming is an especially valuable resource for up-and-coming HPC practitioners, and those still in school or just recently in the workplace will find a lot here that will put what they've already learned on a solid, intensely practical foundation. The book is honest and engaging, and throughout it the authors remain steadfast in their commitment to parallelism while recognizing the (many) challenges and open issues that the community faces.
Principles is also very timely, comparing and contrasting the features of most of the major hardware and software platforms parallel programmers are likely to encounter in their careers today. This timeliness also means that these portions of the book will age and become irrelevant over time, reducing part of its charm. I hope that the authors are able to refresh the text from time to time to maintain the complete depth of its current relevance.
The authors do an outstanding job of not assuming too much knowledge but still not boring the reader to tears with CS fundamentals. There were a few spots containing jargon or references that might trip up an especially inexperienced reader -- for example, the impact of "speed of light delays" is discussed with no explanation of what these delays might be, and the authors assume a basic understanding of the analysis of algorithms. But these issues are rare and the text remains accessible.
In a few places I found myself wanting more references than the authors provide. For example, on page 46 the authors talk briefly about the PRAM computer model and mention a wealth of literature available, but don't provide any references. Surely the reader can go look these up herself, but a few choice references selected for their particular relevance or insight by the authors would provide some additional value.
The book focuses on scalability, performance, and portability throughout, but this discussion tends to be built on the presumption that the ultimate goal is the best performance from the machine being used. Some attention is paid to the degree to which a particular abstraction supports the developer's ability to reason about correctness and those other qualities, but the productivity or efficiency of the programmer is not a core issue for this book (notwithstanding the degree to which a scalable program with portable performance supports programmer productivity by avoiding future changes to source code). This isn't necessarily a problem, but it is something to keep in mind.