The most widely used computer programming languages today were not designed as parallel programming languages. But efforts to retrofit them for parallel programming are well underway. We can compare and contrast retrofits by looking at four key features, five key qualities, and the various implementation approaches.
In this article, I focus on the features and qualities, leaving the furious debates over best approaches (language vs. library vs. directives, and abstract and portable vs. low-level with lots of controls) for another day.
Four features we need
Any parallel programming solution, including a retrofit, should include four features: a defined memory model, synchronization, tasks, and data parallelism.
Memory model
Defining how changes to shared data become observable by different tasks has long been an under-appreciated problem. Hans-J. Boehm explained these issues in a 2004 report titled Threads Cannot Be Implemented As a Library. A well-defined ordering among accesses to distinct variables, and the independence of updates to distinct variables, are so important that they have been addressed in Java, C11, and C++11. Without these retrofits, every parallel program sits on a crumbling foundation.
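To make the retrofit concrete, here is a minimal C++11 sketch (names and values are illustrative) of the guarantee the new memory model provides: a release store paired with an acquire load makes the ordinary write to payload visible to the consumer once it sees the flag.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                 // ordinary shared variable
    std::atomic<bool> ready(false);  // flag with defined ordering

    void producer() {
        payload = 42;                                  // ordinary write
        ready.store(true, std::memory_order_release);  // publish it
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire))  // wait for publish
            ;                                           // spin
        assert(payload == 42);  // guaranteed visible by C++11 rules
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }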
Synchronization
The need for portable and efficient synchronization is substantial. The Boost libraries, Intel's Threading Building Blocks (TBB), and OpenMP offer widely used solutions, and C++11 and C11 now provide support directly. Beyond these, the concept of transactions is a topic worth exploring in a future article. Retrofitted synchronization is already helping portability; substantial opportunities remain for improving efficiency.
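As a small illustration (a sketch, not a benchmark), C++11's std::mutex and std::lock_guard provide portable locking that previously required pthreads or Windows-specific calls:

    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    long counter = 0;

    void add(int n) {
        for (int i = 0; i < n; ++i) {
            std::lock_guard<std::mutex> lock(m);  // RAII: released at scope exit
            ++counter;
        }
    }

    int main() {
        std::thread t1(add, 100000), t2(add, 100000);
        t1.join();
        t2.join();
        std::cout << counter << '\n';  // always 200000
        return 0;
    }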
Tasks, not threads
Programming should be an exercise in writing tasks that can run concurrently, without the programmer specifying the precise mapping of tasks onto hardware threads. An introduction to this challenge is The Problem with Threads by Edward A. Lee.
Mapping should be the job of tools, including run-time schedulers, not of explicit programming. This philosophy is well supported by retrofits such as OpenMP, TBB, Cilk Plus, Microsoft's Parallel Patterns Library (PPL), and Apple's Grand Central Dispatch (GCD). The need to assert some control over task-to-thread mapping to maximize performance still arises with such systems today, but that control is not always supported.
Nevertheless, applications should avoid programming directly to native threads (e.g., pthreads). Retrofits are sufficient today to make tasks the method of choice.
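A sketch of the tasks-not-threads style, assuming TBB is installed: the code states only what may run concurrently, and the TBB scheduler decides how many threads to use and where each piece runs.

    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <vector>

    void scale(std::vector<float>& v, float factor) {
        tbb::parallel_for(
            tbb::blocked_range<size_t>(0, v.size()),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    v[i] *= factor;  // a task body, not a thread
            });
    }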
Data parallel support
It should be reasonably straightforward to write a portable program that takes advantage of data parallel hardware. Ideally, data parallel support should be able to utilize vector and task parallel capabilities without a programmer having to explicitly code the division between the two.
Unfortunately, no such solution is in widespread use today, even for vectorization alone. Effective auto-parallelization depends heavily on highly optimizing compilers. Compiler intrinsics lock code into a particular vector width (MMX=64, SSE=128, AVX=256, etc.). Elemental functions in CUDA, OpenCL, and Cilk Plus offer a glimpse into possible retrofits. Intel proposes bringing the vectorization benefits of Fortran 90 array notation to C and C++ as part of the Cilk Plus project.
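For a flavor of that proposal, here is a sketch in Cilk Plus array notation (it requires a Cilk Plus-capable compiler, such as the Intel compiler): the section syntax x[0:n] expresses the whole operation at once, leaving the vector width to the compiler rather than hard-coding it with intrinsics.

    // SAXPY in Cilk Plus array notation: y = alpha * x + y.
    // One data-parallel statement; no fixed vector width.
    void saxpy(int n, float alpha, const float* x, float* y) {
        y[0:n] = alpha * x[0:n] + y[0:n];
    }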
Vector hardware is increasingly important in processors, GPUs, and co-processors. OpenCL and OpenMP wrestle today with how to embrace data parallel hardware and how tightly tied to it programming will be. Microsoft's C++ AMP will face similar challenges when it comes to market with the next Microsoft Visual Studio. Standard, abstract, portable, and effective solutions wanted!
Five qualities we should desire
Five key qualities are desirable for parallel programming: composability, sequential reasoning, communication minimization, performance portability, and safety.
All of these qualities are unobtainable in an absolute sense, whether as retrofits in an old language or with a clean slate and a new language. That is why we cannot call them features. But the more of these qualities we obtain, the better off we are, which makes them very important to keep in mind.
Composability
Composability is a well-known concept in programming, offering rules for combining different things (functions, objects, modules, etc.) so that they are easy to compose (think: combine in unanticipated ways). It is important to think of composability in terms of both correctness and performance.
OpenCL, largely because it is less abstract, has low composability on both counts. OpenMP and OpenCL have very serious performance composability problems unless they are used very carefully. Newer, more abstract retrofits (TBB, Cilk, PPL, GCD) are much more tolerant and better able to deliver high composability.
Self-composability is an essential first step, but the ability to compose multiple retrofits together is essential in the long run as well. Microsoft's Concurrency Runtime, a welcome solution for tool vendors, has allowed retrofits from multiple vendors to coexist with increased composability. Parallel programming without the ability to mix and match freely is undesirable and counterproductive.
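As a sketch of what performance composability buys (assuming TBB), a parallel routine buried in a library can be called from an already-parallel loop without oversubscribing the machine, because both levels draw on the same pool of worker threads:

    #include <tbb/parallel_for.h>
    #include <vector>

    // Imagine this routine lives inside a third-party library.
    void smooth_row(std::vector<float>& row) {
        tbb::parallel_for(size_t(0), row.size(),
                          [&](size_t i) { row[i] *= 0.5f; });
    }

    void smooth_image(std::vector<std::vector<float> >& image) {
        // Outer parallelism composes with the inner parallelism above;
        // the scheduler shares one thread pool across both levels.
        tbb::parallel_for(size_t(0), image.size(),
                          [&](size_t r) { smooth_row(image[r]); });
    }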
Composability deserves more attention than it gets.
Sequential reasoning
Sequential reasoning, the norm when reading a serial implementation, can still apply to an appropriately expressed parallel program. OpenMP introduces parallelism through hints (directives) rather than code changes, which allows the intent of a program to remain evident in the code. TBB and PPL emphasize relaxed sequential semantics to provide parallelism as an accelerator without making it mandatory for correctness. Writing a program in a sequentially consistent fashion is permitted and encouraged.
An explicit goal of Cilk Plus is to offer sequential semantics to set it apart from other retrofits. The serial elision (or C elision) of a Cilk program is touted in papers from MIT. Programming that preserves sequential semantics has received praise as easier to learn and use. The elemental functions in OpenCL, CUDA and Cilk Plus have similar objectives.
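A classic sketch of sequential semantics in Cilk (again requiring a Cilk-capable compiler): delete the two keywords below and a valid, equivalent serial C/C++ program remains. That leftover program is the serial elision.

    #include <cilk/cilk.h>

    int fib(int n) {
        if (n < 2) return n;
        int x = cilk_spawn fib(n - 1);  // may run in parallel with the next line
        int y = fib(n - 2);
        cilk_sync;                      // wait for the spawned call
        return x + y;
    }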
It is fair to say that programming in a manner that requires understanding parallel semantics in order to understand intent has fallen out of favor. Such mandatory parallelism is harder to understand and to debug.
Sequential reasoning can be extended to debuggers too. A hot area to watch here is debuggers working to present an experience more akin to sequential debugging, with features like Rogue Wave's replay capabilities in the TotalView debugger.
Rather than being a retrofit itself, sequential reasoning is more accurately thought of as something purposefully sought out and preserved in a parallel world.
Communication minimization
Performance tuning on parallel systems often focuses on ensuring data is local when you use it and on minimizing the need to move it around. Data motion means communication of some sort, and communication is generally expensive. Design and implementation decisions in retrofits, as well as in the application programming itself, often affect performance dramatically. The task-stealing schedulers of TBB, Cilk, PPL, and GCD were all designed with cache reuse strongly in mind. Retrofits to help with communication minimization are a tricky business and could use more attention.
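At the application level, the same principle shows up as blocking for locality. A sketch (the tile size is illustrative): transposing in BLOCK x BLOCK tiles reuses each cache line many times before it is evicted, instead of streaming whole rows repeatedly.

    const int N = 1024;
    const int BLOCK = 64;  // illustrative tile size, chosen to fit in cache

    void transpose_blocked(const float (&a)[N][N], float (&b)[N][N]) {
        for (int ii = 0; ii < N; ii += BLOCK)
            for (int jj = 0; jj < N; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK; ++i)      // within one tile,
                    for (int j = jj; j < jj + BLOCK; ++j)  // accesses stay local
                        b[j][i] = a[i][j];
    }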
Performance portability
The goal here is that a tuned program on one piece of hardware performs reasonably well on another piece of hardware. It is desirable to be able to describe data and tasks in such a way that performance scales as parallelism increases (number of cores, or size of vectors, or cache size, etc.). Nothing is ever fully performance portable, but more abstract retrofits tend to hold up better.
Unfortunately, implementations of abstractions can struggle to offer peak performance. It took years for compilers to offer performance for MMX or SSE competitive with assembly language programming. Use of cache-agnostic algorithms generally increases performance portability. Today, competing on performance with carefully crafted CUDA and OpenCL code can be challenging, because that coding is low-level enough to encourage, or even require, the program structure to match the hardware. The lack of performance portability of such code is frequently demonstrated, but effective alternatives remain works in progress. Language design, algorithm choices, and programming style can all affect performance portability a great deal.
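The intrinsics problem mentioned earlier makes a compact illustration. A sketch (it assumes n is a multiple of 4 in the SSE version): the first routine is locked to 128-bit SSE forever, while the second leaves the vector width to the compiler and so survives the move to AVX and beyond.

    #include <xmmintrin.h>  // SSE intrinsics

    void scale_sse(float* a, int n, float s) {  // fixed at 4 floats per step
        __m128 vs = _mm_set1_ps(s);
        for (int i = 0; i < n; i += 4)
            _mm_storeu_ps(a + i, _mm_mul_ps(_mm_loadu_ps(a + i), vs));
    }

    void scale_portable(float* a, int n, float s) {
        for (int i = 0; i < n; ++i)  // width-agnostic; the compiler
            a[i] *= s;               // vectorizes for whatever ISA it targets
    }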
Safety
Safety, meaning freedom from deadlocks and race conditions, may be the most difficult quality to provide via a retrofit. No method of adding complete safety to C or C++ has gained wide popularity. Safety has not been incorporated into non-managed languages easily, despite some valiant efforts to do so.
To make a language safe, pointers have to be removed or severely restricted. Meanwhile, tools are maturing to help us cope with safety despite the lack of direct language support, and safer coding styles and safer retrofits appear to help as well. Perhaps safety comes via a combination of "good enough" and "we can cope using tools."
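A sketch of the kind of bug no C/C++ retrofit fully prevents: the plain increment below is a data race (lost updates, undefined behavior), while the std::atomic version is race-free by construction. Tools such as race detectors can find the former; the language itself does not forbid it.

    #include <atomic>
    #include <thread>

    int unsafe = 0;            // plain int: concurrent increments race
    std::atomic<int> safe(0);  // atomic int: increments are never lost

    void work() {
        for (int i = 0; i < 100000; ++i) {
            ++unsafe;  // data race: undefined behavior, updates may be lost
            ++safe;    // well-defined: final total is always 200000
        }
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        return 0;
    }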
A journey ahead, together
There are at least four key features that any parallel programming solution should provide, and five key qualities that can make a programming model, retrofit or otherwise, more desirable. Evolution in hardware will help as well.
-----
About the author
James Reinders has helped develop supercomputers, microprocessors, and software tools for 25 years. He is a senior engineer for Intel in Hillsboro, Oregon.