September 02, 2008
Look at all the current research projects aimed at exactly that: making parallel programming easy!
The NESL language aims to "make parallel programming easy and portable." More precisely, its goal is to allow programmers "to write parallel code as concisely and clearly as sequential code, while achieving close to peak performance."
Microsoft's Parallel Computing Platform team aims to provide a runtime with support for parallelism, plus models, libraries, and tools that "make it easy for developers to construct correct, efficient, maintainable and scalable parallel programs."
The CxC (C by C) language has one-sided communication operations, "which makes parallel programming easy and efficient."
My alma mater, the University of Illinois, has a new Universal Parallel Computing Research Center, whose goal is simply "Making parallel programming easy." Marc Snir, the Computer Science Department head, said the goal for this effort is to "make 'parallel programming' synonymous with 'programming'."
The Parallel Computing Lab at the University of California at Berkeley has adopted the goal to "make it easy to write correct programs that run efficiently on manycore systems, and which scale as the number of cores doubles every two years."
Intel's Threading Building Blocks (TBB) extends C++ for parallelism in an easy-to-use and efficient manner.
Tim Mattson (Intel) points out that in "our quest to find that perfect language to make parallel programming easy," we have come up with an alarming array of parallel programming choices: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmem, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, Windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL, Split-C, and on and on.
Tim and I are colleagues on the OpenMP Language committee, which recently finalized the OpenMP 3.0 standard. OpenMP's mission is to define a "portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer." Is simple even easier than easy?
Every time I see someone claiming they've come up with a method to make parallel programming easy, I can't take them seriously. First, "making parallel programming easy" must be harder than "making programming easy," and I don't think we've reached that first milestone yet. Yes, I can (and do) knock off quick programs to search or compute something simple, but then I can knock off a quick project to build a bridge over the seasonal stream in our backyard, too. I wouldn't trust that bridge to freeway traffic, and I wouldn't trust that quick program to solve anyone else's problem, either.
All this is folly. I agree with Andrew Tanenbaum, quoted at the June 2008 Usenix conference: "Sequential programming is really hard, and parallel programming is a step beyond that." Yes, programming is really hard. Look at the programming effort it takes to produce a well-supported, world-class application. If it were easy, Microsoft wouldn't need legions of programmers to develop and support its software. If sequential programming were a solved problem, we wouldn't have the relatively recent introductions of new and successful languages, such as Java and C#.
Consider any software development project. Typically it begins with discussion of interfaces (with other software, with data, with users), standards (programming language, operating system, formats), and requirements (functionality and performance). Often, a prototype implementation is developed and tested. One of the tests will involve performance, especially at the limits of the expected input. If the performance is satisfactory, no further tuning is necessary. If not, the team will look at alternative implementations, different algorithms, data structures, shortcuts, etc.; one of the schemes will be to use more parallelism.
In fact, the ONLY reason to consider parallelism is for better performance. Parallelism by itself doesn't deliver any new features or functionality; it may allow you to deliver new functionality because of improved performance, but the parallelism itself didn't create the functionality. Most of the programs I write don't need parallelism because they don't take long enough to matter. Only those with large datasets or lots of computation are even candidates. I'd be wasting my time and my employer's resources adding complexity to my program in order to use parallelism that I don't need.
Moreover, parallelism does not equate to performance. The focus needs to be not on parallelism, but on performance, where parallelism is one of the tools to get it.
Time and again we hear advocates claiming that if you just use their compiler, language, library, tool, methodology, etc., they will guarantee good performance now and forevermore. Yet each time around, the methods are limited to the technology of the day. Let's look at an example algorithm, matrix multiplication:
for i in 1:n
    for j in 1:n
        for k in 1:n
            c(i,j) = c(i,j) + a(i,k)*b(k,j)
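For concreteness, the same triple loop in runnable C might look like this (a minimal sketch with a hypothetical wrapper name, assuming row-major n-by-n arrays and a zero-initialized result):

    /* Naive matrix multiply: c += a*b for n-by-n row-major arrays.
       Hypothetical wrapper; the caller zero-initializes c. */
    void matmul(int n, const double *a, const double *b, double *c)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i*n + j] += a[i*n + k] * b[k*n + j];
    }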
Early compilers were tuned to optimize just such programs. As pipelined functional units were introduced, libraries were written to replace the inner kernel loops, such as STACKLIB and the BLAS; we could write this loop using a DAXPY call:
for j in 1:n
    for k in 1:n
        daxpy( n, b(k,j), a(1,k), 1, c(1,j), 1 )
        ! equivalent to:
        ! for i in 1:n
        !     c(i,j) = c(i,j) + b(k,j)*a(i,k)
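In C, that rewrite might use the CBLAS interface (a sketch, assuming column-major storage as in Fortran, so that a(:,k) and c(:,j) are contiguous columns; 0-based indices replace the 1-based pseudocode):

    #include <cblas.h>

    /* Level-1 version: c(:,j) += b(k,j) * a(:,k), a loop around DAXPY. */
    void matmul_daxpy(int n, const double *a, const double *b, double *c)
    {
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                cblas_daxpy(n, b[k + j*n], &a[k*n], 1, &c[j*n], 1);
    }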
The library routines were rewritten in vector mode in the 1970s. However, it was soon realized that you could achieve even higher vector performance, called supervector performance, if you optimized the inner two loops to take advantage of vector register locality. Thus came the level-2 BLAS, implementing matrix-vector operations. Matrix multiplication turned into a loop around a DGEMV call:
for j in 1:n
    dgemv( 'n', n, n, 1.0, a(1,1), n, b(1,j), 1, 1.0, c(1,j), 1 )
    ! equivalent to:
    ! for k in 1:n
    !     for i in 1:n
    !         c(i,j) = 1.0*c(i,j) + 1.0*b(k,j)*a(i,k)
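The corresponding CBLAS sketch for the level-2 version (same column-major, 0-based assumptions):

    #include <cblas.h>

    /* Level-2 version: c(:,j) += A * b(:,j), a loop around DGEMV. */
    void matmul_dgemv(int n, const double *a, const double *b, double *c)
    {
        for (int j = 0; j < n; j++)
            cblas_dgemv(CblasColMajor, CblasNoTrans, n, n,
                        1.0, a, n, &b[j*n], 1, 1.0, &c[j*n], 1);
    }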
This was sufficient until the microprocessor revolution, where cache behavior dominated the processor performance. We then were given the level-3 BLAS, implementing matrix multiplication in a single call to DGEMM, which could then be appropriately tiled, unrolled, vectorized, and optimized for each machine. Now we're in the multicore or manycore era, and who knows where it will lead in the next decade or two? Given how well we've predicted what the machines of the future will look like, can we design the universal programming model today?
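To make that last step concrete: the whole kernel collapses to a single level-3 call. A minimal CBLAS sketch (column-major, hypothetical wrapper name):

    #include <cblas.h>

    /* Level-3 version: c = 1.0*a*b + 1.0*c in one DGEMM call; the
       library supplies the tiling, unrolling, and vectorization
       appropriate to the machine. */
    void matmul_dgemm(int n, const double *a, const double *b, double *c)
    {
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 1.0, c, n);
    }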
One aspect of computer science is algorithm analysis, studying the computational and memory requirements of an algorithm. We learn how to reverse a linked list in linear time and constant space. We learn about the differences between bubble sort (O(N^2)), quicksort (O(N log N) on average, easily parallelizable) and heapsort (O(N log N) worst case, and my personal favorite). We learn to pay attention to the hidden constants in the big-O notation, since the constant factor can dominate the computation cost for reasonable inputs; thus, O(N^3) is almost always worse than O(N^2), but O(N log N) with a small constant may be better than O(N) with a big constant, for the expected cases.
For instance, in compilers, no one ever uses the linear time algorithm for computing dominators, because the constant factor is too big. We learn that the analysis usually translates directly to the actual performance. And we learn tricks, like hash table lookup, to work around difficult performance problems. None of this depends on the particular programming language or processor; the von Neumann model has served us well.
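To illustrate the "linear time and constant space" example above, here is a C sketch of in-place list reversal (hypothetical node type):

    struct node { int value; struct node *next; };

    /* Reverse a singly-linked list in O(N) time and O(1) extra space:
       one pass over the list, flipping each next pointer as we go. */
    struct node *reverse(struct node *head)
    {
        struct node *prev = NULL;
        while (head != NULL) {
            struct node *next = head->next;  /* save the rest */
            head->next = prev;               /* flip the link */
            prev = head;
            head = next;
        }
        return prev;  /* the old tail is the new head */
    }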
Algorithm analysis for parallel computing studies parallelism models, such as Bulk Synchronous Parallel (BSP), Communicating Sequential Processes (CSP), Data-Flow, and so on. These models make assumptions about the cost (or lack thereof) for synchronization and communication. Until we have machines that implement these models more closely, we need to take into account the cost of the virtualization as well. It doesn't matter how efficient an algorithm is in some abstract machine model if the implementation of the abstract model is itself expensive.
The current "parallelism crisis" can only be resolved by three things. First, we need to develop and, more importantly, teach a range of parallel algorithms. When we teach sorting, we take into account in-core vs. out-of-core, the cost of doing a comparison, and the cost of copying the data versus using indices. Different sort algorithms work better with different dataset characteristics. Similarly, we must teach a range of parallel algorithms, so a programmer knows what dataset and system characteristics will affect the performance, and hence the choice of algorithm.
Second, we need to expand algorithm analysis to include different parallelism styles. It's not enough to focus on just the BSP or SIMD or any other model; we must understand several models and how they map onto the target systems. We've become lazy; the sequential von Neumann model has been sufficiently universal that we think we can use a single model to cover all computing. Yes, this sounds like work, and it sounds like we're going to have to think for a living.
Finally, we need to learn how to analyze and tune actual parallel programs. Sequential performance tuning has largely been reduced to (at most) profiling to find the computationally intensive regions of code, then choosing a more efficient algorithm, a different set of compiler flags or a different compiler, or a canned library routine. Advanced analysis may consider cache bandwidth and interference or operating system jitter. Parallel performance analysis is subject to many more pitfalls: threads or processes may be delayed by synchronization or communication; shared cache usage may cause additional interference; NUMA memory accesses may require more careful attention to memory allocation strategies. Program analysis tools can help here.
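Even a trivial kernel shows the kind of synchronization pitfall involved. A sketch in C with OpenMP (hypothetical function, compiled with OpenMP enabled): updating a shared total under a critical section would serialize the threads, while a reduction clause gives each thread a private partial sum and combines them only at the end.

    /* Parallel array sum: reduction(+:total) avoids serializing the
       threads on a shared accumulator. */
    double array_sum(const double *x, long n)
    {
        double total = 0.0;
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < n; i++)
            total += x[i];
        return total;
    }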
Stanford's Computer Science Department Chair, Bill Dally, is quoted as saying "parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fueled the computing industry, and several related industries, for the last 40 years." David Patterson, former ACM president and head of the Parallel Computing Laboratory at Cal Berkeley, said that "in order for parallelism to succeed, it has to result in better productivity, efficiency, and accuracy."
Parallel processing is an obstacle, but then so is sequential processing. Parallel computing can result in better productivity, efficiency, and accuracy in the scientific process overall, but it's silly to think that it will result in better productivity and efficiency in the programming process itself. The best we can hope for is to make parallel programming not much harder than sequential programming. Dally himself is giving a keynote speech at the International Conference on Parallel Processing in Portland in September titled "Streams: Parallel Programming Made Simple." There's that simple word yet again.
That's not to say there aren't problems to be solved. There are, and parallel programming is going to continue to be a problem. However, unlike the doomsayers and the simplifiers, I think these are hard problems, but not impossible. We solve hard problems every day. We've already developed several successful, widely-used parallel programming paradigms, including MPI and OpenMP. They may not be perfect, universal models, but we should learn what worked. And we have as much to learn from proposed models that did not succeed or survive, such as High Performance Fortran.
And, frankly, I have confidence in the applications programmer's ability to develop algorithms and approaches to using parallelism. Apparently, many in the computer science community do not.
Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.