Compilers and More: Parallel Programming Made Easy?

By Michael Wolfe

September 2, 2008

Look at all the current research projects aimed at exactly that: making parallel programming easy!

The NESL language aims to “make parallel programming easy and portable.” More precisely, its goal is to allow programmers “to write parallel code as concisely and clearly as sequential code, while achieving close to peak performance.”

Microsoft’s Parallel Computing Platform team aims to provide a runtime supporting parallelism, along with models, libraries and tools that “make it easy for developers to construct correct, efficient, maintainable and scalable parallel programs.”

The CxC (C by C) language has one-sided communication operations, “which makes parallel programming easy and efficient.”

My alma mater, the University of Illinois, has a new Universal Parallel Computing Research Center, whose goal is simply “Making parallel programming easy.” Marc Snir, the Computer Science Department head, said the goal for this effort is to “make ‘parallel programming’ synonymous with ‘programming’.”

The Parallel Computing Lab at the University of California at Berkeley has adopted the goal to “make it easy to write correct programs that run efficiently on manycore systems, and which scale as the number of cores doubles every two years.”

Intel’s Threading Building Blocks (TBB) extends C++ for parallelism in an easy-to-use and efficient manner.

Tim Mattson (Intel) points out that in “our quest to find that perfect language to make parallel programming easy,” we have come up with an alarming array of parallel programming choices: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmem, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, Windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL, Split-C, and on and on.

Tim and I are colleagues on the OpenMP Language committee, which recently finalized the OpenMP 3.0 standard. OpenMP’s mission is to define a “portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer.” Is simple even easier than easy?

Every time I see someone claiming they’ve come up with a method to make parallel programming easy, I can’t take them seriously. First, “making parallel programming easy” must be harder than “making programming easy,” and I don’t think we’ve reached that first milestone yet. Yes, I can (and do) knock off quick programs to search or compute something simple, but then I can knock off a quick project to build a bridge over the seasonal stream in our backyard, too. I wouldn’t trust that bridge to freeway traffic, and I wouldn’t trust that quick program to solve anyone else’s problem, either.

All this is folly. I agree with Andrew Tanenbaum, quoted at the June 2008 USENIX conference: “Sequential programming is really hard, and parallel programming is a step beyond that.” Yes, programming is really hard. Look at the programming effort it takes to produce a well-supported world-class application. If it were easy, Microsoft wouldn’t need legions of programmers to develop and support its software. If sequential programming were a solved problem, we wouldn’t have the relatively recent introductions of new and successful languages, such as Java and C#.

Consider any software development project. Typically it begins with discussion of interfaces (with other software, with data, with users), standards (programming language, operating system, formats), and requirements (functionality and performance). Often, a prototype implementation is developed and tested. One of the tests will involve performance, especially at the limits of the expected input. If the performance is satisfactory, no further tuning is necessary. If not, the team will look at alternative implementations, different algorithms, data structures, shortcuts, etc.; one of the schemes will be to use more parallelism.

In fact, the ONLY reason to consider parallelism is for better performance. Parallelism by itself doesn’t deliver any new features or functionality; it may allow you to deliver new functionality because of improved performance, but the parallelism itself didn’t create the functionality. Most of the programs I write don’t need parallelism because they don’t take long enough to matter. Only those with large datasets or lots of computation are even candidates. I’d be wasting my time and my employer’s resources adding complexity to my program in order to use parallelism that I don’t need.

Moreover, parallelism does not equate to performance. The focus needs to be not on parallelism, but on performance, where parallelism is one of the tools to get it.

Time and again we hear advocates claiming that if you just use their compiler, language, library, tool, methodology, etc., they will guarantee good performance now and forevermore. Yet each time around, the methods are limited to the technology of the day. Let’s look at an example algorithm, matrix multiplication:

   for i in 1:n
     for j in 1:n
       for k in 1:n
         c(i,j) = c(i,j) + a(i,k)*b(k,j)

Early compilers were tuned to optimize just such programs. As pipelined functional units were introduced, libraries were written to replace the inner kernel loops, such as STACKLIB and the BLAS; we could write this loop using a DAXPY call:

   for j in 1:n
     for k in 1:n
       daxpy( n, b(k,j), a(1,k), 1, c(1,j), 1 )
       ! equivalent to:
       ! for i in 1:n
       !   c(i,j) = c(i,j) + b(k,j)*a(i,k)

The library routines were rewritten in vector mode in the 1970s. However, it was soon realized that you could achieve even higher vector performance, called supervector performance, if you optimized the inner two loops to take advantage of vector register locality. Thus came the level-2 BLAS, implementing matrix-vector operations. Matrix multiplication turned into a loop around a DGEMV call:

   for j in 1:n
   dgemv( 'n', n, n, 1.0, a(1,1), n, b(1,j), 1, 1.0, c(1,j), 1 )
     ! equivalent to:
     ! for k in 1:n
     !   for i in 1:n
     !     c(i,j) = 1.0*c(i,j) + 1.0*b(k,j)*a(i,k)

This was sufficient until the microprocessor revolution, when cache behavior came to dominate processor performance. We then were given the level-3 BLAS, implementing matrix multiplication in a single call to DGEMM, which could then be appropriately tiled, unrolled, vectorized, and optimized for each machine. Now we’re in the multicore or manycore era, and who knows where it will lead in the next decade or two? Given how well we’ve predicted what the machines of the future will look like, can we design the universal programming model today?
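For the record, the level-3 step looks like this in the same notation as the examples above; DGEMM’s standard interface computes C = alpha*A*B + beta*C, so the entire triple loop collapses into one call:

   dgemm( 'n', 'n', n, n, n, 1.0, a(1,1), n, b(1,1), n, 1.0, c(1,1), n )
   ! equivalent to:
   ! for j in 1:n
   !   for k in 1:n
   !     for i in 1:n
   !       c(i,j) = c(i,j) + 1.0*a(i,k)*b(k,j)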

One aspect of computer science is algorithm analysis, studying the computational and memory requirements of an algorithm. We learn how to reverse a linked list in linear time and constant space. We learn about the differences between bubble sort (O(N^2)), quicksort (O(N log N) on average, easily parallelizable) and heapsort (O(N log N) worst case, and my personal favorite). We learn to pay attention to the hidden constants in the big-O notation, since the constant factor can dominate the computation cost for reasonable inputs; thus, O(N^3) is almost always worse than O(N^2), but O(N log N) with a small constant may be better than O(N) with a big constant, for the expected cases.
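That “easily parallelizable” label deserves a one-line justification, sketched here in the same pseudocode style as the earlier examples: once the partition step is done, the two recursive calls touch disjoint halves of the array, so nothing stops them from running concurrently:

   quicksort(a, lo, hi):
     if lo < hi
       p = partition(a, lo, hi)
       quicksort(a, lo, p-1)   ! disjoint subarrays: these two calls
       quicksort(a, p+1, hi)   ! can safely execute in parallel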

For instance, in compilers, no one ever uses the linear time algorithm for computing dominators, because the constant factor is too big. We learn that the analysis usually translates directly to the actual performance. And we learn tricks, like hash table lookup, to work around difficult performance problems. None of this depends on the particular programming language or processor; the von Neumann model has served us well.

Algorithm analysis for parallel computing studies parallelism models, such as Bulk Synchronous Parallel (BSP), Communicating Sequential Processes (CSP), Data-Flow, and so on. These models make assumptions about the cost (or lack thereof) for synchronization and communication. Until we have machines that implement these models more closely, we need to take into account the cost of the virtualization as well. It doesn’t matter how efficient an algorithm is in some abstract machine model if the implementation of the abstract model is itself expensive.
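To make those cost assumptions concrete, BSP’s standard accounting charges each superstep

   T_superstep = w + h*g + l

where w is the largest local computation on any processor, h is the largest number of words any processor sends or receives, g is the per-word communication cost, and l is the barrier synchronization cost. On paper, g and l can be assumed small enough to make an algorithm look good; on real machines they are measured, and often painfully large.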

The current “parallelism crisis” can only be resolved by three things. First, we need to develop and, more importantly, teach a range of parallel algorithms. When we teach sorting, we take into account in-core vs. out-of-core, the cost of doing a comparison, and the cost of copying the data versus using indices. Different sort algorithms work better with different dataset characteristics. Similarly, we must teach a range of parallel algorithms, so a programmer knows what dataset and system characteristics will affect the performance, and hence the choice of algorithm.

Second, we need to expand algorithm analysis to include different parallelism styles. It’s not enough to focus on just the BSP or SIMD or any other model; we must understand several models and how they map onto the target systems. We’ve become lazy; the sequential von Neumann model has been sufficiently universal that we think we can use a single model to cover all computing. Yes, this sounds like work, and it sounds like we’re going to have to think for a living.

Finally, we need to learn how to analyze and tune actual parallel programs. Sequential performance tuning has largely been reduced to (at most) profiling to find the computationally intensive regions of code, then choosing a more efficient algorithm, a different set of compiler flags or a different compiler, or a canned library routine. Advanced analysis may consider cache bandwidth and interference or operating system jitter. Parallel performance analysis is subject to many more pitfalls: threads or processes may be delayed by synchronization or communication; shared cache usage may cause additional interference; NUMA memory accesses may require more careful attention to memory allocation strategies. Program analysis tools can help here.
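As a small illustration, here is one plausible parallelization of the running matrix-multiply example, in the same pseudocode style as before with a standard OpenMP directive added (the directive is real OpenMP; the loop notation remains our shorthand):

   !$omp parallel do private(i, k)
   for j in 1:n
     for k in 1:n
       for i in 1:n
         c(i,j) = c(i,j) + a(i,k)*b(k,j)
   ! each thread updates its own columns of c, so there are no write
   ! conflicts; yet on a NUMA system the speedup still depends on which
   ! memory node first touched each column of a, b, and c

Writing the directive is the easy part; explaining why the measured speedup falls short of the core count is where the analysis tools earn their keep.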

Stanford’s Computer Science Department Chair, Bill Dally, is quoted as saying “parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fueled the computing industry, and several related industries, for the last 40 years.” David Patterson, former ACM president and head of the Parallel Computing Laboratory at Cal Berkeley, said that “in order for parallelism to succeed, it has to result in better productivity, efficiency, and accuracy.”

Parallel processing is an obstacle, but then so is sequential processing. Parallel computing can result in better productivity, efficiency, and accuracy in the scientific process overall, but it’s silly to think that it will result in better productivity and efficiency in the programming process itself. The best we can hope for is to make parallel programming not much harder than sequential programming. Dally himself is giving a keynote speech at the International Conference on Parallel Processing in Portland in September titled “Streams: Parallel Programming Made Simple.” There’s that simple word yet again.

That’s not to say there aren’t problems to be solved. There are, and parallel programming is going to continue to be a problem. However, unlike the doomsayers and the simplifiers, I think these are hard problems, but not impossible. We solve hard problems every day. We’ve already developed several successful, widely-used parallel programming paradigms, including MPI and OpenMP. They may not be perfect, universal models, but we should learn what worked. And we have as much to learn from proposed models that did not succeed or survive, such as High Performance Fortran.

And, frankly, I have confidence in the applications programmer’s ability to develop algorithms and approaches to using parallelism. Apparently, many in the computer science community do not.

—–

Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
