Compilers and More: What Makes Performance Portable?

By Michael Wolfe

April 19, 2016

The term performance portability has appeared many times in the past few years. What does it mean? When did we first start worrying about performance portability? Why is it more relevant now than before? Why are so many discussions about performance portability so depressing? Is there any hope? I’ll address these questions and more in this article.

Is performance portability a new idea?

No. If you do an Internet search on the exact phrase performance portability, you’ll find a vast number of hits, most (though not all) relevant to HPC. Highlights over the years include a 1995 technical memo on parallel spectral models, the 2001 WRF design paper, a 2005 paper on the EARTH execution model, and a 2012 paper proposing the Chapel language, all discussed below.

So performance portability has been around as a concept and a goal in parallel computing for many years. It’s interesting to look at the challenges addressed in these papers over the past two decades. The 1995 Spectral Models technical memo described experiments to find the best algorithm variants and aspect ratios within the variants across different numbers of processors on a Cray T3D, IBM SP2 and Intel Paragon. The 2001 WRF paper points out that “performance and portability are important but conflicting concerns,” and that we need to develop “a quantitative understanding” of how “data and looping structures affect performance” and then engineer the code “to enable flexible tuning of these aspects across a variety of different platforms in a single source code.”

The 2005 EARTH paper states “the performance portability of parallel programs is becoming increasingly important and should be considered when designing parallel execution models, APIs, and runtime system software.” The 2012 paper proposes the Chapel language to enable performance portability. I find it interesting that this selection of articles gradually shifts the burden of performance portability from the application and algorithm, to the program, and finally to the language and tools.

In truth, any solution to performance portability needs all of the above. An unscalable algorithm won’t deliver good performance on a more highly parallel target machine. An inappropriate data structure can affect performance almost as much as an inappropriate algorithm. And we need a way to express the program such that the compilers, tools and runtimes can exploit the features of a variety of target systems.

So, What Is Performance Portability?
If it’s not obvious, a performance portable implementation of an application or algorithm is one that will achieve high performance across a variety of target systems. High performance is relative, but we mean high performance for that target system.

An important question is the variety of the target systems. Twenty years ago, the Spectral Models paper was looking at parallelism across 32 to 1024 nodes of three different types of systems, where each node used only a single processor (the Intel Paragon had two processors per node, but the second processor was used as a message coprocessor in these experiments). Interestingly, only one of those systems used MPI, which was still rather new. MPI is now the standard for managing communication between nodes, though PGAS language features like Fortran coarrays and HPC++ distributed arrays have a growing user community.

Modern supercomputers all use multicore processors at each node, sometimes 32 or more cores per node. That number is rising, with systems based on the Intel Knights Landing Xeon Phi (KNL) processor supporting over 256 hardware threads per node. We also have systems with attached accelerators. Today, that is most commonly a GPU accelerator, but in the recent past other programmable accelerators have been explored, such as DSPs. In some application areas, FPGAs are being used successfully as well. However, FPGAs have serious limitations for general purpose HPC use, including the high cost of programming (place-and-route in particular), the high cost of dynamic reconfiguration, and the relatively sparse logic and low clock rates. For this article, we include multicore, manycore and GPU-accelerated nodes, and we allow for other possible future programmable accelerators, but we do not consider FPGAs.

Is Performance Portability Still Important?
If we look at the top four systems in the November 2015 TOP500 supercomputer list, we see four distinct node architectures:

  1. Tianhe-2, 16,000 nodes with two 12-core Intel Ivy Bridge processors and three 57-core Intel Knights Corner Xeon Phi coprocessors per node.
  2. Titan, 18,688 nodes with one 16-core AMD Interlagos processor and one NVIDIA Tesla K20X GPU per node.
  3. Sequoia, 98,304 nodes with one 16-core IBM PowerPC A2 processor per node.
  4. The K computer, 88,128 nodes with one 8-core Fujitsu SPARC64 VIIIfx processor per node.

Four different processor vendors, two with accelerated fat nodes, two with over 50,000 thinner nodes.

We see similar variety in smaller HPC procurements around the globe. Many systems are designed around single-socket multicore x86 or POWER processors; many more include GPU accelerators; a number will be built with the Intel Knights Landing Xeon Phi processors. There are also experimental efforts that may affect future designs.

The upcoming DOE CORAL supercomputers have similar variety. The IBM+NVIDIA+Mellanox Summit machine at Oak Ridge will have about 3500 nodes with multiple IBM OpenPOWER processors and multiple NVIDIA Volta GPUs at each node, whereas the Intel+Cray Aurora machine at Argonne will have over 50,000 nodes with a single highly parallel Intel Knights Hill Xeon Phi processor at each node. The contracts for these systems include Centers of Excellence that are implementing multi-year efforts to prepare and modernize codes for the new, highly parallel computer systems. Imagine how much less effort it would take to move those applications to a new system, and across this variety of systems, had they been written with performance portability in mind to begin with.

Is Performance Portability More Important Now?
Is performance portability more important today than it was in the past? Not really. Supercomputing has gone through several phases. Those of us with more grey hair remember the efforts in the 1970s to port programs from sequential Control Data 6600 and 7600 machines to the vector Cray-1. Moving from single processors to multiple processors was another significant effort, and eventually, after more than ten years, resulted in the OpenMP specification. Moving from shared memory to scalable message-passing networks was yet another big effort, which produced MPI much more quickly (it’s easier to standardize a library than a language).

Performance portability receives attention now because the node programming model is changing (again). In the past 20 years, many programmers reverted to writing sequential node programs connected by message passing. When the messaging was well structured and hidden, the programmer really only thought about the sequential node program. However, today’s machines have very highly parallel nodes, and not all the parallelism on the node can be effectively exploited with MPI. Vector or SIMD parallelism, for instance, must be exploited by the compiler, so we’re going back to 1975, having to think about vector algorithms. Memory hierarchy management is being exposed, with data movement between system memory and high bandwidth memory, so we have to optimize for that, the way we optimized for virtual memory and caches in the 1980s. Utilizing shared memory can be more efficient than passing messages, so we have to look at hybrid parallelism, M+X (where M is usually MPI), which is something new for many programmers and applications.
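To make the M+X style concrete, here is a minimal sketch of MPI across nodes combined with OpenMP threads and compiler-vectorized loops within a node. This is my own illustration, not code from the article; the variable names are hypothetical and error handling is omitted.

```cpp
// Minimal M+X sketch: M = MPI across nodes, X = OpenMP (plus SIMD) on a node.
// Illustrative only: even division of work assumed, error handling omitted.
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int n_global = 1 << 24;
    const int n_local  = n_global / nranks;        // each rank owns a slice
    std::vector<double> a(n_local), b(n_local, 1.0);

    // Node-level parallelism: OpenMP threads across cores, with the
    // compiler asked to vectorize (SIMD) each thread's iterations.
    #pragma omp parallel for simd
    for (int i = 0; i < n_local; ++i)
        a[i] = 2.0 * b[i] + 1.0;

    // Cross-node parallelism: combine per-rank partial sums with MPI.
    double local_sum = 0.0, global_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n_local; ++i)
        local_sum += a[i];
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```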

Accelerators add yet another level of programming complexity. We’ve had accelerators in some form or another for more than 40 years, dating back to at least the IBM 2938 Array Processor and the Floating Point Systems AP-120B. Today’s accelerators are different, in that they are relatively inexpensive and can be easily added to node design for a significant performance boost. Yet, in spite of the long history, there has been no dominant or de facto accelerator programming model.

Do Current Programming Models Support Performance Portability?
Does MPI support or provide performance portability? I’m not going to say that MPI, or any programming language, model or style, will give or guarantee performance portability. Programming with message passing, as in MPI, does a good job in forcing the programmer to organize the application so most computation and most data accesses are local to a node, minimizing the frequency and volume of data communication. A well designed program using MPI is likely to get pretty good performance across a wide range of systems. So, yes, for that aspect of the parallelism, MPI promotes performance portability quite well.

Do libraries give performance portability? If you find a well-supported library that solves your problem, that’s definitely the way to go. When someone else writes that library and optimizes it for the different systems you want to use, you save time and effort by building on the expertise of the library developers. This is the best kind of productivity.

Does OpenMP provide performance portability? For multiprocessor and multicore shared memory workstations, servers and nodes, OpenMP has done a very good job of supporting performance portability, across a range of operating systems, processor architectures, and compiler providers. The question is whether it can handle today’s variety of node architectures. Recent presentations are not encouraging. The new OpenMPCon this past September had a presentation that reported on a recent DOE Workshop on Portability.

One of the take-away messages from that workshop, according to representatives from some of the DOE labs, was that “Most people are resigned to having different sources for different platforms, with simple #ifdef or other mechanisms.” I find this very disappointing and depressing. Another slide in that presentation promotes the view that “Having a common code base using a portable programming environment, even if you must fill the code with if-defs or have architectural specific versions of kernels … is the only way to support maintainability.” This seems to be the complete antithesis of performance portability: a concession that true portability is unachievable, unnecessary, and perhaps undesirable.
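For illustration, the “different sources for different platforms” approach the workshop described typically looks something like the sketch below. This is my own example, not code from the workshop; TARGET_GPU, TARGET_KNL and gpu_stencil_launch are hypothetical names.

```cpp
// Per-platform kernel variants selected at build time with #ifdef.
// Each platform effectively carries its own hand-tuned source.
void gpu_stencil_launch(double *out, const double *in, int n); // hypothetical, defined elsewhere

void stencil(double *out, const double *in, int n) {
#if defined(TARGET_GPU)
    // GPU-specific variant, e.g., a CUDA kernel launched from a wrapper.
    gpu_stencil_launch(out, in, n);
#elif defined(TARGET_KNL)
    // Manycore variant: wide SIMD, tuned for high bandwidth memory.
    #pragma omp parallel for simd
    for (int i = 1; i < n - 1; ++i)
        out[i] = 0.25 * (in[i - 1] + 2.0 * in[i] + in[i + 1]);
#else
    // Generic multicore fallback.
    #pragma omp parallel for
    for (int i = 1; i < n - 1; ++i)
        out[i] = 0.25 * (in[i - 1] + 2.0 * in[i] + in[i + 1]);
#endif
}
```

Every new target multiplies the number of variants to write, tune, test and maintain, which is exactly the burden a performance portable model is supposed to remove.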

Some claim that OpenCL provides performance portability. This is interesting, since OpenCL was initially designed to, and has continuously claimed to, provide an “efficient, close-to-the-metal programming interface” and to “form the foundation layer of a parallel computing ecosystem,” quoting from the OpenCL specification (from any version to date). This also seems to be the complete opposite of performance portability. It’s more of a performance possibility language, a standard language that allows you to get to the lowest level of each target. See the HPCwire article on OpenCL, which states “True, we may need to write a new version of our kernel to get the best performance on Architecture A, but isn’t this what we actually want?” The answer to that question, in the HPC space, is emphatically no.

There are a number of new models being proposed as solutions to the node-level performance portability problem. Chapel comes from the Cray HPCS (High Productivity Computing Systems) effort, and is proposed as a solution for the whole parallel programming problem, including across nodes, across cores and across accelerators. OpenACC started as a directive API for programming accelerators, but was designed to support multicore parallelism as well, and there is some evidence to support this now. RAJA and Kokkos are two C++ class library solutions to the programming problem, similar in some respects to the efforts to standardize parallel programming in the next C++ revision.

I have a strong preference for compiler-based solutions, partly because I am a compiler guy, but also because a compiler is the only tool that sees both the program and the target architecture. A C++ class library can provide a solution that syntactically resembles something like a parallel loop, but in fact it’s a method invocation that hopefully gets the body of the “loop” inlined for performance. The expert writing the parallel loop class has no knowledge of the body of the loop and can’t take advantage of anything like memory reference patterns. But I remain willing to be convinced.
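To make the point concrete, here is a minimal sketch of the mechanism; it is my own illustration, only loosely in the spirit of RAJA or Kokkos, and not their actual APIs.

```cpp
// Sketch of a class-library "parallel loop": really a template method
// invocation whose body arrives as an opaque lambda. Not the RAJA or
// Kokkos API, just the underlying mechanism.
#include <cstddef>

template <typename Body>
void parallel_for(std::size_t n, Body body) {
    // Written once by the library author, who knows nothing about the
    // loop body or its memory reference patterns. Performance depends
    // on the compiler inlining the call to body(i).
    #pragma omp parallel for
    for (std::size_t i = 0; i < n; ++i)
        body(i);
}

void scale(double *a, const double *b, std::size_t n) {
    // Reads like a parallel loop, but is a function call with a lambda.
    parallel_for(n, [=](std::size_t i) { a[i] = 2.0 * b[i]; });
}
```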

Is There Hope For Performance Portability?
I believe there is hope, even in the relative near term. PGI has been working on OpenACC compilers, tuning them for accelerated computing as well as multicore parallelism. The main elements of OpenACC, as with the OpenMP 4 target additions, are data management and parallelism management. For accelerated computing, the data management is used to control or optimize data movement between the system memory and the high bandwidth memory of the accelerator, and the parallelism is used to generate parallel code for the accelerator. For multicore targets, the data management is mostly ignored, and the parallelism is used to generate multicore code, pretty much equivalent to OpenMP parallel loops.
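A minimal single-source sketch (mine, not from the article) shows both elements. On an accelerator target the data construct controls movement between system memory and device memory; on a multicore target the same directives are mostly ignored and the loop simply runs across the host cores.

```cpp
// Single-source OpenACC sketch: data management plus parallelism.
void saxpy(int n, float alpha, const float *x, float *y) {
    // Data construct: on an accelerator, copy x in and y in/out of
    // device memory; on a multicore host this is essentially a no-op.
    #pragma acc data copyin(x[0:n]) copy(y[0:n])
    {
        // Parallel loop: mapped to GPU gangs/vectors or to host cores,
        // depending on the compilation target.
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            y[i] = alpha * x[i] + y[i];
    }
}
```

With the PGI compilers of that era, the target was selected at compile time, for example -ta=tesla for a GPU or -ta=multicore for the host, without touching the source.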

PGI is investigating support for the Intel Knights Landing Xeon Phi processor in the future, where the parallelism will again generate multicore code, and the data management will be used to control or optimize data movement between the system (far) memory and the high bandwidth (near) memory. The data management in OpenMP is more or less isomorphic to the data management in OpenACC, from which it borrowed the ideas. The compute management in OpenMP is much more strict than in OpenACC. As discussed in an article last year, OpenMP is thread-centric. A parallel loop directive in OpenMP doesn’t tell the compiler that the loop can be safely run in parallel; it tells the compiler that it must spread the iterations of the loop across the OpenMP threads.
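To illustrate the isomorphism (my sketch, using the OpenMP 4 target directives), the OpenMP analogue of the OpenACC data region above maps almost clause for clause:

```cpp
// OpenMP 4 analogue of the OpenACC data region: the map clauses play
// the role of copyin/copy, and could likewise be treated as placement
// hints (or ignored) on a self-hosted target with near and far memory.
void saxpy_omp(int n, float alpha, const float *x, float *y) {
    #pragma omp target data map(to: x[0:n]) map(tofrom: y[0:n])
    {
        #pragma omp target teams distribute parallel for
        for (int i = 0; i < n; ++i)
            y[i] = alpha * x[i] + y[i];
    }
}
```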

In OpenACC, a parallel loop directive does declare that the loop must be data-race free, and that the iterations can be run in parallel on any parallel mechanism that is available in the target system. This is incredibly powerful, and I believe is key to achieving performance portability in any programming model. The programmer declares the parallelism, and the implementation then exploits it. No amount of compiler analysis will create parallelism that isn’t in the program, and no amount of programmer effort can predict what new machines will look like in five years or more.
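The practical difference shows up in two directives that look nearly identical; this sketch is mine, restating the semantics described above.

```cpp
// Two loop directives that look alike but assert different things.
void scale_omp(int n, double *a, const double *b) {
    // OpenMP (prescriptive, thread-centric): spread these iterations
    // across the threads of the OpenMP team, as written.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        a[i] = 2.0 * b[i];
}

void scale_acc(int n, double *a, const double *b) {
    // OpenACC (descriptive): the loop is data-race free; run its
    // iterations in parallel on whatever the target offers, whether
    // host threads, SIMD lanes, or GPU gangs and vectors.
    #pragma acc parallel loop
    for (int i = 0; i < n; ++i)
        a[i] = 2.0 * b[i];
}
```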

It seems clear that future architectures will be more parallel, and we must be writing our applications in a way that can exploit all that parallelism. Moreover, we should demand that our programming models have a path to performance portability at whatever granularity that model supports. When choosing or designing a parallel programming method, a lesson you need to apply from experience with OpenACC is to make your model as descriptive and declarative as possible, to allow for exploitation on future targets that you can’t even envision now.

Conclusions
I don’t claim that OpenACC or any programming model is the final solution. An application is a collection of algorithms, and there will always be alternate algorithms or data layouts where one version runs better on one type of system and another version runs better on others. This goes back (at least) to the 1995 technical memo cited above. Many algorithms have tuning parameters, such as aspect ratios or tile sizes, where runtime auto-tuning can be used to great effect. Task-based runtime systems break the common bulk-synchronous programming model, allowing for more latency-tolerant communication. All of these approaches need to be explored and exploited. As I write this, the DOE Centers of Excellence are organizing a Workshop on Performance Portability for later this month, which I sadly cannot attend. I wonder what conclusions will be reached there.

About the Author
Michael Wolfe has been a compiler developer for over 40 years in both academia and industry, and has been working on the PGI compilers for the past 20 years. The opinions stated here are those of the author, and do not represent opinions of NVIDIA. Follow Michael’s tweek (weekly tweet) @pgicompilers.
