Compilers and More: What Makes Performance Portable?

By Michael Wolfe

April 19, 2016

The term performance portability has appeared many times in the past few years. What does it mean? When did we first start worrying about performance portability? Why is it more relevant now than before? Why are so many discussions about performance portability so depressing? Is there any hope? I’ll address these questions and more in this article.

Is performance portability a new idea?

No. If you do an Internet search on the exact phrase performance portability, you’ll find a vast number of hits, most (though not all) relevant to HPC. Some of the highlights from over the years are discussed below.

So performance portability has been around as a concept and a goal in parallel computing for many years. It’s interesting to look at the challenges addressed in these papers over the past two decades. The 1995 Spectral Models technical memo described experiments to find the best algorithm variants and aspect ratios within the variants across different numbers of processors on a Cray T3D, IBM SP2 and Intel Paragon. The 2001 WRF paper points out that “performance and portability are important but conflicting concerns,” and that we need to develop “a quantitative understanding” of how “data and looping structures affect performance” and then engineer the code “to enable flexible tuning of these aspects across a variety of different platforms in a single source code.”

The 2005 EARTH paper states “the performance portability of parallel programs is becoming increasingly important and should be considered when designing parallel execution models, APIs, and runtime system software.” The 2012 paper proposes the Chapel language to enable performance portability. I find it interesting that this selection of articles gradually shifts the burden of performance portability from the application and the algorithm, to the program, and finally to the language and tools.

In truth, any solution to performance portability needs all of the above. An unscalable algorithm won’t deliver good performance on a more highly parallel target machine. An inappropriate data structure can affect performance almost as much as an inappropriate algorithm. And we need a way to express the program such that the compilers, tools and runtimes can exploit the features of a variety of target systems.

So, What Is Performance Portability?
If it’s not obvious, a performance portable implementation of an application or algorithm is one that will achieve high performance across a variety of target systems. High performance is relative, of course; here we mean high performance relative to what that target system can deliver.

An important question is the variety of the target systems. Twenty years ago, the Spectral Models paper was looking at parallelism across 32 to 1024 nodes of three different types of systems, where each node used only a single processor (the Intel Paragon had two processors per node, but the second processor was used as a message coprocessor in these experiments). Interestingly, only one of those systems used MPI, which was still rather new. MPI is now the standard for managing communication between nodes, though PGAS language features like Fortran coarrays and HPC++ distributed arrays have a growing user community.

Modern supercomputers all use multicore processors at each node, sometimes 32 or more cores per node. That number is rising, with systems based on the Intel Knights Landing (KNL) Xeon Phi processor supporting over 256 hardware threads per node. We also have systems with attached accelerators. Today, that is most commonly a GPU accelerator, but in the recent past other programmable accelerators, such as DSPs, have been explored. In some application areas, FPGAs are being used successfully as well. However, FPGAs have serious limitations for general purpose HPC use, including the high cost of programming (including place-and-route), the high cost of dynamically reconfiguring the network, the relatively sparse logic, and the low clock rates. For this article, we include multicore, manycore and GPU-accelerated nodes, and we allow for other possible future programmable accelerators, but we do not consider FPGAs.

Is Performance Portability Still Important?
If we look at the top four systems in the November, 2015 TOP500 supercomputer list, we see four distinct node architectures:

  1. Tianhe-2, 16,000 nodes with two 12-core Intel Ivy Bridge processors and three 57-core Intel Knights Corner Xeon Phi coprocessors per node.
  2. Titan, 18,688 nodes with one 16-core AMD Interlagos processor and one NVIDIA Tesla K20X GPU per node.
  3. Sequoia, 98,304 nodes with one 16-core IBM PowerPC A2 processor per node.
  4. The K computer, 88,128 nodes with one 8-core Fujitsu SPARC64 VIIIfx processor per node.

Four different processor vendors, two with accelerated fat nodes, two with over 50,000 thinner nodes.

We see similar variety in smaller HPC procurements around the globe. Many systems are designed around single-socket multicore x86 or POWER processors; many more include GPU accelerators; a number will be built with the Intel Knights Landing Xeon Phi processors. There are also experimental efforts that may affect future designs.

The upcoming DOE CORAL supercomputers have similar variety. The IBM+NVIDIA+Mellanox Summit machine at Oak Ridge will have about 3,500 nodes with multiple IBM OpenPOWER processors and multiple NVIDIA Volta GPUs at each node, whereas the Intel+Cray Aurora machine at Argonne will have over 50,000 nodes with a single highly parallel Intel Knights Hill Xeon Phi processor at each node. The contracts for these systems include Centers of Excellence that are implementing multi-year efforts to prepare and modernize codes for the new, highly parallel computer systems. Imagine how much less effort it would take to move those applications to a new system, and across the variety of systems, if they had been written with performance portability in mind to begin with.

Is Performance Portability More Important Now?
Is performance portability more important today than it was in the past? Not really. Supercomputing has gone through several phases. Those of us with more grey hair remember the efforts in the 1970s to port programs from sequential Control Data 6600 and 7600 machines to the vector Cray-1. Moving from single processors to multiple processors was another significant effort, and eventually, after more than ten years, resulted in the OpenMP specification. Moving from shared memory to scalable message-passing networks was yet another big effort, which produced MPI much more quickly (it’s easier to standardize a library than a language).

Performance portability receives attention now because the node programming model is changing (again). In the past 20 years, many programmers reverted to writing sequential node programs connected by message passing. When the messaging was well structured and hidden, the programmer really only thought about the sequential node program. However, today’s machines have very highly parallel nodes, and not all the parallelism on the node can be effectively exploited with MPI. Vector or SIMD parallelism, for instance, must be exploited by the compiler, so we’re going back to 1975, having to think about vector algorithms. Memory hierarchy management is being exposed, with data movement between system memory and high bandwidth memory, so we have to optimize for that, the way we optimized for virtual memory and caches in the 1980s. Utilizing shared memory can be more efficient than passing messages, so we have to look at hybrid parallelism, M+X (where M is usually MPI), which is something new for many programmers and applications.
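To make the M+X idea concrete, here is a minimal sketch, using a trivial array computation of my own invention (the array names and sizes are purely illustrative, not taken from any real application): MPI ranks own slices of a notional global array, OpenMP threads share the work within a node, and the stride-1 inner loops are left in a form the compiler can vectorize.

    // Minimal MPI+OpenMP hybrid sketch (illustrative only): MPI handles the
    // parallelism across nodes, OpenMP threads handle the cores within a node,
    // and the inner loops are left where the compiler can vectorize them.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char* argv[]) {
        int provided;
        // Request thread support so OpenMP threads can coexist with MPI.
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Each rank owns a contiguous slice of a notional global array
        // (assume nranks divides the size evenly for this sketch).
        const long nglobal = 1L << 24;
        const long nlocal  = nglobal / nranks;
        std::vector<double> a(nlocal), b(nlocal);

        // OpenMP spreads the node-local work across cores; the simple
        // stride-1 loop body is a good candidate for compiler vectorization.
        #pragma omp parallel for simd
        for (long i = 0; i < nlocal; ++i)
            a[i] = 2.0 * b[i] + 1.0;

        // Global reduction across ranks: the "M" part of M+X.
        double local_sum = 0.0, global_sum = 0.0;
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = 0; i < nlocal; ++i)
            local_sum += a[i];
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) std::printf("sum = %f\n", global_sum);
        MPI_Finalize();
        return 0;
    }

The particular kernel doesn’t matter; the point is the layering, with message passing at the outermost level and vectorizable work innermost, which is exactly the hierarchy today’s nodes expose.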

Accelerators add yet another level of programming complexity. We’ve had accelerators in some form or another for more than 40 years, dating back to at least the IBM 2938 Array Processor and the Floating Point Systems AP-120B. Today’s accelerators are different in that they are relatively inexpensive and can easily be added to a node design for a significant performance boost. Yet, in spite of the long history, there has been no dominant or de facto accelerator programming model.

Do Current Programming Models Support Performance Portability?
Does MPI support or provide performance portability? I’m not going to say that MPI, or any programming language, model or style, will give or guarantee performance portability. Programming with message passing, as in MPI, does a good job in forcing the programmer to organize the application so most computation and most data accesses are local to a node, minimizing the frequency and volume of data communication. A well designed program using MPI is likely to get pretty good performance across a wide range of systems. So, yes, for that aspect of the parallelism, MPI promotes performance portability quite well.

Do libraries give performance portability? If you find a well-supported library that solves your problem, that’s definitely the way to go. When someone else writes that library and optimizes it for the different systems you want to use, you save time and effort by building on the expertise of the library developers. This is the best kind of productivity.

Does OpenMP provide performance portability? For multiprocessor and multicore shared memory workstations, servers and nodes, OpenMP has done a very good job of supporting performance portability across a range of operating systems, processor architectures, and compiler providers. The question is whether it can handle today’s variety of node architectures. Recent presentations are not encouraging. The new OpenMPCon this past September had a presentation that reported on a recent DOE Workshop on Portability.

One of the take-away messages from that workshop, according to representatives from some of the DOE labs, was that “Most people are resigned to having different sources for different platforms, with simple #ifdef or other mechanisms.” I find this very disappointing, and depressing. Another slide in that presentation promotes the view that “Having a common code base using a portable programming environment, even if you must fill the code with if-defs or have architectural specific versions of kernels … is the only way to support maintainability.” This seems to be the complete antithesis of performance portability, implying that true performance portability is unachievable, unnecessary, and perhaps undesirable.

Some claim that OpenCL provides performance portability. This is interesting, since OpenCL was initially designed to, and has continuously claimed to, provide an “efficient, close-to-the-metal programming interface” and to “form the foundation layer of a parallel computing ecosystem,” quoting from the OpenCL specification (any version to date). This also seems to be the complete opposite of performance portability. It’s more of a performance possibility language, a standard language that allows you to get to the lowest level of each target. See the HPCwire article on OpenCL, which states “True, we may need to write a new version of our kernel to get the best performance on Architecture A, but isn’t this what we actually want?” The answer to that question, in the HPC space, is emphatically no.

There are a number of new models being proposed as solutions to the node-level performance portability problem. Chapel comes from the Cray HPCS (High Productivity Computing Systems) effort, and is proposed as a solution for the whole parallel programming problem, including across nodes, across cores and across accelerators. OpenACC started as a directive API for programming accelerators, but was designed to support multicore parallelism as well, and there is some evidence to support this now. RAJA and Kokkos are two C++ class library solutions to the programming problem, similar in some respects to the efforts to standardize parallel programming in the next C++ revision.

I have a strong preference for compiler-based solutions, partly because I am a compiler guy, but also because a compiler is the only tool that sees both the program and the target architecture. A C++ class library can provide a solution that syntactically resembles something like a parallel loop, but in fact it’s a method invocation that hopefully gets the body of the “loop” inlined for performance. The expert writing the parallel loop class has no knowledge of the body of the loop and can’t take advantage of anything like memory reference patterns. But I remain willing to be convinced.
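To illustrate the point, here is a deliberately simplified, hypothetical parallel-loop template of my own; it is not the actual RAJA or Kokkos interface, just the general shape such libraries take: the “loop” is a function template that receives the body as a lambda, so everything hinges on the compiler inlining both.

    #include <cstddef>

    // A hypothetical parallel-loop template in the general shape of a C++
    // class-library solution (not the real RAJA or Kokkos API).
    template <typename Body>
    void parallel_for(std::size_t n, Body body) {
        // A real library would dispatch on an execution-policy parameter to
        // OpenMP threads, CUDA, and so on; this sketch simply runs serially.
        for (std::size_t i = 0; i < n; ++i)
            body(i);
    }

    void scale(double* x, const double* y, std::size_t n) {
        // The library author never sees this loop body, so its stride-1 memory
        // reference pattern is invisible when the template is written; only
        // the compiler, after inlining the call and the lambda, can recover it.
        parallel_for(n, [=](std::size_t i) { x[i] = 2.0 * y[i]; });
    }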

Is There Hope For Performance Portability?
I believe there is hope, even in the relative near term. PGI has been working on OpenACC compilers, tuning them for accelerated computing as well as multicore parallelism. The main elements of OpenACC, as with the OpenMP 4 target additions, are data management and parallelism management. For accelerated computing, the data management is used to control or optimize data movement between the system memory and the high bandwidth memory of the accelerator, and the parallelism is used to generate parallel code for the accelerator. For multicore targets, the data management is mostly ignored, and the parallelism is used to generate multicore code, pretty much equivalent to OpenMP parallel loops.
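A minimal sketch of what such OpenACC code looks like (the saxpy routine is just an illustrative example): the data construct describes what must be resident in the accelerator’s memory, and the parallel loop describes the parallelism; for a multicore target, the same source can be compiled by ignoring the data clauses and mapping the loop across the cores.

    // Illustrative OpenACC saxpy: the data construct says what must be
    // resident in the accelerator (or high bandwidth) memory, the parallel
    // loop says what can run in parallel. A multicore compiler may ignore the
    // data clauses and map the same loop across the cores of the node.
    void saxpy(int n, float a, float* x, float* y) {
        #pragma acc data copyin(x[0:n]) copy(y[0:n])
        {
            #pragma acc parallel loop
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }
    }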

PGI is investigating support for the Intel Knights Landing Xeon Phi processor in the future, where the parallelism will again generate multicore code, and the data management will be used to control or optimize data movement between the system (far) memory and the high bandwidth (near) memory. The data management in OpenMP is more or less isomorphic to the data management in OpenACC, from which it borrowed the ideas. The compute management in OpenMP is much more strict than in OpenACC. As discussed in an article last year, OpenMP is thread-centric. A parallel loop directive in OpenMP doesn’t tell the compiler that the loop can be safely run in parallel; it tells the compiler that it must spread the iterations of the loop across the OpenMP threads.

In OpenACC, a parallel loop directive does declare that the loop must be data-race free, and that the iterations can be run in parallel on any parallel mechanism that is available in the target system. This is incredibly powerful, and I believe is key to achieving performance portability in any programming model. The programmer declares the parallelism, and the implementation then exploits it. No amount of compiler analysis will create parallelism that isn’t in the program, and no amount of programmer effort can predict what new machines will look like in five years or more.
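A side-by-side sketch of the distinction (the trivial loops are illustrative only): the OpenMP directive, as discussed above, prescribes how the iterations are distributed, while the OpenACC directive describes a property of the loop and leaves the mapping to the implementation.

    void scale_omp(int n, double* x) {
        // OpenMP (as described above): prescriptive. The directive instructs
        // the compiler to spread these iterations across the threads of the
        // team, whether or not that is the best mapping for the target.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            x[i] = 2.0 * x[i];
    }

    void scale_acc(int n, double* x) {
        // OpenACC: descriptive. The directive asserts that the loop has no
        // data races; the implementation may map it to gangs, workers, vector
        // lanes, host threads, or any combination the target provides.
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            x[i] = 2.0 * x[i];
    }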

It seems clear that future architectures will be more parallel, and we must be writing our applications in a way that can exploit all that parallelism. Moreover, we should demand that our programming models have a path to performance portability at whatever granularity that model supports. When choosing or designing a parallel programming method, one lesson to take from the OpenACC experience is to make your model as descriptive and declarative as possible, so the parallelism can be exploited on future targets that you can’t even envision now.

Conclusions
I don’t claim that OpenACC or any programming model is the final solution. An application is a collection of algorithms, and there will always be alternate algorithms or data layouts where one version runs better on one type of system and another version runs better on others. This goes back (at least) to the 1995 technical memo cited above. Many algorithms have tuning parameters, such as aspect ratios or tile sizes, where runtime auto-tuning can be used to great effect. Task-based runtime systems break the common bulk-synchronous programming model, allowing for more latency-tolerant communication. All of these approaches need to be explored and exploited. As I write this, the DOE Centers of Excellence are organizing a Workshop on Performance Portability for later this month, which I sadly cannot attend. I wonder what conclusions will be reached there.

About the Author
Michael Wolfe has been a compiler developer for over 40 years in both academia and industry, and has been working on the PGI compilers for the past 20 years. The opinions stated here are those of the author, and do not represent opinions of NVIDIA. Follow Michael’s tweek (weekly tweet) @pgicompilers.
