Compilers and More: What Makes Performance Portable?

By Michael Wolfe

April 19, 2016

The term performance portability has appeared many times in the past few years. What does it mean? When did we first start worrying about performance portability? Why is it more relevant now than before? Why are so many discussions about performance portability so depressing? Is there any hope? I’ll address these questions and more in this article.

Is performance portability a new idea?

No. If you do an Internet search on the exact phrase performance portability, you’ll find a vast number of hits, most (though not all) relevant to HPC. Among the highlights from over the years are the 1995 Spectral Models technical memo, the 2001 WRF design paper, the 2005 EARTH paper, and a 2012 paper proposing the Chapel language, all discussed below.

So performance portability has been around as a concept and a goal in parallel computing for many years. It’s interesting to look at the challenges addressed in these papers over the past two decades. The 1995 Spectral Models technical memo described experiments to find the best algorithm variants and aspect ratios within the variants across different numbers of processors on a Cray T3D, IBM SP2 and Intel Paragon. The 2001 WRF paper points out that “performance and portability are important but conflicting concerns,” and that we need to develop “a quantitative understanding” of how “data and looping structures affect performance” and then engineer the code “to enable flexible tuning of these aspects across a variety of different platforms in a single source code.”

The 2005 EARTH paper states “the performance portability of parallel programs is becoming increasingly important and should be considered when designing parallel execution models, APIs, and runtime system software.” The 2012 paper proposes the Chapel language to enable performance portability. I find it interesting that this selection of articles gradually shifts the burden of performance portability from the application and the algorithm, to the program, and finally to the language and tools.

In truth, any solution to performance portability needs all of the above. An unscalable algorithm won’t deliver good performance on a more highly parallel target machine. An inappropriate data structure can affect performance almost as much as an inappropriate algorithm. And we need a way to express the program such that the compilers, tools and runtimes can exploit the features of a variety of target systems.

So, What Is Performance Portability?
If it’s not obvious, a performance portable implementation of an application or algorithm is one that will achieve high performance across a variety of target systems. High performance is relative, but we mean high performance for that target system.

An important question is the variety of the target systems. Twenty years ago, the Spectral Models paper was looking at parallelism across 32 to 1024 nodes of three different types of systems, where each node used only a single processor (the Intel Paragon had two processors per node, but the second processor was used as a message coprocessor in these experiments). Interestingly, only one of those systems used MPI, which was still rather new. MPI is now the standard for managing communication between nodes, though PGAS language features like Fortran coarrays and HPC++ distributed arrays have a growing user community.

Modern supercomputers all use multicore processors at each node, sometimes 32 or more cores per node. That number is rising, with systems based on the Intel Knights Landing (KNL) Xeon Phi supporting over 256 hardware threads per node. We also have systems with attached accelerators. Today, that is most commonly a GPU accelerator, but in the recent past other programmable accelerators have been explored, such as DSPs. In some application areas, FPGAs are being used successfully as well. However, FPGAs have serious limitations for general purpose HPC use, including the high cost of programming (including place-and-route), the high cost of dynamically reconfiguring the network, the relatively sparse logic and low clock rate. For this article, we include multicore, manycore and GPU-accelerated nodes, and we allow for other possible future programmable accelerators, but we do not consider FPGAs.

Is Performance Portability Still Important?
If we look at the top four systems in the November, 2015 TOP500 supercomputer list, we see four distinct node architectures:

  1. Tianhe-2, 16,000 nodes with two 12-core Intel Ivy Bridge processors and three 57-core Intel Knights Corner Xeon Phi coprocessors per node.
  2. Titan, 18,688 nodes with one 16-core AMD Interlagos processor and one NVIDIA Tesla K20X GPU per node.
  3. Sequoia, 98,304 nodes with one 16-core IBM PowerPC A2 processor per node.
  4. The K computer, 88,128 nodes with one 8-core Fujitsu SPARC64 VIIIfx processor per node.

Four different processor vendors, two with accelerated fat nodes, two with over 50,000 thinner nodes.

We see similar variety in smaller HPC procurements around the globe. Many systems are designed around single-socket multicore x86 or POWER processors; many more include GPU accelerators; a number will be built with the Intel Knights Landing Xeon Phi processors. There are also experimental efforts that may affect future designs.

The upcoming DOE CORAL supercomputers have similar variety. The IBM+NVIDIA+Mellanox Summit machine at Oak Ridge will have about 3,500 nodes with multiple IBM OpenPOWER processors and multiple NVIDIA Volta GPUs at each node, whereas the Intel+Cray Aurora machine at Argonne will have over 50,000 nodes with a single highly parallel Intel Knights Hill Xeon Phi processor at each node. The contracts for these systems include Centers of Excellence that are implementing multi-year efforts to prepare and modernize codes for the new, highly parallel computer systems. Imagine how much less effort it would take to move those applications to a new system, and across the whole variety of systems, had they been written with performance portability in mind to begin with.

Is Performance Portability More Important Now?
Is performance portability more important today than it was in the past? Not really. Supercomputing has gone through several phases. Those of us with more grey hair remember the efforts in the 1970s to port programs from sequential Control Data 6600 and 7600 machines to the vector Cray-1. Moving from single processors to multiple processors was another significant effort, and eventually, after more than ten years, resulted in the OpenMP specification. Moving from shared memory to scalable message-passing networks was yet another big effort, which produced MPI much more quickly (it’s easier to standardize a library than a language).

Performance portability receives attention now because the node programming model is changing (again). In the past 20 years, many programmers reverted to writing sequential node programs connected by message passing. When the messaging was well structured and hidden, the programmer really only thought about the sequential node program. However, today’s machines have very highly parallel nodes, and not all the parallelism on the node can be effectively exploited with MPI. Vector or SIMD parallelism, for instance, must be exploited by the compiler, so we’re going back to 1975, having to think about vector algorithms. Memory hierarchy management is being exposed, with data movement between system memory and high bandwidth memory, so we have to optimize for that, the way we optimized for virtual memory and caches in the 1980s. Utilizing shared memory can be more efficient than passing messages, so we have to look at hybrid parallelism, M+X (where M is usually MPI), which is something new for many programmers and applications.
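For readers who have not yet written hybrid code, here is a minimal M+X sketch, with M being MPI and X being OpenMP: MPI carries the cross-node parallelism, an OpenMP loop carries the on-node threads, and a simd clause hints at the vector parallelism the compiler must exploit. The array size and the saxpy-style update are placeholders, not taken from any particular application.

```cpp
// Minimal hybrid MPI+OpenMP sketch: MPI across nodes, OpenMP threads (and
// compiler vectorization) within a node. Sizes and arithmetic are illustrative.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;                       // local portion owned by this rank
    std::vector<double> x(N, 1.0), y(N, 2.0);

    // On-node parallelism: OpenMP threads plus SIMD lanes.
    #pragma omp parallel for simd
    for (int i = 0; i < N; ++i)
        y[i] += 2.5 * x[i];

    // Cross-node parallelism: combine local results with MPI.
    double local = 0.0, global = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < N; ++i)
        local += y[i];
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("global sum = %f\n", global);
    MPI_Finalize();
    return 0;
}
```

Even this toy example shows why M+X feels new to many programmers: two runtime systems, two kinds of parallelism, and the compiler's vectorizer all have to cooperate in one source file.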

Accelerators add yet another level of programming complexity. We’ve had accelerators in some form or another for more than 40 years, dating back to at least the IBM 2938 Array Processor and the Floating Point Systems AP-120B. Today’s accelerators are different, in that they are relatively inexpensive and can be easily added to node design for a significant performance boost. Yet, in spite of the long history, there has been no dominant or de facto accelerator programming model.

Do Current Programming Models Support Performance Portability?
Does MPI support or provide performance portability? I’m not going to say that MPI, or any programming language, model or style, will give or guarantee performance portability. Programming with message passing, as in MPI, does a good job of forcing the programmer to organize the application so that most computation and most data accesses are local to a node, minimizing the frequency and volume of data communication. A well-designed MPI program is likely to get pretty good performance across a wide range of systems. So, yes, for that aspect of the parallelism, MPI promotes performance portability quite well.
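As a concrete, if simplified, illustration of that locality discipline, the sketch below does a one-dimensional halo exchange: each rank owns a block of a distributed array and exchanges only a single ghost cell with each neighbor, so communication volume stays small relative to local computation. The decomposition and sizes are illustrative only.

```cpp
// Hypothetical 1D halo exchange: minimal communication at block boundaries.
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1000;                          // interior cells per rank (placeholder)
    std::vector<double> u(N + 2, double(rank));  // +2 ghost cells at the ends

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    // Send rightmost interior cell right; receive left ghost from the left neighbor.
    MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 0,
                 &u[0], 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send leftmost interior cell left; receive right ghost from the right neighbor.
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                 &u[N + 1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // A stencil update over the interior cells would follow here.
    MPI_Finalize();
    return 0;
}
```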

Do libraries give performance portability? If you find a well-supported library that solves your problem, that’s definitely the way to go. When someone else writes that library and optimizes it for the different systems you want to use, you save time and effort by building on the expertise of the library developers. This is the best kind of productivity.
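For example, a dense matrix multiply is best left to a tuned BLAS. The sketch below calls dgemm through the CBLAS interface; whichever vendor library is linked underneath (OpenBLAS, MKL, ESSL and so on) supplies the architecture-specific optimization. The header name and link flags vary by implementation.

```cpp
// Delegating to a tuned library: C = A * B via CBLAS dgemm.
#include <cblas.h>
#include <vector>

int main() {
    const int m = 256, n = 256, k = 256;
    std::vector<double> A(m * k, 1.0), B(k * n, 2.0), C(m * n, 0.0);

    // C = 1.0 * A * B + 0.0 * C, row-major storage
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, A.data(), k, B.data(), n, 0.0, C.data(), n);
    return 0;
}
```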

Does OpenMP provide performance portability? For multiprocessor and multicore shared memory workstations, servers and nodes, OpenMP has done a very good job of supporting performance portability, across a range of operating systems, processor architectures, and compiler providers. The question is whether it can handle today’s variety of node architectures. Recent presentations are not encouraging. The new OpenMPCon this past September had a presentation that reported on a recent DOE Workshop on Portability.

One of the take-away messages from that workshop, according to representatives from some of the DOE labs, was that “Most people are resigned to having different sources for different platforms, with simple #ifdef or other mechanisms.” I find this very disappointing, even depressing. Another slide in that presentation promotes the view that “Having a common code base using a portable programming environment, even if you must fill the code with if-defs or have architectural specific versions of kernels … is the only way to support maintainability.” This seems to be the complete antithesis of performance portability: a concession that portability is unachievable, unnecessary, and perhaps even undesirable.
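To make the practice concrete, here is a hedged sketch of the per-platform #ifdef style that slide describes: one routine, architecture-specific variants selected by preprocessor macros. The macro names are hypothetical, not taken from the presentation; the point is that every variant must be maintained and tested separately, which is exactly the burden performance portability is supposed to remove.

```cpp
// One kernel, several architecture-specific variants chosen at compile time.
// TARGET_GPU and TARGET_MULTICORE are illustrative macro names.
void scale(double *y, const double *x, double a, int n) {
#if defined(TARGET_GPU)
    #pragma acc parallel loop            // accelerator variant
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
#elif defined(TARGET_MULTICORE)
    #pragma omp parallel for simd        // multicore variant
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
#else
    for (int i = 0; i < n; ++i)          // plain sequential fallback
        y[i] = a * x[i];
#endif
}
```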

Some claim that OpenCL provides performance portability. This is interesting, since OpenCL was initially designed to, and has continuously claimed to provide an “efficient, close-to-the-metal programming interface” and to “form the foundation layer of a parallel computing ecosystem,” quoting from the OpenCL specification (from any version to date). This also seems to be the complete opposite of performance portability. It’s more of a performance possibility language, a standard language that allows you to get to the lowest level of each target. See the HPCwire article on OpenCL, which states “True, we may need to write a new version of our kernel to get the best performance on Architecture A, but isn’t this what we actually want?” The answer to that question, in the HPC space, is emphatically no.

There are a number of new models being proposed as solutions to the node-level performance portability problem. Chapel comes from the Cray HPCS (High Productivity Computing Systems) effort, and is proposed as a solution for the whole parallel programming problem, including across nodes, across cores and across accelerators. OpenACC started as a directive API for programming accelerators, but was designed to support multicore parallelism as well, and there is some evidence to support this now. RAJA and Kokkos are two C++ class library solutions to the programming problem, similar in some respects to the efforts to standardize parallel programming in the next C++ revision.

I have a strong preference for compiler-based solutions, partly because I am a compiler guy, but also because a compiler is the only tool that sees both the program and the target architecture. A C++ class library can provide a solution that syntactically resembles something like a parallel loop, but in fact it’s a method invocation that hopefully gets the body of the “loop” inlined for performance. The expert writing the parallel loop class has no knowledge of the body of the loop and can’t take advantage of anything like memory reference patterns. But I remain willing to be convinced.
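To illustrate the point, here is a minimal sketch of the class-library style. It is not RAJA or Kokkos code, just a hypothetical parallel_for template that takes the loop body as a lambda and dispatches it (here, to OpenMP) in the hope that the compiler inlines everything.

```cpp
// What looks like a parallel loop is really a template function call that
// receives the loop body as a lambda. A real library would dispatch via an
// execution-policy parameter; this sketch simply uses OpenMP.
#include <cstddef>
#include <vector>
#include <cstdio>

template <typename Body>
void parallel_for(std::size_t n, Body body) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(n); ++i)
        body(static_cast<std::size_t>(i));
}

int main() {
    std::vector<double> x(1000, 1.0), y(1000, 2.0);
    parallel_for(x.size(), [&](std::size_t i) {   // the "loop" is a method call
        y[i] += 3.0 * x[i];
    });
    std::printf("y[0] = %f\n", y[0]);
    return 0;
}
```

The library author who wrote parallel_for never sees the body of this particular loop, so optimizations that depend on its memory reference patterns are out of reach; that is the limitation described above.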

Is There Hope For Performance Portability?
I believe there is hope, even in the relative near term. PGI has been working on OpenACC compilers, tuning them for accelerated computing as well as multicore parallelism. The main elements of OpenACC, as with the OpenMP 4 target additions, are data management and parallelism management. For accelerated computing, the data management is used to control or optimize data movement between the system memory and the high bandwidth memory of the accelerator, and the parallelism is used to generate parallel code for the accelerator. For multicore targets, the data management is mostly ignored, and the parallelism is used to generate multicore code, pretty much equivalent to OpenMP parallel loops.
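Here is a minimal OpenACC sketch of those two elements, assuming a simple C/C++ vector update: the data directive expresses the data management, and the parallel loop directive expresses the parallelism. On a multicore target the data clauses are largely ignored and the loop compiles to multicore code, as described above.

```cpp
// Minimal OpenACC sketch: data management plus parallelism management.
#include <vector>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    float *xp = x.data(), *yp = y.data();
    const float a = 2.5f;

    #pragma acc data copyin(xp[0:n]) copy(yp[0:n])   // move data to/from device memory
    {
        #pragma acc parallel loop                    // expose the loop parallelism
        for (int i = 0; i < n; ++i)
            yp[i] += a * xp[i];
    }

    std::printf("y[0] = %f\n", yp[0]);
    return 0;
}
```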

PGI is investigating support for the Intel Knights Landing Xeon Phi processor in the future, where the parallelism will again generate multicore code, and the data management will be used to control or optimize data movement between the system (far) memory and the high bandwidth (near) memory. The data management in OpenMP is more or less isomorphic to the data management in OpenACC, from which it borrowed the ideas. The compute management in OpenMP is much more strict than in OpenACC. As discussed in an article last year, OpenMP is thread-centric. A parallel loop directive in OpenMP doesn’t tell the compiler that the loop can be safely run in parallel; it tells the compiler that it must spread the iterations of the loop across the OpenMP threads.

In OpenACC, a parallel loop directive does declare that the loop must be data-race free, and that the iterations can be run in parallel on any parallel mechanism that is available in the target system. This is incredibly powerful, and I believe is key to achieving performance portability in any programming model. The programmer declares the parallelism, and the implementation then exploits it. No amount of compiler analysis will create parallelism that isn’t in the program, and no amount of programmer effort can predict what new machines will look like in five years or more.
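The contrast is easiest to see side by side. In the sketch below, both routines compute the same thing, but the OpenMP directive prescribes how the iterations map to threads, while the OpenACC directive asserts that the loop is data-race free and leaves the mapping to whatever parallel hardware the target offers.

```cpp
// Two directives, two different promises to the compiler.
void scale_omp(float *y, const float *x, float a, int n) {
    // OpenMP: prescriptive. "Spread these iterations across the OpenMP threads."
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
}

void scale_acc(float *y, const float *x, float a, int n) {
    // OpenACC: descriptive. "These iterations are data-race free; run them in
    // parallel on any mechanism the target provides (threads, SIMD, GPU gangs)."
    #pragma acc parallel loop
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
}
```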

It seems clear that future architectures will be more parallel, and we must be writing our applications in a way that can exploit all that parallelism. Moreover, we should demand that our programming models have a path to performance portability at whatever granularity that model supports. When choosing or designing a parallel programming method, a lesson you need to apply from experience with OpenACC is to make your model as descriptive and declarative as possible, to allow for exploitation on future targets that you can’t even envision now.

Conclusions
I don’t claim that OpenACC or any programming model is the final solution. An application is a collection of algorithms, and there will always be alternate algorithms or data layouts where one version runs better on one type of system and another version runs better on others. This goes back (at least) to the 1995 technical memo cited above. Many algorithms have tuning parameters, such as aspect ratios or tile sizes, where runtime auto-tuning can be used to great effect. Task-based runtime systems break the common bulk-synchronous programming model, allowing for more latency-tolerant communication. All of these approaches need to be explored and exploited. As I write this, the DOE Centers of Excellence are organizing a Workshop on Performance Portability for later this month, which I sadly cannot attend. I wonder what conclusions will be reached there.

About the Author
Michael Wolfe has been a compiler developer for over 40 years in both academia and industry, and has been working on the PGI compilers for the past 20 years. The opinions stated here are those of the author, and do not represent opinions of NVIDIA. Follow Michael’s tweek (weekly tweet) @pgicompilers.
