Compilers and More: What Makes Performance Portable?

By Michael Wolfe

April 19, 2016

The term performance portability has appeared many times in the past few years. What does it mean? When did we first start worrying about performance portability? Why is it more relevant now than before? Why are so many discussions about performance portability so depressing? Is there any hope? I’ll address these questions and more in this article.

Is performance portability a new idea?

No. If you do an Internet search on the exact phrase performance portability, you’ll find a vast number of hits, most (though not all) relevant to HPC. Some of the highlights from over the years include a 1995 Spectral Models technical memo, a 2001 paper on the WRF weather model, a 2005 paper on the EARTH execution model, and a 2012 paper proposing the Chapel language.

So performance portability has been around as a concept and a goal in parallel computing for many years. It’s interesting to look at the challenges addressed in these papers over the past two decades. The 1995 Spectral Models technical memo described experiments to find the best algorithm variants and aspect ratios within the variants across different numbers of processors on a Cray T3D, IBM SP2 and Intel Paragon. The 2001 WRF paper points out that “performance and portability are important but conflicting concerns,” and that we need to develop “a quantitative understanding” of how “data and looping structures affect performance” and then engineer the code “to enable flexible tuning of these aspects across a variety of different platforms in a single source code.”

The 2005 EARTH paper states “the performance portability of parallel programs is becoming increasingly important and should be considered when designing parallel execution models, APIs, and runtime system software.” The 2012 paper proposes the Chapel language to enable performance portability. I find it interesting that this selection of articles gradually shifts the burden of performance portability from the application and the algorithm to the program to the language and tools.

In truth, any solution to performance portability needs all of the above. An unscalable algorithm won’t deliver good performance on a more highly parallel target machine. An inappropriate data structure can affect performance almost as much as an inappropriate algorithm. And we need a way to express the program such that the compilers, tools and runtimes can exploit the features of a variety of target systems.
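
As a concrete illustration of the data structure point, consider the difference between an array-of-structures and a structure-of-arrays layout for the same particle data. The sketch below is illustrative only, with hypothetical type and field names: on wide-SIMD or GPU targets, the SoA form gives unit-stride access to a single field, while the AoS form strides across all the fields and often wastes memory bandwidth.

    #include <cstddef>
    #include <vector>

    // Illustrative sketch: hypothetical particle types, not from the article.

    // Array of structures: each particle's fields are interleaved in memory.
    struct ParticleAoS { double x, y, z, mass; };

    // Structure of arrays: each field is stored contiguously.
    struct ParticlesSoA { std::vector<double> x, y, z, mass; };

    // AoS update: touching only 'x' still strides over all four fields.
    void push_aos(std::vector<ParticleAoS>& p, double dt, double vx) {
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i].x += vx * dt;
    }

    // SoA update: unit-stride access to 'x' alone, friendlier to SIMD and GPUs.
    void push_soa(ParticlesSoA& p, double dt, double vx) {
        for (std::size_t i = 0; i < p.x.size(); ++i)
            p.x[i] += vx * dt;
    }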

So, What Is Performance Portability?
If it’s not obvious, a performance portable implementation of an application or algorithm is one that will achieve high performance across a variety of target systems. High performance is relative, but we mean high performance for that target system.

An important question is the variety of the target systems. Twenty years ago, the Spectral Models paper was looking at parallelism across 32 to 1024 nodes of three different types of systems, where each node used only a single processor (the Intel Paragon had two processors per node, but the second processor was used as a message coprocessor in these experiments). Interestingly, only one of those systems used MPI, which was still rather new. MPI is now the standard for managing communication between nodes, though PGAS language features like Fortran coarrays and HPC++ distributed arrays have a growing user community.

Modern supercomputers all use multicore processors at each node, sometimes 32 or more cores per node. That number is rising, with systems based on the Intel Knights Landing Xeon Phi (KNL) processor supporting over 256 hardware threads per node. We also have systems with attached accelerators. Today, that is most commonly a GPU accelerator, but in the recent past other programmable accelerators, such as DSPs, have been explored. In some application areas, FPGAs are being used successfully as well. However, FPGAs have serious limitations for general purpose HPC use, including the high cost of programming (place-and-route in particular), the high cost of dynamically reconfiguring the network, the relatively sparse logic and the low clock rate. For this article, we include multicore, manycore and GPU-accelerated nodes, and we allow for other possible future programmable accelerators, but we do not consider FPGAs.

Is Performance Portability Still Important?
If we look at the top four systems in the November 2015 TOP500 supercomputer list, we see four distinct node architectures:

  1. Tianhe-2, 16,000 nodes with two 12-core Intel Ivy Bridge processors and three 57-core Intel Knights Corner Xeon Phi coprocessors per node.
  2. Titan, 18,688 nodes with one 16-core AMD Interlagos processor and one NVIDIA Tesla K20X GPU per node.
  3. Sequoia, 98,304 nodes with one 16-core IBM PowerPC A2 processor per node.
  4. The K computer, 88,128 nodes with one 8-core Fujitsu SPARC64 VIIIfx processor per node.

Four different processor vendors, two with accelerated fat nodes, two with over 50,000 thinner nodes.

We see similar variety in smaller HPC procurements around the globe. Many systems are designed around single-socket multicore x86 or POWER processors; many more include GPU accelerators; a number will be built with the Intel Knights Landing Xeon Phi processors. There are also experimental efforts that may affect future designs.

The upcoming DOE CORAL supercomputers have similar variety. The IBM+NVIDIA+Mellanox Summit machine at Oak Ridge will have about 3,500 nodes with multiple IBM OpenPOWER processors and multiple NVIDIA Volta GPUs at each node, whereas the Intel+Cray Aurora machine at Argonne will have over 50,000 nodes with a single highly parallel Intel Knights Hill Xeon Phi processor at each node. The contracts for these systems include Centers of Excellence that are implementing multi-year efforts to prepare and modernize codes for the new, highly parallel computer systems. Imagine how much less effort it would take to move those applications to a new system, and across this variety of systems, if they had been written with performance portability in mind to begin with.

Is Performance Portability More Important Now?
Is performance portability more important today than it was in the past? Not really. Supercomputing has gone through several phases. Those of us with more grey hair remember the efforts in the 1970s to port programs from sequential Control Data 6600 and 7600 machines to the vector Cray-1. Moving from single processors to multiple processors was another significant effort, and eventually, after more than ten years, resulted in the OpenMP specification. Moving from shared memory to scalable message-passing networks was yet another big effort, which produced MPI much more quickly (it’s easier to standardize a library than a language).

Performance portability receives attention now because the node programming model is changing (again). In the past 20 years, many programmers reverted to writing sequential node programs connected by message passing. When the messaging was well structured and hidden, the programmer really only thought about the sequential node program. However, today’s machines have very highly parallel nodes, and not all the parallelism on the node can be effectively exploited with MPI. Vector or SIMD parallelism, for instance, must be exploited by the compiler, so we’re going back to 1975, having to think about vector algorithms. Memory hierarchy management is being exposed, with data movement between system memory and high bandwidth memory, so we have to optimize for that, the way we optimized for virtual memory and caches in the 1980s. Utilizing shared memory can be more efficient than passing messages, so we have to look at hybrid parallelism, M+X (where M is usually MPI), which is something new for many programmers and applications.
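
A hedged sketch of the M+X style follows, using MPI for M and OpenMP for X: MPI distributes the data across nodes, an OpenMP parallel loop uses the cores within each node, and the simple loop body gives the compiler a chance to generate SIMD code. The array names and sizes are placeholders, not taken from any real application.

    #include <cstdio>
    #include <mpi.h>
    #include <vector>

    // Hedged sketch of hybrid MPI+OpenMP (M+X); names and sizes are placeholders.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nranks = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // M: each rank (typically one or a few per node) owns a slice of the data.
        const int nlocal = 1 << 20;
        std::vector<double> a(nlocal, 1.0), b(nlocal, 2.0);
        double local_sum = 0.0;

        // X: OpenMP spreads the loop across the cores of the node; the simple
        // body also leaves room for the compiler to vectorize.
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < nlocal; ++i) {
            a[i] += 0.5 * b[i];
            local_sum += a[i];
        }

        // Back to M: combine the per-node results across the whole machine.
        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("sum = %f over %d ranks\n", global_sum, nranks);
        MPI_Finalize();
        return 0;
    }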

Accelerators add yet another level of programming complexity. We’ve had accelerators in some form or another for more than 40 years, dating back to at least the IBM 2938 Array Processor and the Floating Point Systems AP-120B. Today’s accelerators are different, in that they are relatively inexpensive and can be easily added to a node design for a significant performance boost. Yet, in spite of the long history, there has been no dominant or de facto accelerator programming model.

Do Current Programming Models Support Performance Portability?
Does MPI support or provide performance portability? I’m not going to say that MPI, or any programming language, model or style, will give or guarantee performance portability. Programming with message passing, as in MPI, does a good job of forcing the programmer to organize the application so that most computation and most data accesses are local to a node, minimizing the frequency and volume of data communication. A well-designed MPI program is likely to get pretty good performance across a wide range of systems. So, yes, for that aspect of the parallelism, MPI promotes performance portability quite well.
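
A minimal sketch of that discipline, for a hypothetical 1D stencil: each rank owns a block of the domain plus two ghost cells, exchanges only the boundary values with its neighbors, and does all of its remaining work locally. The decomposition and names are illustrative.

    #include <mpi.h>
    #include <vector>

    // Illustrative 1D block decomposition; not from any real application.
    // One relaxation step on a block-distributed array; u[0] and u[n+1]
    // are ghost cells holding copies of the neighbors' boundary values.
    void stencil_step(std::vector<double>& u, MPI_Comm comm) {
        int rank, nranks;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nranks);
        const int n = static_cast<int>(u.size()) - 2;
        const int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        const int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

        // Communication is limited to one value in each direction.
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[n + 1], 1, MPI_DOUBLE, right, 0, comm, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1, comm, MPI_STATUS_IGNORE);

        // Everything else is local to the node.
        std::vector<double> unew(u);
        for (int i = 1; i <= n; ++i)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
        u.swap(unew);
    }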

Do libraries give performance portability? If you find a well-supported library that solves your problem, that’s definitely the way to go. When someone else writes that library and optimizes it for the different systems you want to use, you save time and effort by building on the expertise of the library developers. This is the best kind of productivity.
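
For example, a code that leaves its dense linear algebra to a tuned BLAS inherits whatever optimization the library developers have done for each target. The sketch below calls the standard CBLAS interface, with problem sizes chosen purely for illustration.

    #include <cblas.h>
    #include <cstdio>
    #include <vector>

    // Illustrative sizes; any conforming CBLAS implementation can be linked.
    int main() {
        const int m = 512, n = 512, k = 512;
        std::vector<double> A(m * k, 1.0), B(k * n, 2.0), C(m * n, 0.0);

        // C = 1.0*A*B + 0.0*C, computed by whichever BLAS is linked in
        // (MKL, OpenBLAS, ESSL, ...), each tuned by its developers for its hardware.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A.data(), k, B.data(), n, 0.0, C.data(), n);

        std::printf("C[0] = %f\n", C[0]);  // 1024.0 for these inputs
        return 0;
    }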

Does OpenMP provide performance portability? For multiprocessor and multicore shared memory workstations, servers and nodes, OpenMP has done a very good job of supporting performance portability across a range of operating systems, processor architectures and compiler providers. The question is whether it can handle today’s variety of node architectures. Recent presentations are not encouraging. The new OpenMPCon this past September included a presentation that reported on a recent DOE Workshop on Portability.

One of the take-away messages from that workshop, according to representatives from some of the DOE labs, was that “Most people are resigned to having different sources for different platforms, with simple #ifdef or other mechanisms.” I find this very disappointing, and depressing. Another slide in that presentation promotes the view that “Having a common code base using a portable programming environment, even if you must fill the code with if-defs or have architectural specific versions of kernels … is the only way to support maintainability.” This seems to be the complete antithesis of performance portability, implying that portability is unachievable, unnecessary and perhaps undesirable.
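
The per-platform sources being described look roughly like the sketch below: one loop, several variants, selected with preprocessor guards whose names here are hypothetical. This is exactly the kind of duplication a performance-portable programming model is supposed to make unnecessary.

    // Illustrative only: one simple kernel maintained as three variants,
    // with hypothetical configuration macros.
    void scale(double* x, const double* y, int n, double a) {
    #if defined(USE_GPU_VARIANT)
        // ... launch a hand-written device kernel tuned for the GPU partition ...
    #elif defined(USE_KNL_VARIANT)
        // Variant tuned for a manycore node with wide SIMD.
        #pragma omp parallel for simd
        for (int i = 0; i < n; ++i) x[i] = a * y[i];
    #else
        // Plain multicore fallback.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) x[i] = a * y[i];
    #endif
    }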

Some claim that OpenCL provides performance portability. This is interesting, since OpenCL was initially designed to, and has continuously claimed to, provide an “efficient, close-to-the-metal programming interface” and to “form the foundation layer of a parallel computing ecosystem,” quoting from the OpenCL specification (any version to date). This also seems to be the complete opposite of performance portability. It’s more of a performance possibility language: a standard language that allows you to get to the lowest level of each target. See the HPCwire article on OpenCL, which states “True, we may need to write a new version of our kernel to get the best performance on Architecture A, but isn’t this what we actually want?” The answer to that question, in the HPC space, is emphatically no.

There are a number of new models being proposed as solutions to the node-level performance portability problem. Chapel comes from the Cray HPCS (High Productivity Computing Systems) effort, and is proposed as a solution for the whole parallel programming problem: across nodes, across cores and across accelerators. OpenACC started as a directive API for programming accelerators, but was designed to support multicore parallelism as well, and there is some evidence to support this now. RAJA and Kokkos are two C++ class library solutions to the programming problem, similar in some respects to the efforts to standardize parallel programming in the next C++ revision.

I have a strong preference for compiler-based solutions, partly because I am a compiler guy, but also because a compiler is the only tool that sees both the program and the target architecture. A C++ class library can provide a solution that syntactically resembles a parallel loop, but in fact it’s a method invocation that hopefully gets the body of the “loop” inlined for performance. The expert writing the parallel loop class has no knowledge of the body of the loop and can’t take advantage of properties such as memory reference patterns. But I remain willing to be convinced.
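
To make the point concrete, here is a generic sketch of a class-library “parallel loop” (deliberately not the actual RAJA or Kokkos API): what reads like a loop is a template function call whose body is a lambda, and the performance story depends on that lambda being inlined; the library author never sees the memory reference pattern inside it.

    #include <cstddef>

    // A toy "parallel_for", for illustration: a template function, not a
    // language-level loop. A real library would dispatch to threads, SIMD
    // lanes or a device back end here.
    template <typename Body>
    void parallel_for(std::size_t n, Body body) {
        for (std::size_t i = 0; i < n; ++i)
            body(i);   // performance hinges on this call being inlined
    }

    void axpy(double* y, const double* x, double a, std::size_t n) {
        // Looks like a loop to the programmer, but the "loop body" is a lambda
        // handed to a library routine that cannot analyze its access pattern.
        parallel_for(n, [=](std::size_t i) {
            y[i] += a * x[i];
        });
    }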

Is There Hope For Performance Portability?
I believe there is hope, even in the relative near term. PGI has been working on OpenACC compilers, tuning them for accelerated computing as well as multicore parallelism. The main elements of OpenACC, as with the OpenMP 4 target additions, are data management and parallelism management. For accelerated computing, the data management is used to control or optimize data movement between the system memory and the high bandwidth memory of the accelerator, and the parallelism is used to generate parallel code for the accelerator. For multicore targets, the data management is mostly ignored, and the parallelism is used to generate multicore code, pretty much equivalent to OpenMP parallel loops.
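
A minimal OpenACC sketch of those two elements, using a placeholder saxpy routine: the data construct controls or optimizes movement between system memory and the accelerator’s high bandwidth memory, and the parallel loop expresses the compute. Built for a GPU, the data clauses drive the transfers; built for multicore, they are largely ignored and the loop runs much like an OpenMP parallel loop.

    // Hypothetical routine, for illustration only.
    void saxpy(float* y, const float* x, float a, int n) {
        // Data management: meaningful on targets with a separate memory,
        // mostly ignored on a shared-memory multicore target.
        #pragma acc data copyin(x[0:n]) copy(y[0:n])
        {
            // Parallelism management: independent iterations that can map to
            // GPU gangs and vector lanes, or to the cores of a multicore CPU.
            #pragma acc parallel loop
            for (int i = 0; i < n; ++i)
                y[i] += a * x[i];
        }
    }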

PGI is investigating support for the Intel Knights Landing Xeon Phi processor in the future, where the parallelism will again generate multicore code, and the data management will be used to control or optimize data movement between the system (far) memory and the high bandwidth (near) memory. The data management in OpenMP is more or less isomorphic to the data management in OpenACC, from which it borrowed the ideas. The compute management in OpenMP is much more strict than in OpenACC. As discussed in an article last year, OpenMP is thread-centric. A parallel loop directive in OpenMP doesn’t tell the compiler that the loop can be safely run in parallel; it tells the compiler that it must spread the iterations of the loop across the OpenMP threads.

In OpenACC, a parallel loop directive does declare that the loop must be data-race free, and that the iterations can be run in parallel on any parallel mechanism that is available in the target system. This is incredibly powerful, and I believe is key to achieving performance portability in any programming model. The programmer declares the parallelism, and the implementation then exploits it. No amount of compiler analysis will create parallelism that isn’t in the program, and no amount of programmer effort can predict what new machines will look like in five years or more.
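
The contrast shows up even on a trivial loop, as in the hedged sketch below: the OpenMP directive prescribes that the iterations be divided among the threads of the current team, while the OpenACC directive asserts that the iterations are independent and leaves the mapping, to cores, gangs, workers or vector lanes, to the implementation.

    // Illustrative only: the same loop under the two models' directives.
    void scale_omp(double* x, int n, double a) {
        // Prescriptive: spread these iterations across the OpenMP threads.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            x[i] *= a;
    }

    void scale_acc(double* x, int n, double a) {
        // Descriptive: the iterations are data-race free; use whatever
        // parallelism the target offers.
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            x[i] *= a;
    }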

It seems clear that future architectures will be more parallel, and we must be writing our applications in a way that can exploit all that parallelism. Moreover, we should demand that our programming models have a path to performance portability at whatever granularity that model supports. When choosing or designing a parallel programming method, one lesson to take from the OpenACC experience is to make your model as descriptive and declarative as possible, so that the parallelism you express can be exploited on future targets that you can’t even envision now.

Conclusions
I don’t claim that OpenACC or any programming model is the final solution. An application is a collection of algorithms, and there will always be alternate algorithms or data layouts where one version runs better on one type of system and another version runs better on others. This goes back (at least) to the 1995 technical memo cited above. Many algorithms have tuning parameters, such as aspect ratios or tile sizes, where runtime auto-tuning can be used to great effect. Task-based runtime systems break the common bulk-synchronous programming model, allowing for more latency-tolerant communication. All of these approaches need to be explored and exploited. As I write this, the DOE Centers of Excellence are organizing a Workshop on Performance Portability for later this month, which I sadly cannot attend. I wonder what conclusions will be reached there.

About the Author
Michael Wolfe has been a compiler developer for over 40 years in both academia and industry, and has been working on the PGI compilers for the past 20 years. The opinions stated here are those of the author, and do not represent opinions of NVIDIA. Follow Michael’s tweek (weekly tweet) @pgicompilers.
