Compilers and More: What Makes Performance Portable?

By Michael Wolfe

April 19, 2016

The term performance portability has appeared many times in the past few years. What does it mean? When did we first start worrying about performance portability? Why is it more relevant now than before? Why are so many discussions about performance portability so depressing? Is there any hope? I’ll address these questions and more in this article.

Is performance portability a new idea?

No. If you do an Internet search on the exact phrase performance portability, you’ll find a vast number of hits, most (though not all) relevant to HPC. Some of the highlights from over the years are discussed below.

So performance portability has been around as a concept and a goal in parallel computing for many years. It’s interesting to look at the challenges addressed in these papers over the past two decades. The 1995 Spectral Models technical memo described experiments to find the best algorithm variants and aspect ratios within the variants across different numbers of processors on a Cray T3D, IBM SP2 and Intel Paragon. The 2001 WRF paper points out that “performance and portability are important but conflicting concerns,” and that we need to develop “a quantitative understanding” of how “data and looping structures affect performance” and then engineer the code “to enable flexible tuning of these aspects across a variety of different platforms in a single source code.”

The 2005 EARTH paper states “the performance portability of parallel programs is becoming increasingly important and should be considered when designing parallel execution models, APIs, and runtime system software.” The 2012 paper proposes the Chapel language to enable performance portability. I find it interesting that this selection of articles gradually shifts the burden of performance portability from the application and the algorithm, to the program, and finally to the language and tools.

In truth, any solution to performance portability needs all of the above. An unscalable algorithm won’t deliver good performance on a more highly parallel target machine. An inappropriate data structure can affect performance almost as much as an inappropriate algorithm. And we need a way to express the program such that the compilers, tools and runtimes can exploit the features of a variety of target systems.

So, What Is Performance Portability?
In case it’s not obvious: a performance portable implementation of an application or algorithm is one that achieves high performance across a variety of target systems. High performance is relative; here we mean performance that is high for each particular target system.

An important question is the variety of the target systems. Twenty years ago, the Spectral Models paper was looking at parallelism across 32 to 1024 nodes of three different types of systems, where each node used only a single processor (the Intel Paragon had two processors per node, but the second processor was used as a message coprocessor in these experiments). Interestingly, only one of those systems used MPI, which was still rather new. MPI is now the standard for managing communication between nodes, though PGAS language features like Fortran coarrays and HPC++ distributed arrays have a growing user community.

Modern supercomputers all use multicore processors at each node, sometimes 32 or more cores per node. That number is rising, with systems based on the Intel Knights Landing (KNL) Xeon Phi supporting over 256 hardware threads per node. We also have systems with attached accelerators. Today, that is most commonly a GPU accelerator, but in the recent past other programmable accelerators have been explored, such as DSPs. In some application areas, FPGAs are being used successfully as well. However, FPGAs have serious limitations for general purpose HPC use, including the high cost of programming (including place-and-route), the high cost of dynamically reconfiguring the network, the relatively sparse logic, and the low clock rate. For this article, we include multicore, manycore and GPU-accelerated nodes, and we allow for other possible future programmable accelerators, but we do not consider FPGAs.

Is Performance Portability Still Important?
If we look at the top four systems on the November 2015 TOP500 supercomputer list, we see four distinct node architectures:

  1. Tianhe-2, 16,000 nodes with two 12-core Intel Ivy Bridge processors and three 57-core Intel Knights Corner Xeon Phi coprocessors per node.
  2. Titan, 18,688 nodes with one 16-core AMD Interlagos processor and one NVIDIA Tesla K20X GPU per node.
  3. Sequoia, 98,304 nodes with one 16-core IBM PowerPC A2 processor per node.
  4. The K computer, 88,128 nodes with one 8-core Fujitsu SPARC64 VIIIfx processor per node.

Four different processor vendors, two with accelerated fat nodes, two with over 50,000 thinner nodes.

We see similar variety in smaller HPC procurements around the globe. Many systems are designed around single-socket multicore x86 or POWER processors; many more include GPU accelerators; a number will be built with the Intel Knights Landing Xeon Phi processors. There are also experimental efforts that may affect future designs.

The upcoming DOE CORAL supercomputers have similar variety. The IBM+NVIDIA+Mellanox Summit machine at Oak Ridge will have about 3,500 nodes with multiple IBM OpenPOWER processors and multiple NVIDIA Volta GPUs at each node, whereas the Intel+Cray Aurora machine at Argonne will have over 50,000 nodes with a single highly parallel Intel Knights Hill Xeon Phi processor at each node. The contracts for these systems include Centers of Excellence that are implementing multi-year efforts to prepare and modernize codes for the new, highly parallel computer systems. Imagine how much less effort it would take to move those applications to a new system, and across the whole variety of systems, if they had been written with performance portability in mind to begin with.

Is Performance Portability More Important Now?
Is performance portability more important today than it was in the past? Not really. Supercomputing has gone through several phases. Those of us with more grey hair remember the efforts in the 1970s to port programs from sequential Control Data 6600 and 7600 machines to the vector Cray-1. Moving from single processors to multiple processors was another significant effort, and eventually, after more than ten years, resulted in the OpenMP specification. Moving from shared memory to scalable message-passing networks was yet another big effort, which produced MPI much more quickly (it’s easier to standardize a library than a language).

Performance portability receives attention now because the node programming model is changing (again). In the past 20 years, many programmers reverted to writing sequential node programs connected by message passing. When the messaging was well structured and hidden, the programmer really only thought about the sequential node program. However, today’s machines have very highly parallel nodes, and not all the parallelism on the node can be effectively exploited with MPI. Vector or SIMD parallelism, for instance, must be exploited by the compiler, so we’re going back to 1975, having to think about vector algorithms. Memory hierarchy management is being exposed, with data movement between system memory and high bandwidth memory, so we have to optimize for that, the way we optimized for virtual memory and caches in the 1980s. Utilizing shared memory can be more efficient than passing messages, so we have to look at hybrid parallelism, M+X (where M is usually MPI), which is something new for many programmers and applications.
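
To make the hybrid M+X style concrete, here is a minimal MPI+OpenMP sketch (a dot product; the names and sizes are purely illustrative, not from any particular application). The OpenMP loop shares the node-local work among threads, and MPI combines the partial results across nodes.

    // Minimal sketch of hybrid M+X parallelism: MPI between nodes,
    // OpenMP threads within a node. Illustrative only.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char **argv) {
        int provided;
        // Request a threading level compatible with OpenMP regions.
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Each rank owns a local slice of the global problem.
        const int nlocal = 1 << 20;
        std::vector<double> a(nlocal, 1.0), b(nlocal, 2.0);
        double local_sum = 0.0;

        // On-node parallelism: OpenMP threads share the rank's memory.
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < nlocal; ++i)
            local_sum += a[i] * b[i];

        // Off-node parallelism: combine partial results with MPI.
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("dot product = %f across %d ranks\n", global_sum, nranks);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and the compiler's OpenMP flag, each rank runs its loop across the cores of its node, which is exactly the two-level structure many applications are now having to adopt.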

Accelerators add yet another level of programming complexity. We’ve had accelerators in some form or another for more than 40 years, dating back to at least the IBM 2938 Array Processor and the Floating Point Systems AP-120B. Today’s accelerators are different, in that they are relatively inexpensive and can be easily added to node design for a significant performance boost. Yet, in spite of the long history, there has been no dominant or de facto accelerator programming model.

Do Current Programming Models Support Performance Portability?
Does MPI support or provide performance portability? I’m not going to say that MPI, or any programming language, model or style, will give or guarantee performance portability. Programming with message passing, as in MPI, does a good job of forcing the programmer to organize the application so that most computation and most data accesses are local to a node, minimizing the frequency and volume of data communication. A well designed MPI program is likely to get pretty good performance across a wide range of systems. So, yes, for that aspect of the parallelism, MPI promotes performance portability quite well.

Do libraries give performance portability? If you find a well-supported library that solves your problem, that’s definitely the way to go. When someone else writes that library and optimizes it for the different systems you want to use, you save time and effort by building on the expertise of the library developers. This is the best kind of productivity.

Does OpenMP provide performance portability? For multiprocessor and multicore shared memory workstations, servers and nodes, OpenMP has done a very good job of supporting performance portability across a range of operating systems, processor architectures, and compiler providers. The question is whether it can handle today’s variety of node architectures. Recent presentations are not encouraging. The new OpenMPCon this past September had a presentation that reported on a recent DOE Workshop on Portability.

One of the take-away messages from that workshop, according to representatives from some of the DOE labs, was that “Most people are resigned to having different sources for different platforms, with simple #ifdef or other mechanisms.” I find this very disappointing, and depressing. Another slide in that presentation promotes the view that “Having a common code base using a portable programming environment, even if you must fill the code with if-defs or have architectural specific versions of kernels … is the only way to support maintainability.” This seems to be the complete antithesis of performance portability: a view that portability is unachievable, unnecessary, and perhaps undesirable.
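
For concreteness, this is roughly what the “#ifdef per platform” style looks like; the macro names below are hypothetical placeholders, not from any real build system, and the directives are just one way each branch might be specialized.

    // One source file, but architecture-specific versions of the kernel,
    // selected at build time. Macro names are illustrative only.
    void saxpy(int n, float a, const float *x, float *y) {
    #if defined(BUILD_FOR_GPU)
        // Accelerator path, e.g. via OpenACC directives.
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    #elif defined(BUILD_FOR_MANYCORE)
        // Host path tuned for many threads and wide SIMD.
        #pragma omp parallel for simd
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    #else
        // Plain sequential fallback.
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    #endif
    }

The loop body is identical in every branch; only the directives differ, which is exactly the kind of duplication a performance portable model should make unnecessary.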

Some claim that OpenCL provides performance portability. This is interesting, since OpenCL was initially designed to, and has continuously claimed to provide an “efficient, close-to-the-metal programming interface” and to “form the foundation layer of a parallel computing ecosystem,” quoting from the OpenCL specification (from any version to date). This also seems to be the complete opposite of performance portability. It’s more of a performance possibility language, a standard language that allows you to get to the lowest level of each target. See the HPCwire article on OpenCL, which states “True, we may need to write a new version of our kernel to get the best performance on Architecture A, but isn’t this what we actually want?” The answer to that question, in the HPC space, is emphatically no.

There are a number of new models being proposed as solutions to the node-level performance portability problem. Chapel comes from the Cray HPCS (High Productivity Computing Systems) effort, and is proposed as a solution for the whole parallel programming problem, including across nodes, across cores and across accelerators. OpenACC started as a directive API for programming accelerators, but was designed to support multicore parallelism as well, and there is some evidence to support this now. RAJA and Kokkos are two C++ class library solutions to the programming problem, similar in some respects to the efforts to standardize parallel programming in the next C++ revision.

I have a strong preference for compiler-based solutions, partly because I am a compiler guy, but also because a compiler is the only tool that sees both the program and the target architecture. A C++ class library can provide a solution that syntactically resembles something like a parallel loop, but in fact it’s a method invocation that hopefully gets the body of the “loop” inlined for performance. The expert writing the parallel loop class has no knowledge of the body of the loop and can’t take advantage of anything like memory reference patterns. But I remain willing to be convinced.
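
For illustration, here is a greatly simplified stand-in for that kind of class-library parallel loop. It is not the RAJA or Kokkos API, just a sketch of the pattern with a trivial thread backend; the point is that the “loop” is a function call whose body is a lambda the library author never sees.

    // Simplified stand-in for a RAJA- or Kokkos-style parallel loop;
    // not the actual API of either library.
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    template <typename Body>
    void parallel_for(std::size_t n, Body body) {
        // Trivial host backend: split the index range across threads.
        // A real library would select a backend (threads, GPU, SIMD)
        // through execution-policy template parameters.
        const std::size_t nthreads =
            std::max<std::size_t>(1, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        for (std::size_t t = 0; t < nthreads; ++t) {
            pool.emplace_back([=] {
                // Cyclic distribution of iterations across threads.
                for (std::size_t i = t; i < n; i += nthreads)
                    body(i);
            });
        }
        for (auto &th : pool) th.join();
    }

    int main() {
        std::vector<double> x(1000, 1.0), y(1000, 2.0);
        const double a = 3.0;
        // Syntactically resembles a parallel loop, but it is a call whose
        // body must be inlined before the compiler can optimize it.
        parallel_for(x.size(), [&](std::size_t i) { y[i] = a * x[i] + y[i]; });
        return 0;
    }

The parallel_for template never sees the memory reference pattern inside the lambda, which is the limitation noted above; a compiler processing a directive on the loop itself would.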

Is There Hope For Performance Portability?
I believe there is hope, even in the relative near term. PGI has been working on OpenACC compilers, tuning them for accelerated computing as well as multicore parallelism. The main elements of OpenACC, as with the OpenMP 4 target additions, are data management and parallelism management. For accelerated computing, the data management is used to control or optimize data movement between the system memory and the high bandwidth memory of the accelerator, and the parallelism is used to generate parallel code for the accelerator. For multicore targets, the data management is mostly ignored, and the parallelism is used to generate multicore code, pretty much equivalent to OpenMP parallel loops.
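
As a rough illustration of those two elements, the sketch below (not from any particular application) uses an OpenACC data region to manage movement between system memory and device memory, and a parallel loop to expose the parallelism. On a multicore target the compiler can ignore the data clauses and map the loop iterations to host threads; the flags used to select the target vary by compiler.

    // OpenACC sketch: the data directive manages movement between system
    // memory and device (high bandwidth) memory; the parallel loop
    // directive exposes the parallelism. Boundary handling is omitted.
    void jacobi_step(int n, const double *in, double *out) {
        // Data region: copy the input in, copy the result back out.
        #pragma acc data copyin(in[0:n*n]) copyout(out[0:n*n])
        {
            #pragma acc parallel loop collapse(2)
            for (int i = 1; i < n - 1; ++i)
                for (int j = 1; j < n - 1; ++j)
                    out[i*n + j] = 0.25 * (in[(i-1)*n + j] + in[(i+1)*n + j]
                                         + in[i*n + j - 1] + in[i*n + j + 1]);
        }
    }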

PGI is investigating support for the Intel Knights Landing Xeon Phi processor in the future, where the parallelism will again generate multicore code, and the data management will be used to control or optimize data movement between the system (far) memory and the high bandwidth (near) memory. The data management in OpenMP is more or less isomorphic to the data management in OpenACC, from which it borrowed the ideas. The compute management in OpenMP is much more strict than in OpenACC. As discussed in an article last year, OpenMP is thread-centric. A parallel loop directive in OpenMP doesn’t tell the compiler that the loop can be safely run in parallel; it tells the compiler that it must spread the iterations of the loop across the OpenMP threads.

In OpenACC, a parallel loop directive does declare that the loop must be data-race free, and that the iterations can be run in parallel on any parallel mechanism that is available in the target system. This is incredibly powerful, and I believe is key to achieving performance portability in any programming model. The programmer declares the parallelism, and the implementation then exploits it. No amount of compiler analysis will create parallelism that isn’t in the program, and no amount of programmer effort can predict what new machines will look like in five years or more.
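
A minimal side-by-side sketch of that semantic difference, using the same loop under both models:

    // OpenMP: prescriptive. The directive tells the compiler to spread
    // the iterations across the threads of the enclosing team.
    void scale_omp(int n, double a, double *x) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            x[i] *= a;
    }

    // OpenACC: descriptive. The directive asserts that the iterations are
    // independent (data-race free); the implementation may map them to GPU
    // threads, multicore threads, SIMD lanes, or any combination.
    void scale_acc(int n, double a, double *x) {
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            x[i] *= a;
    }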

It seems clear that future architectures will be more parallel, and we must be writing our applications in a way that can exploit all that parallelism. Moreover, we should demand that our programming models have a path to performance portability at whatever granularity each model supports. When choosing or designing a parallel programming method, one lesson to apply from the OpenACC experience is to make the model as descriptive and declarative as possible, so the parallelism can be exploited on future targets that you can't even envision now.

Conclusions
I don’t claim that OpenACC or any programming model is the final solution. An application is a collection of algorithms, and there will always be alternate algorithms or data layouts where one version runs better on one type of system and another version runs better on others. This goes back (at least) to the 1995 technical memo cited above. Many algorithms have tuning parameters, such as aspect ratios or tile sizes, where runtime auto-tuning can be used to great effect. Task-based runtime systems break the common bulk-synchronous programming model, allowing for more latency-tolerant communication. All of these approaches need to be explored and exploited. As I write this, the DOE Centers of Excellence are organizing a Workshop on Performance Portability for later this month, which I sadly cannot attend. I wonder what conclusions will be reached there.

About the Author
Michael Wolfe has been a compiler developer for over 40 years in both academia and industry, and has been working on the PGI compilers for the past 20 years. The opinions stated here are those of the author, and do not represent opinions of NVIDIA. Follow Michael’s tweek (weekly tweet) @pgicompilers.
