Compilers and More: Exascale Programming Requirements

By Michael Wolfe

April 14, 2011

Programming at Exascale, Part 3

In an earlier column, I discussed six levels of parallelism that we’ll have in exascale systems: node, socket, core, vector, instruction, and pipeline levels, and said that to reach exascale performance, we need to take advantage of all these levels, since the final performance is the product of them all. In my most recent column, I argued that to be successful at that, we need to effectively expose, express, and exploit parallelism: expose it in the application and algorithms, express it in the language and program, and exploit it in the generated code and at runtime. Exposing parallelism is mostly a creative task, and thus must be done by humans. Expressing parallelism is where we mostly get sidetracked: what language, what kind of parallelism, how will it work with legacy software? Since parallel programming is all about performance, we need to focus on those aspects that would hinder performance, specifically locality and synchronization. Finally, successfully exploiting parallelism means mapping the parallelism exposed in the application and expressed in the program to the parallelism in the hardware. I discussed five dimensions of flexibility: scalability, dynamic parallelism, composability, load balancing, and productivity. In this column, the last of a three-part series, I’ll give my views on what programming at the exascale level is likely to require, and how we can get there from where we are today. My belief is that it will take some work, but it’s not a wholesale rewrite of 50 years of high performance expertise.

Exascale Programming: What It Won’t Be

What are the characteristics of a programming strategy for the coming exascale computers? It’s easier to say what it isn’t.

It’s not a library. Encapsulation is a well-known, often-used, and important technique for building large systems. By design, encapsulation hides information about the implementation of the encapsulated object (data structure, algorithm, service) from the user of that object. Encapsulation will continue to be important for many reasons. But information hiding obscures not just the algorithm and data structures, but also performance aspects, such as what kinds of parallelism are used within the encapsulated object and how that interacts with the parallelism of the object’s user, or low-level information such as how the data is laid out and how that affects locality in an algorithm. In particular, opaque low-level libraries (e.g., MPI for data distribution and message passing) hide too much information from the system, preventing any system-level tuning. That’s not to say a useful system won’t be built using MPI as the transport layer, but MPI or POSIX threads or other low-level libraries should not be used directly in the application.

It’s not a C++ class hierarchy or template library. Here, I’m again going out on a limb; there have been and continue to be many sets of useful C++ class libraries intended to raise the level of application programming. Take the C++ standard template library vector<float>; the intent of such a template is to allow a user to define a data structure and get the benefit of reusing any routines in the STL or from elsewhere built on the vector template. But you don’t really understand the performance of the vector datatype; that information hiding means you don’t know whether accesses to a vector<float> V are efficient or not. Compare the vector access V[i] in a loop to the corresponding array access; the array access can often be optimized down to two instructions: load, and increment the pointer to the next address. Moreover, two-dimensional objects built using the vector type (vector<vector<float>>) become even more opaque.
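To make that concrete, here is a minimal sketch (my illustration, not from the original discussion) of the two loops in question; the raw-array version typically compiles down to a load plus a pointer increment per element, while whether the vector version matches it depends on how completely the compiler sees through the abstraction:

   #include <vector>

   // Raw array: the access usually optimizes to load + pointer increment.
   float sum_array(const float* a, int n) {
       float s = 0.0f;
       for (int i = 0; i < n; ++i)
           s += a[i];
       return s;
   }

   // Same loop over vector<float>: identical meaning, but the performance
   // is hidden behind the abstraction until you inspect the generated code.
   float sum_vector(const std::vector<float>& v) {
       float s = 0.0f;
       for (float x : v)
           s += x;
       return s;
   }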

Or take Thrust, an STL-like implementation providing a high-level interface to GPU programming, built on CUDA. You can define device vectors in Thrust as

   using namespace thrust;
   device_vector<float> x(1000);
   device_vector<float> y(1000);
   device_vector<float> z(1000);  // result vector the user must supply

Multiplying two such vectors and then accumulating the result can be done as:

   transform( x.begin(), x.end(), y.begin(), z.begin(), multiplies<float>() );
   r = reduce( z.begin(), z.end(), 0.0f, plus<float>() );

This is certainly easier (more productive?) to write than the equivalent CUDA C (or CUDA Fortran) code, but it’s still far easier to write the Fortran:

   r = sum( x(:) * y(:) ) 

Moreover, when the constructs are part of the language, the compiler can compose and optimize them together. As mentioned in my last column, in the Fortran case, the compiler can generate code for the multiply and then accumulate the result without requiring an intermediate vector result. With the C++ library, the code for the transform method doesn’t know that its result will immediately be accumulated, so the method or (as in this case) the user has to provide a result vector. The only tool the compiler has to optimize class library calls is inlining, and it’s simply not enough to recover the performance lost by the abstraction. There have been some efforts to use run-time code generation, building the expression tree from the method calls, then generating the optimized (and composed) code from the whole expression tree; this was the technology behind RapidMind, which is now being used in Intel’s Array Building Blocks (ArBB). Such mechanisms are promising, but what we really want is a way to define new data types and describe their operations to the compiler so that the compiler can reason about them, compose them, reorder them, and so on; currently, the definition is basically in terms of C code, which is not expressive enough. There’s a research project just waiting to happen.
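To make the composition point concrete, here is a sketch (mine, not taken from the column) of the fused loop a Fortran compiler can generate for r = sum( x(:) * y(:) ): the multiply and the reduction are combined, and no intermediate vector is ever materialized:

   // Fused multiply-and-accumulate: what the compiler's composition buys.
   float fused_sum_product(const float* x, const float* y, int n) {
       float r = 0.0f;
       for (int i = 0; i < n; ++i)
           r += x[i] * y[i];   // no temporary vector z required
       return r;
   }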

It’s not a domain-specific language. I really like the idea of DSLs, of embedding domain knowledge in the language and using that knowledge when generating and optimizing the code. However, languages, real languages, are big projects; DSLs are (by definition) specialized, and hence don’t have a large enough user community to support the production, maintenance, and continuing development of the language and all the tools needed to support it. We can’t expect language implementors (like PGI) to take on the development and continuing support of a plethora of languages, any more than we should expect user communities to each design, implement, and then continue to update, enhance, tune, and optimize a language implementation with each new processor release from Intel. A possible alternative approach would be to implement a language for building DSLs, supported by a language vendor, including interfaces to debuggers, performance tools, editors, and so on. The various user communities would then be somewhat insulated from the details of a performance-oriented solution, and the vendor would avoid falling into the many-languages trap. There’s another potential research project.

It’s not OpenCL. OpenCL may be a necessary step towards heterogeneous programming, but it’s not the final answer. It’s very low level, “close to the metal”, as even the language designers admit. As with MPI, we may be able to build on OpenCL, but it’s not sufficient.

It’s not a whole new language. New languages have a high barrier to entry; most programmers avoid adopting a new language for fear that it will die, unless the language meets some need better than anything else, or until it has survived long enough to ameliorate the fear. But I think a new language is not called for here. We may benefit from some new features in existing languages, and maybe from new ways of building programs in those languages, but most new languages really don’t add much semantically beyond managed memory.

It’s not easy. I’ve argued before that parallel programming is not easy, won’t be, and can’t be made easy. The idea of making parallel programming easy is silly.

It’s not just parallelism. Parallelism is an important aspect, perhaps the dominant aspect, but the key isn’t parallelism, it’s performance. A bad parallel algorithm doesn’t run fast just because it’s parallel. A bad implementation of a good parallel algorithm will also be slow. It’s quite easy to write slow parallel programs; this was the key failure (my opinion) of High Performance Fortran. So our programming mechanism will focus on performance, where parallelism is one aspect (locality and synchronization being two more).

Exascale Programming: What It Is

So what do we want and need when programming at exascale from whatever programming environment we get? Here is my bucket list:

  • It supports all levels of parallelism, from node parallelism down to vector and pipeline parallelism, effectively. Support is a big word here; it has to allow for a programming model that an application developer can use to think about what kinds of parallelism will map well at different levels, that a programmer can use to write a program that can be mapped well at different levels, and that the implementation (compiler and runtime) can use to exploit the parallelism. We have this today, clumsily, with different mechanisms for different levels; a bit more integration would take us a long way.
  • It can map an expression of program parallelism (a parallel loop, say) to different levels of hardware parallelism (across nodes, or to a vector unit) depending on the target; the first sketch after this list illustrates the idea. This will make it scalable up and down, from exascale to laptop. There was a great deal of work on the SISAL language to efficiently scalarize an implicitly parallel language, which turned out to be largely the dual of the parallelizing compiler problem. Such work will be part of this parallelism remapping. Remapping node-level parallelism may require changing the data distribution per node; today, this is done at the application level. We should be able to specify which parameters of the program depend on which aspects of the target machine, so the system can do the remapping.
  • It supports the programmer with lots of feedback. Vectorizing compilers have been very successful for over 35 years in delivering good vector performance from sequential loops because the compilers tell the programmer when they are successful, and more importantly, when and why they fail. This is essentially performance feedback. We are in the business of developing high performance applications, and we should be notified when we are using constructs that will restrict our performance. Static feedback and useful dynamic feedback will both be critical.
  • It supports dynamic parallelism, creating parallel tasks and threads when needed; the second sketch after this list gives the flavor. There are many successful and useful implementations of dynamic parallelism, some limited (OpenMP) and some more aggressive (Cilk). Dynamic parallelism is somewhat at odds with locality and synchronization optimization. With a work-stealing scheduler, an idle worker will steal a work item from the queue of another worker; however, that work item may have been placed on that worker’s queue because that’s where its data is, or because it depends on some other work item also assigned to that worker. Still, without constructs for dynamic parallelism, we end up micromanaging thread-level parallelism in the constructs we do have.
  • It efficiently composes abstract operations, as I discussed in my previous column; whether these are native to the language, or abstract operations defined by a user or in a library, the implementation must be able to combine them naturally. Perhaps, when we define abstract operations, we need a mechanism to describe how they can compose with others. Many now-standard compiler optimizations fall into composition, such as loop vectorization and loop fusion. We need more investigation about what composing abstract operations means, beyond simply inlining.
  • It is self-balancing and self-tuning. This involves runtime introspection and behavior modification, and means the parameters of the data and work distribution must be exposed to the system in order to be modified. Examples include changing the tile sizes for tiled nested loops optimized for cache locality (the third sketch after this list shows the tiling case), or changing the data distribution when the workload is not uniform across the domain. Such behavior modification has been demonstrated in many systems, though few are integrated with the programming language and its implementation.
  • It must be resilient. The big systems are, many believe, going to be in partial failure mode much of the time. This presents challenges for the system manager and programmer. Expecting the entire system to be working, taking checkpoints and restoring from a failure point will not be efficient if failures are the norm. Some of the necessary features must be supported by the hardware (getting data off a node with a failed processor; early failure detection). Other features could be supported by some of the runtime features we develop for other reasons (redistributing data to working nodes; reserving some nodes to serve as online replacements). Such a system can survive and continue beyond many failures.
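Below are three short sketches, all illustrations of mine rather than mechanisms proposed here, keyed to the bullets above. The first shows the remapping idea with OpenMP as a stand-in: one parallel loop that the implementation can map to threads across cores, to SIMD lanes, or to both, from the same source.

   // One expression of parallelism, mappable to several hardware levels:
   // "omp parallel for" spreads iterations across cores, "simd" maps them
   // onto vector lanes; the source stays the same either way.
   void saxpy(int n, float a, const float* x, float* y) {
   #pragma omp parallel for simd
       for (int i = 0; i < n; ++i)
           y[i] = a * x[i] + y[i];
   }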
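The second sketch shows dynamic parallelism in plain C++, with std::async standing in for a work-stealing runtime: tasks are created as the computation unfolds, and the scheduler is free to run a child task on a different worker, possibly far from the data it touches, which is exactly the locality tension noted above.

   #include <cstddef>
   #include <functional>
   #include <future>
   #include <numeric>
   #include <vector>

   // Recursively spawned tasks: the runtime decides where each half runs.
   long long psum(const std::vector<long long>& v, std::size_t lo, std::size_t hi) {
       if (hi - lo < 4096)   // small chunk: run it sequentially, in place
           return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
       std::size_t mid = lo + (hi - lo) / 2;
       auto left = std::async(std::launch::async, psum, std::cref(v), lo, mid);
       long long right = psum(v, mid, hi);   // keep half in this worker
       return left.get() + right;            // join the dynamic task
   }

Calling psum(v, 0, v.size()) on a million-element vector spawns a few hundred tasks; where each one runs, and how far from its data, is entirely the scheduler’s decision.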
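The third sketch shows what exposing the parameters means for the tiling example: the tile size of a cache-blocked transpose is an ordinary runtime argument, so an introspective runtime could re-tune it between calls rather than having it baked into the code.

   #include <algorithm>

   // Cache-blocked transpose; 'tile' is exposed so a self-tuning runtime
   // can adjust it to the cache behavior it observes.
   void transpose(const float* a, float* b, int n, int tile) {
       for (int ii = 0; ii < n; ii += tile)
           for (int jj = 0; jj < n; jj += tile)
               for (int i = ii; i < std::min(ii + tile, n); ++i)
                   for (int j = jj; j < std::min(jj + tile, n); ++j)
                       b[j * n + i] = a[i * n + j];   // b = transpose of a
   }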

Most of these points (except for the last) have been researched and implemented in some form already, and could be reproduced with current technology (and enough motivation) in Fortran, C++, or whatever language you want. We have to extend the programming model to expose performance aspects and perhaps resilience aspects, so the user can guide how the system (compiler plus runtime) implements the program. We often get focused on either abstracting away so much that we lose sight of performance (as happened with High Performance Fortran), or we get so tied up with performance that we focus too much on details of each target machine (as happens today with OpenCL and CUDA). We need to let the programmer do the creative parts, and let the system do the mechanical work.

Final Note: This series of columns is an expanded form of the material from the PGI Exhibitor Forum presentation at SC10 in New Orleans. If you were there, you can tell me whether it’s more informative (or entertaining) in written or verbal form.

About the Author

Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
