Compilers and More: Exascale Programming Requirements

By Michael Wolfe

April 14, 2011

Programming at Exascale, Part 3

In an earlier column, I discussed the six levels of parallelism that we’ll have in exascale systems (node, socket, core, vector, instruction, and pipeline), and said that to reach exascale performance we need to take advantage of all of them, since the final performance is the product of them all. In my most recent column, I argued that to be successful at that, we need to effectively expose, express, and exploit parallelism: expose it in the application and algorithms, express it in the language and program, and exploit it in the generated code and at runtime. Exposing parallelism is mostly a creative task, and thus must be done by humans. Expressing parallelism is where we mostly get sidetracked: what language, what kind of parallelism, how will it work with legacy software? Since parallel programming is all about performance, we need to focus on those aspects that would hinder performance, specifically locality and synchronization. Finally, successfully exploiting parallelism means mapping the parallelism exposed in the application and expressed in the program to the parallelism in the hardware. I discussed five dimensions of flexibility: scalability, dynamic parallelism, composability, load balancing, and productivity. In this column, the last of a three-part series, I’ll give my views on what programming at the exascale level is likely to require, and how we can get there from where we are today. My belief is that it will take some work, but it won’t require a wholesale rewrite of 50 years of high performance expertise.

Exascale Programming: What It Won’t Be

What are the characteristics of a programming strategy for the coming exascale computers? It’s easier to say what it isn’t.

It’s not a library. Encapsulation is a well-known, often-used, and important technique for building large systems. By design, encapsulation hides information about the implementation of the encapsulated object (data structure, algorithm, service) from the user of that object. Encapsulation will continue to be important for many reasons. But information hiding obscures not just the algorithm and data structures, but also performance aspects, such as what kinds of parallelism are used within the encapsulated object and how that interacts with the parallelism of the user of that object, or low-level information such as how the data is laid out and how that affects locality in an algorithm. In particular, opaque low-level libraries (e.g., MPI for data distribution and message passing) hide too much information from the system, preventing any system-level tuning. That’s not to say a useful system won’t be built using MPI as the transport layer, but MPI or POSIX threads or other low-level libraries should not be directly used in the application.

It’s not a C++ class hierarchy or template library. Here, I’m again going out on a limb; there have been and continue to be many sets of useful C++ class libraries intended to raise the level of application programming. Take the C++ Standard Template Library vector; the intent of such a template is to allow a user to define a data structure and get the benefit of reusing any routines in the STL or from elsewhere built on the vector template. But you don’t really understand the performance of the vector datatype; that information hiding means you don’t know whether accesses to a vector<float> V are efficient or not. Compare the vector access V[i] to the corresponding array access in a loop; the array access can often be optimized down to two instructions: a load, and an increment of the pointer to the next address. Moreover, two-dimensional objects built from the vector type (vector<vector<float>>) are even more opaque.
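To make the cost difference concrete, here is a small C++ sketch of my own (the names and sizes are made up, not taken from any particular library): the nested-vector version hides the layout behind one heap allocation per row, while the flat array makes the indexing, and hence the cost, visible to both the reader and the compiler.

   #include <vector>
   #include <cstddef>

   // Opaque layout: each row of the nested vector is a separate heap
   // allocation, so m[i][j] costs two pointer dereferences and rows are
   // not contiguous in memory.
   float sum_nested(const std::vector<std::vector<float>> &m) {
       float s = 0.0f;
       for (std::size_t i = 0; i < m.size(); ++i)
           for (std::size_t j = 0; j < m[i].size(); ++j)
               s += m[i][j];
       return s;
   }

   // Transparent layout: one contiguous block with explicit indexing; the
   // inner access can be optimized down to a load and a pointer increment.
   float sum_flat(const float *m, std::size_t rows, std::size_t cols) {
       float s = 0.0f;
       for (std::size_t i = 0; i < rows; ++i)
           for (std::size_t j = 0; j < cols; ++j)
               s += m[i * cols + j];
       return s;
   }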

Or take Thrust, an STL-like implementation providing a high-level interface to GPU programming, built on CUDA. You can define two vectors in Thrust as

   using namespace thrust;
   device_vector<float> x(1000);
   device_vector<float> y(1000);
   device_vector<float> z(1000);

Multiplying two such vectors and then accumulating the result can be done as:

   transform( x.begin(), x.end(), y.begin(), z.begin(), multiplies<float>() );
   r = reduce( z.begin(), z.end(), 0.0f, plus<float>() );

This is certainly easier (more productive?) to write than the equivalent CUDA C (or CUDA Fortran) code, but it’s still far easier to write the Fortran:

   r = sum( x(:) * y(:) ) 

Moreover, when the constructs are part of the language, the compiler can compose and optimize them together. As mentioned in my last column, in the Fortran case the compiler can generate code for the multiply and then accumulate the result without requiring an intermediate vector result. With the C++ library, the code for the transform method doesn’t know that its result will immediately be accumulated, so the method or (as in this case) the user has to provide a result vector. The only tool the compiler has to optimize class library calls is inlining, and it’s simply not enough to recover the performance lost by the abstraction. There have been some efforts to use run-time code generation, building the expression tree from the method calls and then generating the optimized (and composed) code from the whole expression tree; this was the technology behind RapidMind, which is now being used in Intel’s Array Building Blocks (ArBB). Such mechanisms are promising, but what we really want is a way to define new data types and describe operations to the compiler in a way that lets the compiler reason about them, compose them, reorder them, and so on; currently, the definition is basically in terms of C code, which is not expressive enough. There’s a research project just waiting to happen.
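To see what composition buys, here is the same multiply-and-accumulate written two ways in plain C++ (my illustration, using only standard headers): the first version materializes the intermediate vector, just as the Thrust code above must; the second fuses the two passes into one with inner_product, which is roughly the code a Fortran compiler generates for sum(x(:)*y(:)).

   #include <algorithm>
   #include <cstdio>
   #include <functional>
   #include <numeric>
   #include <vector>

   int main() {
       std::vector<float> x(1000, 1.0f), y(1000, 2.0f), z(1000);

       // Unfused, library style: transform writes the intermediate vector
       // z, then a second pass accumulates it.
       std::transform(x.begin(), x.end(), y.begin(), z.begin(),
                      std::multiplies<float>());
       float r1 = std::accumulate(z.begin(), z.end(), 0.0f);

       // Fused by hand: one pass, no intermediate vector.
       float r2 = std::inner_product(x.begin(), x.end(), y.begin(), 0.0f);

       std::printf("%f %f\n", r1, r2);
       return 0;
   }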

It’s not a domain-specific language. I really like the idea of DSLs, of embedding domain knowledge in the language and using that knowledge when generating and optimizing the code. However, languages, real languages, are big projects; DSLs are (by definition) specialized, and hence don’t have a large enough user community to support the production, maintenance, and continuing development of the language and all the tools needed to support it. We can’t expect language implementors (like PGI) to take on the development and continuing support of a plethora of languages, any more than we should expect user communities to each design, implement, and then continue to update, enhance, tune and optimize the language implementation with each new processor release from Intel. A possible alternative approach would be to implement a language to support DSLs, supported by a language vendor, including interfacing to debuggers, performance tools, editors, and so on. The various user communities would then be somewhat insulated from the details of a performance-oriented solution, and the vendor would avoid falling into the many-languages trap. There’s another potential research project.

It’s not OpenCL. OpenCL may be a necessary step towards heterogeneous programming, but it’s not the final answer. It’s very low level, “close to the metal”, as even the language designers admit. As with MPI, we may be able to build on OpenCL, but it’s not sufficient.

It’s not a whole new language. New languages have a high barrier to entry; most programmers avoid adopting a new language for fear that it will die, unless the language meets some need better than anything else, or until it has survived long enough to ameliorate the fear. But I think a new language is not called for here. We may benefit from some new features in existing languages, and maybe new ways to build programs in those languages, but most new languages really don’t add much, semantically, beyond managed memory.

It’s not easy. I’ve argued before that parallel programming is not easy, won’t be, and can’t be made easy. The idea of making parallel programming easy is silly.

It’s not just parallelism. Parallelism is an important aspect, perhaps the dominant aspect, but the key isn’t parallelism, it’s performance. A bad parallel algorithm doesn’t run fast just because it’s parallel. A bad implementation of a good parallel algorithm will also be slow. It’s quite easy to write slow parallel programs; this was the key failure (in my opinion) of High Performance Fortran. So our programming mechanism will focus on performance, where parallelism is one aspect (locality and synchronization being two more).

Exascale Programming: What It Is

So what do we want and need when programming at exascale from whatever programming environment we get? Here is my bucket list:

  • It supports all levels of parallelism, from node parallelism down to vector and pipeline parallelism, effectively. Support is a big word here; it has to allow for a programming model that an application developer can use to think about what kinds of parallelism will map well at different levels, that a programmer can use to write a program that can be mapped well at different levels, and that the implementation (compiler and runtime) can use to exploit the parallelism. We have this today, clumsily, with different mechanisms for different levels; a bit more integration would take us a long way.
  • It can map an expression of program parallelism (a parallel loop, say) to different levels of hardware parallelism (across nodes, or to a vector unit) depending on the target. This will make it scalable up and down, from exascale to laptop. There was a great deal of work on the SISAL language to efficiently scalarize an implicitly parallel language, which turned out to be largely the dual of the parallelizing compiler problem. Such work will be part of this parallelism remapping. Remapping node-level parallelism may require changing the data distribution per node; today, this is done at the application level. We should be able to specify what parameters of the program depend on which aspects of the target machine, so the system can do the remapping.
  • It supports the programmer with lots of feedback. Vectorizing compilers have been very successful for over 35 years in delivering good vector performance from sequential loops because the compilers tell the programmer when they are successful, and more importantly, when and why they fail. This is essentially performance feedback. We are in the business of developing high performance applications, and we should be notified when we are using constructs that will restrict our performance. Static feedback and useful dynamic feedback will both be critical.
  • It supports dynamic parallelism, creating parallel tasks and threads when needed. There are many successful and useful implementations of dynamic parallelism, some limited (OpenMP) and some more aggressive (Cilk). Dynamic parallelism is somewhat at odds with locality and synchronization optimization. Using a work-stealing scheduler, an idle worker will steal a work item from the queue of another worker. But that work item may have been placed on that worker’s queue because that’s where its data is, or because it depends on some other work item also assigned to that worker. Still, without constructs for dynamic parallelism, we end up micromanaging thread-level parallelism in the constructs we do have (a minimal sketch of dynamic task creation follows this list).
  • It efficiently composes abstract operations, as I discussed in my previous column; whether these are native to the language, or abstract operations defined by a user or in a library, the implementation must be able to combine them naturally. Perhaps, when we define abstract operations, we need a mechanism to describe how they can compose with others. Many now-standard compiler optimizations fall into composition, such as loop vectorization and loop fusion. We need more investigation about what composing abstract operations means, beyond simply inlining.
  • It is self-balancing and self-tuning. This involves runtime introspection and behavior modification, and means the parameters or data and work distribution must be exposed to the system in order to be modified. Examples include changing the tile sizes for tiled nested loops when optimized for cache locality, or changing the data distributions when the work load is not uniform across the domain. Such behavior modification has been demonstrated in many systems, though not many integrated with the programming language and its implementation.
  • It must be resilient. The big systems are, many believe, going to be in partial failure mode much of the time. This presents challenges for the system manager and programmer. Expecting the entire system to be working, and relying on checkpoints and restarts after each failure, will not be efficient if failures are the norm. Some of the necessary features must be supported by the hardware (getting data off a node with a failed processor; early failure detection). Other features could be supported by some of the runtime features we develop for other reasons (redistributing data to working nodes; reserving some nodes to serve as online replacements). Such a system can survive and continue beyond many failures.
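As promised in the dynamic parallelism item above, here is a minimal sketch using OpenMP tasks (the toy Fibonacci recursion, my example rather than anything from a real application): parallel work is created as the recursion discovers it, rather than being fixed at loop entry, and the cutoff bounds the task-creation overhead.

   #include <cstdio>

   // Each recursive call may spawn a new task; small subproblems run
   // inline (the if clause) so task-creation overhead stays bounded.
   static long fib(long n) {
       if (n < 2) return n;
       long a, b;
       #pragma omp task shared(a) if(n > 20)
       a = fib(n - 1);
       b = fib(n - 2);
       #pragma omp taskwait   // join the dynamically created task
       return a + b;
   }

   int main() {
       long r;
       #pragma omp parallel
       #pragma omp single     // one thread seeds the task tree
       r = fib(30);
       std::printf("%ld\n", r);
       return 0;
   }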

Most of these points (except for the last) have been researched and implemented in some form already, and could be reproduced with current technology (and enough motivation) in Fortran, C++, or whatever language you want. We have to extend the programming model to expose performance aspects and perhaps resilience aspects, so the user can guide how the system (compiler plus runtime) implements the program. We often get focused on either abstracting away so much that we lose sight of performance (as happened with High Performance Fortran), or we get so tied up with performance that we focus too much on details of each target machine (as happens today with OpenCL and CUDA). We need to let the programmer do the creative parts, and let the system do the mechanical work.
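As one small example of exposing such a performance parameter, here is a hypothetical sketch (mine, not an existing API) of the tile-size case from the list above: the tile size arrives as a runtime argument that a self-tuning runtime could adjust to the cache, instead of being a constant baked into the loop nest.

   #include <cstddef>

   // Tiled loop nest with the tile size exposed as a runtime parameter;
   // a self-tuning runtime could pick 'tile' so a tile's working set
   // fits in cache, and change it as conditions change.
   void scale_tiled(float *a, std::size_t n, float s, std::size_t tile) {
       for (std::size_t ii = 0; ii < n; ii += tile)
           for (std::size_t jj = 0; jj < n; jj += tile)
               for (std::size_t i = ii; i < ii + tile && i < n; ++i)
                   for (std::size_t j = jj; j < jj + tile && j < n; ++j)
                       a[i * n + j] *= s;
   }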

Final Note: This series of columns is an expanded form of the material from the PGI Exhibitor Forum presentation at SC10 in New Orleans. If you were there, you can tell me whether it’s more informative (or entertaining) in written or verbal form.

About the Author

Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
