Exascale Programming: Adapting What We Have Can (and Must) Work

By Michael A. Heroux, Sandia National Laboratories

January 14, 2016

The demands of massive concurrency and increased resilience required for effective exascale computing have led to claims that our existing approach to application programming must be replaced. In this article I argue that disruption is inevitable but will not require new languages or programming environments. Instead disruption will be in application design, introducing new control layers that will provide the concurrency, adaptability and resilience features we need in order to achieve effective exascale performance levels.

Before starting a discussion of parallel programming futures, we should define some concepts that are often vaguely or multiply defined. First, a model is an abstract system for reasoning about design and implementation: programming models help us reason about writing programs, and execution models help us reason about how a program will run, particularly when thinking about parallel execution. Second, programming and execution environments are the concrete toolsets we use to implement and run a program. Finally, a computer language and its companion libraries (Fortran, C++, MPI, etc.) can be used to encode many programming models and can be compiled to run on many execution environments.

In this article I argue that programming and execution environments are changing to meet the future needs of exascale systems. Libraries (especially MPI) and OpenMP features are evolving to provide credible exascale programming environments for C++, C and Fortran that are also portable and sustainable. In addition, C++ is evolving to make embedded domain-specific language (EDSL) enhancements convenient to develop and use. EDSLs provide programming model and environment extensions to support parallel programming while remaining portable, since the extensions are written in standard C++ syntax. EDSLs can also be targeted for further optimization by the execution environment. EDSLs provide the most credible path to language support for new parallel programming models, syntax and execution.
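
To make the EDSL idea concrete, the following is a minimal sketch, in standard C++ only, of what such an extension can look like; the names (simple_edsl, parallel_for, and the Serial and Threads execution-space tags) are illustrative inventions for this article, not drawn from any particular library. The user writes an ordinary C++ lambda, and the execution-space tag selects how the loop is run.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

namespace simple_edsl {

// Execution-space tags let the same user code target different backends.
struct Serial {};
struct Threads {};

// Serial backend: a plain loop.
template <typename Body>
void parallel_for(Serial, std::size_t n, Body body) {
  for (std::size_t i = 0; i < n; ++i) body(i);
}

// Threaded backend: cyclically distribute the index range over hardware threads.
template <typename Body>
void parallel_for(Threads, std::size_t n, Body body) {
  const unsigned nt = std::max(1u, std::thread::hardware_concurrency());
  std::vector<std::thread> workers;
  for (unsigned t = 0; t < nt; ++t)
    workers.emplace_back([=] {
      for (std::size_t i = t; i < n; i += nt) body(i);
    });
  for (auto& w : workers) w.join();
}

}  // namespace simple_edsl

int main() {
  std::vector<double> x(1000, 1.0), y(1000, 2.0);
  // User code is ordinary, portable C++; only the execution-space tag changes.
  simple_edsl::parallel_for(simple_edsl::Threads{}, x.size(),
                            [&](std::size_t i) { y[i] += 2.0 * x[i]; });
  return 0;
}

Production abstraction layers discussed later in this article (e.g., Kokkos) follow the same pattern with far more sophistication, and still require no new language.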

Adapting What We Have Can (and Must) Work
C++ (with C as an important subset, and with OpenMP extensions) has become an essential element of scientific computing. C++ provides suitable abstractions and extensibility for defining EDSLs, while at the same time permitting explicit data references for efficient execution and interoperability with Fortran. It also has large user communities and an active standards committee whose interests align with high performance scientific computing. New features in the 2011, 2014 and 2017 standards position C++ to be even more effective for parallel computing and for extending EDSL capabilities. With the 2020 standard, C++ should contain features for the most common types of on-node parallelism, including hierarchical and task-graph parallelism. Furthermore, the emergence of the modular, open compilation tools LLVM and Clang gives research and vendor communities a rich platform for new R&D in production-quality compiler environments, enabling development of future language features, pluggable tools and custom optimization passes.
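
As a small illustration of the standard parallelism already landing in C++, the sketch below uses the C++17 parallel algorithms; it assumes a standard library that implements the parallel execution policies (with GCC, for example, this additionally requires linking against TBB).

#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
  std::vector<double> x(1 << 20, 1.0), y(1 << 20, 2.0);

  // Data-parallel update y = 2*x + y, run under the parallel execution policy.
  std::transform(std::execution::par, x.begin(), x.end(), y.begin(), y.begin(),
                 [](double xi, double yi) { return 2.0 * xi + yi; });

  // Parallel reduction over the result.
  const double sum = std::reduce(std::execution::par, y.begin(), y.end(), 0.0);
  return sum > 0.0 ? 0 : 1;
}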

OpenMP is evolving to address tasking requirements, complemented by the OpenACC fork (whose capabilities are being integrated into OpenMP) to address accelerators. OpenMP has a committed two-year release cycle, with approved features published in alternate years. Simple sprinkling of OpenMP/OpenACC directives into an existing code has given these tools a bad reputation, but for fundamentally refactored applications, performance can be very good. OpenMP also promises better sustainability, especially compared to explicit pthread, CUDA or OpenCL programming approaches, which require retuning with each new generation of hardware.
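
As a hedged sketch of what directive-based refactoring looks like, the fragment below expresses the same loop once with OpenMP host threading and once with OpenMP target offload for an accelerator; the function names are illustrative, and the offload version requires a compiler and runtime built with device support.

#include <cstddef>
#include <vector>

// Host version: thread-parallel loop across the cores of a node.
void axpy_host(double a, const std::vector<double>& x, std::vector<double>& y) {
  #pragma omp parallel for
  for (std::size_t i = 0; i < x.size(); ++i) y[i] += a * x[i];
}

// Accelerator version: map clauses manage data in the device environment.
void axpy_device(double a, const double* x, double* y, std::size_t n) {
  #pragma omp target teams distribute parallel for map(to: x[0:n]) map(tofrom: y[0:n])
  for (std::size_t i = 0; i < n; ++i) y[i] += a * x[i];
}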

MPI is evolving as well, and has always been more a portability layer than a programming API. In well-designed applications, explicit MPI calls are encapsulated in abstraction layers, e.g., exchangeGhostValues(), which most application developers call instead of MPI functions. Inclusion of asynchronous global and neighborhood collectives enables implementation of latency-hiding algorithms, and MPI shared-memory features enable use of shared memory between ranks and thread-like shared-memory programming.
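
A minimal sketch of such an abstraction layer follows, assuming point-to-point nonblocking messages rather than the newer neighborhood collectives; the class name, member names and fixed-size buffers are illustrative, not taken from any particular application.

#include <mpi.h>
#include <cstddef>
#include <vector>

class GhostExchanger {
 public:
  GhostExchanger(MPI_Comm comm, std::vector<int> neighbors)
      : comm_(comm), neighbors_(std::move(neighbors)) {}

  // Post nonblocking receives and sends so interior computation can overlap
  // with communication (latency hiding).
  void beginExchange(const std::vector<double>& sendBuf,
                     std::vector<double>& recvBuf, int countPerNeighbor) {
    requests_.clear();
    for (std::size_t k = 0; k < neighbors_.size(); ++k) {
      MPI_Request r;
      MPI_Irecv(&recvBuf[k * countPerNeighbor], countPerNeighbor, MPI_DOUBLE,
                neighbors_[k], 0, comm_, &r);
      requests_.push_back(r);
      MPI_Isend(&sendBuf[k * countPerNeighbor], countPerNeighbor, MPI_DOUBLE,
                neighbors_[k], 0, comm_, &r);
      requests_.push_back(r);
    }
  }

  // Complete the exchange before any ghost values are read.
  void endExchange() {
    MPI_Waitall(static_cast<int>(requests_.size()), requests_.data(),
                MPI_STATUSES_IGNORE);
  }

 private:
  MPI_Comm comm_;
  std::vector<int> neighbors_;
  std::vector<MPI_Request> requests_;
};

An application-level exchangeGhostValues() can then be expressed as beginExchange(), interior computation, and endExchange(), keeping all MPI specifics out of the domain scientist's code.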

Fortran is essential for exascale programming. The explicit policy of non-overlapping arrays (enabling compilers to more easily vectorize and parallelize automatically), simple loop syntax, and longstanding support for robust real and complex arithmetic, along with the vast quantity of encoded scientific knowledge, mean that Fortran will remain the primary development language for many domain scientists who write software. Even as clean-slate Fortran development efforts decline, the legacy Fortran software base, and its ongoing refactoring and extension, will remain essential for decades to come.

Execution environments must undergo more substantial changes than programming languages and environments. Current runtime systems are very good at concurrent execution, but are not designed for lightweight threading (except on GPUs) or locality-aware task mapping. Thread-scalable computing will require better, more transparent on-node thread-parallel environments. Fortunately, we have seen much recent progress underneath existing programming languages and environments.

Adding Tasking is Critical
Although programming and execution environments are evolving smoothly toward exascale capabilities, we do have disruptive changes ahead. Most scalable parallel applications today have simple data and work decompositions: each MPI rank owns a static portion of large data objects, e.g., a subdomain of a large distributed global domain, and each rank executes its code sequentially (potentially vectorizing), or with modest thread-parallel capabilities. This approach works on existing NUMA multicore systems by assigning multiple MPI ranks to a node and using OpenMP across a handful of cores, but its performance is not sustainable as core counts continue to increase.

Tasking, with work granularities sufficiently large to make effective use of one or a few cores, must be added to most applications in order to sustain performance improvement as concurrency demands increase. Specifically, tasking requires one or more levels of additional decomposition (at least logically) of data objects, e.g., creating multiple patches or tiles from each MPI subdomain and assigning tasks to execute concurrently on these patches. Within a single shared memory node, tasks can in principle cooperate closely, executing dataflow patterns, sharing data and otherwise collaborating in lightweight parallel computation.
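
The following is a minimal sketch, assuming an OpenMP tasking runtime, of this second level of decomposition: the rank's subdomain is split into patches and one task is generated per patch. The Patch, updatePatch and updateSubdomain names are illustrative.

#include <cstddef>
#include <vector>

struct Patch {
  std::vector<double> values;  // data owned by one patch/tile of the subdomain
};

void updatePatch(Patch& p) {
  for (double& v : p.values) v *= 0.5;  // stand-in for the real per-patch kernel
}

void updateSubdomain(std::vector<Patch>& patches) {
  #pragma omp parallel
  #pragma omp single
  {
    for (std::size_t i = 0; i < patches.size(); ++i) {
      // One task per patch, sized to keep one or a few cores busy.
      #pragma omp task firstprivate(i) shared(patches)
      updatePatch(patches[i]);
    }
    #pragma omp taskwait
  }
}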

A tasking layer in an application enables portability across GPU (where the GPU gets a big patch and handles its task concurrency itself), multicore and manycore devices, and works with heterogeneous device combinations if task-executed code is written using OpenMP/OpenACC or uses compile-time abstraction layers such as the Kokkos library to compile to each specific device type. This tasking layer can also be implemented using a second layer of message passing. Furthermore, a tasking layer permits exploitation of new sources of parallelism: fine-grain functional parallelism and pipeline, wavefront, and parallel-prefix execution patterns become feasible because of shared memory and lightweight control transfer. These new sources of parallelism are essential as we exhaust traditional sources such as SPMD data parallelism and ensembles. It is worth noting that tasking designs within application codes do not impose the use of particular parallel programming languages or environments and can in principle permit combinations of several approaches in a single executable program.
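
As an illustration of a compile-time abstraction layer, here is a hedged sketch using the Kokkos library mentioned above: the same parallel_for source compiles to CUDA, OpenMP or serial backends depending on the execution space configured at build time.

#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1 << 20;
    // Views allocate in the memory space that matches the execution space.
    Kokkos::View<double*> x("x", n), y("y", n);
    Kokkos::deep_copy(x, 1.0);
    Kokkos::deep_copy(y, 2.0);

    // One source expression of the kernel; the backend is a build-time choice.
    Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
      y(i) += 2.0 * x(i);
    });
    Kokkos::fence();
  }
  Kokkos::finalize();
  return 0;
}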

Tasking also supports new models and strategies for high bandwidth memory, resilience and load balance. Task work and data can be scoped to fit into a particular memory space. Also, since a parent task has all the state necessary to re-spawn its child tasks, it can establish pre- and post-conditions on state data for child tasks, set timeout conditions, or simply re-spawn and re-queue tasks for better execution flow. These attributes can protect against many failure sources, including silent data corruption, and can improve execution time. All of the innovation required to support this kind of programming is already underway in the C++ language, programming and execution environments. No replacement is required.
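
The sketch below illustrates one such re-spawn strategy using standard C++ futures; it is an illustration under stated assumptions, not a prescription, and runChild and checkPostcondition are hypothetical names. Note that abandoning a truly hung task would additionally require a cooperative cancellation mechanism, since the destructor of a std::async future blocks until the task finishes.

#include <chrono>
#include <future>
#include <stdexcept>

double runChild(double input) { return input * input; }          // child task body
bool checkPostcondition(double result) { return result >= 0.0; }  // post-condition on state data

double runWithRetry(double input, int maxAttempts) {
  using namespace std::chrono_literals;
  for (int attempt = 0; attempt < maxAttempts; ++attempt) {
    // The parent retains all state needed to re-spawn the child.
    std::future<double> f = std::async(std::launch::async, runChild, input);
    if (f.wait_for(500ms) != std::future_status::ready) continue;  // timeout: re-spawn
    const double result = f.get();
    if (checkPostcondition(result)) return result;                 // accept the result
    // Otherwise fall through and re-spawn/re-queue the child task.
  }
  throw std::runtime_error("child task failed repeatedly");
}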

The final important aspect of a tasking layer is that task-executed code is encapsulated within the tasking framework and itself has only modest parallel execution requirements: it should vectorize where possible so that it executes efficiently on a small number of shared-cache cores. As a result, task-executed code can be written in any common HPC language, including Fortran, thus preserving our Fortran code base. In most instances, the task management layer is most effectively written in C++, but a well-designed application can insulate domain scientists from the details of task management and permit them to write new functionality at the task level in much the same way as they write code for MPI-based applications today. The only major added concern is how to encode inter-task dependencies. I think training programmers in futures concepts, which can be used to encapsulate control transfer logic, is perhaps the best way to portably provide this encoding.
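
To make the futures idea concrete, here is a small sketch of encoding an inter-task dependency with standard C++ futures; assembleMatrix and solve are hypothetical task bodies standing in for real application work.

#include <future>
#include <vector>

std::vector<double> assembleMatrix() { return std::vector<double>(100, 1.0); }

double solve(const std::vector<double>& A) {
  double s = 0.0;
  for (double a : A) s += a;  // stand-in for a real solve
  return s;
}

int main() {
  // The future is the dependency: the solve task blocks on the assembly
  // task's result rather than on explicit synchronization written by hand.
  std::future<std::vector<double>> assembled =
      std::async(std::launch::async, assembleMatrix);

  std::future<double> solved = std::async(std::launch::async,
      [a = std::move(assembled)]() mutable { return solve(a.get()); });

  return solved.get() > 0.0 ? 0 : 1;
}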

The Reality of Starting Anew
With the explicit challenge of reaching multi-billion-way concurrency in order to reach exascale performance levels and beyond, many people have argued that we need a clean break from incrementally improving our existing approaches to parallel programming, and such arguments have spurred development and exploration of new parallel programming languages. While such efforts are certainly interesting research, there is no evidence that the HPC community can bring a new language to market in a portable, sustainable way. The lack of traction gained by Chapel, X10 and Fortress, the HPCS languages introduced more than a decade ago, is one indication, but for an even more telling example we need only look at the current state of Fortran.

While Fortran remains an important language for scientific computing, and new lines of Fortran code are still being written, the adoption of new Fortran features is very slow. In 2009 and 2010, the C++-based Trilinos project developed Fortran interface capabilities, called ForTrilinos. Because Trilinos is an object-oriented (OO) collection of libraries, we assumed that the OO features of Fortran 2003 would provide us with natural mappings of Trilinos classes into Fortran equivalents. Over the two-year span of the ForTrilinos effort, we discovered that compiler support for 2003 features was very immature. ForTrilinos developers quickly came to know the handful of compiler developers who worked on these features and, despite close collaboration with them to complete and stabilize the implementation of Fortran 2003 features (in 2010), ForTrilinos stalled and is no longer developed.

The Fortran 2008 standard has similar issues. Co-arrays, an elegant approach to SPMD parallel programming first developed in the early 1990s, are part of the 2008 standard, but an application developer interested in portability cannot use them. In contrast, the C++ standards community is committed to producing a revised standard every three years, and features for the coming standard often appear in vendor compilers before or simultaneously with ratification of the standard.

Yes, Fortran is an important programming language for scientific computing, and it is a language our community owns, but the reality is that the use of new standard Fortran features is very restricted, if portability is paramount. Furthermore, it is the feature set of Fortran 95 that is most valuable to scientific computing. The recent announcement of a revitalized effort to produce a Fortran equivalent to Clang is exciting, especially if the resulting Fortran 95 features are solid and compiled code vectorizes well. However, the anemic adoption of new Fortran standards should serve as strong evidence that any prospects of a new scientific computing language are very slim. Also, the likelihood that the broader computing community will cooperate with us in establishing a new language is low. C++ is the broader community’s answer for high-performance concurrency.

My pessimism about new languages does not mean novelty is impossible. It means that novelty must be introduced as modest extensions to existing capabilities. CUDA took this route, and task-based embedded C++ DSLs show similar promise.

Summary
The exascale performance milestone is approaching, but reaching it requires disruptive changes at all levels of the computing ecosystem, driven by needs for massive concurrency. Scientific application design will require disruptive changes in software architecture, in the form of tasking, in order to address increasing hierarchies of parallelism, take advantage of memory hierarchies and the commodity performance curves of thread count and vectorization, and address resilience. However, the practical realities of our production environments make the possibility of brand-new software ecosystems extremely unlikely. We only have to look at the reality of our Fortran environments to see this.

Fortunately, our existing languages and environments are adapting to support the programming and execution models necessary for exascale performance. The disruption on the path to exascale is being contained to adapting our existing languages and environments, not replacing them.

Author Bio
Michael A. Heroux is a Distinguished Member of the Technical Staff at Sandia National Laboratories, working on new algorithm development and robust parallel implementation of solver components for problems of interest to Sandia and the broader scientific and engineering community. He leads development of the Trilinos Project, an effort to provide state-of-the-art solution methods in a state-of-the-art software framework. Trilinos is an award-winning product, freely available as open source and actively developed by dozens of researchers. Dr. Heroux is also the lead developer and architect of the HPCG benchmark, intended as an alternative ranking for the TOP500 list of computer systems. For more: http://www.sandia.gov/~maherou/biography.html
