Exascale Programming: Adapting What We Have Can (and Must) Work

By Michael A. Heroux, Sandia National Laboratories

January 14, 2016

The demands of massive concurrency and increased resilience required for effective exascale computing have led to claims that our existing approach to application programming must be replaced. In this article I argue that disruption is inevitable but will not require new languages or programming environments. Instead, the disruption will come in application design: new control layers will provide the concurrency, adaptability, and resilience features we need in order to achieve effective exascale performance levels.

Before starting a discussion of parallel programming futures, we should define some concepts that are often vaguely or inconsistently defined. First, a model is an abstract system for reasoning about design and implementation: programming models help us reason about writing programs, and execution models help us reason about how a program will run, particularly when thinking about parallel execution. Programming and execution environments are the concrete toolsets we use to implement and run a program. Finally, a computer language and its companion libraries (Fortran, C++, MPI, etc.) can be used to encode many programming models and can be compiled to run on many execution environments.

In this article I argue that programming and execution environments are changing to meet the future needs of exascale systems. Libraries (especially MPI) and OpenMP features are evolving to provide credible exascale programming environments for C++, C, and Fortran that are also portable and sustainable. In addition, C++ is evolving to make embedded domain-specific language (EDSL) enhancements convenient to develop and use. EDSLs provide programming model and environment extensions to support parallel programming while remaining portable, since the extensions are written in standard C++ syntax. EDSLs can also be targeted for further optimization by the execution environment. EDSLs provide the most credible path to language support for new parallel programming models, syntax, and execution.
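
To make the EDSL idea concrete, here is a minimal sketch; the edsl::parallel_for name and its serial fallback are purely illustrative, not any particular library's API. The "language extension" is just a library function plus a lambda in standard C++, which a real EDSL implementation could map onto threads, vector units, or an accelerator.

```cpp
// Minimal sketch of an embedded DSL construct in standard C++ (hypothetical
// names).  The user-facing "syntax" is plain C++; only the library's
// implementation changes across execution environments.
#include <cstddef>
#include <vector>

namespace edsl {
// Hypothetical parallel-loop construct; a real EDSL would dispatch this to
// threads, vector units, or an accelerator behind the same interface.
template <typename Body>
void parallel_for(std::size_t n, Body body) {
  for (std::size_t i = 0; i < n; ++i) body(i);  // serial fallback shown here
}
}  // namespace edsl

int main() {
  std::vector<double> x(1000, 1.0), y(1000, 2.0);
  const double a = 0.5;
  // An axpy expressed through the EDSL: standard C++, hence portable.
  edsl::parallel_for(x.size(), [&](std::size_t i) { y[i] += a * x[i]; });
  return 0;
}
```

Because the construct is ordinary C++, the same application code compiles everywhere; only the library implementation needs to change for a new execution environment.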

Adapting What We Have Can (and Must) Work
C++ (with C as an important subset, and with OpenMP extensions) has become an essential element of scientific computing. C++ provides suitable abstractions and extensibility for defining EDSLs, while at the same time permitting explicit data references for efficient execution and interoperability with Fortran. It also has large user communities and an active standards committee whose interests align with high-performance scientific computing. New features in the 2011, 2014, and 2017 standards position C++ to be even more effective for parallel computing and for extending EDSL capabilities. With the 2020 standard, C++ should contain features for the most common types of on-node parallelism, including hierarchical and task-graph parallelism. Furthermore, the emergence of the modular, open compilation tools LLVM and Clang gives research and vendor communities a rich platform for new R&D in production-quality compiler environments, enabling development of future language features, pluggable tools, and custom optimization passes.
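
As one example of the direction these standards took, the parallel algorithms ratified in C++17 let a standard algorithm run in parallel simply by passing an execution policy. A minimal sketch (some compilers require linking a threading backend such as TBB for <execution>):

```cpp
// Minimal sketch of C++17 parallel algorithms: the execution policy is the
// only change relative to a serial std::for_each call.
#include <algorithm>
#include <execution>
#include <vector>

int main() {
  std::vector<double> x(1 << 20, 1.0);
  std::for_each(std::execution::par, x.begin(), x.end(),
                [](double& v) { v *= 2.0; });  // element updates may run in parallel
  return 0;
}
```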

OpenMP is evolving to address tasking requirements, complemented by the OpenACC fork (whose capabilities are being integrated into OpenMP) to address accelerators. OpenMP has a committed two-year release cycle, with approved features published in alternate years. Simple sprinkling of OpenMP/OpenACC directives into an existing code has given these tools a bad reputation, but for fundamentally refactored applications, performance can be very good. OpenMP also promises better sustainability, especially compared to explicit pthread, CUDA or OpenCL programming approaches, which require retuning with each new generation of hardware.
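
As a rough illustration of the refactored style, OpenMP 4.0 tasking lets an application express per-patch work as tasks that the runtime schedules, rather than sprinkling loop directives over unmodified code. The patch data structure and kernel below are hypothetical placeholders.

```cpp
// Rough sketch of OpenMP tasking in a refactored code: each (hypothetical)
// patch update becomes a task scheduled by the runtime across the thread team.
#include <cstddef>
#include <vector>

void update_patch(std::vector<double>& patch) {
  for (auto& v : patch) v *= 0.5;  // placeholder per-patch kernel
}

void update_all(std::vector<std::vector<double>>& patches) {
  #pragma omp parallel
  #pragma omp single
  {
    for (std::size_t p = 0; p < patches.size(); ++p) {
      #pragma omp task firstprivate(p)   // one task per patch
      update_patch(patches[p]);
    }
    #pragma omp taskwait                 // wait for all patch tasks
  }
}
```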

MPI is evolving as well and has always been more a portability layer than a programming API. In well-designed applications, explicit MPI calls are encapsulated in abstraction layers, e.g., exchangeGhostValues(), which most application developers call instead of MPI functions. Inclusion of asynchronous global and neighborhood collectives enables implementation of latency-hiding algorithms, and MPI shared memory features enable use of shared memory between ranks and thread-like shared memory programming.
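
A sketch of this encapsulation, with simplified buffers and neighbor lists: application code calls exchangeGhostValues() and never sees MPI, so the wrapper below (built on ordinary nonblocking point-to-point calls) could later be reimplemented with neighborhood collectives or shared-memory windows without touching callers.

```cpp
// Sketch of encapsulating halo exchange behind exchangeGhostValues().
// Buffers and neighbor lists are simplified placeholders.
#include <mpi.h>
#include <cstddef>
#include <vector>

struct GhostExchange {
  std::vector<int> neighbors;                   // ranks we exchange with
  std::vector<std::vector<double>> send, recv;  // one buffer per neighbor

  void exchangeGhostValues(MPI_Comm comm) {
    std::vector<MPI_Request> reqs;
    reqs.reserve(2 * neighbors.size());
    for (std::size_t n = 0; n < neighbors.size(); ++n) {
      MPI_Request r;
      MPI_Irecv(recv[n].data(), static_cast<int>(recv[n].size()), MPI_DOUBLE,
                neighbors[n], 0, comm, &r);
      reqs.push_back(r);
      MPI_Isend(send[n].data(), static_cast<int>(send[n].size()), MPI_DOUBLE,
                neighbors[n], 0, comm, &r);
      reqs.push_back(r);
    }
    MPI_Waitall(static_cast<int>(reqs.size()), reqs.data(), MPI_STATUSES_IGNORE);
  }
};
```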

Fortran is essential for exascale programming. The explicit policy of non-overlapping arrays (enabling compilers to more easily vectorize and parallelize automatically), simple loop syntax, longstanding support for robust real and complex arithmetic, along with the vast quantity of encoded scientific knowledge mean Fortran will remain the primary development language for many domain scientists who write software. Even as clean-slate Fortran development efforts decline, the value of the legacy Fortran software base and its ongoing refactoring and extension will be essential for decades to come.

Execution environments must undergo more substantial changes than programming languages and environments. Current runtime systems are very good at concurrent execution, but they are not designed for lightweight threading (except on GPUs) or locality-aware task mapping. Thread-scalable computing will require better, more transparent on-node thread-parallel environments. Fortunately, we have seen much progress lately, underneath existing programming languages and environments.

Adding Tasking is Critical
Although programming and execution environments are evolving smoothly toward exascale capabilities, we do have disruptive changes ahead. Most scalable parallel applications today have simple data and work decompositions: each MPI rank owns a static portion of large data objects, e.g., a subdomain of a large distributed global domain, and each rank executes its code sequentially (potentially vectorizing), or with modest thread-parallel capabilities. This approach works on existing NUMA multicore systems by assigning multiple MPI ranks to a node and using OpenMP across a handful of cores, but its performance is not sustainable as core counts continue to increase.

Tasking, with work granularities sufficiently large to make effective use of one or a few cores, must be added to most applications in order to sustain performance improvement as concurrency demands increase. Specifically, tasking requires one or more levels of additional decomposition (at least logically) of data objects, e.g., create multiple patches or tiles from each MPI subdomain, and assign tasks to execute concurrently on these patches. Within a single shared memory node, tasks can in principle cooperate closely, executing dataflow patterns, sharing data and otherwise collaborating in lightweight parallel computation.
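
A sketch of this second level of decomposition follows, with hypothetical patch and kernel names and std::async standing in for whatever tasking runtime an application adopts.

```cpp
// Sketch of the second decomposition level: the rank's subdomain is split
// into patches (hypothetical structure), and each patch update is a task.
#include <future>
#include <vector>

struct Patch { std::vector<double> values; };

void updatePatch(Patch& p) {
  for (auto& v : p.values) v += 1.0;  // placeholder per-patch kernel
}

void updateSubdomain(std::vector<Patch>& patches) {
  std::vector<std::future<void>> tasks;
  tasks.reserve(patches.size());
  for (auto& p : patches)
    tasks.emplace_back(std::async(std::launch::async, updatePatch, std::ref(p)));
  for (auto& t : tasks) t.get();  // all patch tasks complete before returning
}
```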

A tasking layer in an application enables portability across GPU (where the GPU gets a big patch and handles its task concurrency itself), multicore, and manycore devices, and it works with heterogeneous device combinations if task-executed code is written using OpenMP/OpenACC or uses compile-time abstraction layers such as the Kokkos library to compile to each specific device type. This tasking layer can also be implemented using a second layer of message passing. Furthermore, a tasking layer permits exploitation of new sources of parallelism: fine-grained functional parallelism and pipeline, wavefront, and parallel-prefix execution patterns become feasible because of shared memory and lightweight control transfer. These new sources of parallelism are essential as we exhaust traditional ones such as SPMD data parallelism and ensembles. It is worth noting that tasking designs within application codes do not impose use of particular parallel programming languages or environments and can in principle permit combinations of several approaches in a single executable program.
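
For example, task-executed code written against Kokkos looks roughly like the following sketch: the same loop body can be compiled for OpenMP threads on a multicore or manycore node, or for CUDA on a GPU, with the choice made when the application is built.

```cpp
// Sketch of task-executed code written against the Kokkos abstraction layer:
// the loop body compiles to the device type selected at build time.
#include <Kokkos_Core.hpp>

void scalePatch(Kokkos::View<double*> x, double a) {
  Kokkos::parallel_for("scalePatch", x.extent(0),
                       KOKKOS_LAMBDA(const int i) { x(i) *= a; });
}

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    Kokkos::View<double*> x("x", 1000);  // device-resident patch data
    scalePatch(x, 2.0);
    Kokkos::fence();                     // ensure the kernel has completed
  }
  Kokkos::finalize();
  return 0;
}
```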

Tasking also supports new models and strategies for high-bandwidth memory, resilience, and load balance. Task work and data can be scoped to fit into a particular memory space. Also, since parent tasks have all the state necessary to re-spawn child tasks, they can establish pre- and post-conditions on state data for child tasks, set timeout conditions, or simply re-spawn and re-queue tasks for better execution flow. These attributes can protect against many failure sources, including silent data corruption, and can improve execution time. All of the innovation required to support this kind of programming is already underway in the C++ language and in programming and execution environments. No replacement is required.
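
A sketch of the re-spawn strategy, with a trivial placeholder task body and post-condition check: the parent keeps the child's input state, validates the child's result, and simply re-spawns on failure rather than aborting the job.

```cpp
// Sketch of parent-driven re-spawning of a child task (placeholder task body
// and post-condition; std::async stands in for the application's tasking runtime).
#include <cmath>
#include <future>
#include <numeric>
#include <stdexcept>
#include <vector>

double childTask(const std::vector<double>& state) {
  return std::accumulate(state.begin(), state.end(), 0.0);  // placeholder work
}

bool postConditionHolds(double result) { return std::isfinite(result); }

double runWithRetry(const std::vector<double>& state, int maxAttempts) {
  for (int attempt = 0; attempt < maxAttempts; ++attempt) {
    auto f = std::async(std::launch::async, childTask, std::cref(state));
    double result = f.get();
    if (postConditionHolds(result)) return result;  // accept and move on
    // otherwise fall through: the parent still holds the state, so re-spawn
  }
  throw std::runtime_error("childTask repeatedly failed its post-condition");
}
```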

The final important aspect of a tasking layer is that task-executed code is encapsulated within the tasking framework and itself has only modest parallel execution requirements: it should vectorize if possible so that it executes efficiently on a small number of shared-cache cores. As a result, task-executed code can be written in any common HPC language, including Fortran, thus preserving our Fortran code base. In most instances, the task management layer is most effectively written in C++, but a well-designed application can insulate domain scientists from the details of task management and permit them to write new functionality at the task level in much the same way as they write code for MPI-based applications today. The only major added concern is how to encode inter-task dependencies. I think training programmers in futures concepts, which can be used to encapsulate control-transfer logic, is perhaps the best way to provide this encoding portably.
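
A sketch of encoding an inter-task dependency with futures (function names are illustrative): the consumer task is handed the producer's future, and the dependency is realized exactly where the value is needed.

```cpp
// Sketch of an inter-task dependency expressed with futures: the dependency
// edge is the future, realized at get().
#include <future>
#include <iostream>

double produceBoundaryFlux() { return 3.14; }   // placeholder producer task

double applyFlux(std::shared_future<double> flux, double state) {
  return state + flux.get();                    // dependency realized here
}

int main() {
  std::shared_future<double> flux =
      std::async(std::launch::async, produceBoundaryFlux).share();
  auto consumer = std::async(std::launch::async, applyFlux, flux, 1.0);
  std::cout << consumer.get() << "\n";
  return 0;
}
```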

The Reality of Starting Anew
With the explicit challenge of reaching multi-billion-way concurrency in order to reach exascale performance levels and beyond, many people have argued that we need a clean break from incrementally improving our existing approaches to parallel programming, and such arguments have spurred development and exploration of new parallel programming languages. While such efforts are certainly interesting research, there is no evidence that the HPC community can bring a new language to market in a portable, sustainable way. The lack of traction gained by Chapel, X10, and Fortress, the HPCS languages introduced more than a decade ago, is one indication; for an even more telling example, we simply have to look at the current state of Fortran.

While Fortran remains an important language for scientific computing, and new lines of Fortran code are still being written, the adoption of new Fortran features is very slow. In 2009 and 2010, the C++-based Trilinos project developed Fortran interface capabilities, called ForTrilinos. Because Trilinos is an object-oriented (OO) collection of libraries, we assumed that the OO features of Fortran 2003 would provide natural mappings of Trilinos classes into Fortran equivalents. Over the two-year span of the ForTrilinos effort, we discovered that compiler support for 2003 features was very immature. ForTrilinos developers quickly came to know the handful of compiler developers who worked on these features and, despite close collaboration with them to complete and stabilize the implementation of Fortran 2003 features (in 2010), ForTrilinos stalled and is no longer developed.

The Fortran 2008 standard has similar issues. Co-arrays, an elegant approach to supporting SPMD parallel programming first developed in the early 1990s, are part of the 2008 standard, but application developers interested in portability cannot use them. In contrast, the C++ standards community is committed to producing a revised standard every three years, and features for the coming standard often appear in vendor compilers before or simultaneously with ratification of the standard.

Yes, Fortran is an important programming language for scientific computing, and it is a language our community owns, but the reality is that use of new standard Fortran features is very restricted if portability is paramount. Furthermore, it is the feature set of Fortran 95 that is most valuable to scientific computing. The recent announcement of a revitalized effort to produce a Fortran equivalent to Clang is exciting, especially if the resulting Fortran 95 features are solid and compiled code vectorizes well. However, the anemic adoption of new Fortran standards should serve as strong evidence that the prospects for a new scientific computing language are very slim. The likelihood that the broader computing community will cooperate with us in establishing a new language is also low; C++ is that community's answer for high-performance concurrency.

My pessimism about new languages does not mean novelty is impossible. It means that novelty must be introduced as modest extensions to existing capabilities. CUDA took this route, and other task-based embedded C++ DSLs show similar promise.

Summary
The exascale performance milestone is approaching, but reaching it requires disruptive changes at all levels of the computing ecosystem, driven by the need for massive concurrency. Scientific application design will require disruptive changes in software architecture, in the form of tasking, in order to address increasingly hierarchical systems, take advantage of memory hierarchies and the commodity performance curves of thread count and vectorization, and address resilience. However, the practical realities of our production environments make the emergence of brand-new software ecosystems extremely unlikely. We only have to look at the reality of our Fortran environments to see this.

Fortunately, our existing languages and environments are adapting to support the programming and execution models necessary for exascale performance. The disruption on the path to exascale is being contained to adapting our existing languages and environments, not replacing them.

Author Bio
Michael A. Heroux is a Distinguished Member of the Technical Staff at Sandia National Laboratories, working on new algorithm development and robust parallel implementation of solver components for problems of interest to Sandia and the broader scientific and engineering community. He leads development of the Trilinos Project, an effort to provide state-of-the-art solution methods in a state-of-the-art software framework. Trilinos is an award-winning product, freely available as open source and actively developed by dozens of researchers. Dr. Heroux is also the lead developer and architect of the HPCG benchmark, intended as an alternative ranking for the TOP500 computer systems. For more: http://www.sandia.gov/~maherou/biography.html
