Jack Dongarra et al. on Numerical Algorithms and Libraries at Exascale

By Jack Dongarra, Jakub Kurzak, Piotr Luszczek, Terry Moore, and Stanimire Tomov, University of Tennessee and Oak Ridge National Laboratory

October 19, 2015

Editors' Foreword:

One of the great accomplishments of modern supercomputing has been the advent of scientific libraries that encompass the best common practices of core codes for scientific computation such as linear algebra. The future success of exascale computing to serve critical application areas in science, technology, medicine, commerce and society, and national security among other domains will depend significantly on extending the use of computational libraries to new classes of computing methods and architectures, whether by incremental or revolutionary approaches. As the authors suggest, the need to expose and exploit a wider range of parallelism may require new, more dynamic algorithms and corresponding implementations. Another change suggested for future exascale libraries is adapting methods of message-driven computation for asynchrony management and latency hiding. The migration path from today's scientific libraries to ones for exascale systems that provide equal numerical stability and accuracy is as yet uncertain. But the following contribution identifies and in some cases addresses key issues and approaches that head us in the right direction.

William Gropp

Thomas Sterling

The HPC software research community greeted this summer's announcement of the President's National Strategic Computing Initiative (NSCI) with tremendous enthusiasm. This response is easy to understand. More than twenty-five years have passed since a US administration last proposed such a coordinated, long-term, multiagency strategy to improve the nation's economic competitiveness and scientific research prowess by raising its high performance computing and data analysis capabilities to new and unprecedented heights. Moreover, like many other HPCwire readers, the HPC software community is keenly aware of (and welcomes) the imposing set of research problems that will have to be solved in order to elevate what the NSCI calls the "HPC ecosystem" to exascale. To reach that goal, some of the software denizens of this ecosystem will have to achieve billion-way parallelism, adapt to new forms of heterogeneity along various dimensions of the system, accept new and relatively inflexible constraints on energy and heat, manage and process truly colossal amounts of data, and find ways to address the requirements of a large variety of data analysis application domains that have developed independently of the traditional HPC model.

For most of HPC's history, software research on mathematical libraries has been on the leading edge of the community's effort to address such major changes in its ecosystem. There are at least two reasons for this. In the first place, math libraries are the low-level workhorses that most science and engineering applications rely upon for their accuracy and high performance; their presence, in some relatively optimized form, is required to make any new system serviceable for applications that need the power it offers. Second, because of the intimate understanding that applied mathematicians have of the functions they encode, math libraries have proven to be outstanding exploratory vehicles for finding and implementing solutions to the problems that new architectures, operating conditions, and application algorithms pose. Accordingly, familiar libraries such as LAPACK, ScaLAPACK, PETSc, hypre, Trilinos, SuperLU, SUNDIALS, Chombo, BoxLib, and SAMRAI have come to occupy important niches in the HPC ecosystem, and have amply justified the substantial public and private investments that have supported their ongoing evolution. Thus, with the major goals of the NSCI now in view, and as a harbinger of things to come as we start down that road, it seems useful to ask what comes next for the math library community.

When considering the future of research and development on math libraries, the two factors that should be looked at first are the projected changes in system architectures that will define the landscape on which HPC software will have to execute, and the nature of the applications and application communities that this software will ultimately have to serve. In the case of the former, we can get a fairly reasonable picture of the road that lies ahead by examining the platforms that will be brought online under the DOE’s CORAL initiative.

In 2018, the DOE aims to deploy three different CORAL platforms, each with a peak performance level above 150 petaflops. Two systems, named Summit and Sierra, based on the IBM OpenPOWER platform with NVIDIA GPU accelerators, were selected for ORNL and LLNL; an Intel system, based on the Xeon Phi platform and named Aurora, was selected for ANL. Summit and Sierra will follow the hybrid computing model by coupling powerful latency-optimized processors with highly parallel throughput-optimized accelerators. They will rely on IBM POWER9 CPUs, NVIDIA Volta GPUs, the NVIDIA NVLink interconnect to connect the hybrid devices within each node, and a Mellanox dual-rail EDR InfiniBand interconnect to connect the nodes. The Aurora system, on the other hand, will offer a more homogeneous model by utilizing the Knights Hill Xeon Phi architecture, which, unlike the current Knights Corner model, will be a stand-alone processor rather than a slot-in coprocessor, and will also include an integrated Omni-Path communication fabric. All platforms will benefit from recent advances in 3D-stacked memory technology.

Overall, both types of systems promise major performance improvements: CPU memory bandwidth is expected to be between 200 GB/s and 300 GB/s using HMC; GPU memory bandwidth is expected to approach 1 TB/s using HBM; GPU memory capacity is expected to reach 60 GB (NVIDIA Volta); NVLink is expected to deliver no less than 80 GB/s, and possibly as high as 200 GB/s, of CPU-to-GPU bandwidth. In terms of computing power, the Knights Hill is expected to be between 3.6 teraFLOPS and 9 teraFLOPS, while the NVIDIA Volta is expected to be around 10 teraFLOPS.

And yet, taking a wider perspective, the challenges are severe for library developers who have to extract performance from these systems. The hybrid computing model seems to be here to stay and memory systems will become ever more complicated. Perhaps more importantly, interconnection technology is seriously lagging behind computing power. Aurora's nodes may have 5 teraFLOPS of computing power with a network injection bandwidth of 32 GB/s; Summit's and Sierra's nodes are expected to have 40 teraFLOPS of computing power and a network injection bandwidth of 23 GB/s. That works out to roughly 156 flops per byte injected in the former case and roughly 1,700 flops per byte in the latter: gaps of two and three orders of magnitude, respectively, which will leave data-heavy workloads in the lurch and motivate the search for algorithms that minimize data movement.

“It is widely agreed that math libraries should, wherever possible, embrace event-driven and message-driven execution models”

However, whatever else happens in the design of these systems, it is clear that support for parallelism is going to have to increase dramatically, going up by at least an order of magnitude for the CORAL systems and reaching billion-way parallelism at exascale. To meet this challenge, it is widely agreed that math libraries should, wherever possible, embrace event-driven and message-driven execution models, in which work is abstracted in the form of asynchronous tasks, whose completion can trigger additional computation tasks and data movements. One such model relies on expressing the computation in the form of a Directed Acyclic Graph (DAG), where the nodes represent work and the directed edges represent data dependencies and data movement. Although this model does not work for all application areas in HPC, it is very suitable for implementing dense linear algebra workloads; we have therefore adopted it in our PLASMA library for multicore CPUs and the DPLASMA library for large-scale distributed systems with multicore CPUs and accelerators.

Although we expect to see DAG-based models widely adopted, changes in other parts of the software ecosystem will inevitably affect the way that model is implemented. The appearance of DAG scheduling constructs in the OpenMP 4.0 standard offers a particularly important example of this point. Until now, libraries like PLASMA had to rely on custom-built task schedulers; at Tennessee, we developed the QUARK scheduler for this purpose, but many other groups developed similar systems: SMPSs from the Barcelona Supercomputing Center, StarPU from INRIA, SuperGlue from Uppsala University, etc. Similar capabilities were also provided commercially, e.g., by Intel's Threading Building Blocks. However, the inclusion of DAG scheduling constructs in the OpenMP standard, along with the rapid implementation of support for them (with excellent multithreading performance) in the GNU compiler suite, throws open the doors to widespread adoption of this model in academic and commercial applications for shared memory. We view OpenMP as the natural path forward for the PLASMA library and expect that others will see the same advantages in choosing this alternative.
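To make the model concrete, the following is a minimal sketch of a tile Cholesky factorization written with the OpenMP 4.0 task-dependence constructs discussed above. It is an illustration rather than PLASMA's actual source: the tile layout, the tile count NT, and the tile size NB are our own assumptions, and the kernels are standard LAPACKE/CBLAS calls (compile with -fopenmp and link against LAPACKE and CBLAS).

    #include <cblas.h>
    #include <lapacke.h>

    #define NT 8    /* tiles per matrix dimension (assumed) */
    #define NB 256  /* tile size (assumed)                  */

    /* T[i][j] points to an NB x NB column-major tile of a symmetric
       positive definite matrix; only the lower triangle is referenced.
       The first element of each tile serves as its dependence proxy. */
    void tile_cholesky(double *T[NT][NT])
    {
        #pragma omp parallel
        #pragma omp single
        for (int k = 0; k < NT; k++) {
            /* factor the diagonal tile */
            #pragma omp task depend(inout: T[k][k][0])
            LAPACKE_dpotrf(LAPACK_COL_MAJOR, 'L', NB, T[k][k], NB);

            /* triangular solves down the panel */
            for (int i = k + 1; i < NT; i++) {
                #pragma omp task depend(in: T[k][k][0]) \
                                 depend(inout: T[i][k][0])
                cblas_dtrsm(CblasColMajor, CblasRight, CblasLower,
                            CblasTrans, CblasNonUnit, NB, NB,
                            1.0, T[k][k], NB, T[i][k], NB);
            }

            /* trailing-matrix updates */
            for (int i = k + 1; i < NT; i++) {
                #pragma omp task depend(in: T[i][k][0]) \
                                 depend(inout: T[i][i][0])
                cblas_dsyrk(CblasColMajor, CblasLower, CblasNoTrans,
                            NB, NB, -1.0, T[i][k], NB, 1.0, T[i][i], NB);

                for (int j = k + 1; j < i; j++) {
                    #pragma omp task depend(in: T[i][k][0], T[j][k][0]) \
                                     depend(inout: T[i][j][0])
                    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
                                NB, NB, NB, -1.0, T[i][k], NB,
                                T[j][k], NB, 1.0, T[i][j], NB);
                }
            }
        }   /* tasks drain at the implicit barrier of the parallel region */
    }

The depend clauses are all the runtime needs to discover the same DAG that a custom scheduler such as QUARK would otherwise have to be handed explicitly, and tasks from different iterations of the outer loop overlap wherever the dependencies allow.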

The situation is more complicated in the case of distributed memory computing. Many new programming models are emerging here, but the only standards available still revolve around MPI. Yet DAG scheduling has also turned out to be a good option for dense linear algebra, which led us to develop a version of PLASMA for distributed memory: the DPLASMA library. In the absence of standards for dataflow approaches, taking this path means relying on a custom or specialized runtime scheduler for the foreseeable future. DPLASMA utilizes the PaRSEC scheduler, developed at the University of Tennessee. PaRSEC relies on an algebraic, size-independent representation of the DAG, and execution-driven unrolling of the local portions of the DAG at runtime. The form of representation it uses is referred to as a Parametrized Task Graph (PTG). PaRSEC takes care of inter-node messaging, intra-node multithreading, and offloading work to multiple accelerators. It also offers tools for translating (with some restrictions) serial codes into PTGs.
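For a flavor of the PTG representation, the schematic below paraphrases how a single task class might be described. This is illustrative pseudo-notation in the spirit of PaRSEC's JDF files, not their exact syntax; task and tile names are invented for the example. The essential point is that the DAG is written once, algebraically in the parameter k, rather than enumerated task by task, so its size in memory is independent of the problem size:

    potrf(k)                              /* task class, parametrized by k      */
      k = 0 .. NT-1                       /* execution space                    */
      : A(k, k)                           /* affinity: run where tile A(k,k) is */
      RW T <- (k == 0) ? A(k, k)          /* input: the original tile, or       */
                       : T update(k-1, k) /*   the tile left by the k-1 update  */
           -> T trsm(k, k+1 .. NT-1)      /* output: feeds the panel solves     */
    BODY
      potrf(T)                            /* the actual kernel invocation       */
    END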

Of course, the other seismic force shaking the road to exascale computing, and accordingly flagged in the NSCI, is the rise of large-scale data analytics as fundamental for a wide range of application domains. Indeed, one of the most interesting developments in HPC math libraries is taking place at the intersection of numerical linear algebra and data analytics, where a new class of randomized algorithms is emerging for computing approximate matrix factorizations. These algorithms (random sampling, random projection, random mixing) prove to be powerful tools for solving both least squares and low-rank approximation problems, which are ubiquitous in large-scale data analytics and scientific computing. More importantly, however, these algorithms are playing a major role in the processing of information that has previously lain fallow, or even been discarded, because meaningful analysis of it was simply infeasible; this is the so-called "Dark Data" phenomenon. The advent of tools capable of usefully processing such vast amounts of data has brought new light to these previously darkened regions of the scientific data ecosystem.

Where it applies, approximate matrix factorization is a powerful component of this new tool set, and therefore randomized algorithms should be expected to play a major role in the future convergence of HPC and data analytics. Randomized algorithms are not only fast; compared to traditional algorithms, they are easier to code, easier to map to modern computer architectures, and display higher levels of parallelism. Moreover, they often introduce implicit regularization, which makes them more numerically robust. Finally, they may produce results that are easier to interpret and analyze in downstream applications. While not a silver bullet for large-scale data analysis problems, random sampling and random projection algorithms may dramatically improve our ability to solve a wide range of large matrix problems, especially when they are coupled with domain expertise.
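As an illustration of how simple these methods can be, here is a minimal sketch of a randomized low-rank approximation in the style popularized by Halko, Martinsson, and Tropp: sketch the range of A with a Gaussian random matrix, orthonormalize, and take the SVD of the small reduced matrix. The function name and the naive Box-Muller generator are our own illustrative choices, not any library's API; the sketch assumes a sample size l <= min(m, n).

    #include <stdlib.h>
    #include <math.h>
    #include <cblas.h>
    #include <lapacke.h>

    /* Standard normal sample via Box-Muller; illustrative, not a production RNG. */
    static double gauss(void)
    {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);
        double v = (rand() + 1.0) / (RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u)) * cos(6.283185307179586 * v);
    }

    /* A is m x n, column-major. On return, A ~= Q * U_small * diag(S) * VT,
       with Q (m x l) orthonormal, U_small (l x l), S (l), VT (l x n). */
    void randomized_svd(int m, int n, int l, const double *A,
                        double *Q, double *S, double *U_small, double *VT)
    {
        double *Omega  = malloc((size_t)n * l * sizeof *Omega);
        double *B      = malloc((size_t)l * n * sizeof *B);
        double *tau    = malloc((size_t)l * sizeof *tau);
        double *superb = malloc((size_t)(l - 1) * sizeof *superb);

        /* random mixing: Y = A * Omega captures the dominant range of A */
        for (int i = 0; i < n * l; i++) Omega[i] = gauss();
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    m, l, n, 1.0, A, m, Omega, n, 0.0, Q, m);

        /* orthonormalize the sample: Q = qr(Y).Q */
        LAPACKE_dgeqrf(LAPACK_COL_MAJOR, m, l, Q, m, tau);
        LAPACKE_dorgqr(LAPACK_COL_MAJOR, m, l, l, Q, m, tau);

        /* project: B = Q^T * A is only l x n, small enough for a dense SVD */
        cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
                    l, n, m, 1.0, Q, m, A, m, 0.0, B, l);
        LAPACKE_dgesvd(LAPACK_COL_MAJOR, 'S', 'S', l, n, B, l,
                       S, U_small, l, VT, l, superb);

        free(Omega); free(B); free(tau); free(superb);
    }

Almost all of the work is in the two large GEMMs, which is precisely why the approach maps so well onto modern architectures: the expensive operations are level-3 BLAS on the full matrix, while the factorizations touch only the small l-dimensional sketch.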

Another part of the HPC ecosystem being transformed as we move towards exascale is programming languages, among which Fortran is finally being driven from the entrenched position it occupied in the last century. The complexities of today's hardware call for complex software to drive it efficiently. Our interest is in lower-level tools for library writing rather than high-level approaches such as PGAS languages and new execution models such as academia's ParalleX or AMD's Heterogeneous System Architecture. While we recognize the object-oriented and co-array features of Fortran 2008, and the new support for parallelism in C and C++, at the library level we need to cater to a broad spectrum of languages and deliver features and performance in a language-agnostic fashion. In that context, we foresee further expansion of support for heterogeneous platforms in standards such as OpenMP 4 and OpenACC 2. As noted above, the former has made serious inroads into complex DAG scheduling and is on track to add task priorities in its next iteration; the latter continues to support ease of programming for GPU accelerators and to be inclusive of optimized system libraries. The main concerns are fragmentation of the community and the lack of a coherent path for porting important scientific codes from one standard to the other.

“It is becoming increasingly difficult to hit the performance sweet spot with just a single version of a code”

Another trend that will become more widespread is runtime code generation. Mainstream programming languages (Java, C#, JavaScript, etc.) are no strangers to this concept. HPC started enjoying intermediate representations with the introduction of CUDA in 2007; its PTX format is used to further refine the code for the specific NVIDIA GPU in use. Runtime code generation is prevalent on OpenCL platforms due to the plethora of hardware variants, many of which sport an embedded form factor. Runtime code generation in HPC can now be achieved thanks to community-driven projects such as the LLVM compiler framework, which can easily be retooled and packaged by vendors due to its permissive licensing terms. Code generation is slowly becoming a standard tool on CPUs, with autotuning set to become prominent in the toolset of modern HPC developers. With ever more complex syntax and continuing advances in the state of the art of compilation, it is becoming increasingly difficult to hit the performance sweet spot with just a single version of a code.
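The sketch below shows the OpenCL flavor of this idea: the kernel source is an ordinary string compiled at runtime, and a tuning parameter is baked in through the build options; that parameter is the hook an autotuner would use to generate and time many variants. Error handling is pared down, and the kernel itself is a placeholder.

    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void scale(__global float *x) {            \n"
        "    int i = get_global_id(0);                       \n"
        "    x[i] *= FACTOR;  /* FACTOR injected at build */ \n"
        "}                                                   \n";

    int main(void)
    {
        cl_platform_id plat;
        cl_device_id   dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

        /* an autotuner would loop here, recompiling with different
           FACTOR/tile/unroll values and timing each compiled variant */
        if (clBuildProgram(prog, 1, &dev, "-DFACTOR=2.0f", NULL, NULL)
            != CL_SUCCESS) {
            char log[4096];
            clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                                  sizeof log, log, NULL);
            fprintf(stderr, "build failed:\n%s\n", log);
            return 1;
        }
        cl_kernel k = clCreateKernel(prog, "scale", NULL);
        /* ... set arguments, enqueue, and time this variant ... */
        clReleaseKernel(k);
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }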

Finally, modern scientific applications include new software contexts that call for optimized implementations: a single function-call syntax can no longer serve an increasingly diverse spectrum of end users. In the case of numerical libraries, batched routines are a good illustration of this problem. Several important application domains need functions that repeat the same computation on a large batch of inputs, running each case in parallel; the workload is embarrassingly parallel, but the individual inputs are too small for any reasonable library code to optimize, because the overhead of multiple invocations is too large. Consequently, the library interface has to be extended to include a batch count, giving the library an opportunity to optimize across the entire group. Some applications add complications that require even more complex variations on this theme: varying the sizes of the inputs within a batch; generating heterogeneous batches on a continuous and persistent basis; amending the interface with slight changes to the mathematical formulation; and so on. It is not possible to cater to each of these requirements, achieve satisfactory results, and yet remain generic enough to serve a wide range of applications; a single entry point can no longer do the job without substantial syntax bloat. The hope, however, is that by combining the principles of autotuning with the ability to generate code at the user's site and/or at runtime, we can cater to these diverse new needs of the computational science community in a comprehensive manner and with near-optimal performance.
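A variable-size batched interface might look like the sketch below; it is a hypothetical signature for illustration, not the API of any particular library. Arrays of sizes, leading dimensions, and pointers travel together with a batch count, so a single call exposes the entire group to the library; the naive OpenMP body stands in for the size-grouping and kernel-fusing a real implementation would do.

    #include <cblas.h>

    /* Hypothetical variable-size batched GEMM:
       for each i, C[i] = alpha * A[i] * B[i] + beta * C[i],
       where problem i has dimensions m[i] x n[i] x k[i]. */
    void dgemm_batched(int batch_count,
                       const int *m, const int *n, const int *k,
                       double alpha, const double **A, const int *lda,
                                     const double **B, const int *ldb,
                       double beta,        double **C, const int *ldc)
    {
        /* the batch is embarrassingly parallel; a real library would also
           group similar sizes and fuse the many small GEMMs per thread */
        #pragma omp parallel for schedule(dynamic)
        for (int i = 0; i < batch_count; i++) {
            cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                        m[i], n[i], k[i], alpha, A[i], lda[i],
                        B[i], ldb[i], beta, C[i], ldc[i]);
        }
    }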

While the problems we face today are similar to those we faced ten years ago, the solutions are more complicated and the consequences greater in terms of performance. For one thing, the size of the community to be served has increased and its composition has changed. The NSCI has, as one of its five objectives, "Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing," which implies that this technology base is not coherent now. This claim is widely agreed to be true, although opinions differ on why it is so and how to improve it. The selection of software for general use requires complete performance evaluation and good communication with "customers," a much larger and more varied group than it used to be. Superb software is worthless unless computational scientists are persuaded to use it. Users are reluctant to modify running programs unless they are convinced that the software they are currently using is inferior enough to endanger their work and that the new software will remove that danger.

From the perspective of the computational scientist, numerical libraries are the workhorses of software infrastructure because they encode the underlying mathematical computations that their applications spend most of their time processing. Performance of these libraries tends to be the most critical factor in application performance. In addition to the architectural challenges they must address, their portability across platforms and different levels of scale is also essential to avoid interruptions and obstacles in the work of most research communities. Achieving the required portability means that future numerical libraries will not only need dramatic progress in areas such as autotuning, but also need to be able to build on standards—which do not currently exist—for things like power management, programming in heterogeneous environments, and fault tolerance.

Advancing to the next stage of growth for computational science, which will require (as noted) the convergence of HPC simulation and modeling with data analytics on a coherent technological base, will require us to solve basic research problems in Computer Science, Applied Mathematics, and Statistics. At the same time, going to exascale will clearly require the creation and promulgation of a new paradigm for the development of scientific software. To make progress on both fronts simultaneously will require a level of sustained, interdisciplinary collaboration among the core research communities that, in the past, has only been achieved by forming and supporting research centers dedicated to such a common purpose. A stronger effort is needed by both government and the research community to embrace such a broader vision.

We believe that the time has come for the leaders of the Computational Science movement to focus their energies on creating such software research centers to carry out this indispensable part of the mission.


Jack Dongarra is the director of the Innovative Computing Laboratory at the University of Tennessee and director of the Center for Information Technology Research.

Jakub Kurzak is a research director at the Innovative Computing Laboratory, University of Tennessee.

Piotr Luszczek is a research director at the Innovative Computing Laboratory, University of Tennessee.

Terry Moore is the associate director of the Innovative Computing Laboratory, University of Tennessee.

Stanimire Tomov is a research director at the Innovative Computing Laboratory (ICL), the University of Tennessee.
