ALCF’s Paul Messina on the Code Optimization Path to Exascale

By Paul Messina, Director of Science, Argonne Leadership Computing Facility

September 14, 2015

To borrow a phrase from paleontology, the HPC community has historically evolved in punctuated equilibrium. In the 1970s we transitioned from serial to vector architectures. In the 1980s parallel architectures blossomed, and in the 1990s MIMD systems became the norm for most supercomputer architectures. From the 1990s until today we have been in a period of relative stasis in terms of system balance and tradeoffs. Now we’re entering a new phase of rapid evolutionary change triggered by the quest for exascale despite the end of Dennard scaling.

With clock speeds not increasing – in fact sometimes decreasing – the only way to go from petaflops to exaflops is to increase hardware parallelism. Nodes, the building blocks of systems, will have many more cores, more hardware threads, more vector units, and, in some cases, more accelerators or co-processors. Memory hierarchies will be deeper, further complicating the design of application software.

Having exascale hardware is only the beginning: mathematical models, algorithms, and their implementations need to be modified to use the hardware efficiently. Both community and commercial codes are constantly evolving in functionality, and powerful new HPC systems are coming online whose architectures are more complex than any we have had to deal with in the past. The increased complexity of the phenomena that codes simulate, and of the hardware architectures on which the codes will be deployed, requires modification of the software implementation.

Others may use the term “code modernization” to describe modifications to existing software to take advantage of new, highly parallel architectures. Intel has a new Modern Code Program that addresses both updating existing codes with modern methods and developing new codes. To me, code modernization is an umbrella term that implies the need for HPC software to be continually evaluated and updated, as necessary, when new systems and architectures come along. It is a continual process, and the term is most accurately applied to existing applications that need to be updated to take advantage of newer many-core systems.

The development of “modern code” is another level of discussion: it refers not so much to modernizing, updating, or optimizing existing codes as to developing new applications from a highly parallel perspective. Since all high-end HPC systems are highly parallel, one can call that the modern perspective. So, to help ground the discussion, I’ll use the term “code optimization” in this article to refer to efforts to modify existing codes for peak performance on the newer, highly parallel architectures.

As Ray Bair, Chief Computational Scientist for Applications and leader of Argonne’s Laboratory Computing Resource Center, points out, “Since so many codes are used at Argonne’s computing facilities, sometimes there are popular codes where none of the developers are users at our facility. So we have the challenge of being aware of where the community is going in terms of adapting code to new architectures and determining what influence we can have on that process. This is also true regarding commercial software codes; our users sometimes bring their commercial code partners to us, and we provide access to our advanced systems and advice on porting and tuning.”

A good example of code evolution, or optimization, is NEK5000, a CFD code that has been around for a while – it won the Gordon Bell Prize for algorithmic quality and sustained parallel performance in 1999. It continues to improve and perform well at scale: NEK has run efficiently on one million MPI ranks. It is widely used in research, with more than 200 active users in fields including reactor thermal-hydraulics, astrophysics, combustion, oceanography, and vascular flow modeling.

NEK is a key element of the Center for Exascale Simulation of Advanced Reactors (CESAR), a DOE exascale co-design center. NEK has achieved efficient strong scaling using one million processes on 524,288 cores of Argonne’s current supercomputer, Mira, a 10-petaflops IBM Blue Gene/Q system with 786,432 cores.

Mira is operated by the ALCF, a US Department of Energy (DOE) Office of Science user facility for open computational science and engineering. Projects are awarded time based on peer-reviewed proposals, so the specific codes and projects continually change and evolve. Domains typically represented include astrophysics, materials science, chemistry, CFD engineering of airframes, jet engines, and wind turbines, climate science, biology, and nuclear physics. These codes will migrate to Argonne’s Theta and Aurora systems, new highly parallel systems coming from Intel as part of the DOE’s CORAL program, when those systems are available for production.

Code optimization is an integral part of the workload at the ALCF, which consists of very large jobs, in terms of both capability and capacity. Community codes are under constant evolution, as are codes developed by a single research team. The mission of the Leadership Computing Facility is to enable computational science that requires very powerful systems. Therefore, by policy, the ALCF hosts relatively few projects so that each project can have a very large allocation of time on the system. On Mira the average allocation is 100 million core-hours per project per year; some projects have been allocated 300 million core-hours per year.

From Mira to Aurora
DOE has committed a $200 million investment for the ALCF to acquire and deploy a next-generation supercomputer, known as Aurora. This advanced machine, which will be well on the way toward exascale, will use Intel’s HPC scalable system framework to provide a peak performance of 180 petaflops. It will include third-generation Intel Xeon Phi™ processors (code-named Knights Hill), next-generation Intel Omni-Path Architecture (an advanced interconnect fabric), a powerful memory architecture, and Intel Enterprise Edition for Lustre software, an advanced parallel file system.

Delivery of Aurora is scheduled for 2018, but code optimization efforts to prepare for this advanced HPC system have already begun.

We are taking steps toward four goals to move code optimization along:

  • More efficient use of nodes – Since most of the system’s speed increase will come from more powerful nodes, codes will have to be modified to use the new designs efficiently. The number of cores and the number of hardware threads per node will be much greater, and there will be more levels of memory in each node, each with different bandwidth and latency. Simultaneous use of multiple hardware threads per core will be necessary to get good performance from many applications. That means applications will have to exploit much finer-grained parallelism – hundreds of concurrent threads per node – or leave much of the potential performance on the table. Our current IBM Blue Gene/Q cores have four hardware threads, and indeed many applications that run on Mira see big performance benefits from threading. (A minimal sketch of this two-level, threads-plus-SIMD parallelism follows this list.)
  • More efficient vectorization from SIMD units – Each core will have SIMD vector units, four or more per core, and much of the peak speed in flops will come from using those units. Currently it can be difficult to write code that will use the SIMD units. Compilers are getting better at helping, but code developers will need to organize their data movement and data structures to increase the use of those units, as illustrated in the first sketch after this list.
  • Understand and better utilize memory architecture – As previously mentioned, each node will contain memories with different speeds. Memory allocation will have to be managed by application writers much more than has been the case in recent decades; in some cases the APIs for accomplishing this have not even been defined yet. And efficient data access is crucial to getting good performance. (The second sketch after this list shows one emerging approach.)
  • Deal with variability in traffic due to networking shortcuts that will be built into future systems – If internode communication links are shared by different jobs and are also used by I/O and operating-system tasks, there will be variability in communication time between the nodes of a single job. Many current applications rely on consistent communication times to get good performance, and even seemingly minor variations can result in substantially slower performance for those applications. This poses another software challenge that must be addressed.
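To make the first two items concrete, here is a minimal sketch – a generic stencil-style loop nest, not code from any ALCF application – of the two levels of on-node parallelism involved: OpenMP threads spread rows of work across a node’s cores and hardware threads, while a unit-stride, contiguous inner loop is marked for the SIMD units. The kernel, array sizes, alignment, and compiler flags are illustrative assumptions only.

    /* Sketch of two-level on-node parallelism: OpenMP threads across cores
       and hardware threads, plus an explicitly vectorized inner loop.
       Compile with, e.g., gcc -std=c11 -O3 -fopenmp stencil.c -o stencil */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 4096   /* illustrative grid size */

    int main(void)
    {
        /* Contiguous, aligned storage keeps the inner loop unit-stride,
           which is what the SIMD units (and the compiler) want to see. */
        double *restrict a = aligned_alloc(64, (size_t)N * N * sizeof(double));
        double *restrict b = aligned_alloc(64, (size_t)N * N * sizeof(double));
        if (!a || !b) return 1;

        for (size_t k = 0; k < (size_t)N * N; k++) a[k] = (double)k;

        /* Coarse grain: rows are distributed across the node's threads. */
        #pragma omp parallel for schedule(static)
        for (int i = 1; i < N - 1; i++) {
            /* Fine grain: the unit-stride inner loop maps onto SIMD lanes. */
            #pragma omp simd
            for (int j = 1; j < N - 1; j++) {
                b[i * N + j] = 0.25 * (a[(i - 1) * N + j] + a[(i + 1) * N + j]
                                     + a[i * N + j - 1]  + a[i * N + j + 1]);
            }
        }

        printf("threads available: %d, sample value: %g\n",
               omp_get_max_threads(), b[N + 1]);
        free(a);
        free(b);
        return 0;
    }

On a many-core node, the outer pragma can keep hundreds of hardware threads busy while the inner pragma asks the compiler to generate vector instructions; compiler vectorization reports and profiling then show whether the SIMD units are actually being used.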
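For the memory-architecture item, standard APIs were, as noted above, still being defined at the time of writing. One early option for on-package high-bandwidth memory (for example, the MCDRAM planned for Knights Landing processors) is the open-source memkind library and its hbwmalloc interface. The sketch below is an illustrative assumption about how an application might place one bandwidth-critical array explicitly, with a fallback to ordinary DDR; it is not an ALCF recommendation or a description of Aurora’s final programming model.

    /* Sketch: explicit placement of a bandwidth-critical array in
       high-bandwidth memory using memkind's hbwmalloc interface.
       Compile with, e.g., cc -O2 hbm.c -lmemkind -o hbm */
    #include <stdio.h>
    #include <stdlib.h>
    #include <hbwmalloc.h>

    int main(void)
    {
        const size_t n = (size_t)1 << 24;               /* ~16 M doubles */
        const int have_hbm = (hbw_check_available() == 0);
        double *field;

        if (have_hbm) {
            field = hbw_malloc(n * sizeof(double));     /* on-package HBM */
        } else {
            field = malloc(n * sizeof(double));         /* DDR fallback */
        }
        if (!field) return 1;
        printf("field placed in %s memory\n",
               have_hbm ? "high-bandwidth" : "DDR");

        for (size_t i = 0; i < n; i++) field[i] = 0.0;  /* touch every page */

        if (have_hbm) hbw_free(field);
        else          free(field);
        return 0;
    }

Whichever API eventually becomes standard, the application-level decision of which arrays deserve the fast memory remains with the developer; the bandwidth-critical working set is placed deliberately while bulk data stays in the larger, slower tier.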

To optimize code, we work on a case-by-case basis with our user community. Since the ALCF and our companion DOE Leadership Computing Facility center at Oak Ridge National Laboratory (OLCF) host relatively few user projects, our experienced staff can work closely with application and library developers, providing early access to our new systems and helping them optimize their codes.

One example of a training program and initiative at Argonne is ATPESC – the Argonne Training Program on Extreme-Scale Computing – now in its third year. During the two-week program, we provide participants with intensive hands-on training on two of the top five systems on the TOP500 list. Emphasis is on the key skills, approaches, and tools needed to design, implement, and execute complex science and engineering applications on current supercomputers and on future HPC systems such as Aurora, as well as subsequent next-generation and exascale systems.

The sessions are attended by doctoral students, postdocs, and computational scientists, as well as a few senior research scientists from national or industrial research laboratories. Last year’s sessions included training by James Reinders, Intel’s HPC evangelist and parallel programming expert, and Jack Dongarra, co-founder of the TOP500 list. Both also participated in this year’s program.

Other examples of training programs and initiatives at Argonne include:

  • ESP – At the ALCF, when we are preparing to bring a new supercomputer into production, we initiate an Early Science Program (ESP) to make sure the next-generation system is ready to move quickly into operation once it is delivered (there is no substitute for running full application codes to identify issues in new systems). As a result, a number of applications are ready to carry out science runs as soon as the system transitions to production status. We have two ESP initiatives currently in operation; the first is for Theta, a production system based on Intel’s second-generation Xeon Phi processor (known as Knights Landing) and the Cray Aries interconnect. Theta will act as a bridge between Mira and Aurora.
  • IDEAS – Argonne, along with other DOE national labs, is part of an initiative known as the Interoperable Design of Extreme-scale Application Software (IDEAS). Lois Curfman McInnes, a senior computational scientist with Argonne’s Mathematics and Computer Science Division and one of the drivers of the IDEAS program, sums up the initiative’s aim nicely: “IDEAS goals are to qualitatively change the culture of extreme-scale computational science and to provide a foundation (through software productivity methodologies and an extreme-scale software ecosystem) that enables transformative next-generation predictive science and decision support. IDEAS motivation is to increase software development productivity – a key aspect of overall scientific productivity – through a new interdisciplinary and agile approach to creating extreme-scale scientific software, where modern software engineering tools, practices, and processes will improve software developer productivity, and applications will be constructed quickly and efficiently using components, libraries, and frameworks.”
  • PETSc – The Portable, Extensible Toolkit for Scientific computing (PETSc) project has substantially improved composability for multilevel, multidomain, multirate, and multiphysics algorithms. These capabilities enable users to investigate the design space of composable linear, nonlinear, and timestepping solvers for these more complex simulations without making premature choices about algorithms and data structures. The strong encapsulation of the PETSc design facilitates runtime composition of hierarchical methods without sacrificing the ability to customize problem-specific components (a minimal sketch of this runtime composition follows this list). These capabilities are essential for application codes to evolve over time and to incorporate advances in algorithms for emerging extreme-scale architectures. For example, PETSc provides the numerical backbone for simulating a 3D heterogeneous hyper-elastic material (multiphase steel) with scale-bridging algorithms that use the solution of microscopic models to provide the material properties for the macroscopic model. The code weak-scales from 8,192 to 786,432 cores on Mira with a parallel efficiency over 96% and a final problem size of 19.3 billion unknowns.*
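As a small illustration of that runtime composability, the sketch below – not taken from the scale-bridging application cited above – assembles a stand-in 1-D Laplacian and then lets PETSc build the entire solver stack from command-line options, so the linear solver and preconditioner can be swapped without recompiling. Error checking is omitted for brevity.

    /* Sketch: runtime-composable PETSc solve of a stand-in 1-D Laplacian.
       Build against an MPI-enabled PETSc installation. */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat      A;
      Vec      x, b;
      KSP      ksp;
      PetscInt n = 100, Istart, Iend, i;

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* Assemble a tridiagonal matrix; each rank fills only its own rows. */
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
      MatSetFromOptions(A);
      MatSetUp(A);
      MatGetOwnershipRange(A, &Istart, &Iend);
      for (i = Istart; i < Iend; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      /* Right-hand side and solution vectors matching the matrix layout. */
      MatCreateVecs(A, &x, &b);
      VecSet(b, 1.0);

      /* The solver stack is composed at runtime from command-line options. */
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetFromOptions(ksp);   /* honors -ksp_type, -pc_type, etc. */
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
      PetscFinalize();
      return 0;
    }

Running it as, say, mpiexec -n 8 ./solve -ksp_type cg -pc_type gamg (or with -pc_type fieldsplit for a multiphysics coupling) changes the solver hierarchy at launch time, which is precisely the design property the PETSc team highlights above.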

The Punctuated Path to Exascale
Parallelism has been around in various forms for more than 30 years, but it is only in the last decade that the architectures have become markedly more complex and difficult to use. Yet these new architectures are enabling discoveries that were never possible before.

Developing highly parallel modern code is helping to fuel the evolutionary jump – the punctuated equilibrium evolution – that is leading us down the road to exascale.

Author Bio

Paul Messina is Director of Science at the Argonne Leadership Computing Facility (ALCF), where he guides the ALCF science teams using the IBM Blue Gene systems. From 2002 to 2004, he served as Distinguished Senior Computer Scientist at Argonne and as Adviser to the Director General at CERN (European Organization for Nuclear Research).

*Reference:  Computational Scale Bridging using PETSc on 786k Cores, O. Rheinbach (TU Bergakademie Freiberg), A. Klawonn (U of Cologne), and M. Lanser (U of Cologne), presentation, Celebrating 20 Years of Computational Science with PETSc, June 16-18, 2015, Argonne National Laboratory, see http://www.mcs.anl.gov/petsc/petsc-20/conference/Rheinbach_Klawonn.pdf
