Sept. 2, 2022 — To ensure that simulation codes from the National Nuclear Security Administration’s (NNSA’s) Advanced Simulation and Computing (ASC) program are ready to leverage new exascale machines, NNSA funds the Advanced Technology Development and Mitigation (ATDM) program within the Exascale Computing Project (ECP). The ECP is jointly funded by the NNSA and the US Department of Energy’s (DOE’s) Office of Science. There is also a complementary relationship between the NNSA and the Office of Science’s Advanced Scientific Computing Research program.
Although ATDM primarily supports NNSA’s traditionally closed mission of national security, Lawrence Livermore National Laboratory’s (LLNL’s) ATDM Software Technology (ST) project exemplifies how in many cases the best way to support that mission is through open collaboration and a sustainable software infrastructure. LLNL ATDM is contributing key open-source components of a full-featured, integrated, and maintainable software stack for exascale systems that will impact both the ECP and the broader high-performance computing (HPC) community. “This work ultimately supports users in getting simulation results, but the primary focus is on infrastructure needs,” said Becky Springmeyer, PI of the LLNL ATDM ST project and leader of the Livermore Computing division at LLNL. “LLNL ATDM ST supports a set of open-source projects that create a framework for workflows that support end users, computational scientists, and computer scientists.”
Key infrastructure foci include the following:
- Programming models and runtimes for GPUs (RAJA, CHAI, and Umpire)
- Mathematical libraries (MFEM)
- Productivity technologies (Spack)
- Workflow scheduling (Flux)
Todd Gamblin, ECP lead for Software Packaging Technologies and ATDM ST deputy, points out, “This project is about recognizing the sustainable value of open-source projects. We are looking to broaden our software and the software for the HPC community.”
The overarching goal of the LLNL ATDM ST project is to build infrastructure that supports a full-featured, integrated, and maintainable exascale software stack, which is essential to the ASC Program’s computational mission. At the infrastructure level, the ASC program’s national security mission shares many challenges with the rest of the ECP. Exploiting the capabilities of modern GPUs (graphics processing units) is key for nearly every component of the exascale software stack, and libraries such as RAJA and MFEM enable developers to harness these devices in a portable way. The complexities introduced by GPUs likewise fuel the need for better software management (e.g., Spack) and better scheduling (e.g., Flux). LLNL ATDM ST is an integrated effort across these sub-projects, which, as noted on the LLNL-ECP software technology website, “provides both coordination as well as a clear path from R&D to delivery and deployment.”
Both Springmeyer and Gamblin observe that the ATDM project existed before the start of the ECP, and LLNL plans to continue these efforts in the future – throughout the exascale era and beyond. The ECP provided an opportunity for the LLNL ATDM projects to have broader impact on the HPC community through collaboration and for the projects to build sustainable open-source communities. Springmeyer states, “Collaborating with open source, universities, and vendors means we will have a stronger next-generation software framework to support our code at exascale and the future generations of supercomputers. It’s all intertwined, which is why we are investing in the overall environment and not only on individual projects.”
Both Springmeyer and Gamblin (Figure 1) highlight the importance of NNSA support for the development of open-source software technologies and how those technologies contribute to the success of national security applications. Beyond national security, the same open-source technologies have a large impact on the ECP and the global HPC community. The result is a win-win: a large scientific user base can set new high-water marks in scalability and performance while exercising ATDM-targeted standards, toolsets, and libraries on the latest exascale supercomputer architectures. In return, the success of these external efforts ensures that ASC mission needs can be met with software technology that has been tested and vetted on multiple instantiations of leadership-class supercomputers, pressing the limits of the latest hardware technology.
Gamblin specifically notes, “Spack is one example of a software project that affects much of the ECP software stack. The package handles the process of downloading a tool and all the necessary dependencies—which can be tens or hundreds of other packages—and assembles those components while ensuring that they are properly linked and optimized for the machine. Spack is the backbone of E4S, which is the ECP’s suite of open-source software products.” Gamblin points out that “E4S itself is around 100 packages that leverage nearly 500 additional packages from Spack’s open-source repository. An integrated stack of this size would not be possible without tooling like Spack.”
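The workflow Gamblin describes can be captured declaratively in a small Spack environment file. The specific package list below is an illustrative example, not a prescribed stack:

```yaml
# spack.yaml -- an illustrative Spack environment (example package list)
spack:
  specs:
    # Each spec names a package plus build variants; Spack resolves and
    # builds the full dependency graph (MPI, BLAS, compilers, ...) for
    # the target machine.
    - mfem +cuda
    - raja +cuda
    - hdf5 +mpi
  concretizer:
    unify: true  # solve all specs together into one consistent stack
```

With a file like this in place, `spack install` run inside the activated environment builds every spec and its dependencies, and `spack find` lists what was installed.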
RAJA, MFEM, and Flux have also benefitted from open-source communities and their work with the ECP. RAJA is used for GPU portability not only in LLNL’s internal codes, but also by external collaborators. The MFEM project has developed a network of users and contributors and recently held a workshop with over 100 community attendees. The Flux team also developed significant industry partnerships to leverage exascale scheduling capabilities for cloud computing.
The success of the program is exemplified by the broad adoption of E4S software across the cloud and HPC communities. LLNL has existing collaborations with AWS involving Spack and Flux, which have expanded into a memorandum of understanding to define the role of leadership-class HPC in a future where cloud HPC is ubiquitous. According to LLNL, “Building off that collaboration, LLNL and AWS will look to better understand how HPC centers can best utilize cloud resources to support HPC and explore models for cloud bursting, data staging, and data migration for deploying both on-site and in the cloud.”
To read Rob Farber’s entire article, visit this link.
Source: Rob Farber, contributing writer, Exascale Computing Project