At SC19, the annual supercomputing conference held in Denver, Colorado, there were Spack events each day. Reflecting the project's grassroots heritage, nine sessions were planned by more than a dozen thought leaders from seven organizations, including three U.S. Department of Energy (DOE) national laboratories and Sylabs, the company behind Singularity. The conference's 13,600 attendees had the chance to learn about Spack through two meet-and-greets, three birds-of-a-feather (BoF) meetings, three papers, and more.
What is Spack?
Spack is an open-source package manager for scientific software, designed for high-performance computing (HPC) environments on Linux and macOS. It simplifies an otherwise tedious and time-consuming task. The Spack library currently holds more than 3,600 packages; with 2,300 active users and over 480 contributors from national labs, academia, and industry, the collection is constantly growing and improving with community input.
From the Spack website, spack.io, “Packages can be built with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine. It isn’t tied to a particular language; a software stack can be built in Python or R, link to libraries written in C, C++, or Fortran, easily swap compilers, and target specific microarchitectures. Spack can be used to install without root in a home directory, to manage shared installations and modules, or to build combinatorial versions of software for testing.” Packages are templated so users can easily tune for the host environment.
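That templating is concrete: each package is a small Python recipe that declares versions, variants, and dependencies, and Spack resolves a user's requested spec against it. Below is a minimal, hypothetical recipe sketch; the package name, URL, checksum, and variant are placeholders for illustration, not a real Spack package.

    # Hypothetical Spack recipe (a packages/examplelib/package.py file).
    # All names, URLs, and the checksum are illustrative placeholders.
    from spack.package import *

    class Examplelib(AutotoolsPackage):
        """Example numerical library (illustrative only)."""

        homepage = "https://example.org/examplelib"
        url = "https://example.org/examplelib-1.2.0.tar.gz"

        version("1.2.0", sha256="<placeholder-checksum>")

        # Users toggle variants on the command line with +mpi / ~mpi.
        variant("mpi", default=True, description="Build with MPI support")

        depends_on("zlib")
        depends_on("mpi", when="+mpi")

        def configure_args(self):
            # Translate the resolved spec's variants into configure flags.
            if self.spec.satisfies("+mpi"):
                return ["--enable-mpi"]
            return ["--disable-mpi"]

A user could then request a particular build with a spec such as examplelib@1.2.0 +mpi %gcc, and a different compiler or variant would simply produce a second installation alongside the first.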
Ninety people attended the November 21 BoF, led by Todd Gamblin (Senior Principal Member of Technical Staff at Lawrence Livermore National Laboratory (LLNL), middle); Adam Stewart (University of Illinois at Urbana-Champaign); Massimiliano Culpo (Sylabs, Inc.); Greg Becker (LLNL, right); and Peter Scheibel (LLNL, left).
Gamblin leads the Spack development team at LLNL with fellow computer scientists Becker, Scheibel, Tamara Dahlgren, Gregory Lee, and Matt Legendre. Co-developers are from Argonne National Laboratory; Columbia University; École polytechnique fédérale de Lausanne; Fermi National Accelerator Laboratory; Iowa State University; Kitware, Inc.; the NASA Goddard Institute for Space Studies; the Center for Climate Systems Research; the National Energy Research Scientific Computing Center; Perimeter Institute; the University of Hamburg; the University of Illinois at Urbana-Champaign; and the University of Iowa.
With widespread adoption among DOE labs, it’s not surprising that the high-energy physics community based at Fermilab in the U.S. and at CERN (European Organization for Nuclear Research) in Switzerland is replacing a longstanding solution with end-to-end tooling built around Spack.
The BoF incorporated a real-time audience polling tool called Glisser. Participant questions and answers populated pie charts and word clouds, which helped identify future development priorities. Presenters also shared highlights of Spack's impact:
- At LLNL, Spack was used to automate builds of the radiation hydrodynamics code ARES, which has 40 library dependencies, across a dozen compilers on Blue Gene/Q and commodity Linux clusters. Porting time for a new platform was reduced from two weeks to three hours.
- At Oak Ridge National Laboratory, Spack reduced deployment time on Summit, the world’s fastest publicly-ranked supercomputer, from two weeks to twelve hours. Summit’s stack of 1,300 packages can be built overnight.
- DOE’s Exascale Computing Project (ECP) is tasked with ensuring exascale readiness for the U.S. lab complex. Its software stack is large and complex; a brand-new class of 90 applications is managed with Spack so that users will be able to easily incorporate them into many different environments. Security features were added to the open-source GitLab product that integrate with each center’s identity management and schedulers, such as Slurm and LSF.
- The Spack development team is also working to democratize software management across six DOE labs through a shared pipeline: automated package builds, drawing on both public and private repositories, will work with dozens of systems. (A minimal sketch of this kind of scripted, unattended build follows this list.)
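The talks didn't walk through the pipeline internals, but the unattended-build pattern they describe is straightforward to sketch. The following sketch assumes only that the spack command is on PATH; the spec list and log directory are hypothetical.

    # Hypothetical driver for unattended Spack stack builds.
    # Assumes `spack` is on PATH; specs and paths are illustrative.
    import subprocess
    from pathlib import Path

    SPECS = ["hdf5 +mpi", "petsc ^openmpi", "py-numpy"]

    log_dir = Path("build-logs")
    log_dir.mkdir(exist_ok=True)

    failures = []
    for spec in SPECS:
        log_path = log_dir / (spec.split()[0] + ".log")
        with open(log_path, "w") as log:
            # `spack install` skips specs that are already installed,
            # so the driver can be re-run after fixing a failure.
            result = subprocess.run(
                ["spack", "install"] + spec.split(),
                stdout=log, stderr=subprocess.STDOUT,
            )
        if result.returncode != 0:
            failures.append(spec)

    if failures:
        print("Failed specs:", ", ".join(failures))
    else:
        print("All specs installed.")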
Spack is used with Germany’s SuperMUC-NG at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences (LRZ). Gerald Mathias, Deputy Leader of the LRZ Application Group, explained that their interest in Spack emerged about two years ago, during the procurement process for SuperMUC-NG, LRZ’s new 26.9-petaflops HPC system. On the predecessor system, SuperMUC, they managed the HPC software stack for users individually, package by package. Over the years, the stack had grown so large that they realized the practice wouldn’t be feasible on the larger system. That’s when they began to port their stack into Spack.
In late 2018, LRZ deployed a beta release of a Spack stack on its heterogeneous Linux cluster, which serves local Munich universities. At the same time, they were working on the deployment and acceptance phase of SuperMUC-NG, where the Spack-based stack was the primary software stack at the start of user operations in August 2019. “Currently, our Spack-managed stack yields about 110 packages which supply some 220 modules to our users,” said Mathias. “Step-by-step, we continue to include more packages, particularly custom software, commercial software and core software like compilers and MPI libraries,” he added.
For the future, they intend to push hard to keep pace with Spack’s rapid development. A key target is to provide a preconfigured Spack instance in user spaces. This will allow users to configure and install their own software, individually, but based on the common software stack provided by LRZ. “Furthermore, the definition of ‘environments’ seems very promising to compose easy-to-use software stacks for different scientific tasks and domains,” Mathias added.
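The “environments” Mathias mentions pair a list of specs with site configuration in a spack.yaml file, which a center can pre-populate and hand to users. A minimal sketch of that workflow, driving an installed spack from Python (the directory name and specs are hypothetical):

    # Hypothetical sketch: build a preconfigured Spack environment.
    # Assumes `spack` is on PATH; directory and specs are illustrative.
    import subprocess
    from pathlib import Path

    env_dir = Path("chem-stack")
    env_dir.mkdir(exist_ok=True)

    # A minimal spack.yaml: the environment's spec list plus a merged
    # "view" (a single prefix resembling a traditional install tree).
    (env_dir / "spack.yaml").write_text(
        "spack:\n"
        "  specs:\n"
        "  - gromacs +mpi\n"
        "  - py-numpy\n"
        "  view: true\n"
    )

    # `spack -e DIR` runs the command inside the environment at DIR;
    # users could copy this directory and add their own specs.
    subprocess.run(["spack", "-e", str(env_dir), "install"], check=True)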
While the majority of use cases involve x86 CPU/GPU environments, Japan’s flagship supercomputer project, conducted by RIKEN, uses Spack to develop packages for Fugaku (formerly Post-K), its Arm-based (A64FX) system.
“Fugaku is optimized for large scale HPC and AI workflows; for this, together with our industry partner Fujitsu, we developed an HPC-optimized Arm chip, A64FX, that is three times as fast and three times more power efficient than any other CPU, as acknowledged by the recent Green500 and other benchmarks. A64FX machines will be commercialized not only by Fujitsu but also Cray/HPE,” said RIKEN Center for Computational Science Director Satoshi Matsuoka (above photo, center). A64FX complies with the Armv8 architecture and Arm’s server standards, and Fugaku’s software stack is built on top of Red Hat Enterprise Linux. RIKEN and Fujitsu collaborate closely with the Spack development team to fix issues associated with aarch64 (Arm 64-bit) builds. As one of the first exascale or near-exascale machines, Fugaku demands that both hardware and software scale to an unprecedented magnitude of more than 150,000 nodes. Largely thanks to this effort centered around Spack, Matsuoka expects that everything running on current Arm and x86 clusters will ultimately run on Fugaku and other A64FX machines, but much, much faster.
According to Matsuoka, Fugaku’s first six racks will arrive at RIKEN this week (December 3, 2019), and by summer 2020, more than 400 racks will be in place. Early adopters will use the system this winter, and once performance benchmarks are satisfied in December 2020, the system will go into full production in early 2021.
But not all Spack contributors work at large laboratories with massive systems; in fact, the largest number of contributors work with small to mid-sized clusters in academic research computing environments. HPC Architect Glenn Johnson (University of Iowa) was recognized as a featured contributor on Spack’s DOE R&D 100 Award (Software & Services). He uses Spack on Iowa’s Argon cluster, which features 15,000 cores and 250 GPUs.
What motivates Johnson to contribute? “Building and distributing scientific software applications is very challenging and becoming more so over time,” he said. “Spack provides a tool, and a community, to develop best practices and standards for building and distributing scientific software, while still allowing for site variability. With Spack, as a tool, it is possible to benefit from the work of others, while Spack, as a community, provides the opportunity to contribute back and realize the satisfaction that comes from having others benefit from your work,” he added.
As for Spack’s roadmap, DOE labs are leading the effort to detect and label installs for specific microarchitectures (Skylake, Zen, etc.). They’re optimizing packages for both cloud and static HPC environments and planning to focus on container integration; multi-node, parallel builds; and better detection of external packages. They’re also developing a prototype of a new concretizer. In 2020, they will launch multi-stage container builds that automatically exclude build dependencies from the final artifacts.
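The microarchitecture work is already visible at the command line: spack arch reports the detected platform-OS-target triple for the host, and a spec can pin a target explicitly so differently targeted builds coexist. A small sketch (zlib and the skylake target are just examples):

    # Hypothetical sketch: query the detected microarchitecture, then
    # request a build pinned to an explicit target. Assumes `spack`
    # is on PATH; the package and target are illustrative.
    import subprocess

    # Prints something like "linux-ubuntu18.04-skylake" on a Skylake host.
    host_arch = subprocess.run(
        ["spack", "arch"], capture_output=True, text=True, check=True
    ).stdout.strip()
    print("Detected architecture:", host_arch)

    # Pin the build to a named target; Spack records the target in the
    # spec, so builds for different microarchitectures can coexist.
    subprocess.run(["spack", "install", "zlib", "target=skylake"],
                   check=True)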
When asked if Spack is sustainable, Gamblin said that the DOE Advanced Simulation and Computing (ASC) program and Exascale Computing Project are funding a strong core team of developers, and are fully committed. The collaboration with Sylabs, which has a user base of 2.5 million developers worldwide, will help establish an even broader base of adoption.
A National Science Foundation/DOE tutorial will soon be added to their website: spack.io.
Photos by Leake, LLNL, and Fumikazu KONISHI.
About the Author
HPCwire Contributing Editor Elizabeth Leake is a consultant, correspondent and advocate who serves the global high performance computing (HPC) and data science industries. In 2012, she founded STEM-Trek, a global, grassroots nonprofit organization that supports workforce development opportunities for science, technology, engineering and mathematics (STEM) scholars from underserved regions and underrepresented groups.
As a program director, Leake has mentored hundreds of early-career professionals who are breaking cultural barriers in an effort to accelerate scientific and engineering discoveries. Her multinational programs have specific themes that resonate with global stakeholders, such as food security data science, blockchain for social good, cybersecurity/risk mitigation, and more. As a conference blogger and communicator, her work drew recognition when STEM-Trek received the 2016 and 2017 HPCwire Editors’ Choice Awards for Workforce Diversity Leadership.