Feb. 25, 2021 — Earlier this month, the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory hosted the First International Symposium on Checkpointing for Supercomputing (SuperCheck21). This free virtual event drew more than 250 participants from around the world.
Organized by NERSC HPC Consultant Zhengji Zhao and User Engagement Group Lead Rebecca Hartman-Baker, in collaboration with Devesh Tiwari and Gene Cooperman at Northeastern University, the symposium showcased the latest research on checkpoint/restart (C/R), sought to motivate the development of usable C/R tools, and aimed to boost the adoption of those tools in supercomputing workloads.
“Checkpoint/Restart is like saving your place in a video game. But in our case, we want to ‘checkpoint’ or save scientific computing jobs to resume later, for example, so that when the supercomputer unexpectedly needs to be shut down, the job can pick up where it left off when the system comes back online,” said Hartman-Baker.
According to Zhao, the fault tolerance that C/R provides greatly benefits both users and computing centers. For supercomputer users, C/R enables long-running applications and improves queue turnaround. For computing centers, C/R improves system utilization, guards against unexpected system failures, and provides the scheduling flexibility to support diverse workloads with different priorities, such as making space for high-priority, real-time workloads by preempting low-priority jobs. She added that these benefits make C/R critical to many of NERSC’s future plans.
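At the application level, the basic idea Hartman-Baker describes can be sketched in a few lines of Python. This is a minimal illustration only, not a NERSC tool: the checkpoint file name, state layout, and checkpoint interval are all assumptions made for the example. The job periodically saves its state to disk, so that a rerun after an interruption resumes from the last saved step instead of starting over.

```python
import os
import pickle

CKPT = "state.ckpt"  # hypothetical checkpoint file name

def load_state():
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

def save_state(state):
    # Write to a temp file, then rename atomically, so a crash
    # mid-write cannot corrupt an existing checkpoint.
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

def run(n_steps, ckpt_every=100):
    state = load_state()
    while state["step"] < n_steps:
        state["total"] += state["step"]  # stand-in for real computation
        state["step"] += 1
        if state["step"] % ckpt_every == 0:
            save_state(state)            # periodic checkpoint
    return state
```

Transparent C/R tools such as DMTCP aim to provide this behavior without any such changes to the application itself, which is what makes them attractive for the diverse workloads a center like NERSC supports.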
While transparently checkpointing and restarting the huge number of applications running at NERSC at the system level remains a daunting task, Zhao notes that it can be broken into incremental steps by prioritizing the top applications, which account for more than 70% of the computing cycles at NERSC. NERSC is evaluating the MPI-Agnostic, Network-Agnostic (MANA) transparent checkpointing tool, implemented on top of DMTCP (Distributed MultiThreaded Checkpointing), as a promising C/R approach for leading-edge HPC systems.
Despite the promise of MANA and DMTCP, few high-performance computing (HPC) users are applying these tools in real-world scenarios, says Zhao. This disconnect between C/R research and real-world applied use is what inspired the duo to host an international symposium.
“I tried DMTCP for myself about two years ago and found it to be unreliable, at least for the SPAdes genome assembly tool at the time,” said Tony Wildish, Cloud Bioinformatics Lead Architect at the European Bioinformatics Institute, who attended the event. “When I heard about the talk on checkpointing SPAdes at SuperCheck21, I was very interested. Twinkle Jain gave an excellent presentation and gave me a pointer to her code. I’ve started experimenting with it and am able to make it work, so we’ll be investigating it more thoroughly now.”
“The checkpointing that most of my users, and even I, do is fairly rudimentary since our jobs usually have shorter runtimes. So, I saw this symposium as an opportunity to attend a few talks and learn what really challenging checkpointing looks like, and I learned that and more,” said Albert Reuther, senior technical staff at the MIT Lincoln Laboratory Supercomputing Center. “I have several sets of users who do computational fluid dynamics, structural dynamics, and electromagnetic simulations. After SuperCheck21, I now understand the challenges that those users face when they do a checkpoint. If this event were held again, I would definitely recommend it to them.”
After a successful first symposium, Hartman-Baker and Zhao would like to see this become an annual event where different organizations share the responsibility of hosting.
“Our intention is to build a strong and active C/R community, and we hope that this work is carried on to the next generation of computational scientists. That’s why we included a lot of students and early-career researchers in our program committee,” said Hartman-Baker. “To protect research, we want checkpointing to become second nature at NERSC.”
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 7,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.