Time to finally(!) clear the 2022 decks and get the rest of the 2022 Great American Supercomputing Road Trip content out into the wild. The last part of the year was grueling, with more than 5,000 miles of driving over 22 days, the SC22 conference, and then a post-Thanksgiving eight-day trip to South Africa. Oh, and did I mention developing pneumonia?
I’m just now coming back into the world of the living, and what better way than showing you my visit with NERSC/Lawrence Berkeley National Laboratory? LBNL/NERSC sits on top of a mountain overlooking Berkeley and Oakland, sort of like the Grinch, if the Grinch employed a huge staff of scientists and conducted massive amounts of research.
Our first interview was with Dr. Wahid Bhimji, group lead for Data & Analytics Services at NERSC. The mission is big: supporting the open science and HPC needs of the U.S. Department of Energy. We talk a little Perlmutter, which is currently number eight on the Top500, and about its fancy new Slingshot Ethernet-based interconnect. The system features more than 6,000 Nvidia A100 GPUs paired with 64-core AMD Epyc CPUs.
The lab has been rapidly embracing machine learning, with a 6x increase in ML workloads over the last few years. One project in particular is a joint endeavor with Nvidia to use machine learning to better predict the weather. They’re anticipating up to a 10,000x speedup in time to forecast, which is pretty sporty.
Looking down the road, Bhimji sees a role for composable infrastructure in the mix for lab HPC systems. Given the lab’s widely varied workloads, composable infrastructure would help it better apportion resources to satisfy demand and wring more performance out of its systems.
The lab has an Advanced Quantum Testbed, which will at some point deploy a true quantum system on site. They’re not only researching how to design a quantum box, but also spending a lot of time figuring out how best to make use of one once it’s up and running.
Along the way, we touched on his takeaways from SC22, the burgeoning world of AI, and other topics.
As usual, one of the last questions is about his vendor wish list – what he’d like to see from the HPC vendor community in the next three years or so. For Bhimji, cloud orchestration, more modern system software that allows for flexibility and resiliency (rolling reboots, for example), and better integration with a variety of accelerators top the list.
Next we interviewed Inder Monga, executive director of ESnet (the Energy Sciences Network), the vital network that provides high-bandwidth connections linking national labs, universities, and tens of thousands of researchers. As Monga put it, “we are the data circulatory system for the DOE.”
In the last year, ESnet has moved more than 1.4 exabytes of data. Exabytes. That’s a pretty large amount of data. And they don’t just move a lot of data; they move it fast, with a network backbone natively running at 400 Gbps. They’re currently in the process of extending that 400 Gbps capacity to individual labs, which will radically increase existing capabilities.
One example of where that bandwidth is needed is SLAC National Accelerator Laboratory and its new LCLS-II instrument, which will be the first XFEL based on continuous-wave superconducting accelerator technology. It comes standard with a linac energy of 4 GeV and not one, but two tunable-gap undulators. I have no idea what all of that means except that the instrument will be spitting out data at around 1 Tb per second and transmitting it via ESnet to NERSC for processing. Wow.
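Just as a sanity check on those numbers (this is my own back-of-the-envelope arithmetic, not anything ESnet quoted, and it assumes the 1.4 EB figure is spread across a full year), here’s the quick math:

```python
# Back-of-the-envelope math on the ESnet figures mentioned above
# (assumed: ~1.4 EB moved per year, a 400 Gbps backbone, ~1 Tb/s from LCLS-II).

SECONDS_PER_YEAR = 365 * 24 * 3600          # ~31.5 million seconds

# 1.4 exabytes expressed in bits (1 EB = 10**18 bytes, 8 bits per byte)
bits_moved = 1.4e18 * 8

# Sustained rate needed to move that volume over a year
avg_gbps = bits_moved / SECONDS_PER_YEAR / 1e9
print(f"1.4 EB/year works out to roughly {avg_gbps:.0f} Gbps sustained")  # ~355 Gbps

# LCLS-II's ~1 Tb/s peak versus a single 400 Gbps backbone link
print(f"LCLS-II peak is about {1000 / 400:.1f}x a single 400 Gbps link")  # 2.5x
```

In other words, last year’s traffic alone averages out to roughly 355 Gbps around the clock, and a single 400 Gbps link can’t swallow LCLS-II running flat out – which is exactly why terabit-class capacity is on the roadmap.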
The NSF is funding an initiative for ESnet to build a terabit ring around the U.S. to speed data transfer among the national labs and other research organizations. This should provide enough bandwidth for even the most demanding data transfers.
Last but certainly not least, we spent a few minutes talking with Cory Snavely, group manager of the NERSC Infrastructure Group. His group runs the systems that glue everything together for the lab, providing the mechanisms to build gateways, data interchanges, and other services that help automate the tying together of computation, storage, and more.
One of the group’s goals is to become more cloud-like, making it easier for users to submit jobs and making NERSC services look more like their cloudy counterparts.
The next big challenge for the group is embracing the upcoming exascale system, “NERSC 10”, which could deliver as much as 10x the performance of Perlmutter. Snavely discusses the types of new services the group is preparing in order to deal with a beast of this size, along with the expanded number of users, jobs, and usage models that will come with it.
Snavely’s vendor wish list includes better interoperability and standardization across systems and modes of usage (batch vs. real time, for example). Better integration of back-end scheduling will be important in order to satisfy both sets of workloads.
This was a very meaty session with the LBNL, ESnet, and NERSC folks, and I learned a lot about their missions and what’s coming down the road at the lab.