May 25, 2023 — In the latest episode of the Let’s Talk Exascale Podcast, Scott Gibson speaks with Bronis R. de Supinski about El Capitan, LLNL’s upcoming exascale-class supercomputer.
El Capitan, LLNL’s first exascale-class supercomputer, is projected to exceed two exaFLOPS (two quintillion floating-point operations per second) of peak performance. That capability could make El Capitan the most powerful supercomputer in the world when it comes online.
Bronis is chief technology officer for Livermore Computing at LLNL. He formulates LLNL’s large-scale computing strategy and oversees its implementation. He frequently interacts with supercomputing leaders and oversees many collaborations with industry and academia. Previously, Bronis led several research projects in LLNL’s Center for Applied Scientific Computing.
He earned his Ph.D. in computer science from the University of Virginia in 1998 and joined LLNL in July of that year.
In addition to his work with LLNL, Bronis is a professor of exascale computing at Queen’s University Belfast.
Throughout his career, Bronis has won several awards, including the prestigious Gordon Bell Prize in 2005 and 2006 and two R&D 100 Awards. He is a fellow of the ACM and IEEE.
Scott: Welcome, Bronis. Thank you.
Bronis: Thanks. Nice to be here.
Scott: Great. Well, let’s get going here talking about El Capitan. How is the siting process going for El Capitan?
Bronis: It’s going well. You know, we had to first start with a big project for getting the whole building ready; that was called the Exascale Computing Facility Modernization Project that [High Performance Computing Chief Engineer] Anna Maria Bailey led for Livermore. And that has increased the power available in our main data center on our main compute floor to 85 megawatts. And that’s been done for about a year now. That also gives us about another 15 megawatts for cooling. So we actually have a 100-megawatt data center now.
Since then, there have also been, of course, preparations for the system itself. That project upgraded the building; inside the building, we needed to deploy water through the primary cooling loop and do some upgrades to the electrical system that actually brings the power from the wall all the way to the system. That work basically finished about a week or so ago. So that’s now all done, and we’re ready to start siting the computer in our machine room.
Scott: All right. Well, how is El Capitan going to impact Livermore Lab’s core mission of national nuclear security?
Bronis: Well, we expect El Cap to be a transformative system. Our existing system is Sierra, and one of my happiest moments was when I heard members of our code teams say that Sierra was really the first system they found truly transformative, in that it had actually made 3D simulations fairly routine; they can complete them in a reasonable period of time.
El Capitan is going to significantly increase the capability that we provide to our users. And I expect it’ll have a similar transformative effect, in that now they’ll be able to run those 3D simulations so routinely that they can use them in uncertainty quantification on a very rapid turnaround basis.
Scott: Will there be an unclassified companion system for El Capitan like you have with Lassen for Sierra?
Bronis: Lassen, yes. Lassen is a park in Northern California near Lake Shasta. So, yes, we’re planning to get an unclassified system that will be called Tuolumne. We pretty regularly take the names for our biggest systems from California landmarks and mountains. El Capitan, of course, is an iconic rock face in Yosemite. Tuolumne Meadows is a nice area up near the highest point in Yosemite, close to the Tioga Pass. Tuolumne will be roughly 10 to 15% the size of El Capitan.
Scott: All right. With the recent success at LLNL of fusion ignition, will El Capitan be used for fusion research?
Bronis: Some. Primarily, El Cap will be used by the Advanced Simulation and Computing program for the stockpile stewardship mission. But we do have a team actively working on an application they call ICECap, which uses a variety of techniques to simulate the NIF [National Ignition Facility] beams. The goal of that set of simulations is to simulate the fusion processes, the ignition process, well enough that we can make achieving energy gain a regular occurrence with NIF, which of course is where the big fusion energy experiments take place.
Scott: What other scientific areas might benefit from the capabilities of El Capitan? I guess beyond nuclear.
Bronis: El Capitan will be used pretty heavily, not quite exclusively but nearly exclusively, for stockpile stewardship. Tuolumne will contribute more to the wider range of scientific areas. Now, there’s a wide range of scientific disciplines that get explored as part of stockpile stewardship. There’s a lot of materials modeling, a lot of basic physics of how the universe fits together. We’ve typically had a wide range of molecular dynamics runs, some QCD, and seismic modeling on our systems.
What will probably happen is that those sorts of applications, climate modeling and that sort of thing, will run on Tuolumne. And if there’s a particular case to be made, we can occasionally provide briefer runs on the big system.
Scott: What is the role of AI going to be on El Capitan, and moving forward even beyond El Capitan? What do you see as the role of AI?
Bronis: So the ICECap application that I mentioned actually uses AI. We’ve been very actively exploring cognitive simulation, which is where we use AI techniques, primarily deep neural networks, to short-circuit the need to do detailed physics simulations of some aspects of these large multi-physics simulations.
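To make that idea concrete, here is a minimal, hypothetical Python sketch of a pre-trained neural-network surrogate standing in for an expensive physics kernel inside a multi-physics time-stepping loop. This is not LLNL’s actual code: the toy one-dimensional state, the tiny MLP, and the random (untrained) weights are all assumptions made only to keep the sketch self-contained.

```python
import numpy as np

def expensive_physics(state):
    """Detailed (slow) physics kernel; here just a toy nonlinear update."""
    return state + 0.01 * np.sin(state) - 0.001 * state**3

class SurrogateMLP:
    """Tiny MLP surrogate for the kernel above. In practice the weights
    would be trained offline on inputs/outputs of the detailed kernel;
    random weights here only keep the sketch runnable on its own."""
    def __init__(self, width=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(1, width))
        self.b1 = np.zeros(width)
        self.w2 = rng.normal(scale=0.1, size=(width, 1))
        self.b2 = np.zeros(1)

    def __call__(self, state):
        h = np.tanh(state[:, None] @ self.w1 + self.b1)  # hidden layer
        return (h @ self.w2 + self.b2).ravel()           # predicted update

surrogate = SurrogateMLP()

def step(state, use_surrogate=True):
    """One time step: the surrogate short-circuits the detailed kernel."""
    if use_surrogate:
        return state + surrogate(state)  # NN predicts the state update
    return expensive_physics(state)

state = np.linspace(-1.0, 1.0, 8)  # toy 1-D field
for _ in range(100):
    state = step(state)
print(state)
```

The design point the sketch illustrates is the trade Bronis describes: the surrogate call is a few small matrix products, so it can sit inside the inner loop where a full physics solve would be far too slow.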
ICECap is using a model called the Hermit model that models portions of the overall fusion process. I don’t think I want to get into all the details of what it does, but we’re actively looking at ways to use AI throughout. I mentioned uncertainty quantification: that’s where we run a wide range, basically a parameter sweep, of a specific type of simulation and then try to understand the uncertainties involved in that simulation. We tend to use AI models to guide the parameter choices in those simulations. And in ICECap, AI is also used at the lowest level of the simulation, right within the inner loop, to simulate specific physical aspects.
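As a similarly hedged sketch of AI-guided parameter selection for an uncertainty-quantification sweep (a toy illustration, not the ICECap workflow): fit an ensemble of cheap models to the simulations run so far, then spend the next expensive run where the ensemble disagrees most. Every function and constant below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_simulation(theta):
    """Stand-in for one expensive multi-physics run at parameter theta."""
    return np.sin(3.0 * theta) + 0.1 * rng.normal()

# Candidate parameter values for the sweep.
candidates = np.linspace(0.0, 2.0, 201)

# Seed the sweep with a handful of runs.
thetas = list(rng.choice(candidates, size=6, replace=False))
results = [run_simulation(t) for t in thetas]

for _ in range(10):
    # Bootstrap ensemble: refit a small model on resampled (theta, result) pairs.
    preds = []
    for _ in range(20):
        idx = rng.integers(0, len(thetas), size=len(thetas))
        coeffs = np.polyfit(np.asarray(thetas)[idx], np.asarray(results)[idx], deg=2)
        preds.append(np.polyval(coeffs, candidates))
    uncertainty = np.std(preds, axis=0)  # ensemble disagreement per candidate

    # Spend the next expensive run where the model is least certain.
    theta_next = candidates[np.argmax(uncertainty)]
    thetas.append(theta_next)
    results.append(run_simulation(theta_next))

print(sorted(thetas))
```

The same loop structure works with any surrogate in place of the polynomial fit; the point is that each expensive simulation is placed where it most reduces the uncertainty of the sweep, which is what enables the rapid-turnaround UQ Bronis mentions.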
Scott: What can you tell us about the El Capitan software? I’m thinking of TOSS, RHEL, Spack, and Flux.
Click here to listen and access the full transcript.
Source: Scott Gibson, ECP