As Intel, HPE, and Argonne National Laboratory drive toward a 2022 delivery of the Aurora leadership-class supercomputer, HPCwire spoke with Dr. Robert Wisniewski, Intel Fellow: SuperCompute Software, Aurora technical lead and PI, to learn about Intel’s Borealis testbed for Aurora. Wisniewski also explains why he views High Bandwidth Memory as a game-changer for HPC.
HPCwire: What is Borealis? Give us the basics.
Wisniewski: Borealis is our testbed for evaluating, testing, and debugging the Aurora system and the new technologies going into it. It’s a two-rack mini-system, so it’s small compared to the 100+ racks we’ll have at Argonne – but it will have more compute power than many HPC centers, and we expect it to rank on the TOP500 when it’s completed.
It’s at our Jones Farm lab in Oregon and will remain there as a maintenance and support system after Aurora is installed at Argonne.
HPCwire: Does Borealis have the same system architecture and design as Aurora?
Wisniewski: Yes, the two machines are purposely the same architecture and design.
HPCwire: What are the node and interconnect specs?
Wisniewski: Each node has eight HPE Slingshot 11 NICs; the interconnect is a Dragonfly topology utilizing HPE Slingshot networking.
HPCwire: Why is Borealis important? What is its value?
Wisniewski: Every vendor has some way of testing and debugging a new system, so the basic idea isn’t new. Borealis is critical because of the size and scale of the Aurora system, the number of new technologies, and the complexity of the software stack.
The value Borealis brings is letting us do as much of the debug and integration work as possible early on, so that when we scale the system at Argonne, we’ve already tested the smaller-scale functional capability. Because we are using the machine and software stack now, we can confirm that all the pieces of the complex software stack can be built, integrated, installed, and executed well ahead of the critical and challenging work of installing and scaling the hardware. Having as much of that behind us as possible will let us focus on the issues that only show up as you scale the system.
HPCwire: Talk about some of those challenges.
Wisniewski: When you start scaling the hardware and integrating the software, you inevitably see problems that are hard to find in a single-node or small-cluster configuration. There are a lot of challenges in debugging at scale – many components need to synchronize and scale together, and the bugs induced by scaling and synchronization often aren’t easy to reproduce reliably. It’s an invigorating but challenging exercise.
You find a couple of classes of problems. One has to do with the sheer number of nodes and not specifically with the parallelism; they’re the ones our cloud customers also see. These are low-probability bugs that don’t show up until you are running tens of thousands of nodes. They run the gamut and involve both hardware and software.
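As a rough illustration of that first class of problem, a back-of-the-envelope calculation shows how a bug that is vanishingly rare on one node becomes a near-certainty across tens of thousands of nodes (the per-node rate below is an assumed figure for illustration, not an Aurora number):

```python
# Back-of-the-envelope sketch (illustrative only; the per-node rate is an
# assumed figure, not an Aurora number). A bug that is vanishingly rare on
# one node becomes a near-certainty across tens of thousands of nodes.
p = 1e-5  # assumed chance that a single node hits the bug in a given hour

for nodes in (4, 1_000, 10_000, 100_000):
    p_any = 1 - (1 - p) ** nodes  # chance at least one node hits it that hour
    print(f"{nodes:>7} nodes -> P(at least one hit per hour) = {p_any:.3f}")
```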
Another class involves obscure timing problems, or timing-window problems, that happen when someone didn’t consider certain small, unlikely flows of execution across different nodes, or an unexpected interleaving of events or actions taken across the different nodes. These problems don’t happen exclusively at scale, but often they only start manifesting themselves when you have enough nodes to really start testing things. You might eventually find them if you ran millions or billions of tests on a four-node system, but if you’re running on many nodes in parallel, you will encounter them much sooner.
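A toy sketch of that second class of bug (illustrative Python, not Aurora code): several workers do an unsynchronized read-modify-write on shared state, and whether an update is lost depends entirely on how their steps interleave. The sleep deliberately widens the window so the race shows up even in a tiny run; at scale, real workloads hit such windows without any help.

```python
# Toy illustration of a timing-window bug: an unsynchronized read-modify-write,
# where one worker's update can silently clobber another's.
import threading
import time

counter = 0

def worker():
    global counter
    value = counter        # read shared state
    time.sleep(0.001)      # widen the timing window
    counter = value + 1    # write back, possibly clobbering another update

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("expected 8, got", counter)  # usually prints a smaller number
```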
Then there’s a class of insidious problems that don’t manifest themselves until you get to scale and are really stressing the software stack or the hardware. These are often the harder ones to find. As a debugger, whether of hardware or software, you want a simple test case that’s isolated down to a small piece of code and that is reproducible, so you can run it again and try things out until you figure out what the problem is. If these problems only happen at scale, you need a lot more nodes to reproduce them. Even then, it may not be clear where you should look for the problem. Is it across all the nodes? Or is it just one node that’s getting hung up? It can be difficult to reproduce an issue when you need a lot of events or interactions at scale to trigger it. So you need to gather a lot of information, and that makes getting to the root of the problem really challenging. Those are the ones you hope you don’t have.
HPCwire: Will Borealis help with all three classes of problems?
Wisniewski: Yes. Borealis will certainly help with the first class of problems, where you just need to run the program over enough time. It will let us get a lot of CPU hours under our belts. Borealis will also be large enough that, over enough CPU hours, we would hope it starts showing us the low-likelihood timing issues – the second class of problems. And the third class – if there’s something in there and we run enough cycles, hopefully we would come across it.
HPCwire: Will you do “big bang” integration on Borealis, or is it a step-wise process?
Wisniewski: Definitely step-wise. We’re starting with soft-tooled components and engineering samples on the hardware side and preliminary or open-source versions of software. Month by month and quarter by quarter, as we approach delivery of the machine, Borealis will get ever closer to the Aurora production environment, and the hardware and software will eventually mirror what we’ll deliver to Argonne.
HPCwire: Changing gears a bit – Intel made some significant announcements this year. Is there one that you’re especially excited about?
Wisniewski: We’re all excited about our next-generation Xeon Scalable processor, codenamed “Sapphire Rapids,” about High Bandwidth Memory (HBM), and about DAOS. In particular, I believe HBM will have a huge impact on exascale applications and many others.
Memory bandwidth is the biggest challenge we’re facing in scaling HPC applications. A growing share of HPC applications – probably more than 50 percent – are running into performance challenges and bottlenecks due to insufficient memory bandwidth. It’s the long pole in the performance tent.
HBM is an opportunity to significantly increase memory bandwidth, and that’s exciting. It’s going to be a huge win for seismic imaging, hydrodynamics, weather forecasting, neutron transport, and Monte Carlo particle transport codes, to name a few.
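For a sense of why bandwidth, rather than compute, is so often the long pole, here is a rough, illustrative measurement sketch (the array size and kernel are arbitrary choices, not Aurora or HBM figures). A STREAM-style “scale” kernel does only one multiply for every 16 bytes it moves, so its runtime is set almost entirely by memory bandwidth:

```python
# Rough, illustrative bandwidth sketch (arbitrary sizes, not Aurora/HBM figures).
# The "scale" kernel does one multiply per 16 bytes moved, so runtime is set
# by memory bandwidth, not compute; that is the bottleneck HBM aims to widen.
import time
import numpy as np

n = 20_000_000                     # ~160 MB per float64 array
a = np.zeros(n)
b = np.random.rand(n)

start = time.perf_counter()
np.multiply(b, 2.0, out=a)         # scale kernel: read b, write a
elapsed = time.perf_counter() - start

bytes_moved = 2 * n * 8            # one 8-byte read plus one 8-byte write per element
print(f"effective bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")
```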
HPCwire: We know doctors take the Hippocratic Oath to do no harm. I’ve heard you talk about an analogous “oath” for people working in HPC. Can you share that with us?
Wisniewski: In software and supercomputing, we say software should never get in the way of hardware. You always want the schedules of these programs to be gated by how long it takes to get the hardware and get it installed. You want to do everything to the system software stack and applications that you can ahead of time.
Borealis is one aspect of that approach. Intel has all our engineering teams going full bore. HPE is firming up the software stack and the designs for racks, storage, and networking elements. Argonne is building its new facility to house Aurora and helping researchers and scientists get their codes ready to go, and Intel is providing early technologies and access for Argonne’s development testbed and for early science by DOE and ECP users. We’re all laser-focused on having everything in place to do significant science on day one.
HPCwire: And do people still bring you PEZ dispensers?
Wisniewski: I started carrying a PEZ dispenser a few years ago as a reminder of the Peta-Exa-Zetta continuum. People were talking about exascale as if it’s the end, but it really just represents a point in the continuum. We need to do our exascale work with an eye toward the future and think about how we’re going to scale another 10, 100, or 1000 times. People do still bring me PEZ dispensers – I’ve got one that’s two feet high in my office right now! And I do think we’ll get to zettascale computing.
Header image: The images on the cabinet panels of Borealis represent the Early Science Projects that are preparing to run on Aurora.