It was with a hint of nostalgia that Argonne Lab’s Bill Allcock described the Argonne Leadership Computing Facility’s (ALCF) decision to switch to a commercially supported workload management suite after 20-plus years spent developing and using ALCF’s custom workload manager, Cobalt. Argonne National Laboratory announced today that it is deploying Altair PBS Professional across the organization’s HPC systems and clusters.
“From the inception of ALCF, we wrote our own scheduler called Cobalt, and we decided with the scale we were starting to hit with Aurora coming down the line, we needed to do something else,” said Allcock, who manages ALCF’s advanced integration group. ALCF is currently preparing to take delivery of Aurora, the Intel-HPE system slated to be one of the United States’ first exascale systems.
Looking out at the scaling challenges inherent in the 9,000-plus node Aurora, ALCF had planned to write its own workload manager, a successor of sorts to Cobalt. But one part the team wasn’t keen to build was the resource manager, so they went looking for a drop-in solution, making inquiries to a number of workload management providers, including Altair.
The way Allcock tells it, his team was won over by the capabilities of PBS Pro after meeting with Altair. The decision to switch garnered unanimous support, despite the team’s fondness for DIY builds.
The ALCF team is still deeply involved in the software’s development, but now it contributes to a much broader effort through OpenPBS, the open source side of PBS Pro. At one point in the exploratory process, ALCF considered implementing the open-source OpenPBS software on its own, but having an exascale machine on the horizon strongly incentivized the extra testing and support provided by Altair. Any consideration of licensing costs had to be weighed against the risks of potential unplanned downtime for a half-a-billion-dollar computational instrument. Leveraging Aurora’s volume — its 9,000-plus nodes — a deal was negotiated for a site-wide license covering the entire lab.
Allcock said it did take some “soul searching” to proactively shift the DIY culture that had been part of his team for so long.
“I got into this when ALCF was created and we picked up Cobalt, which was written at Argonne. And so I was a Cobalt guy through and through until about two years ago. And then we looked at what must have been a dozen different schedulers, thinking about what did they do well, what do they not do well, could we pull pieces out and reuse them? That kind of stuff.”
The ultimate decision involved factors that will be relatable to many of our readers.
“Bill Nitzberg [CTO of Altair] has told me that he thinks that Altair is benefiting from this, but from our side, it was a huge win, because it was a perfect melding of our strengths and weaknesses,” said Allcock. “If you were running a Blue Gene machine, which is what we used to run, there was no better scheduler in the world than Cobalt. We knew [IBM] Blue Gene inside and outside and we had tuned that bad boy…Blue Gene…but we had almost no documentation. Support was, you know, best effort from my small team. And then we go to PBS… there are 2,300 pages of very good documentation. We have a support contract now so that we can get help. Our contributions are going to see a much broader use. For us, it was always us and then a couple of other Blue Gene sites that ran Cobalt. Now, every contribution we make is helping everybody who uses OpenPBS, because as I said, we always contribute to the open side.”
A benefit of PBS Pro is its ability to work across the lab’s diverse endpoint systems, even allowing co-scheduling across machines, said Allcock. Other relevant features include TLS encryption support; simulation capabilities for modeling different scheduling scenarios; and support for Nvidia’s multi-instance GPU (MIG) mode, which enables one GPU to be operated as up to seven logical GPUs.
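For readers who haven’t used PBS Pro, jobs are described in a batch script with `#PBS` directives and submitted via `qsub`. A minimal sketch follows — the queue name, chunk counts, and resource values here are illustrative assumptions, not ALCF’s actual configuration:

```shell
#!/bin/bash
# Hypothetical PBS Pro job script. Queue name, resource values, and the
# application launch line are illustrative, not Argonne's configuration.
#PBS -N example_job
#PBS -l select=2:ncpus=32:ngpus=4   # request 2 chunks, each with 32 CPUs, 4 GPUs
#PBS -l walltime=01:00:00           # one-hour wall-clock limit
#PBS -q workq                       # queue name varies by site

cd "$PBS_O_WORKDIR"                 # start in the directory qsub was run from
mpiexec -n 8 ./my_app               # launch command is application-specific
```

The script would be submitted with `qsub example.sh`, and `qstat` reports its queue status; these commands and the `select` chunk syntax are standard PBS Professional.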
ALCF’s integration team has been running PBS Pro in Docker on their laptops and testing it out for a couple of years, but the confidence that it would work at extreme scale came from work Altair conducted in 2015, using AWS resources to create a system that accurately simulated 70,000 PBS Professional nodes. “The public numbers for Aurora are 9,000-plus nodes, so we got a factor of seven or eight to work with there,” said Allcock.
ALCF has now been running PBS Pro on testbed machines for about six months and is in the process of installing the software on its new Polaris supercomputer, which will be the first Argonne system to use PBS Pro in production.
Announced in August, Polaris spans 280 HPE Apollo Gen10 Plus systems, housing a total of 560 AMD Epyc CPUs and 2,240 Nvidia A100 GPUs, which together deliver 44 petaflops of peak double-precision performance. Although the two systems use different HPE architectures and different GPU devices, both Polaris and Aurora feature node-level CPU-GPU heterogeneity and employ Slingshot networking.
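The quoted Polaris totals line up arithmetically. A quick back-of-the-envelope check — the per-GPU figure of roughly 19.5 teraflops FP64 (Tensor Core) peak is our assumption about the A100, not a number stated in the announcement:

```python
# Back-of-the-envelope check of the Polaris figures quoted above.
NODES = 280
TOTAL_CPUS = 560
TOTAL_GPUS = 2240
A100_FP64_TFLOPS = 19.5  # assumed A100 FP64 Tensor Core peak, per GPU

cpus_per_node = TOTAL_CPUS // NODES   # -> 2 AMD Epyc CPUs per node
gpus_per_node = TOTAL_GPUS // NODES   # -> 8 Nvidia A100 GPUs per node
peak_pflops = TOTAL_GPUS * A100_FP64_TFLOPS / 1000.0

print(cpus_per_node, gpus_per_node)   # 2 8
print(round(peak_pflops, 1))          # 43.7 -- consistent with the quoted ~44
```

So each Apollo node carries two CPUs and eight GPUs, and the GPU count alone accounts for essentially all of the quoted 44-petaflop peak.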
A flexible and supported workload management suite that can operate smoothly at extreme-scale translates into greater scientific and engineering insight for the researchers that Argonne supports. In 2020, the ALCF contributed more than 100 million node hours of computing time, expanding the horizons of deep-space research, improving aircraft efficiencies, and accelerating material and drug design.
“Altair is proud to provide scalability fit for exascale computing and to help Argonne’s scientific user community accelerate their groundbreaking work,” said James R. Scapa, the founder and chief executive officer of Altair, in a statement.
Headquartered in Troy, Michigan, Altair Engineering is a publicly traded company that specializes in product simulation software, HPC technologies and data analytics tools. The company was founded by James R. Scapa, George Christ, and Mark Kistner in 1985. The PBS Pro HPC workload management technology was originally developed at NASA’s Ames Research Center in the early 1990s and was acquired by Altair, along with its development team, in 2003. Last year, Altair expanded its capabilities in high-performance computing via the acquisition of Univa (known for Grid Engine) and Ellexus (maker of I/O profiling tools).