More than a year ago, the National Center for Atmospheric Research (NCAR) announced that it was procuring a new supercomputer from HPE. That system — eventually (thanks to a middle-schooler) named “Derecho,” after a “straight-line moving windstorm” that “can be as destructive as a tornado” — is now on the horizon, with delivery anticipated in the back half of this year. At the virtual HPC User Forum this week, Rory Kelly — a software engineer at NCAR — talked about the center’s work and its preparations for the impending arrival of Derecho.
NCAR and HPC
“The purpose of HPC at NCAR, really, is to provide robust, state-of-the-art, high-performance computing systems, and the systems should be able to accelerate the research that’s done by the community, and not just to accelerate computing research on its own,” Kelly said. “The other bullet point is to co-develop our systems: not just to work on developing our HPC systems in a vacuum, but to work with earth systems science researchers and make sure that we’re designing systems that will be able to revolutionize and transform the science that they do.”
Supporting this, Kelly showed a pie chart breakdown of NCAR’s HPC usage by scientific domain. “Almost half” — 49.8 percent — “of our cycles support climate and large-scale dynamics,” he said. Just 2.4 percent, by contrast, support computational science research.
Focusing on the science, not the flops
NCAR, he said, targets actual science activities “very early on” in the procurement process for a new system, forming a “Science Requirements Advisory Panel” with internal and external scientists. The panel for Derecho surveyed 50 projects, identifying the science they planned to do and the computational abilities the new system would need to support it.
“There was a group doing land-atmosphere interactions and they were targeting big runs — so expecting to need, you know, 30 to 60 thousand CPU cores for a large run or up to a hundred GPUs for a large run,” Kelly said. “Another group surveyed was doing development of a model and they needed somewhat modest compute resources — maybe only 500 cores — to do their development. However, they had different needs: they needed persistent and reliable storage for up to a year and a half. … And over the life of the system, they needed the ability to do workflow and to do continuous integration, to use containers, to have portable workflows — all of these things just to support the workflow of developing a model.”
“[There] was a severe weather modeling group, and they were specifically targeting machine learning to look at severe weather modeling,” he continued. “And in their case, they had this great need — in addition to the actual GPU nodes for computation, they needed significant memory and they also had pretty vast storage requirements: they were expecting up to half a petabyte of storage just to hold their training data sets, and they needed a commensurately high storage bandwidth in order to enable their application.”
Chasing Derecho
Cheyenne, NCAR’s current system, is an HPE-built supercomputer that debuted in 2016 in 20th place on the Top500. Its 4,032 Intel Xeon Broadwell-powered nodes are split into small-memory (3,168 nodes, 64GB) and large-memory (864 nodes, 128GB) variants, delivering 4.79 Linpack petaflops.
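The node split is easy to sanity-check against Cheyenne’s published totals of roughly 145,000 cores and about 313TB of memory. A minimal back-of-envelope in Python, assuming the system’s dual-socket, 18-core-per-socket Broadwell configuration:

```python
# Back-of-envelope check on Cheyenne's node split. Node counts and memory
# sizes are from the article; the 2 x 18-core Broadwell layout is assumed.
small_nodes, small_mem_gb = 3168, 64
large_nodes, large_mem_gb = 864, 128
cores_per_node = 2 * 18  # dual-socket, 18-core Xeon Broadwell (assumed)

total_nodes = small_nodes + large_nodes
total_cores = total_nodes * cores_per_node
total_mem_tb = (small_nodes * small_mem_gb + large_nodes * large_mem_gb) / 1000

print(f"{total_nodes} nodes, {total_cores} cores, ~{total_mem_tb:.0f} TB memory")
# -> 4032 nodes, 145152 cores, ~313 TB memory
```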
Nearly six years later, NCAR is eagerly anticipating its next system — just the third major installation at its Wyoming datacenter, following the Yellowstone system installed in 2012 and Cheyenne itself.
“Derecho is an HPE Cray system,” Kelly said. “It is 2,400 traditional CPU nodes and then 82 GPU nodes. So the CPU nodes are dual-socket, 64-core [AMD] Milans, 256GB there — so looking at just the CPU portion, about 300,000 CPU cores, which is roughly twice as many cores as our current HPC system.”
“Then, additionally, the 82 GPU nodes are something that’s fairly new to NCAR. We haven’t had a really large GPU-based compute system yet. Those GPU nodes are single-socket, 64-core Milans, but they each feature four Nvidia A100 GPUs with 40GB of high-bandwidth memory on-package and 512GB of on-node memory.”
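Taking Kelly’s figures at face value, the headline numbers fall out of straightforward multiplication. A quick sketch (the Cheyenne core count used for the “roughly twice” comparison is the Top500-listed 145,152):

```python
# Rough totals for Derecho from the node counts Kelly quoted.
cpu_nodes, cpu_sockets, cores_per_socket = 2400, 2, 64
gpu_nodes, gpus_per_node, hbm_gb_per_gpu = 82, 4, 40

cpu_cores = cpu_nodes * cpu_sockets * cores_per_socket  # CPU partition
total_gpus = gpu_nodes * gpus_per_node                  # 4x A100 per GPU node
total_hbm_tb = total_gpus * hbm_gb_per_gpu / 1000       # aggregate HBM

cheyenne_cores = 145_152  # Top500-listed figure for the current system
print(f"{cpu_cores} CPU cores ({cpu_cores / cheyenne_cores:.1f}x Cheyenne)")
print(f"{total_gpus} A100s, ~{total_hbm_tb:.1f} TB aggregate HBM")
# -> 307200 CPU cores (2.1x Cheyenne)
# -> 328 A100s, ~13.1 TB aggregate HBM
```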
All of these nodes, he said, will be connected by HPE’s Slingshot-11 interconnect and will be supported by a 60PB Lustre filesystem that will serve primarily as scratch compute storage. In aggregate, the system will deliver 19.87 peak petaflops of compute power.
But: “More importantly to us, it’s about three and a half times the sustained performance of the Cheyenne HPC system that we currently have,” Kelly said. “And that’s based on our application benchmarks that are specific to our workloads.”
About 2.8× of that 3.5× increase, Kelly said, comes from the CPU nodes. Those nodes — which fit four to a blade — are superior in their power and cooling efficiency to the two-per-blade GPU nodes. Overall, the water-cooled system is expected to draw 2.6 to 2.7MW when it’s in regular production, for a power efficiency of about 171 megaflops per watt — more than double the 73 megaflops per watt of Cheyenne.
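Those per-watt figures appear to be measured against NCAR’s sustained application benchmarks rather than peak flops: 19.87 peak petaflops over a ~2.65MW midpoint draw would work out closer to 7,500 megaflops per watt. A rough sanity check of the arithmetic, treating the 2.65MW midpoint and Cheyenne’s Top500-listed ~1.7MW draw as assumptions:

```python
# Sanity check on the efficiency figures. The 2.65 MW midpoint for Derecho
# and Cheyenne's ~1.727 MW (Top500-listed) draw are assumptions here.
derecho_mw, derecho_mf_per_w = 2.65, 171
cheyenne_mw, cheyenne_mf_per_w = 1.727, 73

# Peak-based efficiency, for contrast with the sustained-benchmark figure:
peak_mf_per_w = 19.87e15 / (derecho_mw * 1e6) / 1e6   # ~7,500 MF/W

# Implied sustained throughput (MF/W x MW = teraflops) and the headline ratios:
derecho_tf = derecho_mf_per_w * derecho_mw
cheyenne_tf = cheyenne_mf_per_w * cheyenne_mw
print(f"peak efficiency ~{peak_mf_per_w:.0f} MF/W")
print(f"efficiency gain {derecho_mf_per_w / cheyenne_mf_per_w:.1f}x")        # ~2.3x
print(f"sustained speedup ~{derecho_tf / cheyenne_tf:.1f}x (quoted ~3.5x)")  # ~3.6x
```

By that reading, the 171-to-73 ratio is the “more than double” efficiency claim, and the roughly 3.5× sustained speedup is that efficiency gain compounded by Derecho’s higher power draw.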
Upgrading the NCAR datacenter for Derecho
Derecho will be housed in the ten-year-old NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming. “The datacenter actually had to be expanded in order to accommodate the Derecho system that’s coming in,” Kelly said. “It was designed to be a two-module datacenter, but only the first module — module B — was built out, and so far, all of our systems have gone into module B. However, there’s going to be a period of overlap between the Cheyenne system that’s currently in module B and the Derecho system, and module B couldn’t hold them both. So we had to build out the second module in order to accommodate Derecho.”
Module A, he explained, is a 12,000-square-foot layout designed specifically for HPC workloads. Derecho will be housed in module A, while the accompanying parallel filesystem will be housed in module B. “Cheyenne won’t be around for much longer after Derecho arrives,” he said, “but there will be a months-long period of overlap.”
At the time of its announcement, Derecho was (perhaps optimistically, given the pandemic) targeted for operation in early 2022. Now, Kelly says, they expect the hardware to be delivered in Q3 of this year — just a few months away — and anticipate that the system will enter production in January 2023. At that time, Derecho will begin work on the projects to which hundreds of millions of CPU core hours and hundreds of thousands of GPU core hours have already been allocated: projects from in-house and university-affiliated teams spanning climate science, oceanography, paleoclimate research and more. Meanwhile, “Gust,” the cutely named test cluster for Derecho revealed last fall, is more or less on schedule; Kelly said it was expected this month.