In support of the Department of Energy’s National Nuclear Security Administration (NNSA), the Tri-Lab CTS-2 system contract award was announced last week. The NNSA Tri-Lab partnership – comprising Livermore, Los Alamos and Sandia national labs – awarded Dell Technologies a $40-million-plus contract to supply commodity HPC systems totaling upwards of 40 peak petaflops.
The plan, it seems, is to turn the crank on Moore’s law yet another time. The labs are opting primarily for straight x86 gear, powered by the forthcoming ‘Intel 7’ Sapphire Rapids CPU and – when it becomes available – the high-bandwidth memory (HBM) version of that CPU. The Dell PowerEdge systems will be installed beginning in mid-2022, with deliveries continuing through 2025.
CTS – short for Commodity Technology Systems – is one of three NNSA procurement tracks; the others are Advanced Technology Systems (ATS) and Advanced Architecture Systems (AAS). ATS systems include Trinity, Sierra, the forthcoming Crossroads supercomputer, and the future exascale system, El Capitan.
The purpose of the CTS procurement model is to reduce costs by providing a common hardware platform across the three labs to serve the capacity computing needs of the NNSA’s Advanced Simulation and Computing (ASC) program, as well as other defense programs. With the LLNL-led Tri-Laboratory Operating System Software, aka TOSS, the combined Tri-Lab user community also benefits from a common software environment.
CTS-2 hardware is deployed as “scalable units” (SUs), providing the Tri-Lab complex with a modular building block for constructing larger systems. Each CTS-2 SU will comprise ~200 nodes and supply about 1.5 petaflops of computing power. Compute nodes are based on (future) Dell PowerEdge C6620 servers, while the other nodes – management, login and gateway – will be (future) Dell PowerEdge R760 systems.
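For a rough sense of the building-block arithmetic, here is a minimal back-of-the-envelope sketch in Python using the approximate per-SU figures above; the constants and helper function are illustrative assumptions, not actual system specifications:

```python
# Back-of-the-envelope sketch of the CTS-2 scalable-unit (SU) model,
# using the approximate per-SU figures quoted in the article.
NODES_PER_SU = 200          # ~200 compute nodes per SU
PEAK_PFLOPS_PER_SU = 1.5    # ~1.5 peak petaflops per SU

def system_from_sus(num_sus: int) -> tuple[int, float]:
    """Total nodes and peak petaflops for a cluster built from num_sus SUs."""
    return num_sus * NODES_PER_SU, num_sus * PEAK_PFLOPS_PER_SU

# A lab combining, say, four SUs into one cluster:
nodes, pflops = system_from_sus(4)
print(f"4 SUs -> ~{nodes} nodes, ~{pflops:.1f} peak petaflops")

# The contract's 40+ peak petaflops works out to roughly 40 / 1.5 ≈ 27 SUs
# spread across the three labs over the life of the procurement.
```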
With the first installations planned for mid-2022, CTS-2 procurement lead Matt Leininger told HPCwire that the initial systems will leverage Intel Sapphire Rapids CPUs with DDR5. He expects systems with the HBM Sapphire Rapids parts to come after that, subject to availability and the timing needs of the three labs.
“We expect that there’ll be several orders that will be CPU plus HBM systems (no DDR5),” said Leininger, deputy for advanced technology projects at Lawrence Livermore National Laboratory. “There’s a lot of interest [in those HBM CPUs], and those decisions are being made now.”
As the CTS-2 systems come online, they will replace the aging CTS-1 systems, which will be phased out and retired. The CTS-1 contract was awarded to Penguin Computing in 2015 with an initial value of $39 million and a total aggregate system capacity that reached more than 20 peak petaflops over the life of the procurement vehicle.
The size of the Tri-Lab commodity SUs has been growing over the years. In 2011, the second Tri-Lab Linux Capacity Cluster (TLCC-2) contract – progenitor to the CTS program – specified ~100 node SU blocks of 50 teraflops each. In 2015, under CTS-1, SUs expanded to ~200 nodes, delivering roughly 200 teraflops. Now, with CTS-2, there’s a 7.5x jump in computing power, with each SU expected to deliver around 1.5 peak petaflops. “Partly this is because the radix of the high performance switch has increased, but also the nodes are getting a lot more cores, and a lot more vector floating point operations in them as well,” said Leininger.
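Plugging those reported figures (all approximate) into a quick sketch shows both where the 7.5x comes from and how per-node performance has grown across the procurement generations:

```python
# Back-of-the-envelope check of the SU figures quoted above (all approximate).
generations = {
    # name: (year, nodes per SU, peak teraflops per SU)
    "TLCC-2": (2011, 100, 50),
    "CTS-1":  (2015, 200, 200),
    "CTS-2":  (2022, 200, 1500),  # 1.5 petaflops = 1500 teraflops
}

for name, (year, nodes, tflops) in generations.items():
    print(f"{name} ({year}): {tflops / nodes:.1f} TF per node")
# TLCC-2 (2011): 0.5 TF per node
# CTS-1 (2015): 1.0 TF per node
# CTS-2 (2022): 7.5 TF per node

# CTS-1 -> CTS-2 per-SU jump: 1500 / 200 = 7.5x, with the node count
# unchanged, so the gain comes entirely from faster nodes (more cores
# and wider vector units, per Leininger).
print(f"CTS-1 -> CTS-2: {1500 / 200:.1f}x per SU")
```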
All the CTS-2 systems will use direct-to-chip liquid cooling, based on products from CoolIT. The transition to liquid cooling picked up pace over the course of CTS-1, with a growing share of systems employing it as the contract progressed. In total, CTS-1 ended up with about half of the installations air cooled and half liquid cooled, Leininger reported.
“It used to be liquid cooling was kind of the novel thing that you’d only do for the biggest systems in the world, but now many of the GPUs and CPU SKUs – not just the highest end, but even your mid/upper tier of SKUs – require liquid cooling because the TDP is going up so much per socket these days,” said Leininger. “With CTS-2, we expect that basically everything we purchase, or 99 percent, is going to be liquid cooled.”
CTS-2’s networking (interconnect) solution has not yet been decided. Leininger said the labs are looking at several options, presumably including Nvidia-Mellanox HDR and NDR, and Omni-Path (the 100 series and future 400 series products) from Cornelis Networks. One of the CTS-1 systems installed at Livermore, Ruby, was implemented with Cornelis’ Omni-Path 100 interconnect; it is currently the only system on the Top500 list with Cornelis-branded networking, although there are 42 OPA-connected machines on the list.
Tri-Lab storage systems are acquired via a separate procurement process. CTS-2 system gateway nodes will connect to the larger cluster network and into the various storage systems at Sandia, Livermore and Los Alamos. Overall, the storage is dominated by Lustre, but includes a lot of NFS, with pockets of other types of storage, said Leininger.
Although the CTS procurement program is primarily focused on all-CPU “workhorse” systems, Leininger said the contract also allows the labs to acquire more diverse technologies, including CPUs with HBM and GPUs. The CTS-1 procurement, while primarily Intel x86 based, included a notable GPU system, Corona, a Penguin machine with AMD Naples CPUs and a 50-50 mix of AMD MI25 and MI60 GPUs.
“We do expect CTS to be kind of more conservative than some of the bigger ATS systems, because we have to support everybody on day one, and not require them to do a bunch of porting so they can get up and running. But we do see over time that those technologies are making their way in. It’s somewhat of an obvious statement, but adding HBM to CPUs is probably a pretty easy step to make, right? You’re not having to change codes; it’s more if they’re just memory bandwidth bound, then they could just run their code and see a good performance improvement. Going to the CPU-GPU (hybrid node), that’s a bigger step for some codes. But if we do need to put a GPU system down on the floor for our physics applications, or for our machine learning AI work, we can certainly do that off this contract. It is very flexible.”
“The CTS-2 systems will serve the NNSA stockpile management and production modernization programs, along with other mission-critical efforts underpinning our stockpile stewardship program,” said Thuc Hoang, director of the NNSA ASC program, in a statement. “We look forward to working closely with Dell Technologies as our new vendor partner in delivering more powerful and energy-efficient computing cycles to our customers.”