LLNL Highlights Magma’s Role in NNSA’s Computing Arsenal

By Oliver Peckham

March 11, 2020

Lawrence Livermore National Laboratory (LLNL) is one of several national labs working with the National Nuclear Security Administration (NNSA), which manages the military applications of nuclear science – that is, the United States’ nuclear weapons stockpile. The NNSA doesn’t actually conduct weapons tests, though: it simulates them. To do this, the NNSA – and its partner labs – use in-house HPC systems. This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma.

The NNSA’s core mission might sound straightforward, but simulating nuclear weapons is a deeply multidimensional task. “The high-performance computing aspects of that mission involve the development of predictive physics-based models,” explained Matt Leininger, deputy for advanced technology projects at LLNL, in a webinar. “Those […] are models for such areas as materials science, molecular dynamics, particle transport, hydrodynamics, mathematical solvers and other areas.” 

LLNL researchers run multi-physics applications – applications incorporating, for instance, hydrodynamics, particle transport and complex geometries – at first, then run individual science-based applications to drill down into the uncertainties produced by the multi-physics models. Using those results, researchers then revisit the multi-physics applications, iterating the process until they achieve what Leininger calls “predictive science capability.” Many of those models run along various spectra of resolution, dimensionality, timescales and more, adding up to produce an enormous appetite for computing capacity.

To sate this appetite, LLNL draws on the first Commodity Technology Systems contract (CTS-1), an NNSA procurement awarded in 2016 to LLNL and its two sister laboratories, Sandia National Laboratories and Los Alamos National Laboratory. Magma, which shipped in November 2019, is the latest system procured under the CTS-1 umbrella.

The specs

Magma. Image courtesy of LLNL.

Magma is a Penguin Computing “Relion” system comprising 752 nodes with Intel Xeon Platinum 9242 (Cascade Lake-AP) processors. The cluster has 293 terabytes of memory, liquid cooling provided by CoolIT Systems and an Intel Omni-Path interconnect. It delivers 3.24 Linpack petaflops out of a theoretical peak of 5.31 petaflops, placing it 69th on the latest Top500 list of the world’s most powerful supercomputers. On a per-node basis, Leininger told HPCwire, the Cascade Lake processors delivered “about three to three and a half” times the performance of the Broadwell processors deployed earlier in the CTS program.
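For readers who like to sanity-check the published figures, the arithmetic behind those numbers is straightforward. The sketch below (illustrative only, using the node count and petaflops figures reported in this article) computes Magma’s Linpack efficiency and its theoretical peak per node:

```python
# Back-of-the-envelope check on Magma's published numbers.
# Figures are taken from the article; this is an illustrative
# sketch, not an official LLNL or Top500 calculation.

NODES = 752
LINPACK_PFLOPS = 3.24   # measured HPL result (Rmax)
PEAK_PFLOPS = 5.31      # theoretical peak (Rpeak)

# Linpack efficiency: fraction of theoretical peak achieved on HPL
efficiency = LINPACK_PFLOPS / PEAK_PFLOPS

# Theoretical peak per node, converted from petaflops to teraflops
peak_per_node_tflops = PEAK_PFLOPS * 1000 / NODES

print(f"Linpack efficiency: {efficiency:.1%}")
print(f"Peak per node: {peak_per_node_tflops:.1f} TF")
```

Run as written, this works out to roughly 61 percent Linpack efficiency and about 7 teraflops of peak per node, which is consistent with a dual-socket Cascade Lake-AP configuration.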

Magma has no distinct storage capacity, Leininger said, as it is connected into several different Lustre file systems, but he said it has access to “many, many petabytes” of storage. In terms of its footprint, Leininger explained that LLNL clusters are designed in “scalable units” that act like LEGO bricks, allowing researchers to scale a system from as few as 20 nodes to several thousand nodes. Magma is about four scalable units, making it physically around the size of “half a tennis court.”

What Magma brings to the table

Leininger was especially excited about a few new elements of Magma. The interconnect, he said, was “particularly critical.” “You can’t just solve [the models] on a single server,” he explained. “You really have to break up the problem and distribute it across thousands of servers and then use that high performance interconnect to tie the pieces back together again.” Thanks to that high performance interconnect, he said, tasks that used to be impossible on a single server now take a couple of days. Leininger also emphasized the memory bandwidth per node (which he called “tremendous”), noting that typical workloads were even more intensive on memory bandwidth than on the network.

Crucially, and unlike most of LLNL’s Broadwell-based systems, Magma uses liquid cooling – specifically, direct liquid cooling of the CPU and memory modules – to which Leininger credits much of Magma’s high density. “When you have a gigantic machine like Sierra that’s liquid-cooled, and then you put a big cluster in the corner that’s air-cooled, it’s challenging facilities-wise to make sure all that cold air is going in that right spot,” Leininger said in an earlier interview with HPCwire. “And it’s often a very human-intensive thing to optimize for all that, and it ends up just being easier and much more cost-effective to just move to liquid cooling on these solutions. So we knew we wanted to do that as well.”

Leininger also stressed that memory errors are a large portion of overall computing errors at LLNL and suggested that the direct liquid cooling may help. “We’re looking forward to reducing the operating temperature of the DIMMs and hopefully therefore reducing the overall number of memory errors we see over the system lifetime,” Leininger said, adding that the cooling system was designed for easy serviceability.

How Magma fits into the NNSA computing landscape

Magma is currently in the final stages of installation at LLNL, after which it will undergo testing and enter full production within the next month. Magma exists alongside several CTS-1 comrades (also supplied by Penguin Computing), including Corona (another LLNL system) and Attaway, which is housed at Sandia. Unlike Magma, Corona is the first of the “A+A” systems: AMD CPUs and AMD GPUs (specifically, AMD Naples CPUs and a 50-50 mix of MI25 and MI60 GPUs). This A+A structure makes Corona an early precursor to the forthcoming exascale Frontier system at Oak Ridge National Laboratory and the forthcoming El Capitan system at LLNL itself. While Magma serves problems more related to materials science, Corona’s GPUs make it more suitable for tasks such as machine learning and AI applications, Leininger explained to HPCwire. Attaway, meanwhile, uses Intel Skylake processors and placed 94th in the most recent Top500.

Leininger said that LLNL has no plans to sunset any of its other systems once Magma reaches full production, noting that “all the CTS-1 systems we’ve procured over the last four years now, including Magma, will continue to deliver HPC cycles to our users over the next several years.” In fact, he explained, those systems remain in “very heavy use” and LLNL is facing demand beyond even its new capabilities.

To that end, LLNL is ready to move beyond CTS-1. “We are preparing for our next round of CTS procurements that’ll occur starting in late 2021,” Leininger said, “and that’ll be under the second round of the CTS procurements, called CTS-2.” Leininger said an RFP would be issued this summer and a contract would be awarded late in the calendar year as part of a push to deliver systems to NNSA labs from the second half of 2021 through 2024. Of course, he emphasized, there are still a few more systems to deliver before that point.

In general, Leininger said, the CTS-1 systems are “everyday workhorses,” intended to take the load off of the Advanced Technology System (ATS) supercomputers. “Commodity-based systems take on the bulk of day-to-day computing, leaving the larger advanced technology capability systems available for only the most demanding problems across the Tri-Lab community,” said Mark Anderson, director of the NNSA’s Advanced Simulation and Computing Program. The current ATS flagship is the Trinity supercomputer at Los Alamos, which is scheduled to reach end-of-life in 2021. At that point, Trinity will be replaced by a new ATS system, called Crossroads. 

Tiffany Trader contributed to this report.
