Cray Wins NNSA-Livermore ‘El Capitan’ Exascale Contract

By Tiffany Trader

August 13, 2019

Cray has won the bid to build the first exascale supercomputer for the National Nuclear Security Administration (NNSA) and Lawrence Livermore National Laboratory (LLNL). The contract, valued at $600 million, calls for Cray to deliver El Capitan to Livermore in late 2022 with full production targeted for late 2023, enabling NNSA to continue to perform essential functions for the United States’ Nuclear Stockpile Stewardship Program.

With a peak speed of more than 1.5 exaflops, El Capitan will be based on Cray’s Shasta architecture and provide advanced capabilities for modeling, simulation and artificial intelligence (AI). As for its geological namesake, the famous “El Cap” in Yosemite National Park: laid end to end, Livermore’s El Capitan compute blades would scale the granite monolith’s peak more than three times over, said Cray.

The contract marks the second award to come out of the CORAL-2 program, the joint DOE-NNSA effort to procure up to three exascale supercomputers with a potential budget of $1.8 billion. [Update 08/13: the DOE’s Office of Science confirmed there will only be two CORAL-2 awards, noting “ANL is focused on delivering the Department’s first exascale system in 2021.”]

El Capitan’s Shasta system will comprise “fat” CPU-GPU nodes, utilizing Cray’s Slingshot interconnect and a future generation of ClusterStor storage. Notably, the chip and accelerator suppliers have not been announced.

“The El Capitan system procurement was written in such a way that the Livermore team, in working with us at Cray, can make a late-binding decision on the node architecture choice; there are lots of different options, and that part of the market is changing very rapidly between the different CPUs and GPUs that are available,” said Cray CEO Pete Ungaro at a media briefing held yesterday to announce the signing of the contract. To be clear, it’s not that Cray and Livermore aren’t disclosing the chip suppliers; those decisions haven’t been made yet.

“The Shasta hardware and software architecture can accommodate a wide variety of processors and accelerators. So we’re able to spend time with Livermore really working closely together to finalize the decision on which of these components will be used at the node level,” Ungaro added.

The Cray CEO emphasized that the goal was to maximize the value of the machine per dollar for U.S. taxpayers, a sentiment echoed by LLNL Director Bill Goldstein, who confirmed there were a number of competitors for the procurement. “It was tremendously competitive,” he said. “In the end, we found Cray to be the best suited for the types of problems that we have to solve, and provide the best value for the American taxpayers. Basically, it was a bang for the buck kind of evaluation.”

El Capitan will serve the critical mission needs of NNSA’s Tri-Laboratory community: Lawrence Livermore National Laboratory, Los Alamos National Laboratory and Sandia National Laboratories. Depending on the application, El Capitan is expected to run national nuclear security calculations at more than 50 times the speed of LLNL’s Sequoia system and roughly 10 times faster on average than LLNL’s Sierra system, currently the world’s number two ranked supercomputer at 95 Linpack petaflops (125 petaflops peak).
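
For a rough sense of how those multiples relate to the raw hardware numbers, here is a back-of-envelope sketch in Python using only the figures quoted in this article; actual application speedups will depend on the final (still undecided) node architecture:

```python
# Back-of-envelope ratios from the figures quoted in this article.
el_capitan_peak_pf = 1500  # >1.5 exaflops peak, in petaflops
sierra_peak_pf = 125       # Sierra's peak
sierra_linpack_pf = 95     # Sierra's Linpack (Rmax) result

# Peak-to-peak, El Capitan is a 12x step over Sierra, so the quoted
# ~10x average application speedup is close to the paper-flops ratio.
print(f"Peak ratio vs. Sierra: {el_capitan_peak_pf / sierra_peak_pf:.0f}x")

# Sierra's Linpack efficiency, for reference (~76% of peak).
print(f"Sierra Rmax/Rpeak: {sierra_linpack_pf / sierra_peak_pf:.0%}")
```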

The forthcoming system is anticipated to be at least four times more energy efficient than Sierra. The RFP set a maximum power of 40 megawatts, with 30 megawatts as the preferred target. “It will depend on the final node configuration, and design of the system, but we expect it to require about 30 megawatts of power,” said Goldstein.
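
Those power and efficiency figures pin down an implied flops-per-watt target; a quick sketch, dividing peak flops by expected draw (both numbers from this article):

```python
# Implied peak energy efficiency from the numbers quoted above.
peak_flops = 1.5e18  # >1.5 exaflops peak
power_watts = 30e6   # ~30 MW expected draw (RFP cap: 40 MW)

gflops_per_watt = peak_flops / 1e9 / power_watts
print(f"El Capitan: ~{gflops_per_watt:.0f} gigaflops/watt at peak")  # ~50

# "At least four times more energy efficient than Sierra" then implies
# Sierra delivers roughly a quarter of that by the same peak-based metric.
print(f"Implied Sierra figure: ~{gflops_per_watt / 4:.1f} gigaflops/watt")
```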

Facility upgrades will be undertaken at Livermore to supply 85 megawatts to the machine room floor, enough to support a number of systems, including Sierra, and provide the headroom necessary to bring in El Capitan.

CORAL-1 and CORAL-2 machines

NNSA and Livermore require the advanced scale and capabilities of El Capitan to perform the 3D simulations that are becoming essential to meet the demands of the NNSA Life Extension Programs (LEPs) and address nuclear weapon aging issues.

Lisa E. Gordon-Hagerty, Department of Energy undersecretary for nuclear security and NNSA administrator, said, “El Capitan will further enable researchers from NNSA’s two nuclear weapons design laboratories — Lawrence Livermore and Los Alamos — and our premier engineering laboratory, Sandia National Laboratories, to run 3D simulations at resolutions that are difficult, time-consuming or even impossible using today’s state-of-the-art supercomputers.”

“The capability represented by El Capitan is essential to national security,” Goldstein added. “Ever since we ceased nuclear testing in 1992, it was clear that the nation would require massive increases in computing power in order to meet the challenge of ensuring the safety, security and reliability of the nuclear stockpile. Through NNSA’s stockpile stewardship program, these advances have been realized, with computer speeds increasing a millionfold to date. Now, we face challenges as our systems age to the point that virtually every component of both warheads and delivery systems must be redesigned and remanufactured to maintain the same deterrent capabilities that we had in 1992. This will put incredible stress on our computational resources, and El Capitan is designed to address that problem.”

Further underscoring the importance of fielding this exascale machine is Rob Neely, weapons simulation and computing program coordinator for computing and programming environments at Livermore, interviewed by HPCwire for this article: “The problems that we’re being asked to address [for the nuclear stockpile mission] year over year become increasingly difficult to answer with high confidence without increased compute power. We need more predictive codes that are running higher-fidelity models. We need to be running full 3D simulations, not 2D approximations, regularly; not as heroic or day-long calculations, but where we can turn dozens or hundreds of these around in a work stream. And the life extension programs, …aimed at extending the lifetime of certain stockpile elements, they’re asking very hard questions about things that we can’t always answer experimentally or that are prohibitively expensive to answer experimentally; simulation is increasingly going to bear on addressing those issues. So you take all that together, combine it with our underlying science mission of better understanding material science and additive manufacturing techniques and a number of other things — and our current systems remain completely swamped and overutilized for the mission. I would say El Capitan can’t arrive quickly enough.”

DOE HPC Facilities Systems. Aurora, Frontier and El Capitan are all Cray Shasta systems (with Intel the prime contractor for Aurora). The next big announcement will be Crossroads. Of the grayed-out machines, Titan has been decommissioned, and Mira and Sequoia will be decommissioned soon.

The DOE states that El Capitan will be its third exascale-class supercomputer, following Argonne National Laboratory’s Aurora and Oak Ridge National Laboratory’s Frontier. It’s not clear at this point whether Aurora, which relies on a future Intel Xeon CPU and the in-development Intel Xe GPU, will reach an exaflops on Linpack, long regarded as the minimum standard for these 1,000x performance thresholds. Recall that Aurora is technically part of the CORAL-1 RFP, a joint DOE/NNSA pre-exascale procurement. When Knights Hill (intended as a follow-on to the Knights Landing Xeon Phi) was canceled, Aurora was retooled with a target of at least a peak exaflop/s. At the announcement ceremony in March, after the rewritten contract was formalized, Cray and Argonne cited a target of a sustained exaflop/s; more recently, slides have been spotted showing a projected peak speed of 1.3 exaflops.

Quite notably, all three extreme-scale supercomputers the United States is fielding in the 2021-2023 timeframe (Aurora, Frontier and El Capitan) will employ Cray’s Shasta architecture, its Slingshot interconnect and a new software platform.

Speaking in an interview with HPCwire, Cray CTO Steve Scott set some expectations for Shasta’s software system, noting the cloud-like capabilities added since the XC series, “which had a very scalable but very monolithic and not very flexible software stack.”

There has since “been a ground-up restructuring of the software stack to be much more cloud-like, to support these converged workflows and be able to dynamically instantiate lots of different software environments on the system,” he said.

“The entire management system is basically a big Kubernetes cluster, so you can easily have fault tolerance and fail-over for different services. And the analytics stack can then run microservices in a very cloud-like way. So the APIs have become much more open and documented to allow people to swap in different software components.”
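
Cray has not published the details of that stack, but the pattern Scott describes — management services as replicated Kubernetes workloads that fail over automatically — is standard Kubernetes practice. Here is a minimal sketch using the stock Kubernetes Python client; the service name and container image are hypothetical, and this is generic Kubernetes usage, not Cray’s actual API:

```python
# Sketch: running a (hypothetical) analytics microservice as a Kubernetes
# Deployment, illustrating the pattern Scott describes. Generic Kubernetes
# usage via the official Python client; not Cray's actual software stack.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when run in-cluster

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="analytics-svc"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # >1 replica gives fault tolerance / fail-over for free
        selector=client.V1LabelSelector(match_labels={"app": "analytics-svc"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "analytics-svc"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="analytics",
                                   image="registry.example/analytics:latest"),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```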

Like the other two announced Cray CORAL systems, El Capitan will enable the converged use of modeling, simulation and AI.

“The heterogeneous architecture that underlies El Capitan is actually uniquely able to host artificial intelligence and machine learning applications at the same time it does modeling and simulation,” said Goldstein. “And we are already starting to think about, and actually implement, ways in which to combine machine learning with modeling and simulation to accelerate our ability to simulate beyond the factor of ten that the hardware alone is going to give us with El Capitan.”

Specifically, Goldstein said machine learning is ideally suited to optimally sampling the multi-dimensional space of possible uncertainties in all of the models that go into the simulations. “It’s a problem that goes under the rubric of uncertainty quantification,” he said, “and it’s one that is crucial for [Livermore and NNSA] in being able to make further progress in life-extending our systems.”
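
Goldstein didn’t detail Livermore’s methods, but a common textbook realization of this idea is active learning with a surrogate model: fit a cheap statistical stand-in to a handful of expensive simulation runs, then place each new run where the surrogate is least certain. A toy sketch follows; the one-parameter “simulation” is a stand-in function, not an LLNL code:

```python
# Toy sketch of surrogate-based sampling for uncertainty quantification:
# run a few expensive "simulations", fit a Gaussian-process surrogate,
# then place the next run where predictive uncertainty is highest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def simulation(x):
    """Stand-in for an expensive physics code (one uncertain parameter)."""
    return np.sin(3 * x) + 0.5 * x

# Initial design: a handful of runs across the parameter range.
X = rng.uniform(0, 3, size=(5, 1))
y = simulation(X).ravel()

candidates = np.linspace(0, 3, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]  # most uncertain candidate point
    X = np.vstack([X, [x_next]])
    y = np.append(y, simulation(x_next))

print(f"Placed {len(X)} runs; max surrogate std at last fit: {std.max():.3f}")
```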

While El Capitan is being fielded foremost to serve the U.S. classified nuclear stockpile mission, the partners note that the machine’s advanced capabilities “will [also] benefit areas of basic science beyond nuclear security, requiring high-resolution multi-physics simulations, such as cancer research, optimizing design for additive manufacturing, climate, seismology and astrophysics.”

Further thoughts…

The lack of CPU and accelerator disclosure does lead to some interesting speculation as to what the nodes could potentially be. With Arm+GPU in the running soon and, who knows, maybe one day AMD+Intel or Intel+AMD, we came up with nine CPU+GPU permutations in the realm of possibility, some definitely more likely than others. (Neely agreed that not all were under “deep consideration.”) The short sketch below enumerates them.
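
Assuming the count is simply the cross product of the three plausible CPU camps with the three plausible GPU camps (our reading, not an official list), a few lines of Python reproduce the nine pairings:

```python
# The nine speculative CPU+GPU pairings: every combination of the three
# plausible CPU camps with the three plausible GPU camps.
from itertools import product

cpus = ["Intel", "AMD", "Arm"]
gpus = ["Nvidia", "AMD", "Intel"]

combos = list(product(cpus, gpus))
print(f"{len(combos)} permutations")  # 9
for cpu, gpu in combos:
    print(f"{cpu} CPU + {gpu} GPU")
```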

For its part, Cray said it was up to the challenge of supporting all the combinations.

“There are lots of interesting technologies these days,” Scott said. “It’s all part of sort of the blossoming of processors as we look towards more and more architectural specialization. That’s the reason that Shasta was designed the way it was. In previous designs we had very little flexibility. Shasta has a lot more flexibility in terms of the size of nodes and what type of technologies go in.”

As a final note, we would be remiss in not mentioning the efforts of the Exascale Computing Project, which is in charge of assuring there’s an exascale-ready software ecosystem to get the most from exascale hardware when it arrives. Read “Doug Kothe Delivers Whirlwind ECP Update in 70 (or so) Slides” for a fast-paced dive into this comprehensive effort; a recent video interview with the ECP Director offers interesting commentary on the rise of heterogeneous accelerated-node architectures.
