Cray Wins NNSA-Livermore ‘El Capitan’ Exascale Contract

By Tiffany Trader

August 13, 2019

Cray has won the bid to build the first exascale supercomputer for the National Nuclear Security Administration (NNSA) and Lawrence Livermore National Laboratory (LLNL). The contract, valued at $600 million, calls for Cray to deliver El Capitan to Livermore in late 2022 with full production targeted for late 2023, enabling NNSA to continue to perform essential functions for the United States’ Nuclear Stockpile Stewardship Program.

With a peak speed of more than 1.5 exaflops, El Capitan will be based on Cray’s Shasta architecture and provide advanced capabilities for modeling, simulation and artificial intelligence (AI). Cray offered a comparison with the machine’s geological namesake, the famous “El Cap” in Yosemite National Park: laid end to end, El Capitan’s compute blades would scale the peak of El Capitan more than three times.

The announcement marks the second award announcement to come out of the CORAL-2 program, the joint DOE-NNSA effort to procure up to three exascale supercomputers with a potential budget of $1.8 billion. [Update 08/13: the DOE’s Office of Science confirmed there will only be two CORAL-2 awards made, noting “ANL is focused on delivering the Department’s first exascale system in 2021.”]

El Capitan’s Shasta system will comprise “fat” CPU-GPU nodes, utilizing Cray’s Slingshot interconnect and a future generation of ClusterStor storage. Notably, the chip and accelerator suppliers have not been announced.

“The El Capitan system procurement was written in such a way that the Livermore team in working with us at Cray can make a late-binding decision on the node architecture choice; there’s lots of different options and that part of the market is changing very rapidly between the different CPUs and GPUs that are available,” said Cray CEO Pete Ungaro at a media briefing held yesterday, announcing the signing of the contract. To be clear, it’s not that Cray and Livermore aren’t disclosing the chip suppliers; these decisions haven’t been made yet.

“The Shasta hardware and software architecture can accommodate a wide variety of processors and accelerators. So we’re able to spend time with Livermore really working closely together to finalize the decision on which of these components will be used at the node level,” Ungaro added.

The Cray CEO emphasized the goal was to maximize the value of the machine for the dollar and for the U.S. taxpayers, a sentiment echoed by LLNL Director Bill Goldstein, who confirmed there were a number of competitors for the procurement. “It was tremendously competitive,” he said. “In the end, we found Cray to be the best suited for the types of problems that we have to solve, and provide the best value for the American taxpayers. Basically, it was a bang for the buck kind of evaluation.”

El Capitan will serve the critical mission needs of NNSA’s Tri-Laboratory community: Lawrence Livermore National Laboratory, Los Alamos National Laboratory and Sandia National Laboratories. Depending on the application, El Capitan is expected to run national nuclear security calculations at more than 50 times the speed of LLNL’s Sequoia system and roughly 10 times faster on average than LLNL’s Sierra system, currently the world’s number two ranked supercomputer at 95 Linpack petaflops (125 petaflops peak).

The forthcoming system is anticipated to be at least four times more energy efficient than Sierra. The RFP set a maximum power of 40 megawatts, with 30 megawatts as the preferred target. “It will depend on the final node configuration, and design of the system, but we expect it to require about 30 megawatts of power,” said Goldstein.
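Those figures hang together on the back of an envelope. The short Python sketch below (ours, not the lab’s) relates the quoted peak speed, Goldstein’s power estimate and the efficiency claim:

```python
# Back-of-the-envelope check on the El Capitan figures quoted above.
peak_exaflops = 1.5     # El Capitan peak speed, exaflops
power_mw = 30.0         # Goldstein's power estimate, megawatts
sierra_peak_pf = 125.0  # Sierra peak speed, petaflops

# Peak energy efficiency: 1.5e18 flop/s over 3.0e7 W is 50 gigaflops/watt.
el_cap_gf_per_w = peak_exaflops * 1e9 / (power_mw * 1e6)
print(f"El Capitan: {el_cap_gf_per_w:.0f} GF/W at peak")

# "At least four times more energy efficient than Sierra" implies Sierra
# sits at or below ~12.5 GF/W by the same peak-flops-per-watt measure.
print(f"Implied Sierra ceiling: {el_cap_gf_per_w / 4:.1f} GF/W")

# Peak-to-peak ratio vs. Sierra: 1,500 PF / 125 PF = 12x, consistent with
# the "roughly 10 times faster on average" application-level estimate above.
print(f"Peak ratio vs. Sierra: {peak_exaflops * 1000 / sierra_peak_pf:.0f}x")
```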

Facility upgrades will be undertaken at Livermore to support 85 megawatts to the machine room floor, enough to power a number of systems, including Sierra, and to provide the headroom necessary to bring in El Capitan.

CORAL-1 and CORAL-2 machines

NNSA and Livermore require the advanced scale and capabilities of El Capitan to perform the 3D simulations that are becoming essential to meet the demands of the NNSA Life Extension Programs (LEPs) and address nuclear weapon aging issues.

Lisa E. Gordon-Hagerty, Department of Energy undersecretary for Nuclear Security and NNSA administrator, said, “El Capitan will further enable researchers from NNSA’s two nuclear weapons design laboratories — Lawrence Livermore and Los Alamos — and our premier engineering laboratory, Sandia National Laboratories, to run 3D simulations at resolutions that are difficult, time consuming or even impossible using today’s state-of-the-art supercomputers.”

“The capability represented by El Capitan is essential to national security,” Goldstein added. “Ever since we ceased nuclear testing in 1992, it was clear that the nation would require massive increases in computing power in order to meet the challenge of ensuring the safety, security and reliability of the nuclear stockpile. Through NNSA’s stockpile stewardship program, these advances have been realized, with computer speeds increasing one million fold to date. Now, we face threat challenges as our systems age to the point that virtually every component of both warheads and delivery systems must be redesigned and re-manufactured to maintain the same deterrent capabilities that we had in 1992. This will put incredible stress on our computational resources, and El Capitan is designed to address that problem.”

Rob Neely, weapons simulation and computing program coordinator for computing and programming environments at Livermore, further underscored the importance of fielding this exascale machine in an interview with HPCwire: “The problems that we’re being asked to address [for the nuclear stockpile mission] year over year become increasingly difficult to answer with high confidence without increased compute power. We need more predictive codes that are running higher-fidelity models. We need to be running full 3D simulations, not 2D approximations, regularly, not as heroic or day-long calculations, but where we can turn dozens or hundreds of these around in a work stream. And the life extension programs, …aimed at extending the lifetime of certain stockpile elements, they’re asking very hard questions about things that we can’t always answer experimentally or that are prohibitively expensive to answer experimentally; simulation is increasingly going to bear on addressing those issues. So you take all that together, combine that with our underlying science mission of better understanding material science and additive manufacturing techniques and a number of other things — and our current systems remain completely swamped and overutilized for the mission. I would say, El Capitan can’t arrive quickly enough.”

DOE HPC Facilities Systems. Aurora, Frontier and El Capitan are all Cray Shasta systems (with Intel the prime contractor for Aurora). The next big announcement will be Crossroads. Of the grayed-out machines, Titan has been decommissioned, and Mira and Sequoia will be decommissioned soon.

The DOE states that El Capitan will be its third exascale-class supercomputer, following Argonne National Laboratory’s Aurora and Oak Ridge National Laboratory’s Frontier system. It’s not clear at this point whether Aurora, relying on a future Intel Xeon CPU and the in-development Intel Xe GPU, will reach exaflops performance on Linpack, long regarded as the minimum standard for these 1,000x performance thresholds. Recall that Aurora is technically part of the CORAL-1 RFP, a joint DOE/NNSA pre-exascale procurement project. When Knights Hill (intended as a follow-on to the Knights Landing Xeon Phi) was canceled, Aurora was retooled with a target of at least one peak exaflop/s. At the announcement ceremony in March, after the rewritten contract was formalized, Cray and Argonne cited a target of a sustained exaflop/s, and slides have recently been spotted showing a projected peak speed of 1.3 exaflops.

Quite notably, Aurora, Frontier and El Capitan, the three extreme-scale supercomputers the United States is fielding in the 2021-2023 timeframe, will all employ the Cray Shasta architecture, its Slingshot interconnect and a new software platform.

Speaking in an interview with HPCwire, Cray CTO Steve Scott set some expectations for Shasta’s software system, noting the cloud-like capabilities that have been added since the XC series, which Scott described as having “a very scalable but very monolithic and not very flexible software stack.”

There has since “been a ground-up restructuring of the software stack to be much more cloud-like, to support these converged workflows and be able to dynamically instantiate lots of different software environments on the system,” he said.

“The entire management system is basically a big Kubernetes cluster, so you can easily have fault tolerance and fail-over for different services. And the analytics stack then can run microservices, in a very cloud-like way. So the APIs have become much more open and documented to allow people to swap in different software components.”
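Scott was describing an architecture, not a published API, but the cloud-like flavor is easy to sketch. A minimal illustration using the standard Kubernetes Python client follows; the namespace name is hypothetical and nothing below is Shasta-specific:

```python
# Minimal sketch of querying a Kubernetes-managed cluster, in the spirit of
# the management system Scott describes. Requires the `kubernetes` package
# and a reachable cluster; the namespace name here is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
v1 = client.CoreV1Api()

# List the management services running as pods. Kubernetes restarts any pod
# that fails, which is the fault-tolerance/fail-over behavior Scott cites.
for pod in v1.list_namespaced_pod(namespace="system-management").items:
    print(pod.metadata.name, pod.status.phase)
```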

Like the other two announced Cray CORAL systems, El Capitan will enable the converged use of modeling, simulation and AI.

“The heterogeneous architecture that underlies El Capitan is actually uniquely able to host both artificial intelligence and machine learning applications at the same time it does modeling and simulation,” said Goldstein. “And we are already starting to think about, and actually implement, ways in which to combine machine learning with modeling and simulation to accelerate our ability to simulate beyond the factor of ten that the hardware alone is going to give us with El Capitan.”

Specifically, Goldstein said machine learning is ideally suited to optimally sampling the multi-dimensional space of possible uncertainties in all of the models that go into the simulations. “It’s a problem that goes under the rubric uncertainty quantification,” he said, “and it’s one that is crucial for [Livermore and NNSA] in being able to make further progress in life-extending our systems.”
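To make the idea concrete, here is a toy sketch of the surrogate pattern Goldstein is gesturing at (our illustration, emphatically not LLNL’s workflow): spend scarce full-fidelity runs training a cheap stand-in model, then let the stand-in sample the uncertainty space densely.

```python
# Toy illustration of surrogate-based uncertainty quantification (UQ):
# train a cheap stand-in model on a handful of "expensive" simulation runs,
# then sample the multi-dimensional uncertainty space densely with it.
# A sketch of the general pattern only, not LLNL's actual codes or methods.
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for a costly multi-physics code: x holds three uncertain
    model parameters; the return value is a scalar quantity of interest."""
    return np.sin(x[0]) + x[1] ** 2 + 0.5 * x[2]

# Afford only a few full-fidelity runs at sampled parameter points.
train_x = rng.uniform(-1, 1, size=(50, 3))
train_y = np.array([expensive_simulation(x) for x in train_x])

# Fit a cheap linear surrogate by least squares (production UQ would use
# Gaussian processes or neural networks, and machine learning would also
# help choose *where* in parameter space to sample next).
A = np.hstack([train_x, np.ones((len(train_x), 1))])
coef, *_ = np.linalg.lstsq(A, train_y, rcond=None)

# The surrogate can now sample the uncertainty space a million times at
# negligible cost, yielding a distribution over the output.
sample_x = rng.uniform(-1, 1, size=(1_000_000, 3))
sample_y = np.hstack([sample_x, np.ones((len(sample_x), 1))]) @ coef
print(f"surrogate output: mean={sample_y.mean():.3f} std={sample_y.std():.3f}")
```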

While El Capitan is being fielded foremost to serve the U.S. classified nuclear stockpile mission, the partners note that the machine’s advanced capabilities “will [also] benefit areas of basic science beyond nuclear security, requiring high-resolution multi-physics simulations, such as cancer research, optimizing design for additive manufacturing, climate, seismology and astrophysics.”

Further thoughts…

The lack of CPU and accelerator disclosure does lead to some interesting speculation as to what the nodes could potentially be. With Arm+GPU soon in the running and, who knows, maybe one day AMD+Intel or Intel+AMD, we came up with nine CPU+GPU permutations in the realm of possibility, some decidedly more likely than others. (Neely agreed that not all were under “deep consideration.”)
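For the curious, the nine pairings fall out of a simple cross product. The vendor sets below are our own speculation, not a disclosed shortlist:

```python
# The nine CPU+GPU pairings referenced above are a simple cross product.
# These vendor lists are this article's speculation, not a known shortlist.
from itertools import product

cpus = ["AMD", "Intel", "Arm-based"]
gpus = ["AMD", "Intel", "Nvidia"]

for cpu, gpu in product(cpus, gpus):
    print(f"{cpu} CPU + {gpu} GPU")  # 3 x 3 = 9 permutations
```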

For its part, Cray said it was up to the challenge of supporting all the combinations.

“There are lots of interesting technologies these days,” Scott said. “It’s all part of sort of the blossoming of processors as we look towards more and more architectural specialization. That’s the reason that Shasta was designed the way it was. In previous designs we had very little flexibility. Shasta has a lot more flexibility in terms of the size of nodes and what type of technologies go in.”

As a final note, we would be remiss in not mentioning the efforts of the Exascale Computing Project, which is in charge of assuring there’s an exascale-ready software ecosystem to get the most from exascale hardware when it arrives. Read “Doug Kothe Delivers Whirlwind ECP Update in 70 (or so) Slides” for a fast-paced dive into this comprehensive effort; a recent video interview with the ECP Director offers interesting commentary on the rise of heterogeneous accelerated-node architectures.
