Exascale Watch: El Capitan Will Use AMD CPUs & GPUs to Reach 2 Exaflops

By John Russell

March 4, 2020

HPE and its collaborators reported today that El Capitan, the forthcoming exascale supercomputer to be sited at Lawrence Livermore National Laboratory and serve the National Nuclear Security Administration (NNSA), will use AMD’s next-gen ‘Genoa’ Epyc CPUs and Radeon GPUs and deliver 2 exaflops of peak double-precision performance, a 30 percent increase over the original spec. The new system, expected to enter service in 2023, will be 10x faster than Summit, the fastest publicly ranked supercomputer in the world today (Top500, November 2019).
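As a quick sanity check on those figures, here is a back-of-the-envelope calculation, assuming Summit’s roughly 200-petaflop peak from the November 2019 Top500 list and the roughly 1.5 exaflops originally specified for El Capitan:

```latex
\frac{2\ \mathrm{EF}}{0.2\ \mathrm{EF}\ \text{(Summit peak)}} = 10\times,
\qquad
\frac{2\ \mathrm{EF}}{1.5\ \mathrm{EF}\ \text{(original spec)}} \approx 1.33
```

That works out to ten times Summit and an uplift on the order of 30 percent over the original target.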

The choice of AMD processor technology had not been made when the Department of Energy first announced the ~$600 million El Capitan procurement last August. Cray, now part of HPE, was named prime contractor at that time, along with the selection of its Shasta architecture. More detail on the CPU/GPU selections, along with a few other system elements, was presented in a media pre-briefing this week given by Bronis de Supinski, CTO, LLNL; Steve Scott, SVP, senior fellow, and CTO, HPE; and Forrest Norrod, SVP and GM, datacenter and embedded systems group, AMD.

HPE, through Cray, has been the big winner so far in the U.S. exascale sweepstakes, obtaining contracts for all three systems: Aurora, with an Intel CPU/GPU pair; Frontier, with an AMD CPU/GPU pair; and El Capitan, which we now know will also feature AMD processors and AMD accelerators. After re-entering the HPC server market with its Epyc line of CPUs in 2017, AMD at first trod lightly in pairing Epyc with Radeon GPUs in high-end servers. That has clearly changed.

Steve Scott, HPE/Cray

Talking about the delayed processor selections for El Capitan, Scott said, “The strategy that they’ve (DoE/LLNL) used – and increasingly others are using it as well – is to choose the system architecture, start getting that in place, and then make the processor choice as late as possible in the process. By doing that you can get better visibility. Your headlights are illuminating farther into the future. The processor roadmaps have continued to improve, and by making that decision later you tend to get a better result in the end, and that’s exactly what happened here.”

He declined to specify the process technology for the new processors, but there’s been speculation Genoa will be fabbed on a 5nm process. We may know more soon. “We’ll be unpacking more of the details on those parts as time goes by,” said AMD’s Norrod. “Our next disclosure on Genoa will, quite frankly…we’ll say a little bit about that at our financial analyst day, which is this Thursday.” That’s tomorrow.

El Capitan’s primary mission is within NNSA’s Advanced Simulation and Computing program, which uses simulations to certify the country’s nuclear stockpile is safe, secure and reliable. “To provide that certification we require complex simulations, and as the nuclear stockpile ages, the complexity of the simulations only increases and [we] need to be able to use larger and larger systems,” said de Supinski.

El Capitan will leverage HPE’s Shasta architecture, which is at the core of the most recent refresh of the HPE/Cray advanced scale product line. Other core components include a new software stack, the new Slingshot interconnect technology, and a new storage system.

“[The] new software stack provides a much more dynamic, cloud-like environment for hybrid workflows,” said Scott. “It has open, documented APIs between the software components. It has a management system that’s built with redundant microservices and managed as a Kubernetes cluster, and it has robust container support to allow users to take any workload that runs anyplace and run it under this system as well.” A systems monitoring framework will run underneath the stack to optimize performance and help predict failures.

Calling it a future-proof design, Scott said, “We’ve designed it to accommodate a wide diversity of processors, different amounts of power, different types of processing, different physical sizes of the processor and memory system. And we’ve given it the power and the cooling headroom to handle processors that are headed, in the years ahead, up to the kilowatt power levels.”

El Capitan will be liquid cooled and have an energy budget of 30 to 40 megawatts, with expectations it will end up closer to 30MW than 40MW, according to Scott. Slingshot and high-performance Ethernet comprise the planned system interconnect. The planned storage system is HPE’s new ClusterStor E1000, which Scott described as “a highly flexible tiered storage system using flash and hard drive partitions. That allows you to individually optimize for performance as well as capacity, and then does intelligent tiering of data between the partitions. And this attaches directly to the Slingshot interconnect, which helps take out cost and complexity and latency.”
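Taken at face value, those numbers imply an aggressive energy-efficiency target. A rough estimate, assuming the full 2-exaflop peak is delivered within the quoted power envelope:

```latex
\frac{2\times10^{18}\ \mathrm{FLOP/s}}{30\times10^{6}\ \mathrm{W}} \approx 67\ \mathrm{GFLOP/s\ per\ watt},
\qquad
\frac{2\times10^{18}\ \mathrm{FLOP/s}}{40\times10^{6}\ \mathrm{W}} = 50\ \mathrm{GFLOP/s\ per\ watt}
```

For comparison, the most efficient systems on the November 2019 Green500 list sit around 15-17 gigaflops per watt, though those are measured Linpack figures rather than peak, so the comparison is loose.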

Shasta Compute Blade

Specific performance specs were generally not disclosed. The new AMD CPU will use the Zen 4 core, which is reportedly on schedule for launch in 2021. The new CPU-GPU pairing (“A-plus-A” in AMD parlance) will leverage AMD’s Infinity Fabric 3.0 to deliver memory coherency. The detailed node structure and number of nodes for El Capitan were not discussed in the pre-briefing, but the official press release characterized the architecture as “using accelerator-centric compute blades (in a 4:1 GPU to CPU ratio, connected by the 3rd Gen AMD Infinity Architecture for high-bandwidth, low latency connections) to increase performance for data-intensive AI, machine learning and analytics needs by offloading processing from the CPU to the GPU.”
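What CPU-GPU memory coherency buys the programmer is, in essence, a single allocation visible to both sides without explicit copies. Here is a minimal sketch written against AMD’s existing HIP runtime; it is purely illustrative, and whether El Capitan’s stack exposes coherency this way is an assumption:

```cpp
// Illustrative only: one managed allocation touched by CPU and GPU in turn.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(double* v, int n, double a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

int main() {
    const int n = 1024;
    double* v = nullptr;
    hipMallocManaged(reinterpret_cast<void**>(&v), n * sizeof(double));
    for (int i = 0; i < n; ++i) v[i] = 1.0;          // CPU writes the buffer...
    scale<<<(n + 255) / 256, 256>>>(v, n, 2.0);      // ...GPU updates it in place...
    hipDeviceSynchronize();
    std::printf("v[0] = %f (expect 2.0)\n", v[0]);   // ...CPU reads the result.
    hipFree(v);
    return 0;
}
```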

As is generally the case in heterogeneous architectures, the accelerators handle most of the work and require efficient IO. Norrod said, “We have next generation memory and IO subsystems that can provide non-blocking access to memory, non-blocking access to IO, and ensure that the full power of the Zen 4 CPU engine and the Radeon Instinct GPU engines [can be brought to bear].”

He said the new GPU is optimized for high performance computing and machine intelligence applications. “It has extensive mixed-precision operations to optimize deep learning performance, as well as [the ability] to provide peak single and double precision performance for more traditional HPC applications. It does embody a next generation of high bandwidth memory (HBM) on package to provide the memory bandwidth and capacity that’s so critical to feed the beast (GPU).”

While many data analytics workloads look quite different from high performance simulation, Scott said, “It turns out AI is one of the workloads that shares a lot in common with high performance simulation. Typically, the granularity or the precision that you use for the computations is quite a bit different. Most AI is done at 16- or 32-bit precision, whereas most scientific simulation is done at 64-bit precision. But modern processors like the AMD GPUs can take their function units and run them either in 64-bit mode or in 16-bit mode or 32-bit mode depending upon the particular computation. [To do that] you need a strong interconnect and very high memory bandwidth, which it shares in common with scientific workloads.

“We find the combination of the CPU and GPU with flexible precision, married with very high memory bandwidth and interconnect bandwidth and storage bandwidth, to be well suited for both simulation and AI workloads, and we can bring all the compute nodes in the system to bear on either of those workloads,” Scott said.
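To see why the precision choice matters, here is a minimal sketch in plain C++ (not vendor code): the same reduction instantiated at 64-bit and 32-bit precision, with the drift between the two measured directly:

```cpp
// Illustrative only: one templated kernel, run at "simulation" (fp64) and
// "AI" (fp32) precision, showing how accumulated rounding error diverges.
#include <cstdio>
#include <vector>

template <typename T>
T dot(const std::vector<double>& a, const std::vector<double>& b) {
    T acc = T(0);
    for (size_t i = 0; i < a.size(); ++i)
        acc += static_cast<T>(a[i]) * static_cast<T>(b[i]);
    return acc;
}

int main() {
    const size_t n = 1000000;
    std::vector<double> a(n, 1.0 / 3.0), b(n, 3.0);
    double d64 = dot<double>(a, b);   // 64-bit mode
    float  d32 = dot<float>(a, b);    // 32-bit mode
    std::printf("fp64: %.6f\nfp32: %.6f\ndrift: %.3e\n",
                d64, static_cast<double>(d32), d64 - static_cast<double>(d32));
    return 0;
}
```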

Interestingly, AI is not currently a top priority at LLNL.

De Supinski said, “We’re doing a lot of research and development at Livermore exploring how we can bring [AI] to bear [on] our simulations. Whereas we need a certain accuracy, deep learning models are probabilistic, and so you can often be good enough with lower precision operations. [But] we have to be able to understand where the errors are and where they are becoming larger because of the reduced precision, and then be able to bring some mechanism in to increase precision and [achieve the] accuracy required.”
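One standard mechanism of the kind de Supinski describes is mixed-precision iterative refinement: do the expensive solve in low precision, measure the residual in high precision, and apply corrections until the answer meets the accuracy requirement. The toy solver below is purely illustrative, not LLNL code:

```cpp
// Illustrative only: fp32 does the heavy lifting, fp64 tracks the error.
#include <array>
#include <cmath>
#include <cstdio>

constexpr int N = 3;
using MatF = std::array<std::array<float, N>, N>;
using VecF = std::array<float, N>;
using VecD = std::array<double, N>;

// Naive single-precision Gaussian elimination (no pivoting; toy system only).
VecF solve_fp32(MatF A, VecF b) {
    for (int k = 0; k < N; ++k)
        for (int i = k + 1; i < N; ++i) {
            float m = A[i][k] / A[k][k];
            for (int j = k; j < N; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    VecF x{};
    for (int i = N - 1; i >= 0; --i) {
        float s = b[i];
        for (int j = i + 1; j < N; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

int main() {
    const std::array<std::array<double, N>, N> A = {{{4, 1, 0}, {1, 3, 1}, {0, 1, 2}}};
    const VecD b = {1, 2, 3};
    MatF Af;                                 // low-precision copy of the matrix
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) Af[i][j] = static_cast<float>(A[i][j]);

    VecD x{};                                // accumulated solution, kept in fp64
    for (int it = 0; it < 5; ++it) {
        VecD r{}; double rnorm = 0;          // residual r = b - A*x, computed in fp64
        for (int i = 0; i < N; ++i) {
            double s = b[i];
            for (int j = 0; j < N; ++j) s -= A[i][j] * x[j];
            r[i] = s; rnorm += s * s;
        }
        std::printf("iter %d  residual %.3e\n", it, std::sqrt(rnorm));
        VecF rf;
        for (int i = 0; i < N; ++i) rf[i] = static_cast<float>(r[i]);
        VecF d = solve_fp32(Af, rf);         // cheap low-precision correction
        for (int i = 0; i < N; ++i) x[i] += d[i];
    }
    return 0;
}
```

Each pass drives the fp64 residual down even though the solver itself never leaves fp32, which is the essence of recovering accuracy from reduced-precision hardware.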

At the pre-briefing, a question was asked about El Capitan’s ability to use non-GPU accelerators. Scott said while GPUs are currently the AI accelerator of choice, many users are looking at alternatives and that El Capitan’s system architecture is “designed to accommodate that kind of heterogeneous mix.”

De Supinski noted LLNL is using an unclassified system, Lassen, a sister machine to the classified Sierra system, to learn more about emerging AI accelerators. “We’re actively exploring ways of adding purpose-built machine learning accelerators to that system. I would anticipate that the mechanism by which we’re doing that is available entirely in El Capitan; that is we can add additional nodes to the system that are designed specifically for that purpose. We will see how things go with our exploratory studies on Lassen. If they go well, we will be very likely to engage HPE in helping us figure out how we can exploit that.”

AMD, HPE and LLNL are collaborating on software tools for El Capitan. Part of the plan is to leverage AMD’s ROCm framework to take advantage of “coherent acceleration in the OpenMP environment as well as other environments,” according to Norrod.
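For a sense of what OpenMP acceleration looks like at the source level, here is a minimal sketch; the daxpy kernel and the compile line in the comment are illustrative assumptions, not details from the briefing:

```cpp
// Illustrative only. With an offload-capable compiler this loop runs on the
// GPU (e.g., something like: clang++ -fopenmp --offload-arch=<gpu> daxpy.cpp);
// without one, it simply falls back to the host.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 0.5;
    double* xp = x.data();
    double* yp = y.data();

    // map clauses move the data -- or, on coherent hardware, simply expose it.
    #pragma omp target teams distribute parallel for \
            map(to: xp[0:n]) map(tofrom: yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];           // daxpy on the accelerator

    std::printf("y[0] = %f (expect 2.5)\n", yp[0]);
    return 0;
}
```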

Scott said, “As part of this procurement, the Department of Energy has provided additional funds beyond the purchase of the machine to fund non-recurring engineering efforts, and one major piece of that is to work closely with AMD on enhancing the programming environment for their new CPU-GPU architecture.” Work is ongoing by all three partners to carry the critical applications and workloads forward and optimize them to get the best performance from the machine when El Capitan is delivered.

De Supinski emphasized, “This is a collaborative process, particularly for the software. Some of the software for the system is being developed at Lawrence Livermore, in addition to the applications. For instance, we very much expect Spack, which is an open source package manager, [to be able to run] on the new system.”

One interesting feature which was not discussed in the briefing but was mentioned in the official announcement is El Capitan’s planned use of optical data transmission.

According to the release, “HPE is expanding its partnership with LLNL to actively explore HPE optics technologies, a computing solution that uses light to transmit data, to feature in the DOE’s El Capitan. HPE’s optics technologies stem from R&D efforts related to PathForward, a program backed by U.S. DOE’s Exascale Computing Project. HPE developed and demonstrated breakthrough optics prototypes that integrate electrical-to-optical interfaces to enable broad use in future classes of system interconnects. Together, HPE and LLNL are exploring ways to integrate these optics technologies with HPE’s Cray Slingshot for DOE’s El Capitan to transmit more data, more efficiently. This approach aims to improve power efficiency, reliability and ability to cost-effectively increase global system bandwidth.”

Optical data transmission is a hotbed of research with many companies aggressively seeking practical implementation.

El Capitan will become NNSA’s fastest computer and greatly enhance NNSA’s ability to run 3D simulations quickly instead of 2D simulations. LLNL is managing the new system for NNSA and is developing techniques that allow researchers to create faster, more accurate models for primary missions spanning stockpile modernization and inertial confinement fusion (ICF), a key aspect of stockpile stewardship.

Link to the official announcement: https://www.llnl.gov/news/llnl-and-hpe-partner-amd-el-capitan-projected-worlds-fastest-supercomputer
