Exascale Watch: El Capitan Will Use AMD CPUs & GPUs to Reach 2 Exaflops

By John Russell

March 4, 2020

HPE and its collaborators reported today that El Capitan, the forthcoming exascale supercomputer to be sited at Lawrence Livermore National Laboratory and serve the National Nuclear Security Administration (NNSA), will use AMD’s next-gen ‘Genoa’ Epyc CPUs and Radeon GPUs and deliver 2 exaflops (peak double-precision) performance, a 30 percent increase over the original spec. The new system, expected to be put into service in 2023, will be 10x faster than Summit, the fastest publicly-ranked supercomputer in the world today (Top500, November 2019).
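
The headline figures are internally consistent, and a quick back-of-the-envelope check shows it. The sketch below assumes the originally announced target of roughly 1.5 exaflops and Summit's roughly 200 petaflops peak; both are approximations drawn from public announcements, not from this briefing:

```python
# Sanity check on the announced figures (all values approximate).
original_spec_ef = 1.5   # exaflops: DOE's August 2019 target (approximate)
new_peak_ef = 2.0        # exaflops: peak double-precision, per this announcement
summit_peak_ef = 0.2     # ~200 petaflops peak for Summit (approximate)

increase = (new_peak_ef - original_spec_ef) / original_spec_ef
speedup_vs_summit = new_peak_ef / summit_peak_ef

print(f"Increase over original spec: {increase:.0%}")       # ~33%, quoted as "30 percent"
print(f"Speedup vs. Summit (peak): {speedup_vs_summit:.0f}x")
```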

The choice of AMD processor technology had not been made when the Department of Energy first announced the ~$600 million El Capitan procurement last August. Cray, now part of HPE, was announced as the prime contractor, along with the selection of its Shasta architecture. More details on the CPU/GPU selections, along with a few other system elements, were presented in a media pre-briefing this week given by Bronis de Supinski, CTO, LLNL; Steve Scott, SVP, senior fellow, and CTO, HPE; and Forrest Norrod, SVP and GM, datacenter and embedded systems group, AMD.

HPE, through Cray, has been the big winner so far in the U.S. Exascale sweepstakes, obtaining contracts for all three systems – Aurora, with an Intel CPU/GPU pair; Frontier, with another AMD CPU/GPU pair, and El Capitan, which we now know will also feature AMD processors and AMD accelerators. After re-entering the HPC server market with its Epyc line of CPUs in 2017, AMD at first treaded lightly in pairing Epyc with Radeon GPUs in high-end servers. That clearly has changed.

Steve Scott, HPE/Cray

Talking about the delayed processor selection for El Capitan, Scott said, “The strategy that they’ve (DoE/LLNL) used – and increasingly others are using it as well – is to choose the system architecture, start getting that in place, and then make the processor choice as late as possible in the process. By doing that you get better visibility. Your headlights are illuminating farther into the future. The processor roadmaps have continued to improve, and by making that decision later you tend to get a better result in the end, and that’s exactly what happened here.”

He declined to specify the process technology for the new processors, but there’s been speculation Genoa will be fabbed on a 5nm process. We may know more soon. “We’ll be unpacking more of the details on those parts as time goes by,” said AMD’s Norrod. “Our next disclosure on Genoa will, quite frankly…we’ll say a little bit about that at our financial analyst day, which is this Thursday.” That’s tomorrow.

El Capitan’s primary mission is within NNSA’s advanced simulation and computing program, which uses simulations to certify the country’s nuclear stockpile is safe, secure and reliable. “To provide that certification we require complex simulations, and as the nuclear stockpile ages, the complexity of the simulations only increases and [we] need to be able to use larger and larger systems,” said de Supinski.

El Capitan will leverage HPE’s Shasta architecture which is at the core of the most recent refresh of the HPE/Cray advanced scale product line. Other core components include a new software stack, new Slingshot interconnect technology, and new storage system.

“[The] new software stack provides a much more dynamic, cloud-like environment for hybrid workflows,” said Scott. “It has open, documented APIs between the software components. It has a management system that’s built with redundant microservices and managed as a Kubernetes cluster, and it has robust container support to allow users to take any workload that runs anyplace and run it under this system as well.” A systems monitoring framework will run underneath the stack to optimize performance and help predict failures.

Calling it a future-proof design, Scott said “We’ve designed it to accommodate a wide diversity of processors, different amounts of power, different types of processing, different physical sizes of the processor and memory system. And we’ve given it the power and the cooling headroom to handle processors that are headed again in the years ahead up to the kilowatt power levels.”

El Capitan will be liquid cooled and have an energy budget between 30 and 40 megawatts, with expectations it will end up closer to 30MW than 40MW, according to Scott. Slingshot and high performance Ethernet comprise the planned system interconnect. The planned storage system is HPE’s new ClusterStor E1000, which Scott described as “a highly flexible tiered storage system using flash and hard drive partitions. That allows you to individually optimize for performance as well as capacity, and then it does intelligent tiering of data between the partitions. This attaches directly to the Slingshot interconnect, which helps take out cost and complexity and latency.”
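
The intelligent tiering Scott describes (frequently accessed data promoted to flash, everything else on the capacity tier) can be sketched in miniature. This is an illustrative toy only; the class name and the access-count policy are invented for the example, not ClusterStor E1000's actual algorithm:

```python
# Minimal sketch of access-frequency-based tiering (hypothetical policy,
# not ClusterStor E1000's actual implementation).
class TieredStore:
    def __init__(self, hot_threshold=3):
        self.flash = {}            # fast tier: frequently accessed files
        self.disk = {}             # capacity tier: everything else
        self.access_counts = {}
        self.hot_threshold = hot_threshold

    def write(self, name, data):
        self.disk[name] = data     # new data lands on the capacity tier
        self.access_counts[name] = 0

    def read(self, name):
        self.access_counts[name] += 1
        # Promote to flash once a file has been read often enough.
        if name in self.disk and self.access_counts[name] >= self.hot_threshold:
            self.flash[name] = self.disk.pop(name)
        return self.flash.get(name, self.disk.get(name))

store = TieredStore()
store.write("checkpoint.h5", b"...")
for _ in range(3):
    store.read("checkpoint.h5")
print("checkpoint.h5" in store.flash)  # True: promoted after repeated reads
```

A real tiering engine would also demote cold data back to disk and track recency, not just raw counts; the point here is only the shape of the policy.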

Shasta Compute Blade

Specific performance specs were generally not disclosed. The new AMD CPU will use the Zen4 core which is reportedly on schedule for launch in 2021. The new CPU-GPU pairing (A-plus-A in AMD parlance) will leverage AMD’s Infinity fabric 3.0 to deliver memory coherency. The detailed node structure and number of nodes for El Capitan were not discussed in the pre-briefing but the official press release characterized the architecture as, “using accelerator-centric compute blades (in a 4:1 GPU to CPU ratio, connected by the 3rd Gen AMD Infinity Architecture for high-bandwidth, low latency connections) to increase performance for data-intensive AI, machine learning and analytics needs by offloading processing from the CPU to the GPU.”

As is generally the case in heterogeneous architectures, the accelerators handle most of the work and require efficient IO. Norrod said, “We have next generation memory and IO subsystems that can provide non-blocking access to memory, non-blocking access to IO, and ensure [we deliver] the full power of the Zen4 CPU engine and the Radeon Instinct GPU engines.”

He said the new GPU is optimized for high performance computing and machine intelligence applications. “It has extensive mixed-precision operations to optimize that deep learning performance, as well as [the ability] to provide peak single and double precision performance with more traditional HPC applications. It does embody a next generation of high bandwidth memory (HBM) on package to provide the memory bandwidth and capacity that’s so critical to, again, feed the beast (GPU).”

While many data analytic workloads look quite different from high performance simulation, Scott said, “It turns out AI is one of the workloads that shares a lot in common with high performance simulation. Typically, the granularity or the precision that you use for the computations is quite a bit different. Most of AI is done at 16- or 32-bit precision, whereas most of the scientific simulation is done at 64-bit precision. But modern processors like the AMD GPUs can take their function units and run them either in 64-bit mode or in 16-bit or 32-bit mode depending upon the particular computation. [To do that] you need a strong interconnect and very high memory bandwidth, which it shares in common with scientific workloads.

“We find the combination of the CPU and GPU with flexible precision, married with very high memory bandwidth and interconnect bandwidth and storage bandwidth, to be well suited for both simulation and AI workloads, and we can bring all the compute nodes in the system to bear on either of those workloads,” Scott said.
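
Scott's point about precision is easy to demonstrate. The sketch below simulates 32-bit arithmetic on top of Python's 64-bit floats (round-tripping through `struct`, an illustrative stand-in for real float32 hardware) and shows how rounding drift grows in a long accumulation, the kind of error 64-bit simulation codes are built to avoid:

```python
import struct

def to_f32(x):
    """Round a 64-bit Python float to 32-bit precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

step, n = 0.01, 100_000
exact = step * n  # 1000.0

# 64-bit accumulation (Python floats are doubles).
sum64 = 0.0
for _ in range(n):
    sum64 += step

# Simulated 32-bit accumulation: round after every addition.
sum32 = 0.0
for _ in range(n):
    sum32 = to_f32(sum32 + to_f32(step))

print(f"64-bit error: {abs(sum64 - exact):.2e}")  # negligible
print(f"32-bit error: {abs(sum32 - exact):.2e}")  # orders of magnitude larger
```

For deep learning the 32-bit drift is usually tolerable noise; for a stockpile simulation it is not, which is why the flexible 64/32/16-bit function units Scott describes matter.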

Interestingly, AI is not currently a top priority at LLNL.

De Supinski said, “We’re doing a lot of research and development at Livermore exploring how we can bring [AI] to bear on our simulations. Whereas we need a certain accuracy, deep learning models are probabilistic, and so you can often be good enough with lower precision operations, whereas we have to be able to understand where the errors are and where they are becoming larger because of the reduced precision, and then be able to bring some mechanism in to increase precision and [achieve the] accuracy required.”
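
The escalation mechanism de Supinski describes (monitor where reduced-precision error grows, then switch up) can be caricatured in a few lines. This is a toy policy with invented names; here a float64 recomputation doubles as the error estimate, which a production code would obtain far more cheaply (e.g., from a residual):

```python
import struct

def to_f32(x):
    """Round a 64-bit Python float to 32-bit precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

def dot_f32(xs, ys):
    """Dot product with float32 rounding after every operation."""
    s = 0.0
    for x, y in zip(xs, ys):
        s = to_f32(s + to_f32(to_f32(x) * to_f32(y)))
    return s

def dot_adaptive(xs, ys, tol=1e-7):
    """Try 32-bit first; escalate to 64-bit when the estimated relative
    error exceeds tol (hypothetical policy, for illustration only)."""
    low = dot_f32(xs, ys)
    high = sum(x * y for x, y in zip(xs, ys))  # stands in for a cheap error estimator
    if abs(high - low) > tol * max(abs(high), 1e-300):
        return high, "float64"   # precision escalated
    return low, "float32"        # low precision was good enough

_, mode = dot_adaptive([0.1] * 10_000, [0.3] * 10_000)
print(mode)  # likely "float64": long sums of inexact values accumulate error
```

For exactly representable inputs (small integers, say) the 32-bit path suffices; either way the returned result is within the requested tolerance by construction.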

At the pre-briefing, a question was asked about El Capitan’s ability to use non-GPU accelerators. Scott said while GPUs are currently the AI accelerator of choice, many users are looking at alternatives and that El Capitan’s system architecture is “designed to accommodate that kind of heterogeneous mix.”

De Supinski noted LLNL is using an unclassified system, Lassen, a sister machine to the classified Sierra system, to learn more about emerging AI accelerators. “We’re actively exploring ways of adding purpose-built machine learning accelerators to that system. I would anticipate that the mechanism by which we’re doing that is available entirely in El Capitan; that is we can add additional nodes to the system that are designed specifically for that purpose. We will see how things go with our exploratory studies on Lassen. If they go well, we will be very likely to engage HPE in helping us figure out how we can exploit that.”

AMD, HPE and LLNL are collaborating on software tools for El Capitan. Part of the plan is to leverage AMD’s ROCm framework to take advantage of “coherent acceleration in the OpenMP environment as well as other environments,” according to Norrod.

Scott said, “As part of this procurement, the Department of Energy has provided additional funds beyond the purchase of the machine to fund non-recurring engineering efforts and one major piece of that is to work closely with AMD on enhancing the programming environment for their new CPU-GPU architecture.” Work is ongoing by all three partners to take the critical applications and workloads forward and optimize them to get the best performance in the machine when El Capitan is delivered.

De Supinski emphasized, “This is a collaborative process, particularly for the software. Some of the software for the system is being developed at Lawrence Livermore in addition to the applications. For instance, we very much expect Spack, which is an open source package manager, [to be able to run] on the new system.”

One interesting feature which was not discussed in the briefing but was mentioned in the official announcement is El Capitan’s planned use of optical data transmission.

According to the release, “HPE is expanding its partnership with LLNL to actively explore HPE optics technologies, a computing solution that uses light to transmit data, to feature in the DOE’s El Capitan. HPE’s optics technologies stem from R&D efforts related to PathForward, a program backed by U.S. DOE’s Exascale Computing Project. HPE developed and demonstrated breakthrough optics prototypes that integrate electrical-to-optical interfaces to enable broad use in future classes of system interconnects. Together, HPE and LLNL are exploring ways to integrate these optics technologies with HPE’s Cray Slingshot for DOE’s El Capitan to transmit more data, more efficiently. This approach aims to improve power efficiency, reliability and ability to cost-effectively increase global system bandwidth.”

Optical data transmission is a hotbed of research with many companies aggressively seeking practical implementation.

El Capitan will become NNSA’s fastest computer and greatly enhance NNSA’s ability to run 3D simulations quickly instead of 2D simulations. LLNL is managing the new system for the NNSA and has developed emerging techniques that allow researchers to create faster, more accurate models for primary missions across stockpile modernization and inertial confinement fusion (ICF), a key aspect of stockpile stewardship.

Link to the official announcement: https://www.llnl.gov/news/llnl-and-hpe-partner-amd-el-capitan-projected-worlds-fastest-supercomputer
