First Annual Exascale Day Celebrates Next 1000x Horizon

By Tiffany Trader

October 21, 2019

October 18 (aka 10/18) marked the first annual Exascale Day, hosted by Cray, the Exascale Computing Project (ECP) and the DOE labs — Argonne, Oak Ridge and Lawrence Livermore — that are getting ready to host the nation’s first exascale supercomputers. All three machines will be built by Cray using its Shasta architecture, Slingshot interconnect and new software platform.

To mark the occasion, Cray (now an HPE company) and the DOE hosted a virtual panel discussion Friday morning. The participants came together to discuss how the exascale era will change the face of computational science and the advances it will foster. The panel was moderated by Earl Joseph, CEO of HPC analyst firm Hyperion Research.

Joining the panel were:

Doug Kothe, ECP Director
Steve Scott, Cray CTO
Rick Stevens, Associate Lab Director, ANL
Jeff Nichols, Associate Lab Director, ORNL
Michel McCoy, LLNL Program Director

Crossing the exascale threshold gives rise to a computer that can perform 10^18 (a quintillion) adds or multiplies per second. October 18 seemed a natural choice to acknowledge this important computational milestone and the community that is working to enable it. “[Exascale computing] really is the major driver for the future of our society and making the world a much better place as far as advancing science, building better products, improving health care for everyone and [reducing] the cost of health care, and also doing very unusual and fascinating things like testing the impossible in the world,” said Hyperion’s Joseph.

The trajectory from megaflops to exaflops brings a trillion-times growth in the ability to carry out adds or multiplies. “This allows us to do all kinds of science that we weren’t able to do 40 years ago,” said ORNL’s Nichols. “When we were doing calculations 40 years ago, we would be lucky to actually get something that would be close to experiment, but today, we can actually predict what experimentalists might go find in their laboratories.”
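For a sense of that scale, here is a minimal back-of-the-envelope sketch in Python; it is purely illustrative, with the figures taken straight from the metric prefixes themselves:

```python
# Scale of the jump from megaflops (circa 1980) to exaflops (the coming systems).
megaflops = 1e6    # 10^6 floating-point operations per second
exaflops = 1e18    # 10^18 floating-point operations per second

print(f"Exaflops / megaflops = {exaflops / megaflops:.0e}")  # 1e+12: a trillion-fold jump
```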

We heard again about the enormity of the computational challenge at Livermore, tasked with maintaining the nation’s nuclear stockpile without the use of nuclear testing. “The nasty truth is that up until now, we’ve basically had to run most of our routine calculations in 2D simply because running in 3D, the turnaround time was so long that the analyst would forget the question before getting the answer,” remarked LLNL’s McCoy. “Codes have had to become increasingly predictive,” he said, “because the nuclear weapons, which were designed to last maybe a year and then got replaced, now have to be kept going for decades. They age in place; things happen to them. The codes cannot rely on predictions from their antecedents.”

LLNL’s Sierra machine is already boosting capabilities, enabling lab scientists to run 3D codes at 2D resolution. “That’s opening a door that was never opened before,” said McCoy. “With El Capitan, they will be able to run a series of calculation tests, and quantify their uncertainty in 3D. In other words, 3D can become the new 2D. The exascale systems are going to make a difference for us. And they’re coming just in time.”

Cray’s Steve Scott, designer of many Cray systems and the lead on the Slingshot network, underscored the scale of the coming generation of DOE machines. “Just looking at the Frontier system [expected at ORNL in late 2021], it’s the size of two basketball courts and has the weight of 35 school buses. It’s got 90 miles of cabling in it. If you just looked at the network bandwidth that ties everything together, there’s enough network bandwidth to upload 100,000 high definition movies in one second.”
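For rough context, here is a hedged reverse calculation of what that movie figure implies about the system’s aggregate network bandwidth; the ~25 GB-per-movie size is an assumption for illustration, not a number from the panel:

```python
# Rough, illustrative back-calculation from the "100,000 HD movies per second"
# figure. The per-movie size is an assumed ~25 GB (Blu-ray-class HD), not a
# number quoted by Cray.
movies_per_second = 100_000
bytes_per_movie = 25e9  # assumed

aggregate_bytes_per_second = movies_per_second * bytes_per_movie
print(f"Implied aggregate bandwidth: ~{aggregate_bytes_per_second / 1e15:.1f} PB/s")
```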

The U.S. is on track to deploy one or possibly two exascale machines by the end of 2021. The pace of leadership computing battles a diminished Moore’s law and the loss of Dennard scaling. “We were expecting to get to an exaflop computer originally right around now, and it’s taking a little bit longer; it’s getting harder than in the past,” said Scott. Previously, supercomputing was hitting 1,000-fold performance increases roughly every 10 years. Roadrunner, the first petascale system, was deployed in 2008. ASCI Red broke the 1 teraflops barrier in 1997.

“For multiple decades, the power efficiency of that logic kept up with Moore’s law perfectly well. And over the past 10-15 years, that’s no longer been the case; it’s starting to drive up power. And we’re starting to get to where we can see the end of Moore’s law where the current silicon technology is not going to continue to exponentially improve over time, over the next decade, and so it’s getting increasingly harder to build these systems,” said Scott.

The power wall has propelled the transition to accelerators, which, said Scott, “give you more computing performance per watt than you can get with a traditional CPU.” All three planned U.S. exascale systems will be powered by accelerators: Aurora at Argonne with Intel GPUs, Frontier at Oak Ridge with AMD GPUs and El Capitan at Livermore with an as-yet-to-be-revealed GPU.

“As we look forward to the next decade, we’re going to have to do something even more dramatic,” Scott said. “We will save that for zettaflops day.”

Argonne’s Rick Stevens pointed out that the Exascale Computing Project is developing software that will run on many machines, not just the three CORAL [Collaboration of Oak Ridge, Argonne and Livermore] machines. He noted the general trend of architectures moving toward accelerated systems. “GPU systems are the target for these application software packages and for the software stack (that ECP is developing). So think of this as not just feeding these three machines; it’s feeding the whole ecosystem with open source technology that will raise everything, so I think that’s a really important point.”

Ensuring and enabling broader capability and wider utility is part of the ECP mission. “These technologies are going to be portable and transportable to everything from your laptop and your desktop to an engineering cluster to the biggest machines that we can put together, and the accelerated node technology is really critical for us to do this,” said ECP’s Doug Kothe. “These first three machines are important first movers to tackle the problems we’ve signed up to, but we expect these technologies to be used broadly across the ecosystem.”

Kothe reviewed the progress of the nation’s exascale program and reiterated the importance of a day-one ready software stack without which the (very expensive) exascale machines would not be productive. The ECP is supporting critical science workloads related to the nuclear stockpile program, energy production and transmission, additive manufacturing, cancer research and many other domains.

AI is a major focus area. AI capabilities are being brought into the ECP software stack, and all three labs will be running artificial intelligence and machine learning applications in tandem with modeling and simulation. Stevens, who is also a co-PI for the AI for Science town hall meetings (as is Nichols, along with Kathy Yelick at Berkeley), shared his enthusiasm for the synergies between simulation and AI.

“I think this is going to be yet another sea change in how we do science,” he said. “In particular, we think that we can use that combination to design new materials, new materials for energy, whether it’s improved photovoltaics or energy storage materials, or materials that could make reactors safer. For example, we think we can apply the same thinking to building new classes of polymers, polymers that are environmentally friendly, that degrade on a regular schedule or don’t have harmful effects when we manufacture them. We think we can use it to design better drugs, particularly in cancer and other diseases. And finally, I think it will become possible in the exascale timeframe to use AI actually to design new types of organisms.”

While the AI silicon space is still nascent, GPUs have proved themselves suitable for traditional HPC codes as well as emerging AI codes, and their mixed-precision capabilities provide a speed-up for machine learning workloads. “Today, Summit can do 200 quadrillion double-precision adds or multiplies per second, but it can actually already do 3.3 quintillion half-precision adds or multiplies,” said Nichols. “This concept of using much lower precision to do training in order to do machine learning to build models based on the data is something that all of our systems will be able to exploit to a much greater degree. And so the machines that we have today with Summit and the machines that we have in the future are going to be quite capable of not only solving science problems from a first principles perspective, but also from a much reduced precision data based model.”
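The trade-off Nichols describes can be illustrated with a small NumPy sketch that computes the same matrix product in float64 and float16. This is only a toy illustration of reduced precision, not how Summit’s tensor cores or any production training pipeline actually work:

```python
import numpy as np

# Toy illustration of reduced precision: the same matrix product computed
# in float64 (simulation-grade) and float16 (the half precision used to
# accelerate machine-learning training on GPU tensor cores).
rng = np.random.default_rng(seed=0)
a = rng.standard_normal((512, 512))
b = rng.standard_normal((512, 512))

reference = a @ b                                    # float64 result
half = a.astype(np.float16) @ b.astype(np.float16)   # float16 result

rel_err = np.abs(half - reference).max() / np.abs(reference).max()
print(f"Worst-case relative error in float16: {rel_err:.1e}")
# Orders of magnitude larger than float64's ~1e-16, yet often good enough
# for training data-driven models.
```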

McCoy underscored the importance of AI and reduced-precision approaches. “Moore’s law is slowing down; Dennard scaling is already in the rearview mirror, so computers aren’t going to get faster very quickly,” he said. “So we need to find some way to accelerate time to solution. Machine learning combined with partial differential equation simulation could act as a force multiplier, and continue the trajectory forward at an undiminished pace. So this is a huge world opening up for us.”

Scott concurred: “It’s very unlikely that we will ever get to a zettaflop computer, 10 to the 21 operations per second, using the technology that we know today, CMOS silicon technology. The next decade is going to be all about having significantly different approaches to how we do computing. And this convergence of analytics with traditional modeling and simulation is likely to be at least a, but likely the, central thrust for getting improved performance and capabilities, given the slowing in [CMOS] technology.”

The conversation continued to come back to power. DARPA initially set the opening exascale power envelope at 20 megawatts, but that has been relaxed to about 30 megawatts.

Power efficiency is a first-class concern, and it’s been a key driver for GPUs at the leading-edge of supercomputing. The world’s top 10 greenest supercomputers all employ accelerators, primarily GPUs, in a hybrid system design.

“When we did our upgrade from Jaguar to Titan, we got this 10x boost in performance, and our power consumption remained flat. Same thing going from Titan to Summit,” said Nichols.

That relatively flat power line won’t hold at exascale, however. The transition from Summit to Frontier ups the power ante from 13 megawatts to 30 megawatts (up to 40 megawatts at the outside). At a cost of roughly $1 million per megawatt per year, 30 megawatts of power translates into a $30 million annual power bill. “How we get another factor of 10 or 100, or 1,000 performance improvement without doubling, tripling or an order of magnitude of power is absolutely [a] huge, fundamental [question],” said Nichols. “We can’t go forward and continue to pay the kind of power bills that we’re paying after exascale without some significant innovation.”
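A minimal sketch of the power-bill arithmetic behind those figures, using the approximate $1 million per megawatt-year rate cited in the discussion:

```python
# Back-of-the-envelope power-bill arithmetic from the figures quoted above:
# roughly $1 million per megawatt of sustained draw per year.
def annual_power_cost(megawatts, dollars_per_mw_year=1_000_000):
    """Approximate yearly electricity cost for a system drawing `megawatts`."""
    return megawatts * dollars_per_mw_year

print(f"Summit   (~13 MW): ${annual_power_cost(13):>12,.0f} per year")
print(f"Frontier (~30 MW): ${annual_power_cost(30):>12,.0f} per year")  # ~$30M
```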

The participants took this as an opening to give credit to the investments made by the DOE and the NNSA through the Exascale Computing Project and PathForward program. “These programs [and their forerunners] allowed us to fund the companies that do the deep dive node and system design to tackle exactly the problem of having more power-aware hardware and it’s really paid off,” said Kothe.

“It wasn’t easy to convince a lot of people in government, it took a while. But when they got behind it, they made it happen,” said Nichols.

“I can guarantee you that the machines that are going to be going onto your floors in a couple of years would not have been possible without all the early support and the Exascale Computing Project and the very targeted R&D that went into several aspects of the machines,” said Scott.
