Exascale: Cleaner-burning Gasoline Engines, Cities Powered by Wind, Nuclear Reactors That Fit on a Tabletop

By Matt Lakin, Oak Ridge National Laboratory

May 18, 2020

When the US Department of Energy (DOE) boots up the world’s first generation of exascale supercomputers next year, researchers hope some of the most elusive questions of modern science will suddenly be closer to being solved.

The two machines—Frontier at Oak Ridge National Laboratory (ORNL) in Tennessee and Aurora at Argonne National Laboratory (ANL) near Chicago—promise the fastest computing horsepower in history: more than 1.5 exaflops apiece. They’ll represent the culmination of a five-year effort across six national laboratories with a price tag of roughly $1.8 billion.

The quest for exascale officially began in 2015, when the White House laid out marching orders for the National Strategic Computing Initiative, a whole-of-government effort designed to create a cohesive, multi-agency strategic vision and Federal investment strategy, executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States.

That Initiative gave rise to the DOE’s Exascale Computing Initiative, focused on making the computing leap, and the Exascale Computing Project (ECP), focused on building a comprehensive software ecosystem consisting of target applications, an exascale computing software stack, and accelerated hardware technology innovations primed to take full advantage of the newfound processing power.

To prepare the nation’s first exascale-ready applications, two dozen teams of scientists and engineers have since worked around the clock, stringing together computer code and writing new algorithms to tackle everything from energy science and production to investigating cures for cancer to predicting natural disasters.

“We’re on track, and we’ll be ready to go on Day One,” said Doug Kothe, the ECP’s director. “I think these applications we’re developing are going to be the scientific and engineering tools of the trade for decades to come.”

A sampling of projects from across the scientific spectrum being prepared for exascale processing. Source: ECP

An exaflop amounts to 1 quintillion—that’s 10^18, or a billion billion—calculations per second, five times faster than the highest speeds available on today’s top-performing supercomputer, Summit at ORNL, which clocks in at 200 petaflops—200 quadrillion calculations per second, or 200 × 10^15.

For perspective, the average human brain consists of about 100 billion neurons. Multiply those brain cells by 15 million, and they’ll approach the problem-solving muscle of one such exascale machine.
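
For readers who want to check those figures, here is a short back-of-the-envelope sketch in Python; the one-neuron-per-calculation analogy is the article’s illustration rather than a rigorous comparison, and the numbers are the ones quoted above.

```python
# Back-of-the-envelope check on the speeds quoted above. The one-neuron-per-
# calculation analogy is the article's illustration, not a rigorous comparison.

EXAFLOP = 1e18                  # 1 quintillion (10^18) calculations per second
SUMMIT_FLOPS = 200e15           # Summit: 200 petaflops
MACHINE_FLOPS = 1.5 * EXAFLOP   # Frontier/Aurora: more than 1.5 exaflops apiece
NEURONS_PER_BRAIN = 100e9       # roughly 100 billion neurons in a human brain

print(f"One exaflop vs. Summit: {EXAFLOP / SUMMIT_FLOPS:.0f}x")                    # -> 5x
print(f"Brains to match 1.5 exaflops: {MACHINE_FLOPS / NEURONS_PER_BRAIN:,.0f}")   # -> 15,000,000
```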

“Fire up those circuits, and the eureka moments can’t help but follow,” said Kothe.

“It’s a real game-changer,” he said. “I think it’s going to be a translational moment once we pick up that extra computing speed and capacity that gets us over the remaining obstacles. You can’t plan for scientific breakthroughs, but if you have that tremendous technology at your fingertips, they will happen.”

Those breakthroughs could include new ways to appease the world’s ravenous hunger for energy—to build cleaner-burning combustion engines, harness wind power on an unprecedented scale, even shrink a nuclear reactor to the size of a desktop. Insights gained could help slash pollution and double or triple the efficiency of existing fossil-fuel resources to ease the transition to a greener energy economy.

“It’s about leap-frogging,” said Tom Evans, a distinguished researcher at ORNL. “In some of these fields, the leap could be very large. We’re not machining screws in any of these projects. These are ambitious questions. Our need for energy is only going to grow, so let’s go where the existing science can’t take us.”

Exascale computing’s promise rests on the ability to synthesize massive amounts of data into detailed simulations so complex that previous generations of computers couldn’t handle the calculations. The faster the computer, the more possibilities and probabilities can be plugged into the model and tested against what’s already known—how a satellite might react under various conditions in space over time, how cancer cells might respond to new treatments, how a 3D-printed design might hold up under strain.

The process helps researchers target their experiments and fine-tune designs while saving the time and expense of real-world testing. Scientists at ORNL, for example, recently used simulations on Summit to trim a list of more than 8,000 potential drug compounds that might fight the coronavirus down to the 77 likeliest candidates.

“I wouldn’t call it a crystal ball, but it’s almost like a time machine,” said Steve Hamilton, an ORNL scientist working on an application to design smaller, modular nuclear reactors. “Think of it as a virtual experiment. We’re learning in minutes, hours, or days what might otherwise take years to discover.”

Hamilton hopes to use that virtual laboratory to perfect designs for the next generation of nuclear fission reactors, big enough to power a small community but small enough to fit inside the average living room with such built-in safety features as auto-shutoff or a removable core. The designs could be modular, built out of 3D-printed parts and assembled onsite.

“If we can build a virtual reactor and run simulations of how it would behave if it were built, we don’t have to do as many physical experiments,” Hamilton said. “But because these are new designs, we don’t have as much experimental data to fall back on, and we want the modeling to be as accurate as possible.”

The details of that modeling extend to mapping the behavior of radioactive isotopes constantly colliding and to tracing the steady flow of coolant through the reactor core, for simulations that add up to half a million or more lines of computer code.

“If something is meters across, we want to simulate it down to the millimeter,” Hamilton said. “What we can do right now on Summit—the fastest computer in the world—is along the lines of modeling a single state of the reactor, basically a snapshot of a moment in time. With exascale, we hope to simulate an entire reactor cycle—about a couple years of use—in the space of about 24 hours of wall-clock time. We’re talking about the difference between a snapshot and a timeline.”
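
Hamilton’s description boils down to following enormous numbers of particles, one interaction at a time. The toy sketch below gives the flavor of that bookkeeping with a one-dimensional random walk of neutrons through a slab; the geometry and material values are invented for illustration and bear no relation to the actual ECP reactor applications.

```python
# Toy illustration of particle-by-particle bookkeeping: a minimal Monte Carlo
# random walk of neutrons through a 1-D slab. Teaching sketch only -- the
# material data and geometry are made up, not from any reactor code.

import random

SLAB_THICKNESS_CM = 10.0   # hypothetical slab width
SIGMA_TOTAL = 0.5          # interaction probability per cm (made up)
ABSORPTION_FRACTION = 0.3  # fraction of collisions that absorb the neutron (made up)
N_PARTICLES = 100_000

def track_one_neutron(rng):
    """Follow one neutron until it is absorbed or leaks out of the slab."""
    x = 0.0
    direction = 1.0
    while True:
        # Sample the distance to the next collision from an exponential distribution.
        x += direction * rng.expovariate(SIGMA_TOTAL)
        if x < 0.0 or x > SLAB_THICKNESS_CM:
            return "leaked"
        if rng.random() < ABSORPTION_FRACTION:
            return "absorbed"
        # Crude 1-D scatter: flip direction half the time.
        direction = rng.choice((-1.0, 1.0))

rng = random.Random(42)
outcomes = [track_one_neutron(rng) for _ in range(N_PARTICLES)]
leaked = outcomes.count("leaked") / N_PARTICLES
print(f"Estimated leakage fraction: {leaked:.3f}")
```

A production reactor simulation tracks billions of such histories in full 3D geometry, with real nuclear data and coupled coolant flow, which is why the jump from a snapshot to a full reactor cycle demands exascale speeds.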

The work won’t stop when the model is finished.

“This is a demo,” Hamilton said. “We’re just setting the table. It’s going to be up to the private players to take these tools and lessons and then apply them. We won’t be building the reactors, but we want to build tools that will help us assess how these designs work in the real world.”

Fission reactors typically take decades to permit and build because of the massive designs employed by previous generations to power cities and chunks of states. Going smaller could shrink those costs and speed the time from blueprints to reality.

“The current reactors use these giant stainless-steel components made at only one or two facilities in the world, things that typically have to be imported,” Hamilton said. “By going with a smaller design, we’re hoping to decrease manufacturing costs by making something that can be manufactured in a variety of places. The reactor could be maybe the size of an office or living room, minus the containment structure and all the other components.”

Success could pave the way for even smaller reactors, micro-powerplants with nuclear cores that could power a remote military base or a mobile disaster response. Modular cores could be added and removed like batteries. An earthquake, tidal wave or other natural disaster threatens the reactor as in Fukushima, Japan, in 2011? Pop out the core and haul it to safety.

Some companies are already exploring those possibilities. NuScale Power plans to bring a modular reactor capable of generating up to 720 megawatts online in Utah as early as 2026. Idaho National Laboratory plans to provide low-enriched uranium to fuel a 1.5-megawatt reactor being built by the company Oklo that could begin operation by 2024.

“From the perspective of these companies, they need to manage to their own timelines and not be completely dependent on a project that DOE’s scientists are managing,” Hamilton said. “They’re moving ahead. If we’re able to work with them, we can use the exascale modeling to evaluate the improvements and improve the economic viability of these reactors. Nuclear fission provides about 20 percent of our power nationwide right now. If these reactors become more viable, maybe we can increase that share.”

At the other end of the nuclear spectrum, researchers hold out similar hopes of modeling a reactor that could generate the power of a star from a few drops of seawater.

After a half-century of study, trial and error has yet to yield a commercially successful controlled thermonuclear fusion reaction. Scientists joke that the big breakthrough is always 20 years away.

“The power and speed of exascale could make the difference through 3D modeling and evaluation aided by artificial intelligence,” said Amitava Bhattacharjee, a professor of astrophysical sciences at the Princeton Plasma Physics Laboratory.

“I can think of no higher aspiration,” he said. “The fuel for fusion is virtually limitless. The energy released would be clean and sustainable and much greater than the amount needed to get the nuclear reactions going. The challenge is that we’re trying to duplicate the nuclear reaction of the sun under controlled laboratory conditions and within a confined device. These are complex and expensive experiments. It is important to develop high-fidelity and predictive computer simulations that can optimize the design and performance of such experiments—and even choose between them through careful validation studies.”

The simulated approach has worked for other industries. Aircraft manufacturers once relied on hands-on testing to try out new designs before adopting virtual models.

“We previously used wind tunnels all over the country to design and test airframes,” Evans said. “Now we use them much more sparingly and efficiently. We’ll still need ground experiments, but we’ll need far fewer of them to validate the designs because the simulations have gotten us 80 percent of the way there.”

The bigger the idea, the bigger the simulation. Humans learned centuries ago how to harness the energy of a passing breeze to cross the ocean or power a grist mill.

“But the idea of the modern wind farm came about only recently, and it is an ideal challenge for exascale,” said Mike Sprague, a senior scientist at the National Renewable Energy Laboratory.

“The industry knows how to build a single turbine,” Sprague said. “That’s based on experience and hands-on knowledge. What we don’t have yet are computer models that can predict accurately what wind turbines are going to do when you put them together in a wind-farm setting where these blades are rotating, turning, yawing, and creating a wake that affects turbines downwind. We don’t really understand the complex dynamics that are going on and how they interplay.”

Sprague’s project means breaking every inch of each turbine, every potential movement of each blade, and every gust of wind at each potential speed into bricks of equations and building a virtual wind farm that could be applied to all circumstances. Offshore wind farms of floating turbines at sea add even more variables to the mix.

The most recent simulation of an operating turbine by Sprague and his team required solving 6 billion equations. It lasted about 17 seconds.
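
To see why the counts climb so quickly, consider a simple structured-grid estimate. The sketch below uses made-up domain dimensions, grid spacing, and unknowns per cell purely for illustration; it is not how the team’s solver actually discretizes the problem, but it lands in the same ballpark.

```python
# Why the equation counts balloon: count the unknowns on a uniform grid over a
# farm-sized domain. Domain size, spacing, and unknowns per cell are illustrative
# assumptions, not figures from Sprague's team.

domain_m = (5_000.0, 5_000.0, 1_000.0)   # hypothetical 5 km x 5 km x 1 km slice of atmosphere
cell_m = 3.0                              # hypothetical 3 m grid spacing
unknowns_per_cell = 5                     # e.g., three velocity components, pressure, one turbulence variable

cells = 1
for length in domain_m:
    cells *= int(length / cell_m)

equations = cells * unknowns_per_cell
print(f"Grid cells: {cells:.2e}, unknowns (equations): {equations:.2e}")
# With these made-up numbers: ~9e8 cells and ~5e9 equations, the same order of
# magnitude as the 6 billion quoted above.
```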

He believes a full exascale simulation could reveal how to position turbines to maximize energy production, cut costs, and make the best use of available terrain.

“We want to be able to build up a grid in a virtual wind plant and basically watch a wind front move through at 10 meters per second, for example, and see how the turbines react,” Sprague said. “We need to understand exactly what’s going on, and I think we’re definitely going to get there with exascale. Then we can walk into the ground experiment to validate what we think we know.”

Wind generated about 7.3 percent of the electricity in the United States last year, according to the US Energy Information Administration. Sprague believes exascale could lead to innovations that could push wind to a greater share of that market.

“It’s not going to be the lightbulb—not on its own,” Sprague said. “But it’s a gateway to reducing the cost of energy and making wind competitive with fossil fuels. The private players can take this foundation and build on it to try to make that vision a reality.”

Even if breakthroughs in nuclear and wind power materialize, dependence on fossil fuels won’t disappear overnight. Other exascale projects focus on finding cleaner approaches to the internal combustion engines that have powered the world for the past century.

“Exascale simulations could enable improvements to fossil-fuel engines that would bridge the gap and reduce pollution during the transition,” said Jacqueline Chen, a senior scientist at Sandia National Laboratories. She hopes to develop science-based combustion models that can be used to design high-efficiency, low-emission engines based on the unprecedented level of detail provided by exascale.

“It’s a stopgap,” Chen said. “Combustion’s going to be around for quite some time still, especially for aviation. These models could be used by the fuel and trucking industries to develop clean-burning, fuel-efficient combustion engines that would be competitive for the next 20–30 years while we’re still trying to electrify the powertrain. Even a couple points of increased fuel efficiency go a long way, and efficiencies of 50 percent or higher equate to a huge amount of savings.”
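
As a rough illustration of why “a couple points” matter, the sketch below compares yearly fuel use at two engine efficiencies for a fixed amount of useful work. All of the numbers are illustrative assumptions, not figures from Chen or the article.

```python
# Rough arithmetic behind "a couple points of efficiency go a long way":
# fuel needed when efficiency rises from 45% to 47% for a fixed work demand.
# All figures here are illustrative assumptions.

work_per_year_mj = 500_000.0   # hypothetical useful work a heavy truck delivers per year (MJ)
energy_per_liter_mj = 36.0     # approximate energy content of a liter of diesel

def liters_needed(efficiency):
    """Liters of fuel required to deliver the year's work at a given efficiency."""
    return work_per_year_mj / (efficiency * energy_per_liter_mj)

before, after = 0.45, 0.47
saved = liters_needed(before) - liters_needed(after)
print(f"Fuel per year at 45%: {liters_needed(before):,.0f} L")
print(f"Fuel per year at 47%: {liters_needed(after):,.0f} L")
print(f"Saved per truck per year: {saved:,.0f} L ({saved / liters_needed(before):.1%})")
```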

But some of the dynamics of the fossil-fuel combustion process, which generated more than 60 percent of the electricity in the United States last year, remain as slippery as those of the atomic fuel cycle. Modeling an engine, like modeling a nuclear reactor, requires mapping legions of moving, reacting particles.

“These can be sorbent or oxygen carrier particles,” said Madhava Syamlal, a senior fellow for computational engineering at the National Energy Technology Laboratory. “As they’re moving around inside the reactor, there are chemical reactions, heat and mass transfer, happening the whole time. We want to capture all those processes and model them, but there are some very complex reactor geometries involved. We need to track where those particles are and get a complete picture at the scale of each of the individual particles.”

Syamlal’s work over the past 30 years has focused on modeling gas-solid reactors, also known as fluidized bed reactors or chemical looping reactors, which avoid direct contact between fossil fuels and air in an effort to capture carbon from emissions and avoid pumping hydrocarbon pollutants into the atmosphere. Current technology can’t accommodate the number of calculations needed to model a pilot-scale chemical-looping reactor.

“If we’re able to do this simulation, it will be incredible,” Syamlal said. “We’ll be able to move technology development ahead at least 5 years, and we can apply the capability to a whole variety of industrial processes. Today on a state-of-the-art computer, we can model about 5 million particles. We’ve been able to run simulations on Summit at about 40 million particles. Our goal is to simulate a reactor that contains 5 billion particles by the end of this project.

“Existing codes would take 2 or more years to run such a simulation. We hope to develop a code that can use an exascale machine to increase the speed and resolution by a thousand-fold and get the results in a couple of days.”
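
The numbers in that quote imply a straightforward scale-up, sketched below. The particle counts and the two-year baseline are the ones quoted; the rest is simple arithmetic rather than a performance projection.

```python
# Sketch of the scale-up targets Syamlal describes (particle counts and runtime
# baseline quoted from the article; everything else is simple arithmetic).

particles_today = 5e6     # state-of-the-art conventional computer
particles_summit = 40e6   # demonstrated on Summit
particles_goal = 5e9      # exascale target for a pilot-scale reactor

speedup_goal = 1000                # "thousand-fold" increase in speed and resolution
runtime_existing_days = 2 * 365    # "2 or more years" with existing codes

print(f"Particle scale-up vs. today: {particles_goal / particles_today:,.0f}x")
print(f"Particle scale-up vs. Summit runs: {particles_goal / particles_summit:,.0f}x")
# Roughly 0.7 days at an idealized thousand-fold speedup; with real-world
# overheads that lands near the "couple of days" goal.
print(f"Target runtime: ~{runtime_existing_days / speedup_goal:.1f} days")
```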

The possibilities don’t stop there. Some researchers hope to use exascale simulations to explore foundational principles of science, from designing enhanced particle accelerators to reenacting the Big Bang.

“We’ve never observed a supernova directly,” Evans said. “We can simulate that with exascale and see if it correlates or matches with what we’ve observed from Earth. Some questions will still be too complex.”

And not everyone expects exascale to provide all the answers.

“Designing a fusion reactor is so complicated, with so many layers of science and engineering, a whole-device model could easily go beyond resources offered at the exascale level,” Bhattacharjee said. “Our hunger for more powerful computers is not likely to stop. But exascale will get us a long way there. Things are hard to predict. We need to understand all the pieces of the puzzle and how they come together, because there will be surprises in the whole—which will be more than the sum of the parts. Be prepared for surprises.”

Some of those surprises could be long in coming. Others might come quicker.

Exascale’s computing speed could deliver immediate benefits in such sectors as the nation’s utility grid. Simulations could predict scenarios for massive power failures like those seen during the California wildfires, balance demand, pinpoint weak spots in the delivery system and devise workarounds to keep the electric currents flowing to consumers. Power companies could build parallel digital models to run alongside the grid in real time and help promptly identify the cause when the wires stop humming and the lights blink out.

“You could think of it as a digital twin,” said Kothe. “We want to be able to ask what-if scenarios to prepare for emergencies and manage the delivery. Solar and wind power are going to be more intermittent. We need to plan for ebbs and flows of those energy sources on the grid and be able to counter with other sources. What if we lose 10–20 sources at once? This is where the efficacy of exascale simulation comes in.”
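
A crude flavor of the “what-if” questions Kothe describes can be captured in a few lines of Python. The sketch below drops generation sources at random from an invented fleet and checks whether the remaining capacity still covers demand; real grid models solve power-flow equations across the whole network, so this is only a cartoon of the idea.

```python
# Toy "what-if" contingency check in the spirit of the digital-twin idea above.
# The fleet, capacities, and demand are invented; real grid models solve
# power-flow equations, not simple capacity sums.

import random

rng = random.Random(0)

# Hypothetical fleet of generation sources: (name, capacity in MW).
sources = [(f"wind_{i}", rng.uniform(50, 300)) for i in range(40)] + \
          [(f"gas_{i}", rng.uniform(200, 800)) for i in range(10)]
demand_mw = 8_000.0

def survives_outage(n_lost, trials=10_000):
    """Fraction of random outages of n_lost sources the remaining fleet can still cover."""
    ok = 0
    for _ in range(trials):
        lost = set(rng.sample(range(len(sources)), n_lost))
        remaining = sum(cap for i, (_, cap) in enumerate(sources) if i not in lost)
        ok += remaining >= demand_mw
    return ok / trials

for n in (10, 15, 20):
    print(f"Lose {n} sources at once: demand covered in {survives_outage(n):.1%} of scenarios")
```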

The average power plant won’t have the luxury of an exascale computer onsite. But the apps built to run on machines like Frontier at ORNL and Aurora at ANL will be designed to scale down to the industrial and consumer level, eliminating the shortcuts relied on by simpler models and drawing on exascale findings to close the circuit and produce reliable conclusions.

“When you need to talk about operational decisions, you’re talking about a matter of minutes or seconds,” Kothe said. “This will give answers you can bank on when you need them.”
