Scientists Use Supercomputers to Study Reliable Fusion Reactor Design, Operation

February 22, 2021

Nuclear fusion, the same kind of energy that fuels stars, could one day power our world with abundant, safe, and carbon-free energy. Aided by the Summit supercomputer at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) and Theta at DOE’s Argonne National Laboratory (ANL), a team of scientists is striving to make fusion energy a reality.

Fusion reactions involve two or more atomic nuclei combining to form different nuclei and particles, converting some of the atomic mass into energy in the process. Scientists are working toward building a nuclear fusion reactor that could efficiently produce heat that would then be used to generate electricity. However, confining a plasma that burns at temperatures hotter than the core of the sun is very difficult, and the engineers who design these massive machines can’t afford mistakes.

To ensure the success of future fusion devices—such as ITER, which is being built in southern France—scientists can take data from experiments performed on smaller fusion devices and combine them with massive computer simulations to understand the requirements of new machines. ITER will be the world’s largest tokamak, a device that uses magnetic fields to confine plasma in the shape of a donut, and is designed to produce 500 megawatts (MW) of fusion power from only 50 MW of input heating power.

One of the most important components of a fusion reactor is the tokamak’s divertor, a material structure engineered to remove exhaust heat from the reactor’s vacuum vessel. The divertor’s heat-load width is the narrow strip along its surface that must sustain repeated contact with hot exhaust particles.

A team led by C.S. Chang at Princeton Plasma Physics Laboratory (PPPL) has used the Oak Ridge Leadership Computing Facility’s (OLCF’s) 200-petaflop Summit and the Argonne Leadership Computing Facility’s (ALCF’s) 11.7-petaflop Theta supercomputers, together with a supervised machine learning program called Eureqa, to find a new formula for extrapolating from existing tokamak data to the future ITER, based on simulations from their XGC code for modeling tokamak plasmas. The team then completed new simulations that confirm their previous ones, which showed that at full power, ITER’s divertor heat-load width would be more than six times wider than the current trend of tokamaks predicts. The results were published in Physics of Plasmas.

Using Eureqa, the team found hidden parameters that provided a new formula that not only fits the drastic increase predicted for ITER’s heat-load width at full power but also reproduces the experimental and simulation results for existing tokamaks. Among the devices newly included in the study were the Alcator C-Mod, a tokamak at the Massachusetts Institute of Technology (MIT) that holds the record for plasma pressure in a magnetically confined fusion device, and the world’s largest existing tokamak, the JET (Joint European Torus) in the United Kingdom.

“If this formula is validated experimentally, this will be huge for the fusion community and for ensuring that ITER’s divertor can accommodate the heat exhaust from the plasma without too much complication,” Chang said.

ITER deviates from the trend

The Chang team’s work studying ITER’s divertor plates began in 2017, when the group reproduced experimental divertor heat-load width results from three US fusion devices on the OLCF’s former Titan supercomputer: General Atomics’ DIII-D toroidal magnetic fusion device, which has an aspect ratio similar to ITER’s; MIT’s Alcator C-Mod; and the National Spherical Torus Experiment, a compact low-aspect-ratio spherical tokamak at PPPL. In these tokamaks, the steady, “blobby” turbulence at the edge of the plasma did not play a significant role in widening the divertor heat-load width.

At left, ions escaping the confined plasma and following the magnetic field lines to the material divertor plates in the gyrokinetic simulation code XGC1. At right, an XGC1 simulation of edge turbulence in a DIII-D plasma, showing the turbulence changing the eddy structure into isolated blobs (shown in red) in the vicinity of the magnetic separatrix (black line). Image Credit: Kwan-Liu Ma’s research group, University of California Davis; David Pugmire and Adam Malin, ORNL

The researchers then set out to prove that their XGC code, which simulates particle movements and electromagnetic fields in plasma, could predict the heat-load width on the full-power ITER’s divertor surface. They realized that the presence of dynamic edge turbulence—different from the steady, blobby turbulence seen at the edge of current tokamaks—could significantly widen the distribution of the exhaust heat. If ITER were to follow the current trend of heat-load widths in present-day fusion devices, its heat-load width would be less than a few centimeters—a dangerously narrow width, even for divertor plates made of tungsten, which boasts the highest melting point of all pure metals.
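The “current trend” referred to here can be illustrated with the widely cited Eich multi-machine regression, in which the heat-load width shrinks roughly as the inverse of the poloidal magnetic field strength. A minimal sketch follows; the coefficient 0.63 and exponent −1.19 come from that published regression, not from this article, and the example field values are illustrative:

```python
def eich_heat_load_width_mm(b_pol_tesla):
    """Empirical Eich-style scaling: lambda_q [mm] ~ 0.63 * B_pol^-1.19.

    Stronger poloidal fields give narrower divertor heat-load widths.
    """
    return 0.63 * b_pol_tesla ** -1.19

# Present-day tokamaks (B_pol roughly 0.2-0.8 T) land at a few millimeters;
# extrapolating to an ITER-like poloidal field of order 1 T gives a width
# of about a millimeter, illustrating the "dangerously narrow" concern.
for b_pol in (0.3, 0.8, 1.2):
    print(f"B_pol = {b_pol} T -> lambda_q ~ {eich_heat_load_width_mm(b_pol):.2f} mm")
```

The team’s XGC simulations suggest the full-power ITER breaks this trend, landing far above what the regression alone would predict.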

The team’s simulations on Titan in 2017 revealed an unusual jump in the trend—the full-power ITER showed a heat-load width more than six times wider than what the existing tokamaks implied. But the extraordinary finding required more investigation. How could the full-power ITER’s heat-load width deviate so significantly from existing tokamaks?

The interior of MIT’s Alcator C-Mod tokamak. Image Credit: Robert Mumgaard, MIT

Scientists operating the C-Mod tokamak at MIT cranked the device’s poloidal magnetic field, which runs top to bottom the short way around the donut-shaped plasma, up to its value in ITER. The other field used in tokamak reactors, the toroidal magnetic field, runs the long way around the circumference of the donut. Combined, these two magnetic fields confine the plasma, as if winding a tight string around a donut, creating looping motions of ions around the combined magnetic field lines, called gyromotions, that researchers believe might smooth out turbulence in the plasma.

Scientists at MIT then provided Chang with experimental data from the Alcator C-Mod against which his team could compare results from simulations by using XGC. With an allocation of time under the INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, the team performed extreme-scale simulations on Summit by employing the new Alcator C-Mod data using a finer grid and including a greater number of particles.

“They gave us their data, and our code still agreed with the experiment, showing a much narrower divertor heat-load width than the full-power ITER,” Chang said. “What that meant was that either our code produced a wrong result in the earlier full-power ITER simulation on Titan or there was a hidden parameter that we needed to account for in the prediction formula.”

Machine learning reveals a new formula

Chang suspected that the hidden parameter might be the radius of the gyromotions, called the gyroradius, divided by the size of the machine. Chang then fed the new results to a machine learning program called Eureqa, presently owned by DataRobot, asking it to find the hidden parameter and a new formula for the ITER prediction. The program spit out several new formulas, verifying the gyroradius divided by the machine size as the hidden parameter. The simplest of these formulas agreed best with the physics insights.
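Eureqa performs symbolic regression, searching over candidate formulas rather than fitting fixed coefficients. The effect of uncovering a hidden parameter can be sketched with a much simpler stand-in: a log-linear least-squares fit that only collapses the residuals once a candidate dimensionless variable (here a stand-in for gyroradius over machine size) is included. Every name, data value, and exponent below is invented for illustration and is not the team’s actual formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "machines": poloidal field B_pol and a hidden parameter rho_star
# (stand-in for gyroradius / machine size). The ground-truth law is invented
# purely to demonstrate the fitting procedure.
b_pol = rng.uniform(0.2, 1.5, size=200)
rho_star = rng.uniform(1e-3, 1e-2, size=200)
lam_q = 0.6 * b_pol ** -1.2 * rho_star ** 0.5  # invented ground truth

def power_law_fit(y, *features):
    """Least-squares fit of log(y) = c0 + sum_i a_i * log(feature_i)."""
    X = np.column_stack([np.ones_like(y)] + [np.log(f) for f in features])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    rmse = float(np.sqrt(np.mean((np.log(y) - X @ coef) ** 2)))
    return coef, rmse

# Fit without and then with the hidden parameter: the residual collapses
# only when rho_star enters the formula.
_, rmse_without = power_law_fit(lam_q, b_pol)
coef_with, rmse_with = power_law_fit(lam_q, b_pol, rho_star)
print(f"RMSE without rho_star: {rmse_without:.3f}")
print(f"RMSE with rho_star:    {rmse_with:.3e}")
print(f"recovered exponents: B_pol^{coef_with[1]:.2f}, rho_star^{coef_with[2]:.2f}")
```

Symbolic regression goes further than this sketch by also searching over the functional form itself, which is how Eureqa could propose several candidate formulas rather than one.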

Chang presented the findings at various international conferences last year. He was then given three more simulation cases from ITER headquarters to test the new formula. The simplest formula successfully passed the test. PPPL research staff physicists Seung-Hoe Ku and Robert Hager employed the Summit and Theta supercomputers for these three critically important ITER test simulations. Summit is located at the OLCF, a DOE Office of Science User Facility at ORNL; Theta is located at the ALCF, another DOE Office of Science User Facility, at ANL.

In an exciting finding, the new formula predicted the same results as the present experimental data—a huge jump in the full-power ITER’s heat-load width, with the medium-power ITER landing in between.

“Verifying whether ITER operation is going to be difficult due to an excessively narrow divertor heat-load width was something the entire fusion community has been concerned about, and we now have hope that ITER might be much easier to operate,” Chang said. “If this formula is correct, design engineers would be able to use it in their design for more economical fusion reactors.”

A big data problem

Each of the team’s ITER simulations consisted of 2 trillion particles and more than 1,000 time steps, requiring most of the Summit machine and one full day or longer to complete. The data generated by one simulation, Chang said, could total a whopping 200 petabytes, eating up nearly all of Summit’s file system storage.

“Summit’s file system only holds 250 petabytes’ worth of data for all the users,” Chang said. “There is no way to get all this data out to the file system, and we usually have to write out some parts of the physics data every 10 or more time steps.”
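The arithmetic behind that pressure is straightforward. A back-of-the-envelope sketch follows; the assumption of roughly ten double-precision phase-space values per particle is illustrative, not a figure taken from the article or from the XGC code:

```python
# Back-of-the-envelope I/O estimate for one XGC-scale ITER simulation.
PARTICLES = 2_000_000_000_000      # 2 trillion particles (from the article)
TIME_STEPS = 1_000                 # more than 1,000 time steps (from the article)
BYTES_PER_PARTICLE = 10 * 8        # assumed: ~10 doubles of phase-space data

snapshot_bytes = PARTICLES * BYTES_PER_PARTICLE
full_run_bytes = snapshot_bytes * TIME_STEPS

PB = 1e15
print(f"one snapshot:     {snapshot_bytes / PB:.2f} PB")
print(f"every time step:  {full_run_bytes / PB:.0f} PB")       # rivals Summit's 250 PB file system
print(f"every 10th step:  {full_run_bytes / 10 / PB:.0f} PB")  # why the team decimates output
```

Even under these modest assumptions, writing every time step would approach the capacity of Summit’s entire shared file system, which is why the team saves only selected physics data every ten or more steps.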

The IBM AC922 Summit supercomputer’s storage system. Image Credit: Genevieve Martin, ORNL

This has proven challenging for the team, who often found new science in the data that was not saved in the first simulation.

“I would often tell Dr. Ku, ‘I wish to see this data because it looks like we could find something interesting there,’ only to discover that he could not save it,” Chang said. “We need reliable, large-compression-ratio data reduction technologies, so that’s something we are working on and are hopeful to be able to take advantage of in the future.”

Chang added that staff members at both the OLCF and ALCF were critical to the team’s ability to run codes on the centers’ massive high-performance computing systems.

“Help rendered by the OLCF and ALCF computer center staff—especially from the liaisons—has been essential in enabling these extreme-scale simulations,” Chang said.

The team is eagerly awaiting the arrival of two of DOE’s upcoming exascale supercomputers, the OLCF’s Frontier and the ALCF’s Aurora, machines capable of a billion billion (10^18) calculations per second. The team will next include more complex physics, such as electromagnetic turbulence, in a more refined grid with a greater number of particles to further verify the new formula’s fidelity and improve its accuracy. The team also plans to collaborate with experimentalists to design experiments that further validate the electromagnetic turbulence results to be obtained on Summit or Frontier.

This research was supported by the DOE Office of Science Scientific Discovery through Advanced Computing program.

Related Publication: C.S. Chang et al., “Constructing a New Predictive Scaling Formula for ITER’s Divertor Heat-Load Width Informed by a Simulation-Anchored Machine Learning,” Physics of Plasmas 28 (2021) 022501, doi:10.1063/5.0027637.

UT-Battelle LLC manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.


Source: RACHEL MCDOWELL, OLCF
