Exascale Computing to Help Accelerate Drive for Clean Fusion Energy

By Jon Bashor, Lawrence Berkeley National Laboratory Computing Sciences

October 2, 2017

Editor’s note: One of the U.S. Exascale Computing Project’s mandates is to explain how exascale computing power will enhance scientific discovery and society broadly. This article from ECP not only examines the need for exascale computing power to advance research on fusion reactor design but also highlights the potential for collaboration with industry partners who will require this kind of power.

For decades, scientists have struggled to create a clean, unlimited energy source here on Earth by recreating the conditions that drive our sun. Called a fusion reactor, the mechanism would use powerful magnetic fields to confine and compress gases four times as hot as our sun. Squeezed by those magnetic fields, the atoms would fuse and release more energy than was used to power the reactor. But to date, that has only worked in theory.

Achieving fusion energy production would benefit society by providing a power source that is non-polluting, renewable and fueled by abundant materials such as the hydrogen isotopes found in seawater and the boron isotopes found in minerals.

Early fusion research projects in the 1950s and ‘60s relied on building expensive magnetic devices, testing them and then building new ones and repeating the cycle. In the mid-1970s, fusion scientists began using powerful computers to simulate how the hot gases, called plasmas, would be heated, squeezed and fused to produce energy. It’s an extremely complex and difficult problem, one that some fusion researchers have likened to holding gelatin together with rubber bands.

Using supercomputers to model and simulate plasma behavior, scientists have made great strides toward building a working reactor. The next generation of supercomputers on the horizon, known as exascale systems, will bring the promise of fusion energy closer.

The best-known fusion reactor design is the tokamak, in which a donut-shaped chamber contains the hot gases. Because the reactors are so expensive, only small-scale ones have been built. ITER, an international effort to build the largest-ever tokamak, is under construction in the south of France. The project, conceived in 1985, is now scheduled to have its first plasma experiments in 2025 and begin fusion experiments in 2035. The estimated cost is 14 billion euros, with the European Union and six other nations footing the bill.

Historically, fusion research around the world has been funded by governments due to the high cost and long-range nature of the work.

But in the Orange County foothills of Southern California, a private company is also pursuing fusion energy, taking a far different path than that of ITER and other tokamaks. Tri Alpha Energy’s cylindrical reactor differs in its design philosophy, geometry, fuels and method of heating the plasma, and it is backed by a different funding model. Chief Science Officer Toshiki Tajima says their approach makes them mavericks in the fusion community.

But the one thing that ITER, similar projects and Tri Alpha Energy have all consistently relied on is high-performance computing to simulate conditions inside the reactor as they seek to overcome the challenges inherent in designing, building and operating a machine that replicates the processes of the sun here on Earth.

As each generation of supercomputers has come online, fusion scientists have been able to study plasma conditions in greater detail, helping them understand how the plasma will behave, how it may lose energy and disrupt the reactions, and what can be done to create and maintain fusion. With exascale supercomputers that are 50 times more powerful than today’s top systems looming on the horizon, Tri Alpha Energy sees great possibilities in accelerating the development of their reactor design. Tajima is one of 18 members of the industry advisory council for the U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP).

“We’re very excited by the promise of exascale computing – we are currently fund-raising for our next-generation machine, but we can build a simulated reactor using a very powerful computer, and for this we would certainly need exascale,” Tajima said. “This would help us accurately predict if our idea would work, and if it works as predicted, our investors would be encouraged to support construction of the real thing.”

The Tri Alpha Energy fusion model builds on the experience and expertise of Tajima and his longtime mentor, the late Norman Rostoker, a professor of physics at the University of California, Irvine (UCI). Tajima first met Rostoker as a graduate student, leaving Japan to study at Irvine in 1973. In addition to his work with TAE, Tajima holds the Norman Rostoker Chair in Applied Physics at UCI. In 1998, Rostoker co-founded TAE, which Tajima joined in 2011.

In it for the long run

It was also in the mid-1970s that the U.S. Atomic Energy Commission, the forerunner of DOE, created a computing center to support magnetic fusion energy research, first with a cast-off computer from classified defense programs and then with a series of ever-more capable supercomputers. From the outset, Tajima was an active user, and he still remembers that he was User No. 1100 at the Magnetic Fusion Energy Computer Center. The Control Data Corp. and Cray supercomputers were a big leap ahead of the IBM 360 he had been using.

“The behavior of plasma could not easily be predicted with computation back then and it was very hard to make any progress,” Tajima said. “I was one of the very early birds to foul up the machines. When the Cray-1 arrived, it was marvelous and I fell in love with it.”

At the time, the tokamak was seen as the hot design and most people in the field gravitated toward it, Tajima said, and he followed. But after learning about plasma-driven accelerators under Professor Rostoker, in 1976 he went to UCLA to work with Prof. John Dawson. “He and I shared a vision of new accelerators and we began using large-scale computation in 1975, an area in which I wanted to learn more from him,” Tajima said.

As a result, the two men wrote a paper entitled “Laser Electron Accelerator,” which appeared in Physical Review Letters in 1979. The seminal paper explained how firing an intense electromagnetic pulse (or beam of particles) into a plasma can create a wake in the plasma and that electrons, and perhaps ions, trapped in this wake can be accelerated to very high energies.

TAE’s philosophy, built on Rostoker’s ideas, is to combine accelerator and fusion plasma research. In a tokamak, the deuterium-tritium fuel needs to be heated and confined at an energy level of 10,000 eV (electron volts) for fusion to occur. The TAE reactor, however, needs to be about 30 times hotter, on the order of 3 billion degrees C. In a tokamak, the same magnetic fields that confine the plasma also heat it; in the TAE machine, the energy will be injected using a particle accelerator. “A 100,000 eV beam is nothing for an accelerator,” Tajima said, pointing to the 1 GeV BELLA device at DOE’s Lawrence Berkeley National Laboratory. “Using a beam-driven plasma is relatively easy but it may be counterintuitive that you can get higher energy with more stability — the more energetic the wake is, the more stable it becomes.”
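
For readers connecting the electron-volt figures above to everyday temperature units, a useful rule of thumb is that 1 eV corresponds to roughly 11,600 kelvin. The short sketch below simply applies that conversion to the numbers quoted in the article; it is an illustration, not TAE code.

```python
# Rough conversion of the plasma temperatures quoted above from electron volts to kelvin.
# Rule of thumb: 1 eV of thermal energy corresponds to roughly 11,605 K.
EV_TO_KELVIN = 11_605

def ev_to_kelvin(temperature_ev: float) -> float:
    """Convert a plasma temperature from electron volts to kelvin."""
    return temperature_ev * EV_TO_KELVIN

tokamak_ev = 10_000          # deuterium-tritium fusion scale cited in the article
tae_ev = 30 * tokamak_ev     # "about 30 times hotter" for the TAE concept

print(f"Tokamak: {ev_to_kelvin(tokamak_ev):.2e} K")  # ~1.2e8 K
print(f"TAE:     {ev_to_kelvin(tae_ev):.2e} K")      # ~3.5e9 K, i.e. roughly 3 billion degrees
```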

But this approach is not without risk. With the tokamak, the magnetic fields protect the plasma, much like the exoskeleton of a beetle protects the insect’s innards, Tajima said. But the accelerator beam creates a kind of spine, which holds the plasma together with its relatively weak magnetic fields, a condition known as a field-reversed configuration. One of Rostoker’s concerns was that the plasma would be too vulnerable to other forces in the early stages of its formation. However, in the 40-centimeter-diameter cylindrical reactor, the beam forms a ring like a bicycle tire, and as with a bicycle, stability increases the faster the wheel spins.

“The stronger the beam is, the more stable the plasma becomes,” Tajima said. “This was the riskiest problem for us to solve, but in early 2000 we showed the plasma could survive and this reassured our investors. We call this approach of tackling the hardest problem first ‘fail fast’.”

Another advantage of TAE’s approach is that the main fuel, Boron-11, does not produce neutrons as a by-product; instead it produces three alpha particles, which is the basis of the company’s name. A tokamak, using hydrogen-isotope fuels, generates neutrons, which can penetrate and damage materials, including the superconducting magnets that confine the tokamak plasma. To prevent this, the tokamak reactor requires one-meter-thick shielding. Without the need to contain neutrons, the TAE reactor does not need heavy shielding. This also helps reduce construction costs.
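
For reference, the reaction behind the company’s name and the comparison with the deuterium-tritium reaction used in tokamaks can be written out explicitly (these are standard textbook values, not TAE figures): a proton fusing with boron-11 yields three alpha particles and no neutron, whereas deuterium-tritium fusion releases most of its energy as a fast neutron.

\[
\mathrm{p} + {}^{11}\mathrm{B} \rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV}
\qquad\text{versus}\qquad
\mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He} + \mathrm{n} + 17.6\ \mathrm{MeV}
\]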

Computation Critical to Future Progress

With his 40 years of experience using HPC to advance fusion energy, Tajima offers a long-term perspective, from the field’s early decades to the exascale systems expected in the early 2020s. As a principal investigator on the Numerical Tokamak project in the early 1990s, he helped build much of the HPC ecosystem for fusion research.

In the early stage of modeling fusion behavior, the codes focus on the global plasma at very fast time scales. These magnetohydrodynamics (MHD) codes are not as computationally “expensive,” meaning they do not require as many computing resources, and at TAE they were run on in-house clusters.

The next step is to model finer-scale plasma instabilities, known as kinetic instabilities, which requires more sophisticated codes that can simulate the plasma in greater detail over longer time scales, along with more capable systems to run them. Around 2008-09, TAE stabilized this stage of the problem using its own computing system and by working with university collaborators who had access to federally funded supercomputing centers, such as those supported by DOE. “Our computing became more demanding during this time,” Tajima said.

The third step, which TAE is now tackling, is to make a plasma that can “live” longer, which is known as the transport issue in the fusion community. “This is a very, very difficult problem and consumes large amounts of computing resources as it encompasses a different element of the plasma,” Tajima said, “and the plasma becomes much more complex.”

The problem involves three distinct functions:

  • The core of the field-reversed configuration, which is where the plasma is at its highest temperature
  • The “scrape-off layer,” the protective outer layer of ash around the core, which Tajima likens to an onion’s skin
  • The “ash cans,” or diverters, at each end of the reactor. They remove the ash, or impurities, from the scrape-off layer; left in place, the impurities can make the plasma muddy and cause it to behave improperly.

“The problem is that the three elements behave very, very differently in both the plasma physics as well as in other properties,” Tajima said. “For example, the diverters are facing the metallic walls so you have to understand the interaction of the cold plate metals and the out-rushing impurities. And those dynamics are totally different than the core which is very high temperature and very high energy and spinning around like a bicycle tire, and the scrape-off layer.”

These three regions are coupled to each other through very complex geometries, and to see whether the TAE approach is feasible, researchers need to simulate the entire reactor in order to understand and eventually control the reactions.
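
One way to picture that coupling is as a loop in which each region’s model hands fluxes of heat and impurities to its neighbors at every time step. The toy sketch below is purely illustrative, with invented coefficients and function names; it is not drawn from TAE’s codes.

```python
# Purely illustrative toy of three coupled reactor regions (core, scrape-off layer,
# divertors) exchanging heat and impurity ("ash") fluxes each step. The model and
# its coefficients are made up for exposition; they are not TAE's.

def step(core_heat, sol_ash, divertor_ash, dt=1e-3):
    leaked_heat = 0.02 * core_heat * dt      # heat leaking from the core into the scrape-off layer
    ash_created = 0.01 * core_heat * dt      # impurities generated at the core edge
    ash_pumped  = 0.50 * sol_ash * dt        # ash the divertors remove from the scrape-off layer
    backflow    = 0.05 * divertor_ash * dt   # small impurity trickle that escapes back

    core_heat    -= leaked_heat
    sol_ash      += ash_created - ash_pumped + backflow
    divertor_ash += ash_pumped - backflow
    return core_heat, sol_ash, divertor_ash

state = (1.0, 0.0, 0.0)                      # start with a hot, clean core
for _ in range(1000):
    state = step(*state)
print("core heat, scrape-off ash, divertor ash:", state)
```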

“We will run a three-layered simulation of our fusion reactor on the computer, with the huge particle code, the transport code and the neural net on the simulation – that’s our vision and we will certainly need an exascale machine to do this,” Tajima said. “This will allow us to predict if our concept works or not in advance of building the machine so that our investors’ funds are not wasted.”

The overall code will have three components. At the basic level will be a representative simulation of particles in each part of the plasma. The second layer will be the more abstract transport code, which tracks heat moving in and out of the plasma. But even on exascale systems, the transport code will not be able to run fast enough to keep up with real-time changes in the plasma. Instabilities which affect the heat transport in the plasma come and go in milliseconds.

“So, we need a third layer that will be an artificial neural net, which will be able to react in microseconds, which is a bit similar to a driverless auto, and will ‘learn’ how to control the bicycle tire-shaped plasma,” Tajima said. This application will run on top of the transport code; it will observe experimental data and react appropriately to keep the simulation running.
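
A highly simplified sketch of that layered arrangement appears below. The class names, the loop counts standing in for the millisecond and microsecond time scales, and the simple proportional control rule are all assumptions made for illustration; none of this is TAE’s design or code.

```python
import random

# Illustrative three-layer hierarchy: a stand-in particle layer supplies fast,
# fluctuating losses; a slow transport layer tracks heat in and out of the plasma;
# a fast controller (the neural net's placeholder) reacts many times per transport step.

class ParticleLayer:
    """Stand-in for the detailed particle code: emits small, rapidly varying heat losses."""
    def sample_loss(self):
        return random.uniform(0.0, 2e-5)

class TransportLayer:
    """Slow layer: updates the plasma's stored heat once per (millisecond-scale) step."""
    def __init__(self, heat=1.0):
        self.heat = heat
    def advance(self, heating_in, heat_lost):
        self.heat += heating_in - heat_lost

class FastController:
    """Placeholder for the neural net that reacts on the microsecond scale."""
    def __init__(self, target=1.0, gain=1e-3):
        self.target, self.gain = target, gain
    def act(self, observed_heat):
        return max(0.0, self.gain * (self.target - observed_heat))

particles, transport, controller = ParticleLayer(), TransportLayer(), FastController()

for _ in range(20):                      # slow, expensive transport steps
    heating_applied, losses = 0.0, 0.0
    for _ in range(1000):                # many fast control reactions within one step
        observed = transport.heat + heating_applied - losses
        heating_applied += controller.act(observed)
        losses += particles.sample_loss()
    transport.advance(heating_applied, losses)

print(f"plasma heat held near target: {transport.heat:.3f}")
```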

“Doing this will certainly require exascale computing,” Tajima said. “Without it we will take up to 30 years to finish – and our investors cannot wait that long. This project has been independent of government funding, so our investors’ funds provided an independent, totally different path toward fusion. This could amount to a means of national security to provide an alternative solution to a problem as large as fusion energy. Society will also benefit from a clean source of energy and our exascale-driven reactor march will be a very good thing for the nation and the world.”

Advanced Accelerators are Pivotal

Both particle accelerators and fusion energy are technologies important to the nation’s scientific leadership, with research funded over many decades by the Department of Energy and its predecessor agencies.

Not only are particle accelerators a vital part of the DOE-supported infrastructure of discovery science and university research, they also have private-sector applications and a broad range of benefits to industry, security, energy, the environment and medicine.

Since Toshiki Tajima and John Dawson published their paper “Laser Electron Accelerator” in 1979, the idea of building smaller accelerators, with lengths measured in meters instead of kilometers, has gained traction. In these new accelerators, particles “surf” in the plasma wake created by an injected pulse or particle beam, reaching very high energy levels in very short distances.
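
To see why the plasma wake allows such dramatic shrinkage, a standard back-of-the-envelope figure from the wakefield literature (a textbook estimate, not a number from this article’s sources) is the cold wave-breaking field,

\[
E_0\,[\mathrm{V/m}] \;\approx\; 96\,\sqrt{n_0\,[\mathrm{cm^{-3}}]},
\]

so a plasma density of \(10^{18}\ \mathrm{cm^{-3}}\) supports accelerating fields near 100 GV/m, roughly a thousand times what conventional radio-frequency cavities sustain, which is what compresses kilometers of accelerator into meters.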

According to Jean-Luc Vay, a researcher at DOE’s Lawrence Berkeley National Laboratory, taking full advantage of accelerators’ societal benefits will require game-changing improvements in the size and cost of accelerators. Plasma-based particle accelerators stand apart in their potential for these improvements, according to Vay, and turning this promising technology into a mainstream scientific tool depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales.

To help achieve this goal, Vay is leading a project called Exascale Modeling of Advanced Particle Accelerators as part of DOE’s Exascale Computing Project. This project supports the practical economic design of smaller, less-expensive plasma-based accelerators.

As Tri Alpha Energy pursues its goal of using a particle accelerator (though one unrelated to wakefield accelerators) to achieve fusion energy, the company is also planning to apply its experience and expertise in accelerator research to medical applications. Not only should this effort produce returns for the company’s investors, it should also help advance TAE’s understanding of accelerators and of how to use them to create a fusion reactor.
