Supercomputers Help Discover Gamma Ray Creation From Lasers

July 12, 2016

July 12 — Ever play with a magnifying lens as a kid? Imagine a lens as big as the Earth. Now focus sunlight down to a pencil tip. That still wouldn’t be good enough for what some Texas scientists have in mind. They want to make light even 500 times more intense. And they say it could open the door to the most powerful radiation in the universe: gamma rays.

Comic book readers might know about gamma rays. The Incredible Hulk was transformed from mild scientist into wild superhero by gamma rays from a nuclear explosion. Real gamma rays form in nature from the radioactive decay of atomic nuclei. Beyond such hazardous materials, you’d have to look in exotic places, such as near a black hole or, closer to home, in lightning in the upper atmosphere, to find natural forces capable of making gamma rays.

Movie of the laser simulation. Laser pulse (red and blue) enters plastic target (green) from left, generating intense magnetic fields (orange and blue). Gamma rays (red dots) exit target in a collimated beam. Only photons above 125 MeV are shown. Credit: Stark et al.

Scientists have found that gamma rays, like the Hulk, can do heroic things too — if they can be controlled. Hospitals now eradicate cancer tumors using a ‘gamma ray knife’ with surgical precision. The rays can also image brain activity. And gamma rays are used to quickly scan cargo containers for terrorist materials. But it’s nearly impossible to make gamma rays with non-radioactive materials. To do that today, one needs a colossal atom smasher like those at CERN or SLAC. No one has been able to make a gamma ray beam from lasers. But it can be done, say scientists at The University of Texas (UT) at Austin.

Supercomputers might have helped unlock a new way to make controlled beams of gamma rays from a laser that fits on a table-top, according to research physicist Alex Arefiev, who has a dual appointment at the Institute for Fusion Studies and at the Center for High Energy Density Science at UT Austin. Arefiev co-authored the study, “Enhanced multi-MeV photon emission by a laser-driven electron beam in a self-generated magnetic field,” published May 2016 in the journal Physical Review Letters.

“One of the key results that we found is that a laser pulse can be efficiently converted into a beam of very energetic photons,” Arefiev said. “They are more than one million times more energetic than the photons in the laser pulse. Until recently, there hasn’t been a method for producing a beam of such energetic photons. So the proposed regime can be groundbreaking for a number of applications and also for fundamental science studies.”
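As a rough sanity check on that factor of a million: petawatt-class glass lasers such as the Texas Petawatt typically operate at a near-infrared wavelength around 1 micron (an assumption here, not a figure from the article), which puts the laser photons at roughly 1 eV. Multiplying by a million lands squarely in the multi-MeV range named in the paper's title. A minimal sketch of that arithmetic:

```python
# Back-of-envelope check. The ~1.06 micron wavelength is an assumption typical of
# petawatt-class glass lasers, not a number taken from the article.
PLANCK_EV_S = 4.1357e-15      # Planck constant, eV*s
C = 2.9979e8                  # speed of light, m/s

laser_wavelength_m = 1.06e-6                     # assumed near-infrared wavelength
laser_photon_ev = PLANCK_EV_S * C / laser_wavelength_m
gamma_photon_ev = laser_photon_ev * 1.0e6        # "a million times more energetic"

print(f"Laser photon energy : {laser_photon_ev:.2f} eV")            # ~1.2 eV
print(f"Scaled-up photon    : {gamma_photon_ev / 1e6:.2f} MeV")      # lands in the multi-MeV range
```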

Arefiev and colleagues want to fire up the Texas Petawatt Laser, one of the most powerful lasers in the world. They’ll target a piece of solid plastic with a tiny chamber drilled through it that’s filled with plastic foam. Simulations run on the Lonestar and Stampede supercomputers of the Texas Advanced Computing Center (TACC) show that the laser passes through the target without boring a hole, like sunlight through a pane of glass. Along the way it energizes the electrons of the foam. This plasma of high-energy electrons then releases a controlled beam of ultra-energized photons: the gamma rays.

3D visualization of the photon burst as it propagates through a calculated volume. Only gamma ray propagation is shown here because of the large size of the data set. The photons appear at left and propagate to the right. Credit: Stark et al.

Study lead author David Stark said, “It’s exciting to be able to work in collaboration with people at the Texas Petawatt Laser,” which is also housed at UT Austin. “That was one of the benefits of doing this study, being able to combine plasma physics with the optical capabilities that are just in the basement of our building.” Stark was then a graduate student in the physics department at UT Austin; he has since completed his PhD and moved on to an appointment at Los Alamos National Laboratory.

The scientists found even more than just radiation, said study co-author Toma Toncian. “In a nutshell, we have discovered using numerical simulations a physical regime where we would generate the highest magnetic fields ever generated on Earth. A side benefit is that we would also generate one of the most intense gamma ray sources.” Toncian is the assistant director of the Center for High Energy Density Science at UT Austin.

The ultra-high magnetic fields induced by the laser strike are key to what the scientists describe as the ‘relativistic transparency’ of the target. For instance, if you aim an ordinary laser pointer at a blackboard, some light is reflected, but most of it is absorbed at the surface. The electrons in the material follow the oscillation of the laser field and short-circuit it, so it cannot propagate inside the board.

“In our case,” Toncian explained, “the electrons are getting heavier and heavier because we are accelerating them very close to the light speed. They become immobile. They cannot respond anymore to the high oscillating light of the laser. Suddenly, the laser can propagate inside the target because the electrons cannot short circuit the laser light.”
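Toncian's picture of electrons "getting heavier" corresponds to the relativistic mass increase by the Lorentz factor gamma, which raises the effective cutoff density a laser can penetrate from the classical critical density n_c to roughly gamma times n_c. The sketch below only illustrates that textbook relation; the wavelength, density, and gamma values are illustrative assumptions, not numbers from the study.

```python
# Minimal illustration of relativistic transparency using standard plasma formulas.
# All specific values below are assumptions chosen for illustration.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
M_E  = 9.109e-31   # electron rest mass, kg
E_CH = 1.602e-19   # elementary charge, C
C    = 2.998e8     # speed of light, m/s

def critical_density(wavelength_m):
    """Classical critical density: above this, a non-relativistic plasma reflects the laser."""
    omega = 2 * math.pi * C / wavelength_m
    return EPS0 * M_E * omega**2 / E_CH**2      # electrons per cubic meter

def relativistically_transparent(n_e, wavelength_m, gamma):
    """Relativistically 'heavier' electrons raise the effective cutoff to roughly gamma * n_c."""
    return n_e < gamma * critical_density(wavelength_m)

wavelength = 1.06e-6                              # assumed near-infrared laser wavelength
n_e = 20 * critical_density(wavelength)           # an over-dense target, opaque to a weak pulse

print(relativistically_transparent(n_e, wavelength, gamma=1))     # False: ordinary light is stopped
print(relativistically_transparent(n_e, wavelength, gamma=100))   # True: an intense pulse passes through
```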

Besides relativity, the scales of the experiment can boggle the mind. The researchers are working with some of the world’s most powerful laser light, amplified to a petawatt — a billion million watts. The light burst dwarfs by several hundred times the power from all of the world’s electric plants combined. But it lasts only a few hundred femtoseconds — a millionth of one billionth of a second. That’s about as long as it takes for the laser light to go through the target, which is only 1/100 as thick as a human hair.
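Those scale claims hang together under a couple of outside assumptions (roughly 3 TW for the world's average electricity generation and a pulse length of about 150 femtoseconds, neither of which comes from the article): a petawatt exceeds the world's grid by a few hundred times, yet the total energy in one pulse is only on the order of a hundred joules because the pulse is so brief.

```python
# Rough arithmetic behind the scale claims. The world-grid figure (~3 TW average)
# and the pulse length (~150 fs) are outside assumptions, not numbers from the article.
PETAWATT = 1.0e15                 # watts
world_grid_power = 3.0e12         # ~3 TW average electricity generation (assumption)
pulse_duration = 150e-15          # ~150 femtoseconds (assumption)

print(f"Laser vs. world grid: {PETAWATT / world_grid_power:.0f}x")   # a few hundred times
print(f"Energy in one pulse : {PETAWATT * pulse_duration:.0f} J")    # only ~150 joules
```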

“On that timescale, we need to be able to resolve the dynamics,” Stark said. “Because that’s how we understand the physics of what’s going on. We needed to use very high resolution in both space and time in our kinetic simulation.”

Scientists have turned again and again to computer simulation in cases where they need to know what’s happening when there are thousands, millions, or billions of things going on simultaneously, and each thing influences every other. Here they used the UK-developed EPOCH ‘particle-in-cell’ code, in which particles are modeled as ‘chunks’ that describe the bigger reality of the dynamics of the plasma system. About three billion excited electrons advance in infinitesimally small time steps in the simulation.
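EPOCH itself is a large, parallel production code; the toy loop below only sketches the deposit/solve/push structure that defines particle-in-cell methods. It is a one-dimensional electrostatic caricature in normalized units, with every parameter chosen purely for illustration, and is not the code or the physics used in the study.

```python
# Toy 1D electrostatic particle-in-cell loop, to illustrate the structure of PIC codes
# like EPOCH: deposit charge to a grid, solve for the field, push the macroparticles.
# Normalized units; all parameters are illustrative assumptions.
import numpy as np

n_cells, n_particles, n_steps = 64, 4096, 200
length, dt = 2 * np.pi, 0.05              # domain length and time step
dx = length / n_cells

# Macroparticles ("chunks"): positions and velocities; a small sine perturbation seeds a plasma wave
x = np.linspace(0, length, n_particles, endpoint=False)
v = 0.01 * np.sin(x)
charge_per_particle = -length / n_particles    # electrons, neutralized by a fixed ion background

for step in range(n_steps):
    # 1) Deposit charge from particles onto the grid (nearest-grid-point weighting)
    cell = (x / dx).astype(int) % n_cells
    rho = np.bincount(cell, weights=np.full(n_particles, charge_per_particle),
                      minlength=n_cells) / dx
    rho += 1.0                                 # uniform ion background

    # 2) Solve for the electric field in Fourier space (1D Poisson/Gauss law)
    k = np.fft.fftfreq(n_cells, d=dx) * 2 * np.pi
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = -1j * rho_k[1:] / k[1:]          # i*k*E_k = rho_k
    E_grid = np.real(np.fft.ifft(E_k))

    # 3) Push particles: gather the field at particle positions, then advance v and x
    E_at_particles = E_grid[cell]
    v += -1.0 * E_at_particles * dt            # charge/mass = -1 in normalized units
    x = (x + v * dt) % length                  # periodic boundaries

print(f"After {n_steps} steps: max |v| = {np.abs(v).max():.4f}")
```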

“To do that, we needed to be able to use many, many processors simultaneously in order to evolve the system in a meaningful length of time to observe what we’re trying to find. That was one of the major challenges,” Stark said.

“That’s why we turned to TACC. We started out by using Lonestar 4. And now we’ve started working with Stampede more. We’re using both 2D and 3D simulations. We’re using thousands of processors simultaneously for all these simulations and running them for the better part of a day. We’re talking about tens of thousands, up to 60,000 processor hours for one simulation, just to get all the data out. So, we realistically needed to use the facilities here at TACC in order to achieve what we’re looking for,” Stark said.
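The figures Stark quotes are consistent with one another. Assuming, say, 4,000 cores in use at once (an assumption; the article only says "thousands of processors"), 60,000 processor hours works out to roughly 15 hours of wall-clock time, which is indeed the better part of a day.

```python
# Quick consistency check on the quoted figures; the core count is an assumption,
# since the article only says "thousands of processors."
processor_hours = 60_000
cores_in_use = 4_000             # assumed
print(f"Wall-clock time: ~{processor_hours / cores_in_use:.0f} hours")   # the better part of a day
```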

What’s more, as the particles move through the plasma, they generate the gamma ray photons. “The number of particles increases dramatically during the simulations,” Arefiev said. “The memory requirements are also very stringent. Stampede, with its extra memory resources, was very helpful. And then once you are done with the simulation, you have a lot of data. Even for just a 2D output, one snapshot can be hundreds of megabytes. That can be tens of gigabytes for a 3D output. And then you have tens and tens of those files.”
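The data volumes Arefiev cites are easy to reproduce with a back-of-envelope estimate. The grid sizes and field count below are illustrative assumptions rather than the simulation's actual dimensions, but they show why a double-precision 2D snapshot runs to hundreds of megabytes and a 3D one to tens of gigabytes.

```python
# Illustrative snapshot-size estimate. Grid dimensions and the number of stored
# fields are assumptions chosen only to show the order of magnitude.
BYTES_PER_VALUE = 8              # double precision
fields_stored = 6                # e.g. three E-field and three B-field components

cells_2d = 10_000 * 1_000        # assumed 2D grid
cells_3d = 2_000 * 500 * 500     # assumed 3D grid

print(f"2D snapshot: ~{cells_2d * fields_stored * BYTES_PER_VALUE / 1e6:.0f} MB")   # hundreds of MB
print(f"3D snapshot: ~{cells_3d * fields_stored * BYTES_PER_VALUE / 1e9:.0f} GB")   # tens of GB
```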

Hundreds of thousands of computing hours on Stampede and Lonestar were needed not only for the computation but also for the visualization and post-processing of the simulation data, said Arefiev.

“The supercomputer can run for a day, but then to post-process the data and to assemble it to determine which electron emitted what photon, that was pretty demanding too. And after that, the visualization takes a lot of time. This would not have been possible without the resources that TACC provided to us,” Arefiev said.

“One of the big assets of having Stampede at TACC available for our research is, of course, you can do a lot of productive runs,” Toncian said. “You can do parameter variations that you wouldn’t have been able to do in the past.”

One of the further possibilities opened up by advanced computing in this laser research is the creation of antimatter — the mirror counterpart of the ordinary matter that makes up our world. When matter and antimatter meet, they annihilate and create gamma rays. Arefiev’s team wants to reverse the process.

“Potentially,” said Arefiev, “you could have a gamma ray collider, which seemed not even feasible until recently, in a laboratory on Earth, to collide two beams of light and actually produce matter. Not just a couple of particles, but a lot of them.” Plentiful antimatter creation has eluded even the world’s biggest science labs like CERN. It would cost over one million billion dollars to make one gram of antimatter, according to Symmetry magazine.

“There would be a substantial amount of matter in the vacuum created out of light,” Arefiev continued. “This can potentially allow people to study some of the processes that are underpinning a lot of phenomena in the universe, in the laboratory.”

“Scientists are generally very, very curious,” Toncian said. “Their curiosity drives them. In Europe, there is a laser consortium sponsored by the European Union to build a huge laser facility. This huge laser facility would be at least 10 times bigger than what we have here in Texas at UT Austin, in terms of the Texas Petawatt Laser. These are 10 petawatt lasers. They have a huge and broad scientific case in order to be able to finance a lot of these envisioned studies.”

Toncian said that what they’re doing in Texas with their laser could pave the way for bigger science with the proposed EU laser. “I think the most important outcome of our study is that we can now actually fast track a lot of the science that was planned to be done basically just with this future 10 petawatt laser,” said Toncian.

But Texas scientists aren’t just going to wait around. Real tests based on the simulations will be performed in 2016 with the Texas Petawatt Laser led by Professors Manuel Hegelich and Todd Ditmire from the Center for High Energy Density Science at UT Austin. “So very soon (at the time of interview), an experiment will probe for the first time the intensity regime we just predicted up to now, theoretically,” Arefiev explained. “It’s going to be a very interesting time for us to see if these effects will really be seen and measured.”

Arefiev joked that he didn’t want to become a victim of his own success. “I told the guys to let me know when they do their runs. The gamma rays are so intense and so energetic that they don’t even need to remove the aluminum flanges to detect them. So I would like to stay at home when they do the experiment, just in case everything works,” Arefiev said.

This research was supported by funding from the Air Force Office of Scientific Research, National Nuclear Security Administration, and the US Department of Energy. HPC resources were provided by the TACC at the University of Texas at Austin.


Source: Jorge Salazar, TACC
