NERSC Supports First All-GPU Full-Scale Physics Simulation

June 7, 2023

June 7, 2023 — Using supercomputers at the National Energy Research Scientific Computing Center (NERSC), researchers have completed a simulation of a neutrino interaction detector – the first such simulation designed to run exclusively on graphics processing units (GPUs), and an example of using GPUs’ highly parallel structure to process large amounts of physics data. The research was published in the Journal of Instrumentation in April.

Application-Specific Integrated Circuits (ASICs) are attached to the back of a tile containing 4900 pixel sensors, which record the charges left behind by neutrinos passing through a liquid argon chamber. Credit: Stefano Roberto Soleti, Berkeley Lab.

Neutrinos are the most abundant particle of matter in the universe. Produced by nuclear reactions like the one that powers the Sun, trillions of them pass through the human body every second, though they typically don’t interact with the human body or any other form of matter. Efforts to measure the mass of the neutrino and understand its relationship to matter in the universe have been underway for decades, and the research outlined in this paper could help clarify why the universe is made of matter and not antimatter (that is, particles with the same mass as matter, but opposite electrical charges and properties).

The simulation is part of the preparation for the Deep Underground Neutrino Experiment (DUNE), an international collaboration studying the neutrino with support from U.S. Department of Energy resources. Currently under construction, DUNE will consist of an intense neutrino beam produced at Fermilab in Illinois and two main detectors: a Near Detector located near the beam source and a Far Detector located a mile underground at the Sanford Underground Research Facility in South Dakota. Eventually the Near Detector will record approximately 50 neutrino interactions per beam pulse, adding up to tens of millions of interactions per year. By examining neutrinos and how they change form over long distances, a process called oscillation, scientists hope to learn about the origin and behavior of the universe over time.

Since 1977, researchers have studied neutrinos using a device called a liquid argon time projection chamber (LArTPC), in which neutrinos pass through liquid argon and leave ions and electrons in their wake; the freed charge drifts onto arrays of sensing wires. From the wires’ positions and the charges’ arrival times, researchers reconstruct 2D images of the neutrino interactions. However, a new method being pioneered at Berkeley Lab replaces the wires with sensors known as pixels, which add a third sensing dimension and yield 3D images instead – an increase in information, but also a vastly larger amount of data to analyze and store. (In this case a “pixel” is a small charge sensor, unrelated to the pixels found in consumer electronics screens.) Before construction of the detectors begins, the team uses digital simulations to ensure that both the physical detectors and the workflows around them will work as planned.
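To make the wire-versus-pixel distinction concrete, the small sketch below shows how a charge’s arrival time becomes a spatial coordinate in either readout. The function names and the drift-velocity value are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative only: converting charge arrival time into a drift coordinate.
# The drift velocity is an assumed round number, not a value from the paper.
V_DRIFT_MM_PER_US = 1.6  # electron drift speed in liquid argon (assumed)

def wire_hit_to_2d(wire_pos_mm: float, arrival_time_us: float) -> tuple:
    """Wire readout: one transverse coordinate plus drift distance -> a 2D projection."""
    return (wire_pos_mm, arrival_time_us * V_DRIFT_MM_PER_US)

def pixel_hit_to_3d(pixel_x_mm: float, pixel_y_mm: float, arrival_time_us: float) -> tuple:
    """Pixel readout: two transverse coordinates plus drift distance -> a true 3D point."""
    return (pixel_x_mm, pixel_y_mm, arrival_time_us * V_DRIFT_MM_PER_US)
```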

“At Berkeley Lab we are developing a new technology that employs pixels instead of wires, so you are able to immediately have a 3D image of your event, and that allows you to have a much better capability to actually reconstruct your interaction,” said co-author Stefano Roberto Soleti. “However, the problem is that now you have a lot of pixels. In the case of the detector that we’re building, we have around 12 million pixels, and we need to simulate that.”

That’s where supercomputing resources at NERSC come in. Because GPUs are uniquely suited to executing many calculations in parallel, they offer a much faster way of handling large quantities of data. Perlmutter’s thousands of GPUs allowed the researchers to spread the detector simulation across many nodes at once, greatly increasing the available compute relative to CPU-only processing. Soleti’s team saw a corresponding increase in speed: simulating the signal from each pixel took about one millisecond on a GPU, compared with ten seconds on a CPU.

“When you have so many channels, it becomes very difficult to simulate them on CPUs or in a classic way,” said Soleti. “Our idea was to use NERSC resources in particular; we started with Cori using test GPU nodes and then moved to Perlmutter to try to develop a physics detector simulation that runs on GPUs. To my knowledge, this is the first full physics detector simulation that runs entirely on GPUs: we do the full simulation from the energy deposit to the signal in our detector entirely on GPUs, so we don’t have to copy the data from the GPU memory and then transfer it to the CPU. To my knowledge this is the first full physics detector simulation that does this, and it allowed us to achieve an improvement of around four orders of magnitude in the speed of our simulation. It would’ve been unfeasible to simulate this detector with a classic technique using CPU algorithms, but with these new methods and with the NERSC resources using the GPU nodes, it is feasible.”
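A back-of-envelope calculation with the per-pixel timings quoted above illustrates the scale of that improvement. The assumption that the per-pixel cost scales linearly to all 12 million channels, ignoring per-event overhead and parallelism across pixels, is made here for illustration only:

```python
# Rough scaling of the per-pixel timings quoted above to a 12-million-pixel
# detector. Assumes linear scaling and ignores parallelism across pixels,
# so these are serial-equivalent figures, not wall-clock times.
n_pixels = 12_000_000
cpu_total_s = n_pixels * 10.0    # ~10 s per pixel signal on a CPU
gpu_total_s = n_pixels * 1e-3    # ~1 ms per pixel signal on a GPU

print(f"CPU serial-equivalent: {cpu_total_s / (3600 * 24 * 365):.1f} years")  # ~3.8 years
print(f"GPU serial-equivalent: {gpu_total_s / 3600:.1f} hours")               # ~3.3 hours
print(f"Speed-up: ~{cpu_total_s / gpu_total_s:,.0f}x (four orders of magnitude)")
```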

To make it happen, the team ran the simulation using a set of GPU-optimized algorithms written in Python and compiled to CUDA, a platform that allows GPUs to be used for general-purpose computing from a range of programming languages. In this case the team chose Numba, a just-in-time compiler for Python, to do that translation, which let them interface with CUDA and manage GPU memory without writing C++ code directly. According to Soleti, this method of simulating many sensors in parallel on GPUs is potentially useful for other types of research as well, as long as the workloads lend themselves to running in parallel.
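For readers unfamiliar with Numba’s CUDA support, a minimal sketch of the pattern described above might look like the following. The kernel, the variable names, and the toy exponential response model are illustrative assumptions, not the collaboration’s actual code; the point is that the kernel is plain Python, Numba compiles it to CUDA, and arrays can stay in GPU memory between steps, avoiding the GPU-to-CPU copies Soleti describes.

```python
import math
import numpy as np
from numba import cuda

# One GPU thread computes the signal for one pixel (toy model, illustrative only;
# not the DUNE electronics response).
@cuda.jit
def pixel_signal(charge, drift_time, signal):
    i = cuda.grid(1)
    if i < charge.shape[0]:
        # Toy response: induced signal attenuated exponentially with drift time.
        signal[i] = charge[i] * math.exp(-drift_time[i] / 100.0)

n_pixels = 4_900  # one pixel tile's worth of sensors, per the photo caption above
charge = cuda.to_device(np.random.rand(n_pixels).astype(np.float32))
drift_time = cuda.to_device((np.random.rand(n_pixels) * 500.0).astype(np.float32))
signal = cuda.device_array(n_pixels, dtype=np.float32)  # stays in GPU memory

threads_per_block = 256
blocks_per_grid = (n_pixels + threads_per_block - 1) // threads_per_block
pixel_signal[blocks_per_grid, threads_per_block](charge, drift_time, signal)

result = signal.copy_to_host()  # copy back to the CPU only when needed
```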

Now that Soleti and his team have a working simulation, what’s next? The simulation is a key step in making DUNE a reality, both digitally and in the real world. A prototype of the DUNE Near Detector, developed with the help of the team’s simulation, is currently being installed in a neutrino beam at Fermilab and will begin operation later this year; installation of the final detector is slated to begin in 2026. But there’s plenty more to do as the whole instrument is constructed: next steps include a simulation for the full detector, which represents a manyfold increase in pixels.

“The next step is to produce the simulation for the full detector,” said Soleti. “Right now the prototype has about one ton of liquid argon and tens of thousands of pixels, but the full detector will have 12 million pixels. So we’ll need to produce a simulation and scale up even more to the level that we’ll need for the full detector.”

Overall, the approach introduced at Berkeley Lab is a major step forward for the DUNE experiment, as well as for high-energy physics and other scientific fields, said Dan Dwyer, the technical lead for Near Detector work at Berkeley Lab and another author on the paper.

“This novel approach to detector simulation provides a great example of how to leverage modern computing technologies in high-energy physics,” said Dwyer. “It opens new paths to studying the data from DUNE and other future experiments.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 7,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.


Source: Elizabeth Ball, NERSC
