July 22, 2021 — The U.S. Department of Energy’s (DOE) Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of the system, 15 research teams are taking part in the Aurora Early Science Program through the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. With access to pre-production time on the supercomputer, these researchers will be among the first in the world to use an exascale machine for science.
Early philosophers first formulated the idea of the atom around the fifth century BCE. And just when we thought we understood its basic structure — protons, neutrons, and electrons — theories and technologies emerged to prove us wrong. Turns out, there are still more fundamental particles, like quarks, bound together by aptly named gluons.
Physicists discovered many of these and other particles in the enormous beasts of machines we call colliders, helping to develop what we know today as the Standard Model of physics. But there are questions that continue to nag: Is there something more fundamental still? Is the Standard Model all there is?
Determined to find out, the high energy physics community is working to integrate ever larger colliders and more sophisticated detectors with exascale computing systems. Among them is Walter Hopkins, an assistant physicist with Argonne National Laboratory and a collaborator with the ATLAS experiment at the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland.
Collaborating with researchers from both Argonne and Lawrence Berkeley National Lab, Hopkins leads an Aurora Early Science Program project through the ALCF to prepare software used in LHC simulations for exascale computing architectures, including Argonne’s forthcoming exascale machine, Aurora. At a billion billion calculations per second, Aurora is at the frontier of supercomputing and equal to the next challenge in particle physics, one of gargantuan magnitude.
The project was started several years ago by physicist and Argonne Distinguished Fellow James Proudfoot, who understood exascale’s distinct advantages in improving the impact of such complex science.
Aligning codes with new architecture
The collisions produced in the LHC occur in one of several detectors. The one on which the team is focused, ATLAS, witnesses billions of particle interactions every second and the signatures of new particles those collisions create in their wake.
One type of code the team is focused on, called event generators, simulates the underlying physics processes that occur at the interaction points within the 17-mile circumference collider ring. Getting the software-produced physics to align with that of the Standard Model helps researchers accurately simulate the collisions and predict the types, paths, and energies of the remnant particles.
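In spirit, an event generator is a Monte Carlo sampler: it proposes candidate events and keeps them in proportion to the probability the underlying theory assigns them. A toy accept-reject sketch of that idea, with an invented stand-in density (not any real physics model used by the team):

```python
import math
import random

# Toy accept-reject sampler -- the basic Monte Carlo idea behind
# event generators: propose a candidate event, keep it with
# probability proportional to the theory's prediction.

def target_density(x):
    """Invented stand-in for a physics-derived density on [0, 10]."""
    return math.exp(-x / 3.0) * (1.0 + 0.5 * math.sin(x))

random.seed(0)
max_density = 1.5        # upper bound on target_density over [0, 10]
events = []
while len(events) < 1000:
    x = random.uniform(0.0, 10.0)                     # candidate "event"
    if random.uniform(0.0, max_density) < target_density(x):
        events.append(x)                              # accepted event

print(len(events), "accepted events")
```

Real generators such as MadGraph sample vastly higher-dimensional spaces, but the accept-reject pattern above is the conceptual core.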
Detecting physics in this way creates a mountain of data and requires an equally large chunk of computer time. And now, CERN is upping the ante as it prepares to upgrade the LHC, increasing its luminosity to allow for more particle interactions and a 20-fold increase in data output.
While the team is looking to Aurora to handle this increase in their simulation requirements, the machine does not come without a few challenges of its own.
Until recently, the event generators ran on computer CPUs (central processing units). While a CPU works quickly, it typically executes only a handful of operations at a time.
Aurora will be equipped with both CPUs and GPUs (graphics processing units), the choice of gamers everywhere. A GPU can handle many operations at once by breaking a computation into thousands of smaller tasks and spreading them across its many cores, the engines that drive both types of processor.
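The kind of workload that maps well to a GPU can be sketched in miniature: the same calculation applied independently to every event. In this toy example, NumPy's vectorized arithmetic stands in for the thousands of GPU threads; the per-event inputs and the "weight" formula are invented purely for illustration:

```python
import numpy as np

# Toy stand-in for an event-generator kernel: identical arithmetic
# applied independently to every simulated event. On a GPU, each
# event would map to its own thread; here NumPy vectorization
# plays that role.

rng = np.random.default_rng(0)
n_events = 100_000

# Invented per-event inputs: momenta of two outgoing particles.
p1 = rng.exponential(scale=50.0, size=n_events)
p2 = rng.exponential(scale=50.0, size=n_events)

# One weight per event, with no dependence between events --
# exactly the pattern that parallelizes across GPU cores.
weights = np.exp(-(p1 + p2) / 100.0) * (p1 * p2)

print(weights.shape)   # one weight per event
```

Because no event depends on any other, the same loop-free pattern can be offloaded to a GPU with frameworks the team is evaluating; the hard part, as Hopkins notes, is restructuring real production code to expose this independence.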
But it takes a lot of effort to move CPU-based simulations onto GPUs in an efficient way, notes Hopkins. So, making this move to prepare for both Aurora and the onslaught of new data from LHC provides several challenges, which have become part of the team’s central focus.
“We want to be able to use Aurora to help us face these challenges,” says Hopkins, “but it requires us to study computing architectures that are new to us and our code base. For example, we’re focusing on a generator that is used in ATLAS, called MadGraph, and that runs on GPUs, which are more parallel and have different memory management requirements.”
A particle interaction simulation code, MadGraph was written by an international team of high energy physics theorists and supports the LHC’s simulation needs.
Simulation and AI support experimental work
The LHC has played a significant role in bringing prediction to reality. Most famously, the Standard Model predicted the existence of the Higgs boson, which gives fundamental particles their mass; ATLAS and its counterpart detector, CMS, confirmed the boson's existence in 2012.
But, as is so often the case in science, big discoveries can lead to more substantial questions, many of which the Standard Model cannot answer. Why does the Higgs have the mass that it does? What is dark matter?
“The reason for this very large upgrade to the LHC is that we’re hoping to find that needle in the haystack, that we’ll find some anomaly in the data set that offers a hint of physics beyond the Standard Model,” says Hopkins.
A combination of computational power, simulation, experiment, and artificial intelligence (AI) will dramatically help that search by providing accuracy in both prediction and identification.
When the ATLAS detector witnesses these particle collisions, for example, it records them as electronic signals. These are reconstructed as pixels of energy bursts that might correspond to an electron passing through.
“But just like in AI, where the canonical example is identifying cats and dogs in images, we have algorithms that identify and reconstruct those electronic signals into electrons, protons and other things,” says ALCF computer scientist Taylor Childers, a member of the team.
The reconstructed data from real collision events are then compared to the simulated data to look for differences in patterns. This is where accuracy in the physics models comes to bear. If the models are working correctly and the real and simulated data don't match, researchers continue to measure and rule out mundane explanations until it's likely they have found that needle: something that doesn't fit the Standard Model.
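A minimal sketch of that comparison, with invented data: bin the observed and simulated values of some measured quantity and check, bin by bin, how compatible the two histograms are. A real analysis uses far more sophisticated statistics, but the shape of the idea is this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-ins: a measured quantity (think: an invariant mass)
# from "real" events and from Standard Model simulation.
observed = rng.normal(loc=91.0, scale=2.5, size=10_000)
simulated = rng.normal(loc=91.0, scale=2.5, size=10_000)

bins = np.linspace(80, 102, 23)            # 22 bins across the range
obs_counts, _ = np.histogram(observed, bins)
sim_counts, _ = np.histogram(simulated, bins)

# Simple per-bin chi-square: large values flag bins where data and
# simulation disagree -- candidate "needles" worth a closer look.
expected = np.clip(sim_counts, 1, None)    # avoid division by zero
chi2_per_bin = (obs_counts - sim_counts) ** 2 / expected

print(f"mean chi2 per bin: {chi2_per_bin.mean():.2f}")
```

Here the two samples are drawn from the same distribution, so no bin should stand out; an excess in real data over simulation in one region is the kind of anomaly the upgraded LHC is meant to expose.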
The team is also using AI to quantify uncertainty, to determine the likelihood that they’ve identified a particle correctly.
Humans can identify particles to a limited extent: a few parameters, like momentum and position, might tell us that a certain particle is an electron. But base that characterization on 10 parameters that are intimately tied together, and it's another story altogether.
“That’s where artificial intelligence really shines, especially if those input parameters are correlated, like the momentum of particles around an electron and the momentum of the electron itself,” says Hopkins. “These correlations are difficult to deal with analytically, but since we have so much simulation data, we can teach artificial intelligence and it can tell us, this is an electron with this likelihood because I have all of this input information.”
Exascale computing and the path forward
In advance of Aurora's arrival, the team continues to adapt its code to the programming models for the new architectures and to the Intel hardware that will power Aurora, as well as to hardware from other vendors.
“Part of the R&D that we do with our partner, Intel, is to make sure that the hardware is doing what we expect it to do and doing it efficiently,” says Childers. “Having a machine like Aurora will give us plenty of compute power and plenty of nodes to effectively reduce the time to solution, especially when we move to the upgraded LHC.”
The solution is an answer to a fundamental question — is there more beyond the Standard Model? — and one that could have unimagined repercussions a hundred years from now, notes Hopkins.
“Fundamental research can give us knowledge that may lead to societal transformation, but if we don’t do the research, it won’t lead to anything,” he says.
Funding for this project was provided by DOE Office of Science: Offices of High Energy Physics and Advanced Scientific Computing Research. ATLAS is an international collaboration that benefits from DOE support.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
Source: JOHN SPIZZIRRI, ALCF