Scientists have unlocked many secrets about particle interactions at atomic and subatomic levels. However, one mystery that has eluded researchers is dark matter. Current supercomputers lack the capability to run, at scale, the custom machine learning (ML) architectures needed to calculate the properties and interactions of large atomic nuclei, calculations that could help solve this mystery.
A team of scientists, using supercomputing resources at the US Department of Energy’s (DOE) Argonne National Laboratory, is conducting research that seeks answers to this puzzle through simulations based on lattice quantum chromodynamics (LQCD). The team is developing novel ML algorithms to determine possible interactions between nuclei and a large class of dark matter candidate particles. The goal of the research is to enable calculations in nuclear physics on the Aurora exascale high-performance computing (HPC) system that are not computationally feasible with traditional approaches on existing petascale supercomputers.
Introducing the dark matter research team
The research team includes co-principal investigators Dr. William Detmold and Dr. Phiala Shanahan of MIT, along with researchers at New York University and collaborators in Europe and at Argonne National Laboratory. The researchers are members of the US Lattice Quantum Chromodynamics (USQCD) collaboration, a national infrastructure for LQCD hardware and software. The project is an awardee in the Argonne Leadership Computing Facility’s (ALCF) Early Science Program (ESP) for Aurora. The ESP research is supported by the US Department of Energy and the National Science Foundation, and software development is funded by a grant from the DOE Scientific Discovery through Advanced Computing (SciDAC) program.
Dark matter research based on the standard model of particle physics
The standard model of particle physics is a theory that seeks to explain the strong and weak nuclear forces as well as electromagnetism, but it does not include gravity. Based on past standard model research, scientists currently understand that protons and atomic nuclei are made up of quarks and gluons, which are fundamental building blocks of the universe. Dark matter is the name given to unknown matter in the universe that has not been detected by current scientific instruments but whose existence is inferred from its gravitational effects.
Detmold states, “When we talk about the standard model, we focus on things at a very small scale, smaller than the atom, basically. Our ESP team’s research is based on the theory of quantum chromodynamics (QCD), which explains the way quarks interact with one another inside the nucleus of an atom. We use LQCD simulations related to contemporary physics experiments to try to understand how those interactions work, to determine the atomic constituents and their potential interactions with dark matter.”
Machine learning software developed for dark matter research
The team developed its own ML software to solve some of the most challenging computational tasks in the dark matter research. Detmold states, “There are big computational bottlenecks in certain parts of the LQCD calculation. Our ML software is designed to speed up HPC algorithms in parts of the LQCD calculation such as matrix inversions and big linear algebra calculations.”
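The article does not describe the team’s solver code, but the general idea of using a learned model to accelerate a matrix inversion can be sketched in a few lines of Python. In this illustration, `apply_D` (the operator’s matrix-vector product) and `guess_net` (a trained network supplying a starting guess) are hypothetical placeholders; the conjugate-gradient loop shown assumes a symmetric positive-definite operator, whereas production LQCD solvers work with the Dirac operator and more sophisticated methods.

```python
import torch

# Illustrative sketch only: warm-start a conjugate-gradient solve of D x = b
# with a learned initial guess. `apply_D` and `guess_net` are hypothetical
# placeholders, and plain CG assumes a symmetric positive-definite operator.
def solve_with_ml_guess(apply_D, b, guess_net, tol=1e-8, max_iter=1000):
    x = guess_net(b)                     # ML-based starting point instead of zeros
    r = b - apply_D(x)                   # initial residual
    p = r.clone()
    rs = torch.dot(r, r)
    for _ in range(max_iter):
        Dp = apply_D(p)
        alpha = rs / torch.dot(p, Dp)
        x = x + alpha * p
        r = r - alpha * Dp
        rs_new = torch.dot(r, r)
        if torch.sqrt(rs_new) < tol:     # a better guess means fewer iterations
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```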
The team’s ML algorithm is optimized to take advantage of other software tools, including USQCD libraries, TensorFlow, HDF5, and PyTorch. The ML software uses a self-training method in which the model generates samples of typical quark and gluon configurations and then learns from those samples to generate new samples more accurately.
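As a rough illustration of such a self-training loop (not the team’s actual software), the sketch below assumes a hypothetical generative model `flow`, whose `sample()` method returns configurations together with their log-probabilities, and a hypothetical `action` function that evaluates the lattice action of each sampled configuration.

```python
import torch

# Minimal sketch of a self-training step: the model proposes its own training
# data, and the loss pulls the model distribution toward exp(-S[phi]).
# `flow`, `flow.sample`, and `action` are hypothetical stand-ins.
def self_training_step(flow, action, optimizer, batch_size=64):
    phi, log_q = flow.sample(batch_size)      # model-generated configurations
    # minimizing E_q[log q(phi) + S(phi)] matches the model to the target
    # distribution proportional to exp(-S[phi])
    loss = (log_q + action(phi)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```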
According to Dr. Shanahan, “Our team is developing novel machine-learning algorithms to enable next-generation lattice QCD calculations of nuclear physics on Aurora.” Parts of the LQCD calculations can only run on large-scale supercomputers. On Aurora, the team will compute convolutions that operate in four dimensions, matching the four dimensions of spacetime. Detmold indicates that dealing with a four-dimensional structure makes the calculations a more challenging numerical problem and requires new software development.
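Standard frameworks such as PyTorch provide built-in convolutions only up to three dimensions, so a four-dimensional convolution over the spacetime lattice has to be assembled by hand. The sketch below shows one common construction, summing 3D convolutions over slices of the kernel along the fourth axis with periodic wrap-around; it is an illustration of the technique under those assumptions, not the team’s implementation.

```python
import torch
import torch.nn as nn


class Conv4d(nn.Module):
    """Illustrative 4D convolution assembled from stacked 3D convolutions."""

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.kt = kernel_size
        # one Conv3d per slice of the kernel along the fourth (time) axis;
        # circular padding mimics the periodic boundaries of a lattice
        self.convs = nn.ModuleList(
            nn.Conv3d(in_channels, out_channels, kernel_size,
                      padding=kernel_size // 2, padding_mode="circular",
                      bias=(i == 0))
            for i in range(kernel_size)
        )

    def forward(self, x):
        # x has shape (batch, channels, T, Z, Y, X)
        B, C, T, Z, Y, X = x.shape
        pad = self.kt // 2
        out = 0
        for i, conv in enumerate(self.convs):
            # shift along T with periodic wrap-around, then fold T into the
            # batch dimension so an ordinary 3D convolution handles (Z, Y, X)
            xt = torch.roll(x, shifts=pad - i, dims=2)
            yt = conv(xt.permute(0, 2, 1, 3, 4, 5).reshape(B * T, C, Z, Y, X))
            out = out + yt.reshape(B, T, -1, Z, Y, X).permute(0, 2, 1, 3, 4, 5)
        return out


# quick shape check on a small random field
layer = Conv4d(in_channels=1, out_channels=4, kernel_size=3)
field = torch.randn(2, 1, 8, 8, 8, 8)       # (batch, channels, T, Z, Y, X)
print(layer(field).shape)                   # torch.Size([2, 4, 8, 8, 8, 8])
```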
The numerical calculations use a spacetime lattice (grid) to determine the properties and interactions of the nuclei, including their potential interactions with dark matter. The researchers begin with a small lattice volume, repeat the calculations in progressively larger volumes, and extrapolate the results to the infinite-volume limit.
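For a sense of what such an extrapolation looks like, the toy Python example below fits measurements in boxes of increasing size L to an assumed exponential finite-volume form and reads off the infinite-volume value. The functional form and the data points are purely illustrative, not results from the team’s calculations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model: finite-volume corrections in a massive theory typically fall off
# exponentially with the box size L, so fit E(L) = E_inf + c * exp(-m * L).
def finite_volume_model(L, E_inf, c, m):
    return E_inf + c * np.exp(-m * L)

# made-up measurements at increasing box sizes (illustration only)
L_values = np.array([16.0, 24.0, 32.0, 48.0])
E_values = np.array([1.120, 1.045, 1.018, 1.004])

params, _ = curve_fit(finite_volume_model, L_values, E_values, p0=[1.0, 1.0, 0.1])
E_inf, c, m = params
print(f"infinite-volume estimate: {E_inf:.4f}")
```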

Preparing for work on Aurora
In preparation for work on the future Aurora supercomputer, the team has worked on petascale supercomputers including the ALCF’s Mira and Theta systems, Summit at Oak Ridge National Laboratory (ORNL), and Marconi at CINECA in Italy.
The future Aurora supercomputer’s architecture is designed to be optimized for deep learning, and the team’s ML software stack will run on it at scale. Aurora will incorporate new Intel compute engines, including the Intel Xeon CPU Max Series and the Intel Data Center GPU Max Series, as well as DAOS storage. Aurora will take advantage of the Intel-led, cross-industry oneAPI initiative, designed to unify and simplify application development across diverse computing architectures. Detmold indicates that HPC researchers need tools such as oneAPI to save time.
Summary
Dark matter research is computationally challenging, and many questions remain unanswered. As part of the ALCF Early Science Program, a team of researchers is developing novel ML algorithms to determine possible interactions between nuclei and a large class of dark matter candidate particles.
Moving to an exascale HPC system will allow the team to perform research that is not currently possible on petascale supercomputers. Detmold indicates that access to an exascale supercomputer will help the team compare their numerical LQCD calculations against physical dark matter experiments, as well as against predictions from the standard model of particle physics, to learn more about interactions at the subatomic level.
“Aurora will enable us to scale up and deploy custom machine learning architectures developed for physics to the exascale for the first time. The hope is that this will enable calculations in nuclear physics that are not computationally tractable with traditional approaches, but it will also represent the first at-scale application of machine learning in this context,” states Shanahan.
The ALCF is a DOE Office of Science User Facility.
Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training, and web design firm in Beaverton, OR.
This article was produced as part of Intel’s editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC and AI communities through advanced technology. The publisher of the content has final editing rights and determines what articles are published.