Preparing for Exascale: Argonne’s Aurora to Accelerate Discoveries in Particle Physics at CERN

July 22, 2021

July 22, 2021 — The U.S. Department of Energy’s (DOE) Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of the system, 15 research teams are taking part in the Aurora Early Science Program through the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. With access to pre-production time on the supercomputer, these researchers will be among the first in the world to use an exascale machine for science.

Early philosophers first formulated the idea of the atom around the fifth century BCE. And just when we thought we understood its basic structure — protons, neutrons, and electrons — theories and technologies emerged to prove us wrong. Turns out, there are still more fundamental particles, like quarks, bound together by aptly named gluons.

Physicists discovered many of these and other particles in the enormous beasts of machines we call colliders, helping to develop what we know today as the Standard Model of physics. But there are questions that continue to nag: Is there something more fundamental still? Is the Standard Model all there is?

Determined to find out, the high energy physics community is working to integrate ever larger colliders and more sophisticated detectors with exascale computing systems. Among them is Walter Hopkins, an assistant physicist with Argonne National Laboratory and a collaborator with the ATLAS experiment at the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland.

Collaborating with researchers from both Argonne and Lawrence Berkeley National Lab, Hopkins leads an Aurora Early Science Program project through the ALCF to prepare software used in LHC simulations for exascale computing architectures, including Argonne’s forthcoming exascale machine, Aurora. At a billion billion calculations per second, Aurora is at the frontier of supercomputing and equal to the next challenge in particle physics, one of gargantuan magnitude.

The project was started several years ago by physicist and Argonne Distinguished Fellow James Proudfoot, who understood exascale’s distinct advantages in improving the impact of such complex science.

Aligning codes with new architecture

The collisions produced in the LHC occur in one of several detectors. The one on which the team is focused, ATLAS, witnesses billions of particle interactions every second and the signatures of new particles those collisions create in their wake.

One type of code the team is focused on, called event generators, simulates the underlying physics processes that occur at the interaction points within the 17-mile circumference collider ring. Getting the software-produced physics to align with that of the Standard Model helps researchers accurately simulate the collisions and predict the types, paths, and energies of the remnant particles.

Detecting physics in this way creates a mountain of data and requires an equally large chunk of computer time. And now, CERN is upping the ante as it readies to upgrade the LHC’s luminosity, allowing for more particle interactions and a 20-fold increase in data output.

While the team is looking to Aurora to handle this increase in their simulation requirements, the machine does not come without a few challenges of its own.

Workers inside ATLAS, one of several primary detectors for the Large Hadron Collider at CERN. ATLAS witnesses a billion particle interactions every second and the signatures of new particles created in near-speed-of-light proton-proton collisions. (Image: CERN)

Until recently, the event generators ran on computer CPUs (central processing units). While CPUs are fast, each core typically executes only a few operations at a time.

Aurora will be equipped with both CPUs and GPUs (graphics processing units), the choice of gamers everywhere. A GPU can handle many operations at once by breaking them into thousands of smaller tasks spread across its many cores, the engines that drive both types of processor.

But it takes a lot of effort to move CPU-based simulations onto GPUs efficiently, notes Hopkins. So making this move to prepare for both Aurora and the onslaught of new data from the LHC presents several challenges, which have become part of the team’s central focus.

“We want to be able to use Aurora to help us face these challenges,” says Hopkins, ​“but it requires us to study computing architectures that are new to us and our code base. For example, we’re focusing on a generator that is used in ATLAS, called MadGraph, and that runs on GPUs, which are more parallel and have different memory management requirements.”

A particle interaction simulation code, MadGraph was written by an international team of high energy physics theorists and supports the LHC’s simulation needs.
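The CPU-to-GPU shift the team describes comes down to restructuring per-event loops into batched, data-parallel computations. The toy sketch below (not MadGraph's actual code; the "event weight" expression is purely illustrative) shows the pattern with NumPy vectorization standing in for GPU-style parallelism:

```python
import numpy as np

def weight_sequential(momenta):
    """CPU-style: loop over events one at a time."""
    weights = []
    for p in momenta:
        # toy per-event expression (stand-in for a matrix-element evaluation)
        weights.append(1.0 / (p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2 + 1e-6))
    return np.array(weights)

def weight_batched(momenta):
    """GPU-style: evaluate the same expression over the whole batch at once."""
    e, px, py, pz = momenta[:, 0], momenta[:, 1], momenta[:, 2], momenta[:, 3]
    return 1.0 / (e ** 2 - px ** 2 - py ** 2 - pz ** 2 + 1e-6)

rng = np.random.default_rng(0)
events = rng.normal(size=(10_000, 4))  # 10,000 toy four-vectors

# Both versions agree; only the execution pattern differs.
assert np.allclose(weight_sequential(events), weight_batched(events))
```

The batched form maps naturally onto a GPU, where thousands of events can be evaluated simultaneously, but it also forces different memory-layout and memory-management choices, which is the kind of rework Hopkins describes.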

Simulation and AI support experimental work

The LHC has played a significant role in bringing prediction to reality. Most famously, the Standard Model predicted the existence of the Higgs boson, which gives mass to fundamental particles; ATLAS and its counterpart detector, CMS, confirmed the particle’s existence in 2012.

But, as is so often the case in science, big discoveries can lead to more substantial questions, many of which are not predicted by the Standard Model. Why is the Higgs the mass that it is? What is dark matter?

“The reason for this very large upgrade to the LHC is that we’re hoping to find that needle in the haystack, that we’ll find some anomaly in the data set that offers a hint of physics beyond the Standard Model,” says Hopkins.

A combination of computational power, simulation, experiment, and artificial intelligence (AI) will dramatically help that search by providing accuracy in both prediction and identification.

When the ATLAS detector witnesses these particle collisions, for example, it records them as electronic signals. These are reconstructed as pixels of energy bursts that might correspond to an electron passing through.

“But just like in AI, where the canonical example is identifying cats and dogs in images, we have algorithms that identify and reconstruct those electronic signals into electrons, protons and other things,” says ALCF computer scientist Taylor Childers, a member of the team.

The reconstructed data from real collision events are then compared to the simulated data to look for differences in patterns. This is where accuracy in the physics models comes to bear. If the models are working correctly and the real and simulated data don’t match, you continue to measure and rule out anomalies until it’s likely you’ve found that needle: the something that doesn’t fit the Standard Model.
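A minimal sketch of that comparison step (not the ATLAS analysis code; the exponential spectra are invented stand-ins) is to histogram a measured quantity for real and simulated events and compute a chi-square-like discrepancy; a persistent excess in some bin, beyond statistical fluctuation, is the kind of needle analysts look for:

```python
import numpy as np

rng = np.random.default_rng(42)
simulated = rng.exponential(scale=50.0, size=100_000)  # model prediction
observed = rng.exponential(scale=50.0, size=100_000)   # stand-in for real data

bins = np.linspace(0.0, 300.0, 31)
sim_counts, _ = np.histogram(simulated, bins)
obs_counts, _ = np.histogram(observed, bins)

# Chi-square sum over bins, skipping bins where the prediction is empty
mask = sim_counts > 0
chi2 = np.sum((obs_counts[mask] - sim_counts[mask]) ** 2 / sim_counts[mask])
ndof = mask.sum()
print(f"chi2/ndof = {chi2 / ndof:.2f}")  # of order one when data and model agree
```

In practice the comparison also folds in systematic uncertainties and detector effects, but the logic is the same: a statistically significant mismatch that survives scrutiny would hint at physics beyond the Standard Model.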

The team is also using AI to quantify uncertainty, to determine the likelihood that they’ve identified a particle correctly.

Humans can identify particles to a limited extent: a few parameters, like momentum and position, might tell us that a certain particle is an electron. But base that characterization on 10 parameters that are intimately tied together, and it’s another story altogether.

“That’s where artificial intelligence really shines, especially if those input parameters are correlated, like the momentum of particles around an electron and the momentum of the electron itself,” says Hopkins. ​“These correlations are difficult to deal with analytically, but since we have so much simulation data, we can teach artificial intelligence and it can tell us, this is an electron with this likelihood because I have all of this input information.”
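The idea Hopkins describes can be sketched with a toy classifier (not the team's model; the two features, stand-ins for "momentum of the electron" and "momentum of nearby particles," and all numbers are invented). Trained on labeled simulation-like samples with correlated features, it outputs the likelihood that a candidate is an electron:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Two correlated features per candidate; electrons and background differ in mean.
cov = [[1.0, 0.8], [0.8, 1.0]]
electrons = rng.multivariate_normal([2.0, 2.0], cov, size=n)
background = rng.multivariate_normal([0.0, 0.0], cov, size=n)
X = np.vstack([electrons, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = electron, 0 = background

# Plain gradient-descent logistic regression (a stand-in for the ML model)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def electron_likelihood(features):
    """Likelihood that a candidate with these features is an electron."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(features) @ w + b)))

print(f"P(electron) = {electron_likelihood([2.1, 1.9]):.2f}")
```

A real analysis would use far more than two inputs and a deep network rather than logistic regression, but the payoff is the same: the model learns the correlations from abundant simulated data and returns a calibrated likelihood rather than a hard yes/no.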

Exascale computing and the path forward

In advance of Aurora, the team continues work on the programming languages for the new architectures and the code to run on the Intel hardware that will be used on Aurora, as well as on hardware from other vendors.

“Part of the R&D that we do with our partner, Intel, is to make sure that the hardware is doing what we expect it to do and doing it efficiently,” says Childers. ​“Having a machine like Aurora will give us plenty of compute power and plenty of nodes to effectively reduce the time to solution, especially when we move to the upgraded LHC.”

The solution is an answer to a fundamental question — is there more beyond the Standard Model? — and one that could have unimagined repercussions a hundred years from now, notes Hopkins.

“Fundamental research can give us knowledge that may lead to societal transformation, but if we don’t do the research, it won’t lead to anything,” he says.

The ALCF is a DOE Office of Science User Facility.

Funding for this project was provided by DOE Office of Science: Offices of High Energy Physics and Advanced Scientific Computing Research. ATLAS is an international collaboration that benefits from DOE support.

About ALCF

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Source: JOHN SPIZZIRRI, ALCF