Argonne Team Brings Leadership Computing to CERN’s Large Hadron Collider

By Madeleine O’Keefe

September 27, 2018

CERN’s Large Hadron Collider (LHC), the world’s largest particle accelerator, expects to produce around 50 petabytes of data this year. This is equivalent to nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
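How big is that, really? A quick back-of-the-envelope check recovers the movie comparison, assuming an illustrative figure of roughly 3.5 GB per high-definition film:

```python
# Back-of-the-envelope check of the movie comparison above.
# The ~3.5 GB-per-movie figure is an illustrative assumption.
PETABYTE = 10**15          # bytes
HD_MOVIE = 3.5 * 10**9     # bytes per high-definition movie (assumed)

lhc_data = 50 * PETABYTE
print(f"{lhc_data / HD_MOVIE:,.0f} HD movies")  # roughly 14.3 million
```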

A team of collaborators from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to address this issue with computing resources at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Since 2015, this team has worked with the ALCF on multiple projects to explore ways supercomputers can help meet the growing needs of the LHC’s ATLAS experiment.

This track is an example of simulated data modeled for the ATLAS detector at CERN’s Large Hadron Collider. Image courtesy of ALCF.

The efforts are especially important given what is coming up for the accelerator. In 2026, the LHC will undergo an ambitious upgrade to become the High-Luminosity LHC (HL-LHC). The aim of this upgrade is to increase the LHC’s luminosity—the number of events detected per second—by a factor of 10. “This means that the HL-LHC will be producing about 20 times more data per year than what ATLAS will have on disk at the end of 2018,” says Taylor Childers, a member of the ATLAS collaboration and computer scientist at the ALCF who is leading the effort at the facility. “CERN’s computing resources are not going to grow by that factor.”

Luckily for CERN, the ALCF already operates some of the world’s most powerful supercomputers for science, and the facility is in the midst of planning for an upgrade of its own. In 2021, Aurora—the ALCF’s next-generation system, and the first exascale machine in the country—is scheduled to come online. It will provide the ATLAS experiment with an unprecedented resource for analyzing the data coming out of the LHC—and soon, the HL-LHC.

Why ALCF?

CERN may be best known for smashing particles, which physicists do to study the fundamental laws of nature and gather clues about how particles interact. This involves computationally intensive calculations that benefit from the use of DOE’s powerful computing systems.

A diagram of the ATLAS detector, which stands at 82 feet tall and 144 feet long. Protons enter the detector from each side and collide in the center. Image courtesy of ALCF.

The ATLAS detector is an 82-foot-tall, 144-foot-long cylinder with magnets, detectors, and other instruments layered around the central beampipe like an enormous 7,000-ton Swiss roll. When protons collide in the detector, they send a spray of subatomic particles flying in all directions, and this particle debris generates signals in the detector’s instruments. Scientists can use these signals to discover important information about the collision and the particles that caused it in a computational process called reconstruction. Childers compares this process to arriving at the scene of a car crash that has nearly completely obliterated the vehicles and trying to figure out the makes and models of the cars and how fast they were going. Reconstruction is also performed on simulated data in the ATLAS analysis framework, called Athena.

An ATLAS physics analysis consists of three steps. First, in event generation, researchers use the physics that they know to model the kinds of particle collisions that take place in the LHC. In the next step, simulation, they generate the subsequent measurements the ATLAS detector would make. Finally, reconstruction algorithms are run on both simulated and real data, the output of which can be compared to see differences between theoretical prediction and measurement.
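Conceptually, the chain can be sketched as three functions, each consuming the output of the previous one. The sketch below is illustrative only; the function and data names are placeholders, not the actual Athena interfaces:

```python
# Illustrative sketch of the three-stage ATLAS analysis chain described above.
# Function and data names are hypothetical; the real work happens inside Athena.

def generate_events(physics_model):
    """Event generation: model the particle collisions of interest."""
    return [f"event_{i}({physics_model})" for i in range(1000)]

def simulate_detector(events):
    """Simulation: produce the measurements the ATLAS detector would record."""
    return [f"detector_response({e})" for e in events]

def reconstruct(detector_output):
    """Reconstruction: work backward from detector signals to particles."""
    return [f"reconstructed({d})" for d in detector_output]

# Simulated data passes through all three stages...
simulated = reconstruct(simulate_detector(generate_events("standard_model")))

# ...while real detector data enters directly at the reconstruction stage,
# so the two outputs can be compared on an equal footing.
```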

“If we understand what’s going on, we should be able to simulate events that look very much like the real ones,” says Tom LeCompte, a physicist in Argonne’s High Energy Physics division and former physics coordinator for ATLAS.

“And if we see the data deviate from what we know, then we know we’re either wrong, we have a bug, or we’ve found new physics,” says Childers.

Some of these simulations, however, are too complicated for the Worldwide LHC Computing Grid, which LHC scientists have used to handle data processing and analysis since 2002. The Grid is an international distributed computing infrastructure that links 170 computing centers across 42 countries, allowing data to be accessed and analyzed in near real-time by an international community of more than 10,000 physicists working on various LHC experiments.

The Grid has served the LHC well so far, but as demand for new science increases, so does the required computing power.

That’s where the ALCF comes in.

In 2011, when LeCompte returned to Argonne after serving as ATLAS physics coordinator, he started looking for the next big problem he could help solve. “Our computing needs were growing faster than it looked like we would be able to fulfill them, and we were beginning to notice that there were problems we were trying to solve with existing computing that just weren’t able to be solved,” he says. “It wasn’t just an issue of having enough computing; it was an issue of having enough computing in the same place. And that’s where the ALCF really shines.”

LeCompte worked with Childers and ALCF computer scientist Tom Uram to use Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, to carry out calculations to improve the performance of the ATLAS software. Together they scaled Alpgen, a Monte Carlo-based event generator, to run efficiently on Mira, enabling the generation of millions of particle collision events in parallel. “From start to finish, we ended up processing events more than 20 times as fast, and used all of Mira’s 49,152 processors to run the largest-ever event generation job,” reports Uram.
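The underlying pattern is embarrassingly parallel: each processor generates its own batch of events from an independent random seed, and the results are combined at the end. A minimal sketch of that pattern with mpi4py, offered as an illustration of the idea rather than of Alpgen’s actual interface, might look like this:

```python
# Minimal sketch of parallel Monte Carlo event generation: every MPI rank
# produces its own batch of toy events from an independent seed.
# Illustrative only; this is not how Alpgen itself is driven.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Independent seed per rank keeps the event streams uncorrelated.
rng = random.Random(12345 + rank)

# "Generate" a batch of toy events on this rank.
local_events = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

# Collect per-rank counts on rank 0 to confirm the total yield.
counts = comm.gather(len(local_events), root=0)
if rank == 0:
    print(f"{sum(counts)} events generated across {comm.Get_size()} ranks")
```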

But they weren’t going to stop there. Simulation, which takes up around five times more Grid computing than event generation, was the next challenge to tackle.

Moving forward with Theta

In 2017, Childers and his colleagues were awarded a two-year allocation from the ALCF Data Science Program (ADSP), a pioneering initiative designed to explore and improve computational and data science methods that help researchers gain insights into the very large datasets produced by experimental, simulation, or observational methods. The goal is to deploy Athena on Theta, the ALCF’s 11.69-petaflops Intel-Cray supercomputer, and to develop an end-to-end workflow that couples all the steps together, improving on the current execution model for ATLAS jobs, which relies on a multistep workflow executed across the Grid.

“Each of those steps—event generation, simulation, and reconstruction—has input data and output data, so if you do them in three different locations on the Grid, you have to move the data with it,” explains Childers. “Ideally, you do all three steps back-to-back on the same machine, which reduces the amount of time you have to spend moving data around.”

Enabling portions of this workload on Theta promises to expedite the production of simulation results, discovery, and publications, as well as increase the collaboration’s data analysis reach, thus moving scientists closer to new particle physics.

One challenge the group has encountered so far is that, unlike other computers on the Grid, Theta cannot reach out to the job server at CERN to receive computing tasks. To solve this, the ATLAS software team developed Harvester, a Python edge service that can retrieve jobs from the server and submit them to Theta. In addition, Childers developed Yoda, an MPI-enabled wrapper that launches these jobs on each compute node.
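Stripped to its essentials, the pattern Harvester implements is a pull loop: ask the central server for work, hand that work to the local batch system, repeat. The sketch below illustrates only that pattern; the endpoint, payload field, and submit command are hypothetical placeholders, not the real Harvester or PanDA interfaces:

```python
# Simplified illustration of an edge-service pull loop: fetch jobs from a
# central server, then submit them to the local scheduler. This is NOT the
# real Harvester code; the URL, payload field, and qsub call are placeholders.
import subprocess
import time

import requests

JOB_SERVER = "https://example.cern.ch/jobs/next"   # hypothetical endpoint

def fetch_job():
    """Ask the central job server for the next unit of work."""
    response = requests.get(JOB_SERVER, timeout=30)
    return response.json() if response.ok else None

def submit_locally(job):
    """Hand the job to the local batch system (placeholder submit command)."""
    subprocess.run(["qsub", job["batch_script"]], check=True)

if __name__ == "__main__":
    while True:
        job = fetch_job()
        if job is not None:
            submit_locally(job)
        time.sleep(60)   # poll at a modest rate rather than hammering the server
```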

Harvester and Yoda are now being integrated into the ATLAS production system. The team has just started testing this new workflow on Theta, where it has already simulated over 12 million collision events. Simulation is the only step that is “production-ready,” meaning it can accept jobs from the CERN job server.

The team also has an end-to-end workflow, which includes event generation and reconstruction, running on ALCF resources. For now, the local ATLAS group is using it to run simulations investigating whether machine learning techniques can improve the way they identify particles in the detector. If it works, machine learning could provide a more efficient, less resource-intensive method for handling this vital part of the LHC scientific process.
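For a flavor of what such a study involves, the sketch below trains a simple classifier on synthetic “detector” features with scikit-learn. The features, labels, and model choice are invented for illustration; real ATLAS particle-identification studies use detailed detector variables and far more sophisticated techniques:

```python
# Toy particle-identification sketch: separate two synthetic particle classes
# using a gradient-boosted classifier. All data here is fabricated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Fake shower-shape features for two classes (say, electron-like vs. pion-like).
electrons = rng.normal(loc=[0.9, 0.1], scale=0.1, size=(n, 2))
pions = rng.normal(loc=[0.5, 0.4], scale=0.2, size=(n, 2))

X = np.vstack([electrons, pions])
y = np.array([1] * n + [0] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Toy identification accuracy: {clf.score(X_test, y_test):.2f}")
```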

“Our traditional methods have taken years to develop and have been highly optimized for ATLAS, so it will be hard to compete with them,” says Childers. “But as new tools and technologies continue to emerge, it’s important that we explore novel approaches to see if they can help us advance science.”

Upgrade computing, upgrade science

As CERN’s quest for new science intensifies, as it will with the HL-LHC upgrade in 2026, so do the computational demands of handling the influx of data.

“With the scientific questions that we have right now, you need that much more data,” says LeCompte. “Take the Higgs boson, for example. To really understand its properties and whether it’s the only one of its kind out there takes not just a little bit more data but takes a lot more data.”

This makes the ALCF’s resources—especially its next-generation exascale system, Aurora—more important than ever for advancing science.

Aurora, scheduled to come online in 2021, will be capable of one billion billion calculations per second—that’s 100 times more computing power than Mira. It is just starting to be integrated into the ATLAS efforts through a new project, led by Jimmy Proudfoot, an Argonne Distinguished Fellow in the High Energy Physics division, selected for the Aurora Early Science Program (ESP). Proudfoot says that the effective utilization of Aurora will be key to ensuring that ATLAS continues delivering discoveries on a reasonable timescale. And since greater computing resources expand the range of analyses that can be done, systems like Aurora may even enable new analyses not yet envisioned.

The ESP project, which builds on the progress made by Childers and his team, has three components that will help prepare Aurora for effective use in the search for new physics: enable ATLAS workflows for efficient end-to-end production on Aurora, optimize ATLAS software for parallel environments, and update algorithms for exascale machines.

“The algorithms apply complex statistical techniques which are increasingly CPU-intensive and which become more tractable—and perhaps only possible—with the computing resources provided by exascale machines,” explains Proudfoot.

In the years leading up to Aurora’s run, Proudfoot and his team, which includes collaborators from the ALCF and Lawrence Berkeley National Laboratory, aim to develop the workflow to run event generation, simulation, and reconstruction. Once Aurora becomes available in 2021, the group will bring their end-to-end workflow online.

The stated goals of the ATLAS experiment—from searching for new particles to studying the Higgs boson—only scratch the surface of what this collaboration can do. Along the way to groundbreaking science advancements, the collaboration has developed technology for use in fields beyond particle physics, like medical imaging and clinical anesthesia.

These contributions and the LHC’s quickly growing needs reinforce the importance of the work that LeCompte, Childers, Proudfoot, and their colleagues are doing with ALCF computing resources.

“I believe DOE’s leadership computing facilities are going to play a major role in the processing and simulation of the future rounds of data that will come from the ATLAS experiment,” says LeCompte.

This research is supported by the DOE Office of Science. ALCF computing time and resources were allocated through the ASCR Leadership Computing Challenge, the ALCF Data Science Program, and the Early Science Program for Aurora.

The author Madeleine O’Keefe is an intern with Argonne National Laboratory. The article is republished here with permission.
