Physics Data Processing at NERSC Dramatically Cuts Reconstruction Time

February 14, 2018

In a recent demonstration project, physicists from Brookhaven National Laboratory (BNL) and Lawrence Berkeley National Laboratory (Berkeley Lab) used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC) to reconstruct data collected from a nuclear physics experiment, an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries.

The researchers reconstructed multiple datasets collected by the STAR (Solenoidal Tracker At RHIC) detector during particle collisions at the Relativistic Heavy Ion Collider (RHIC), a nuclear physics research facility at BNL. By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed raw data into “physics-ready” data at the petabyte scale in a fraction of the time it would have taken using in-house high-throughput computing resources—even with a two-way transcontinental journey via ESnet, the Department of Energy’s high-speed, high-performance data-sharing network that is managed by Berkeley Lab.

Preparing raw data for analysis typically takes many months, making such rapid turnaround nearly impossible, according to Jérôme Lauret, a senior scientist at BNL and co-author of a paper outlining this work that was published in the Journal of Physics.

“This is a key usage model of high performance computing (HPC) for experimental data, demonstrating that researchers can get their raw data processing or simulation campaigns done in a few days or weeks at a critical time instead of spreading out over months on their own dedicated resources,” said Jeff Porter, a member of the data and analytics services team at NERSC and co-author on the Journal of Physics paper.

Billions of Data Points

The STAR experiment is a leader in the study of strongly interacting QCD matter that is generated in energetic heavy ion collisions. STAR consists of a large, complex set of detector systems that measure the thousands of particles produced in each collision event. Detailed analyses of billions of such collisions have enabled STAR scientists to make fundamental discoveries and measure the properties of the quark-gluon plasma. Since RHIC started running in the year 2000, this raw data processing, or reconstruction, has been carried out on dedicated computing resources at the RHIC and ATLAS Computing Facility (RACF) at BNL. High-throughput computing clusters crunch the data event by event and write out the coded details of each collision to a centralized mass storage space accessible to STAR physicists around the world.

In recent years, however, STAR datasets have reached billions of events, with data volumes at the multi-petabyte scale. The raw data signals collected by the detector electronics are processed using sophisticated pattern recognition algorithms to generate the higher-level datasets used for physics analysis. To meet the demand for timely access to physics-ready data, the STAR computing team investigated the use of external resources, ultimately turning to NERSC. Among other things, NERSC operates the PDSF cluster for the HEP/NP experiment community, the second largest compute cluster available to the STAR collaboration.

A Processing Framework

Unlike the high-throughput computers at the RACF and PDSF, which analyze events one by one, HPC resources like those at NERSC break large problems into smaller tasks that can run in parallel. So the challenge was to parallelize the processing of STAR event data in a way that could scale out to large data volumes with reproducible results.
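As a rough illustration of that chunked decomposition, the Python sketch below groups an inventory of raw-data files into fixed-size work units that independent nodes could process in parallel. The file names, chunk size, and helper function are hypothetical, not STAR's actual code:

```python
# Hypothetical sketch: partition an event dataset into deterministically
# ordered, node-sized chunks so each HPC node can process one chunk
# independently and the overall result remains reproducible.

def make_chunks(event_files, events_per_chunk=10_000):
    """Group (file, first_event, last_event) work units into chunks."""
    chunks, current, count = [], [], 0
    for fname, n_events in sorted(event_files.items()):  # sort for reproducibility
        current.append((fname, 0, n_events))
        count += n_events
        if count >= events_per_chunk:
            chunks.append(current)
            current, count = [], 0
    if current:
        chunks.append(current)
    return chunks

if __name__ == "__main__":
    # toy inventory: raw data file -> number of events it contains
    inventory = {"run18_raw_001.daq": 6000,
                 "run18_raw_002.daq": 7000,
                 "run18_raw_003.daq": 5000}
    for i, chunk in enumerate(make_chunks(inventory)):
        print(f"chunk {i}: {chunk}")
```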

The processing framework run at NERSC was built upon several core features. Shifter, a Linux container system developed at NERSC, provided a simple solution to the difficult problem of porting complex software to new computing systems while preserving its expected behavior. Scalability was achieved by eliminating bottlenecks in accessing both the event data and the experiment databases that record environmental changes—voltage, temperature, pressure and other detector conditions—during data taking. To do this, the workload was broken up into data chunks, each sized to run on a single node on which a snapshot of the STAR database could also be stored. Each node was then self-sufficient, allowing the work to expand automatically to as many nodes as were available without any direct intervention.
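A minimal sketch of the self-sufficient-node idea follows, assuming a hypothetical per-node worker: stage a snapshot of the conditions database on node-local storage, then reconstruct the node's assigned chunk without calling any shared central service. The paths, the SQLite snapshot format, and the reconstruct() stub are illustrative assumptions; the real framework ran STAR's reconstruction software inside Shifter containers:

```python
# Hypothetical per-node worker mirroring the "self-sufficient node" design.
# Paths, the snapshot format, and reconstruct() are illustrative only.

import shutil
import sqlite3
from pathlib import Path

SNAPSHOT = Path("/global/project/star/db_snapshot.sqlite")  # shared read-only copy
LOCAL_DB = Path("/tmp/star_db.sqlite")                      # node-local scratch

def stage_database():
    """Copy the conditions-DB snapshot onto node-local storage once per job."""
    if not LOCAL_DB.exists():
        if SNAPSHOT.exists():
            shutil.copy(SNAPSHOT, LOCAL_DB)
        else:
            sqlite3.connect(LOCAL_DB).close()  # demo fallback: empty local DB
    return sqlite3.connect(LOCAL_DB)

def reconstruct(chunk, db):
    """Stand-in for the real event reconstruction over one data chunk."""
    for raw_file, first, last in chunk:
        # look up detector conditions (voltage, temperature, ...) in the
        # local snapshot, then run pattern recognition on the event range
        print(f"processing {raw_file} events {first}-{last}")

if __name__ == "__main__":
    db = stage_database()
    reconstruct([("run18_raw_001.daq", 0, 6000)], db)
    db.close()
```

Because every node carries its own database copy, adding nodes adds no load on any shared service, which is what lets the workload fan out without direct intervention.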

“Several technologies developed in-house at NERSC allowed us to build a highly fault-tolerant, multi-step, data-processing pipeline that could scale to a practically unlimited number of nodes, with the potential to dramatically reduce the time it takes to process data for many experiments,” noted Mustafa Mustafa, a Berkeley Lab physicist who helped design the system.

Another challenge in migrating the task of raw data reconstruction to an HPC environment was getting the data from BNL in New York to NERSC in California and back. Both the input and output datasets are huge. The team started small with a proof-of-principle experiment—just a few hundred jobs—to see how their new workflow programs would perform. Colleagues at RACF, NERSC and ESnet—including Damian Hazen of NERSC and Eli Dart of ESnet—helped identify hardware issues and optimize the data transfer and the end-to-end workflow.
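One common way to guard such bulk transfers, shown here only as a hedged illustration (the manifest format and paths are assumptions, not the team's actual tooling), is to record checksums at the sending site and verify them on arrival, so a corrupted or incomplete transfer is caught before reconstruction or analysis begins:

```python
# Hypothetical integrity check for a BNL <-> NERSC round trip: compare
# files received at one site against a checksum manifest recorded at the
# sending site. Manifest format and paths are illustrative assumptions.

import hashlib
from pathlib import Path

def sha256sum(path, blocksize=1 << 20):
    """Stream a file through SHA-256 without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            h.update(block)
    return h.hexdigest()

def verify(manifest):
    """manifest: {local_path: expected_sha256} from the sending site.
    Returns the list of missing or corrupted files (empty means all good)."""
    return [p for p, digest in manifest.items()
            if not Path(p).exists() or sha256sum(p) != digest]
```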

After fine-tuning their methods based on the initial tests, the team started scaling up, first to 6,400 computing cores on Cori and, in their most recent test, to 25,600 cores. The end-to-end efficiency of the entire process—the fraction of time the program was running (rather than sitting idle, waiting for computing resources), multiplied by the efficiency of using the allotted supercomputing slots and of getting useful output all the way back to BNL—was 98 percent.
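To make the arithmetic concrete: end-to-end efficiency is a product of stage efficiencies, so every stage must stay very close to one for the overall figure to remain high. Only the roughly 98 percent total comes from the article; the component values below are invented for illustration:

```python
# Illustrative arithmetic only: the article reports ~98% end-to-end
# efficiency; these component values are made up to show how a product
# of stage efficiencies yields the overall figure.

running_fraction = 0.995   # fraction of wall time jobs were actually running
slot_utilization = 0.990   # efficiency of using the allotted Cori slots
output_returned  = 0.995   # fraction of useful output landed back at BNL

end_to_end = running_fraction * slot_utilization * output_returned
print(f"end-to-end efficiency: {end_to_end:.1%}")  # ~98.0%
```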

“This was a very successful large-scale data processing run on NERSC HPC,” said Jan Balewski, a member of the data science engagement group at NERSC who worked on this project. “One that we can look to as a reference as we actively test alternative approaches to support scaling up the computing campaigns at NERSC by multiple physics experiments.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. Learn more about computing sciences at Berkeley Lab.


Source: NERSC
