Argonne Training Program Prepares Researchers for Scientific Computing in the Exascale Era

October 17, 2019

Petro Junior Milan tears his eyes from his laptop and flexes his fingers, giving them a few seconds’ reprieve from his nearly 11 days of nonstop typing at the 2019 Argonne Training Program on Extreme-Scale Computing (ATPESC), an annual event organized by the U.S. Department of Energy’s (DOE) Argonne National Laboratory and funded by DOE’s Exascale Computing Project (ECP).

Around him, fellow ATPESC participants are also rapidly typing, attempting to capture everything Sameer Shende, director of the Performance Research Laboratory at the University of Oregon and the president and director of ParaTools, Inc., is sharing about performance analysis tools for scientific applications on large-scale supercomputers.

One month earlier, Milan was in his office at Georgia Tech struggling with an intractable problem: improving the parallelized, multi-physics code for his simulations of turbulent reacting flows in liquid rocket engines. Now, Shende’s lectures — on tracing tools to analyze the behavior and time complexity of parallel programs — are providing some insight that might help Milan solve his problem. After the lecture, the two discussed options for improving Milan’s simulations.
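The specifics of their discussion aren’t recorded, but the kind of performance analysis covered in the tools lectures can start as simply as timing each rank’s share of the work to expose load imbalance. The following minimal sketch in C with MPI (a hypothetical kernel for illustration, not Milan’s actual code) shows the idea; tracing tools such as TAU, which Shende leads, can then gather far richer data without source changes.

    /* Illustrative sketch only: per-rank timing of a parallel region,
     * the simplest form of performance analysis. Tracing tools such as
     * TAU collect much richer profiles without source changes, e.g.:
     *   mpirun -np 4 tau_exec ./a.out                                  */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Stand-in for a physics kernel: each rank sums part of a series. */
        const long n = 50000000L;
        double local = 0.0;
        double t0 = MPI_Wtime();
        for (long i = rank; i < n; i += size)
            local += 1.0 / (double)(i + 1);
        double t1 = MPI_Wtime();

        /* Uneven per-rank times reveal load imbalance, a key target
         * of the trace analysis described in the lectures.            */
        printf("rank %d of %d: compute took %.3f s\n", rank, size, t1 - t0);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %.6f\n", total);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched under mpirun, the per-rank timings indicate where a full tracing tool should look more closely.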

Like Milan, many computer scientists and graduate students require more in-depth training and hands-on experience with high-performance computing (HPC) tools needed to advance science in the emerging exascale era. ATPESC, now in its seventh year, plays an important role in growing the community of researchers who can use supercomputers to tackle complex problems in science and engineering. The annual training event, which was held at the Q Center in St. Charles, Illinois, this summer, has now hosted nearly 500 participants since its inception.

With support from the ECP, ATPESC is structured to dovetail with the nation’s efforts to develop a capable computing ecosystem for future exascale supercomputers, including Aurora at the Argonne Leadership Computing Facility (ALCF) and Frontier at the Oak Ridge Leadership Computing Facility (OLCF), both DOE Office of Science User Facilities.

Lasting two weeks, the training program provides participants with invaluable HPC skills and tools that they can take back to their home institutions and apply to their research projects. While the days are long — beginning at 8:30 a.m. and often extending to 9:30 p.m. — they are packed with expert lectures, hands-on HPC coding sessions and nightly dinner talks.

After attending ATPESC for a week, Kristofer Zieb, a postdoctoral researcher at Lawrence Livermore National Laboratory (LLNL), said, “I feel like I went through grad school all over again.” The tightly condensed, lecture-filled days may be rigorous, but the results of ATPESC are evident. “When I get back to the lab, I will definitely be a more competent and contributing member of the HPC community,” Zieb said.

“The transformation that the ATPESC participants experience over the two weeks of the training program is remarkable,” said ATPESC program director Marta García, a computational scientist at Argonne. “This is an intensive, once-in-a-lifetime experience that impacts their careers and helps them better prepare for complex hardware and software ecosystems.”

This year, Argonne welcomed 73 participants, comprising graduate students, postdoctoral researchers, professors and early-career scientists. ATPESC’s 66 lecturers included renowned scientists, HPC experts and other field leaders. Running from July 28 to Aug. 9, the program curriculum covered the following tracks:

  • Hardware Architectures
  • Programming Models and Languages
  • Data-intensive Computing and Input/Output (I/O)
  • Visualization and Data Analysis
  • Numerical Algorithms and Software for Extreme-Scale Science
  • Performance Tools and Debuggers
  • Software Productivity
  • Machine Learning and Deep Learning for Science (added in 2019)

Each track session featured detailed lectures that culminated in a hands-on HPC coding exercise during which participants were encouraged to use their own codes.

The participants also toured the Argonne campus, exploring the Laboratory’s highly advanced technology and research facilities, including the Advanced Photon Source (APS), ALCF, Argonne Tandem Linear Accelerator System (ATLAS) and Nuclear Energy Exhibition Hall. Like the ALCF, the APS and ATLAS are DOE Office of Science User Facilities.

In addition to the tour, the participants utilized hundreds of thousands of cores of computing power from the ALCF’s Mira and Theta systems, as well as the OLCF’s Summit system and the National Energy Research Scientific Computing Center’s (NERSC) Cori system (also a DOE Office of Science User Facility).

“ATPESC is an intensive, hands-on, extraordinary training program, providing a unique perspective on extreme-scale computing,” said Rosangela Follmann, a visiting professor in the School of Information Technology at Illinois State University. In the fall, she will teach a parallel computing class in which she will apply what she learned at ATPESC.

“Most people are not exposed to the breadth of HPC tools and topics in their degree programs,” added Cyrus Harrison, an LLNL scientist who lectured on visualization and data analysis. According to Harrison, ATPESC succeeds because it brings that vast body of knowledge together in one place for the HPC community.

Daniel Barry, a Ph.D. student in Data Science and Engineering at the University of Tennessee, Knoxville, agreed: “ATPESC is an absolutely fantastic opportunity for anyone who wants to refine their skills or learn certain areas of HPC more thoroughly.”

Before attending ATPESC, Barry had tried to learn more about software tools for supercomputing via online documentation, but that approach proved far less productive than the ATPESC experience. “A lot of explanations I’ve seen online are missing the crucial details that make a difference in understanding the nuanced scenarios that occur in the codes for high-performance computational workloads. ATPESC has been designed in a way that makes it easy to understand these scenarios and to program effectively for them.”

Even the lecturers gained from their student interactions. “It’s a lot of fun for the whole track team to interact with the attendees,” said Argonne senior computational scientist Lois Curfman McInnes, who coordinates the track on numerical algorithms and software for extreme-scale science. “I enjoyed learning about the experiences and interests of the attendees and how their new directions can impact our research.”

Although the event has limited space, ATPESC’s broad curriculum is available to the public. Each year since its inception, the program has posted lecture slides and videos online. Videos of the 2019 lectures will be available soon. To learn more about the program, visit the ATPESC website.

ATPESC program director García concluded, “What I admire most in the participants every year is their passion, hard work, open-mindedness, creative thinking and dedication to improving their codes and their disciplines ― and to taking what they learn and improving our society. On behalf of the 100 volunteers involved in the preparation for ATPESC, we want to say: Thank you for believing in this program and in its benefit to the scientific community worldwide.”

About The Exascale Computing Project 

The Exascale Computing Project is a collaborative effort of two DOE organizations — the Office of Science and the National Nuclear Security Administration. ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development, to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

Established by Congress in 2000, the National Nuclear Security Administration (NNSA) is a semi-autonomous agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the U.S. and abroad. Visit nnsa.energy.gov for more information.

About Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

About The U.S. Department of Energy’s Office of Science 

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.


Source: Victoria Martin, Argonne National Laboratory
