Preparing for Exascale Science on Day 1

By Linda Barney

October 14, 2020

Science simulation, visualization, data, and learning applications will greatly benefit from the massive computational resources available with future exascale systems. Researchers in the Argonne Leadership Computing Facility’s (ALCF) Aurora Early Science Program (ESP) are blazing the trail toward reaping those benefits from the U.S. Department of Energy’s (DOE) Argonne National Laboratory’s upcoming Aurora exascale supercomputer.

Work by ESP researchers will help ensure that critical scientific applications are ready for the scale and architecture of the Aurora machine at the time of deployment. There are currently around 250 researchers involved in pre-Aurora ESP research. According to Timothy Williams, Deputy Division Director of Argonne’s Computational Science (CPS) Division and ALCF Co-Manager for the ESP, “As one of the first exascale systems for science in the world, Aurora should deliver significant scientific results, via the Early Science Program.” The ESP is already producing exciting research and providing insights for system architecture and infrastructure changes slated for the future Aurora supercomputer.

ESP projects represent research so sophisticated that it has outgrown the capability of today’s leadership-class supercomputers—the selected ESP research projects require exascale computational capabilities. Research Principal Investigators (PIs) submit proposals to ALCF describing their research into a specific scientific problem and why it needs to run on an exascale system.

The ESP awards pre-production computing time to research teams working to prepare key applications and software for the Aurora supercomputer. ESP researchers are granted access to hardware and software running on a pre-Aurora configured supercomputer. Argonne’s Theta supercomputer has been extensively used by the ALCF staff and ESP researchers who are preparing for Aurora.

ESP research projects span chemistry; physics (high energy physics, fusion energy, cosmology); biosciences (cancer treatment informatics, modeling metastasis, brain connectomics, molecular dynamics of cell membrane transport proteins); engineering (aerodynamics, nuclear reactor coolant, combustion in coal boilers); and materials science (functional materials, semiconductors).

William Tang, professor of astrophysical sciences at Princeton University and principal research physicist with the DOE’s Princeton Plasma Physics Laboratory (PPPL), is leading an ESP project that is one of the more successful efforts in artificial intelligence (AI) for science using pre-exascale systems. His work focuses on using deep learning and exascale computing power to improve the behavior of fusion reactors aimed at producing sustainable clean energy. Tang’s AI research studies disruptions in confinement devices called tokamaks, which use a powerful magnetic field to confine hot plasma to produce controlled thermonuclear fusion power.

Engineers working with the potential energy source have estimated a window of only 30 milliseconds to control instabilities that can disrupt the energy production process and damage the plasma confinement device. As part of the ESP research, Tang and colleagues use Princeton’s Fusion Recurrent Neural Network (FRNN) code containing convolutional and recurrent neural network components to integrate both spatial and temporal information for predicting disruptions in tokamak plasmas. The hope is to increase warning times and work toward heading off disruptions before they happen—keeping the fusion reactions going and producing sustainable clean energy.

Princeton’s Fusion Recurrent Neural Network (FRNN) code uses convolutional and recurrent neural network components to integrate both spatial and temporal information for predicting disruptions in tokamak plasmas with unprecedented accuracy and speed on top supercomputers. (Image courtesy of Eliot Feibush, Princeton Plasma Physics Laboratory)
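FRNN itself is not reproduced here, but the convolution-then-recurrence pattern the article describes can be illustrated with a minimal NumPy sketch. All dimensions, weights, and signal data below are invented for illustration and do not correspond to the actual FRNN code or to real tokamak diagnostics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 100 time steps, each with a 32-point spatial profile
# (standing in for a plasma diagnostic measured along the minor radius).
T, S = 100, 32
signal = rng.standard_normal((T, S))

# --- Convolutional stage: extract spatial features at each time step ---
K = 5                                   # kernel width (arbitrary)
kernel = rng.standard_normal(K) / K

def conv1d(x, k):
    """'Valid' 1-D convolution of one spatial profile with one kernel."""
    n = len(x) - len(k) + 1
    return np.array([x[i:i + len(k)] @ k for i in range(n)])

features = np.array([np.maximum(conv1d(row, kernel), 0.0)  # ReLU
                     for row in signal])                   # shape (T, S-K+1)

# --- Recurrent stage: integrate the spatial features over time ---
H = 16                                  # hidden-state size (arbitrary)
Wx = rng.standard_normal((features.shape[1], H)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1
Wo = rng.standard_normal(H) * 0.1

h = np.zeros(H)
probs = []
for f in features:                      # simple tanh RNN cell
    h = np.tanh(f @ Wx + h @ Wh)
    probs.append(1.0 / (1.0 + np.exp(-(h @ Wo))))  # "disruption" probability

print(f"{len(probs)} per-timestep probabilities, all in (0, 1)")
```

The point of the two-stage design is that the convolution summarizes each instantaneous spatial profile, while the recurrent cell carries information forward in time, which is what allows a rising disruption probability to be flagged ahead of the event.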

Another of the ALCF’s notable ESP projects is led by Katrin Heitmann, Deputy Division Director in the High Energy Physics Division at Argonne. Heitmann and team perform research using computational cosmology to understand the large-scale behavior of the universe. The research seeks to understand fundamental aspects of the cosmos, such as dark matter and dark energy, and to help explain why the universe’s rate of expansion is accelerating.

The cosmology simulations are carried out using the Hardware/Hybrid Accelerated Cosmology Code (HACC) developed at Argonne, based on an early effort at Los Alamos. HACC is the only cosmology code suite designed for extreme-scale simulations regardless of a supercomputing system’s architecture. The team also uses advanced data science techniques in conjunction with observational data. These techniques have been developed in collaboration with statisticians over a period of many years. More recently, AI methods have been trained using a large set of images generated from cosmological simulations run with HACC.

Moving toward exascale requires not only porting applications to a new computer architecture, but also:

  • Code and workflow development
  • Preliminary studies
  • Scaling and optimization

The ESP provides resources and support across these requirements to help research teams prepare their applications for the architecture of the new supercomputer.
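Scaling studies of the kind listed above are often compared against an idealized baseline. As one minimal sketch (assuming Amdahl's law as that baseline, which the article does not specify), the cap that even a small serial fraction places on speedup can be computed directly:

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Idealized speedup on n_procs processors when a fixed
    fraction of the work (serial_fraction) cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even 1% serial work bounds the speedup below 100x, no matter
# how many processors an exascale machine provides:
for n in (64, 1024, 65536):
    print(f"{n:6d} procs -> speedup {amdahl_speedup(0.01, n):.1f}x")
```

This is why preliminary studies and optimization work matter before an exascale campaign: the serial and poorly scaling portions of a code must be found and reduced before extra hardware can pay off.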

The ALCF computational scientists work with ESP researchers to help with troubleshooting, coding, optimizations for parallelization and GPU acceleration, and getting the ESP research applications to run in the pre-Aurora environment. Members of the ALCF team also provide support for projects with big data, deep learning (DL), or machine learning (ML) requirements. “Each of the computational scientists working with researchers speaks the language of the relevant domain sciences as well as high-performance computing. In most projects, preliminary studies must be done in advance to verify that the planned exascale research campaigns will succeed,” states Williams.

The ALCF provides a variety of Aurora-related training opportunities including hackathons, workshops, dungeon sessions, and webinars. Some focus on developing, porting, and optimizing code with the Aurora SDK and early Intel GPU hardware housed at Argonne’s Joint Laboratory for System Evaluation (JLSE).

Williams indicates, “The ALCF Data Science team (headed by Venkat Vishwanath, ALCF Co-Manager for the ESP program) is establishing a data science supercomputing software environment on Theta, which is the closest environment to what we plan to have on Aurora—it includes the Balsam workflow manager, support for optimized Python functionalities, ML/DL frameworks, parts of the Big Data stack—all optimized for HPC and scientific applications.”

The Exascale Computing Project (ECP) is developing an exascale software stack, including software needed by application developers writing parallel applications targeting diverse exascale architectures. ALCF partners with and participates in the ECP to deploy this stack for Aurora. Software is also being developed for large-scale and in situ visualization and analytics projects.

The future Aurora supercomputer will also include the Intel Distributed Asynchronous Object Storage (DAOS) I/O technology, which alleviates bottlenecks involved with data-intensive workloads. DAOS, supported on Intel Optane persistent memory, enables a software-defined object store built for large-scale, distributed Non-Volatile Memory (NVM). The combination of Intel Optane persistent memory and DAOS recently set a new world record, soaring to the top of the Virtual Institute for I/O IO-500 list. DAOS will be the primary data storage platform for ESP and production science projects on Aurora, a major advance beyond conventional parallel file systems.
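The DAOS API is not reproduced here, but the object-store model it embodies (data addressed by container and key rather than by POSIX file paths) can be illustrated with a toy Python class. The class, method names, and data below are invented for illustration and do not correspond to the actual DAOS interface:

```python
class ToyObjectStore:
    """NOT the DAOS API: a minimal in-memory stand-in for the
    object-store idea, where data lives under (container, key)
    pairs instead of hierarchical file paths."""

    def __init__(self):
        self._containers = {}

    def put(self, container: str, key: str, value: bytes) -> None:
        """Store an object under a container and key."""
        self._containers.setdefault(container, {})[key] = value

    def get(self, container: str, key: str) -> bytes:
        """Retrieve an object by container and key."""
        return self._containers[container][key]


# Hypothetical usage: a simulation writes a snapshot as an object.
store = ToyObjectStore()
store.put("sim-output", "snapshot-0001", b"\x00" * 16)
print(len(store.get("sim-output", "snapshot-0001")), "bytes retrieved")
```

Addressing objects directly, rather than funneling every access through a shared file-system namespace, is one reason this model can sidestep the metadata bottlenecks of conventional parallel file systems for data-intensive workloads.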

Argonne is a key participant in the development of oneAPI, a unified and scalable programming model to harness the power of diverse computing architectures in the era of HPC/AI convergence. The oneAPI initiative – supported by over 30 major companies and research organizations and growing – will define programming for an increasingly AI-infused, multi-architecture world. The oneAPI unified programming model is designed to simplify development across diverse CPU, GPU, FPGA, and AI architectures.

“Through Argonne’s deep investment in science projects using data-intensive and machine-learning methods, Aurora will advance the state of the art for complex scientific workflows at large scale—especially those including experimental/observational data. Aurora will play a big role here,” states Williams.

Author: Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training, and web design firm in Beaverton, OR.
