Exascale Frontier Supercomputer Hosts Trio of New Cosmological Codes

By Oak Ridge National Laboratory

April 27, 2023

Oak Ridge National Laboratory’s exascale Frontier supercomputer – the first public exascale system in the world – debuted almost a year ago. Now, more and more high-profile use cases on Frontier are starting to emerge. Below, we’re including a blog post from the team at ORNL that highlights new cosmological codes that have been run on the groundbreaking system. You can find the original post on the ORNL site here.


A trio of new and improved cosmological simulation codes was unveiled in a series of presentations at the annual April Meeting of the American Physical Society in Minneapolis. Chaired by the Oak Ridge Leadership Computing Facility’s director of science, Bronson Messer, the session covering these next-generation codes heralds a new era of exascale computational astrophysics that promises to advance our understanding of the universe with models of unprecedented scale and resolution.

Powered by the incoming generation of exascale — a billion-billion floating point operations per second — supercomputers, the updated versions of Cholla, HACC and Parthenon are the culmination of years of work by developers to prepare their codes for exascale’s thousandfold increase from petascale computing speed. With their successful early runs on the OLCF’s Frontier supercomputer, located at the Department of Energy’s Oak Ridge National Laboratory, the codes are ready to explore virtual domains of the cosmos that were previously beyond science’s reach.

“These newly improved astrophysical codes provide some of the clearest demonstrations of the most empowering features of exascale computing for science,” said Messer, a computational astrophysicist, distinguished scientist at ORNL and member of the team that won a 2022 R&D 100 Award for the Flash-X software. “All these teams are simulating an array of physical processes happening on scales ranging over many orders of magnitude — from the size of stars to the size of the universe — while incorporating feedback between one set of physics to others and vice versa. They represent some of the most challenging problems that will be attacked on Frontier, and I expect the results to be remarkably impactful.”

HACC/CRK-HACC

This cosmological hydrodynamic simulation produced by CRK-HACC is a snapshot of the universe, zoomed in on a large cluster. The targeted simulation on Frontier will be 140 times larger, with hundreds of thousands of clusters. Credit: Michael Buehlmann/HACC Argonne.

HACC, for Hardware/Hybrid Accelerated Cosmology Code, is a veteran simulator of the cosmos that focuses on large-scale structure formation in the dark sector, which includes dark energy, dark matter, neutrinos and the origins of primordial fluctuations.

HACC’s origins date back to the Roadrunner supercomputer at Los Alamos National Laboratory, which was the first machine to break the petaflop barrier — a million billion floating-point operations per second — in 2008. Currently being developed by researchers at Argonne National Laboratory with support from DOE’s Exascale Computing Project, or ECP, HACC has been optimized for Frontier’s AMD Instinct™ GPU accelerators, and optimizations for Argonne’s Aurora supercomputer and its Intel GPUs are in the works.

With development support from the ECP’s ExaSky project, HACC leverages exascale’s increased computing abilities by packing in more physics models than the original code’s gravity solver. As survey data of the universe become more detailed and complex, simulation tools must become more sophisticated to keep pace. Astrophysicists use observations to validate the virtual mock-ups of the universe, constraining parameters used in the simulations; if their measurements don’t match the simulation’s, then there’s a disparity to resolve.
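The observation-versus-simulation loop described above can be sketched in a few lines. The model below is a deliberately toy stand-in (the `mock_clustering` function and the sigma8 parameter scan are illustrative assumptions, not HACC's actual pipeline): generate mocks for a grid of candidate parameter values, compare each against the "observed" data, and keep the best fit.

```python
# Toy sketch of constraining a simulation parameter against observations.
# Everything here is illustrative; HACC's real pipeline is far more complex.

def mock_clustering(sigma8: float, scales) -> list:
    """Hypothetical stand-in for a simulation: amplitude scales as sigma8^2."""
    return [sigma8 ** 2 / s for s in scales]

def chi_square(model, data, error) -> float:
    """Goodness-of-fit between a mock and the observed data points."""
    return sum(((m - d) / error) ** 2 for m, d in zip(model, data))

scales = [1.0, 2.0, 4.0, 8.0]
observed = mock_clustering(0.81, scales)           # pretend survey measurement
candidates = [0.70 + 0.01 * i for i in range(21)]  # sigma8 grid: 0.70 .. 0.90

best = min(
    candidates,
    key=lambda s8: chi_square(mock_clustering(s8, scales), observed, error=0.01),
)
print(f"best-fit sigma8 = {best:.2f}")  # recovers the input value, 0.81
```

When a mismatch survives this kind of scan across the whole parameter space, that is the "disparity to resolve": either the physics in the simulation is incomplete or the measurement needs a second look.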

One of HACC’s biggest goals is to provide survey-scale mock catalogs for current large-scale structure surveys such as the Rubin Observatory’s LSST, SPHEREx and CMB-S4.

“Being able to mock those surveys requires a tremendous amount of volume to simulate and a lot of physics to compute. And none of these things are achievable with the previous generation of supercomputers,” said Nicholas Frontiere, a computational scientist at Argonne and co-team leader for CRK-HACC development, which adds hydrodynamics modeling. “It’s only at the exascale regime that you can really start simulating the volumes that are required for these types of surveys.”

HACC’s future sounds pretty straightforward: more is better.

“The next horizon for us is including more and more detailed astrophysics in our simulations so that even with the same volumes and simulations, you can get better resolution,” Frontiere said. “So, most of our research is really adding more physics, which is something we would never have been able to consider without running at the scales we are now.”

Cholla

This image depicts a visualization of an outflow of galactic wind at a single point in time using Cholla. Credit: Evan Schneider/University of Pittsburgh.

Initially developed in 2014 by an astrophysics doctoral student at the University of Arizona, the GPU-accelerated fluid dynamics solver Cholla, for Computational Hydrodynamics On ParaLLel Architectures, was intended to help users better understand how the universe’s gases evolve over time. That student, Evan Schneider, is now an assistant professor in the University of Pittsburgh’s Department of Physics and Astronomy, and Cholla has become an astrophysics powerhouse.

Schneider intends to use Cholla to simulate an entire galaxy the size of the Milky Way at the scale of a single star cluster; modeling a massive galaxy at this resolution would be a first for computational astrophysics. Doing so will require more than just optimizing the code to run on Frontier, an effort that was supported by the Frontier Center for Accelerated Application Readiness, or CAAR, program.

Cholla has also attracted helping hands on its way to exascale — in particular, those of Bruno Villasenor, who was studying dark matter as a doctoral student at the University of California, Santa Cruz. He and his Ph.D. adviser, Brant Robertson, decided to use Cholla for their simulations of the Lyman-Alpha Forest, which is a series of absorption features formed as the light from distant quasars encounters material along its journey to Earth. But to do so required several more physics models. So, Villasenor integrated them into Cholla.

“Bruno added gravity, added particles, and added cosmology so that we could do these big cosmological boxes. And so that really changed Cholla from a pure fluid dynamics code into an astrophysics code,” Schneider said.

Now, with its new capabilities powered by Frontier’s exascale speed, Cholla is poised to accomplish breakthrough work that was inconceivable on previous systems.

“Resolution is the name of the game. The holy grail for me is to be able to run a simulation of a Milky Way-sized galaxy with individual supernova explosions resolved. And so far, people have only been able to do that for tiny galaxies because you must have a high-enough resolution to cover the entire disk at something like a parsec scale,” Schneider said. “It sounds simple because it is just the difference between running a simulation with 4,000-cubed cells and a simulation with 10,000-cubed cells, but that’s roughly 60 billion cells to 1 trillion cells, total. You really need the jump to exascale to be able to do that.”
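Schneider's arithmetic is easy to check: the jump from a 4,000-cubed grid to a 10,000-cubed grid is roughly a 16-fold increase in cell count. The memory estimate at the end is a hypothetical figure of our own (the assumption of 10 double-precision fields per cell is not from the Cholla team), included only to give a sense of scale.

```python
# Back-of-the-envelope comparison of the two grid sizes quoted above.

def grid_cells(n_per_side: int) -> int:
    """Total cells in a cubic grid with n_per_side cells along each axis."""
    return n_per_side ** 3

petascale_run = grid_cells(4_000)   # "roughly 60 billion cells"
exascale_run = grid_cells(10_000)   # "1 trillion cells"

print(f"4,000^3  = {petascale_run:,} cells")              # 64,000,000,000
print(f"10,000^3 = {exascale_run:,} cells")               # 1,000,000,000,000
print(f"ratio    = {exascale_run / petascale_run:.1f}x")  # 15.6x

# Assuming ~10 double-precision (8-byte) fields per cell, a purely
# hypothetical figure, the raw state alone would occupy:
bytes_per_cell = 10 * 8
print(f"~{exascale_run * bytes_per_cell / 1e12:.0f} TB of state")  # ~80 TB
```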

That leap is about to happen, and Schneider and her team can’t wait to get started.

“It’s really exciting to work on building something for a long time and then finally being able to see it at scale. Everybody’s just excited to see what we’re going to be able to do,” Schneider said.

Parthenon

AthenaPK (with the Parthenon framework) created this simulation of a cold, dense cloud of plasma hit by a diffuse, hot, supersonic wind. At top left, cloud density overlaid with the simulation mesh that is finer around the cloud. At bottom left, streamlines of the wind, shown in white, with areas of strong vorticity, in orange, lead to turbulence in the wake. At bottom right, magnetic field lines drape around the cloud, shielding it from the wind. And at top right is the observation of the jellyfish galaxy ESO 137-001, which exhibits similar behavior. Credits: simulations from Philipp Grete/AthenaPK; galaxy observation from NASA/ESA Hubble Space Telescope, Chandra X-ray Observatory.

At its core, the open-source Parthenon is an adaptive mesh refinement code for grid-based simulations with the ability to refine resolution only in a certain region of a simulation grid to increase the speed and accuracy of its calculations. Its development team, including Forrest Glines, a Metropolis Postdoctoral Fellow at Los Alamos, and Philipp Grete, a Marie Skłodowska-Curie Actions Postdoctoral Fellow at the Hamburg Observatory, uses Parthenon in its own code, called AthenaPK, to simulate different astrophysical systems — primarily turbulence and feedback from active galactic nuclei, or AGN, jets.
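The core idea of adaptive mesh refinement, spending resolution only where the solution demands it, can be illustrated with a toy one-dimensional example. This is a minimal sketch of the concept, not Parthenon's actual algorithm (which manages full block-structured meshes in parallel); the gradient threshold and the cell-splitting rule here are our own simplifications.

```python
# Toy 1D adaptive mesh refinement: flag cells where the solution varies
# sharply, and split only those cells into finer ones.
import math

def flag_for_refinement(density, threshold):
    """Flag interior cells whose central-difference gradient exceeds threshold."""
    flags = [False] * len(density)
    for i in range(1, len(density) - 1):
        flags[i] = abs(density[i + 1] - density[i - 1]) / 2.0 > threshold
    return flags

def refine_1d(x, flags, dx):
    """Split each flagged cell center into two finer ones (toy refinement)."""
    new_x = []
    for xi, flagged in zip(x, flags):
        if flagged:
            new_x.extend([xi - dx / 4.0, xi + dx / 4.0])
        else:
            new_x.append(xi)
    return new_x

n = 32
dx = 1.0 / (n - 1)
x = [i * dx for i in range(n)]
# Smooth background plus one sharp feature: only the feature gets refined.
density = [1.0 + 5.0 * math.exp(-((xi - 0.5) ** 2) / 0.001) for xi in x]
flags = flag_for_refinement(density, threshold=0.5)
fine_x = refine_1d(x, flags, dx)
print(f"{sum(flags)} of {n} cells refined; grid grew to {len(fine_x)} points")
```

Only the handful of cells around the sharp feature are subdivided, so the grid grows by a few points instead of doubling everywhere; that economy is what makes refinement pay off at scale.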

But what makes Parthenon unique in exascale-class computational astrophysics is its performance portability via Kokkos, which allows Parthenon to serve as a framework for other fluid dynamics codes to leverage mesh refinement no matter what architecture they’re running on — NVIDIA GPUs, AMD GPUs, Intel GPUs, Arm CPUs or just traditional CPUs.

“Parthenon’s performance portability allows researchers to run on any supercomputer platform that the underlying Kokkos framework supports. Developers don’t have to worry about reimplementing their simulation code for each new platform,” Glines said. “The faster codes driven by Parthenon allow more simulations with higher resolution and thus higher-fidelity models of the physical systems they’re studying.”

Parthenon is already being used in a variety of codes, including Phoebus, which is a general relativistic magnetohydrodynamics, or GRMHD, code being developed at Los Alamos National Laboratory, and KHARMA, which is another GRMHD code being developed at the University of Illinois Urbana-Champaign. KHARMA was already used in an Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, project last year.

Meanwhile, the team’s AthenaPK software is being used in a 2023 INCITE project on Frontier to study “feedback and energetics from magnetized AGN jets in galaxy groups and clusters.”

“We’re particularly excited about our own project because without Parthenon and without AthenaPK, the computational physics challenge — talking about resolving both the jet and the surrounding diffuse plasma at sufficiently high resolution to study self-regulation — would not have been possible on any other machine or with any other code that we are aware of right now,” Grete said.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.


