TACC Event Highlights Frontera’s Impact: From Earthquake Simulations to Brain Mapping, Scientists Share Breakthroughs

October 18, 2023

What will the future bring for U.S. scientists using supercomputers to scale up their computations to the highest level? And what technologies should cyberinfrastructure providers deploy to match their ambitions?

These questions and more were explored at the third annual Frontera User Meeting, held August 3-4, 2023, at the Texas Advanced Computing Center (TACC).

Paul Woodward, University of Minnesota, describes star convection simulations using TACC’s Frontera supercomputer. Credit: TACC.

“It’s a great opportunity to hear about how Frontera is performing and for users to hear from each other about how they’re maximizing the system,” said Dan Stanzione, executive director of TACC and the principal investigator of the National Science Foundation (NSF)-funded Frontera supercomputer.

Frontera is the most powerful supercomputer ever deployed by the NSF and the fastest U.S. academic system according to the latest (June 2023) Top500 rankings. Frontera serves as the leading capability system in the national cyberinfrastructure, intended for large applications that require thousands of compute nodes.

Over the past 12 months, Frontera has provided rock-steady service with 99 percent uptime and an average continuous utilization of 95 percent of its cycles. It delivered more than 72 million node-hours and completed over one million jobs, bringing its cumulative total to more than 5.8 million jobs over four years of operation.

As of September 2023, Frontera has progressed through more than 80 percent of its projected lifespan with new technology coming that will extend its operation through late 2025.

Approximately 30 scientists participated in the 2023 Frontera User Meeting. The event featured 13 invited speakers who shared their recent experiences and findings while utilizing Frontera.

Scientists tour the data center with TACC’s Frontera supercomputer. Credit: TACC.

The presentations reflected the many ways users obtain allocations on the system. Some focused on smaller “startup” activities for groups beginning the transition to very large-scale computing. Others, such as Large-Scale Community Partnership allocations, support long-term collaborations with major experimental facilities and require over a million node-hours of computing resources.

Other presentations covered more extensive initiatives, such as the Leadership Resource Allocations, which receive up to five million node-hours of computational support. Additionally, certain awardees, the Texascale Days recipients, were granted access to Frontera’s full capacity of more than 8,000 nodes.

The presentations spanned many domains of science, ranging from cosmology to hurricanes, earthquakes to the memory center of the human brain, and more. All credited access to computing at the scale Frontera provides as a cornerstone of new understanding and discoveries in cutting-edge research.

Hurricane Storm Surge

Simulation snapshot generated by Frontera of new model that combines storm surge and river flooding data along the Texas coast. Credit: Eirik Valseth, Oden Institute.

Eirik Valseth, a research associate in the Computational Hydraulics Group at the Oden Institute of UT Austin, described new work on Frontera to develop compound storm surge models that add river flooding effects at extreme resolution for the Texas coast. His group is also using Frontera to generate five-day hindcasts and seven-day forecasts of global ocean storm surge in collaboration with the University of Notre Dame and the U.S. National Oceanic and Atmospheric Administration, in an effort to enable better planning for hurricanes.

Big One Along the San Andreas Fault

Yifeng Cui, the director of the High Performance GeoComputing Laboratory at the San Diego Supercomputer Center (SDSC), described nonlinear earthquake simulations performed by his team on Frontera during TACC’s Texascale Days. The simulations scaled up to 7,680 nodes and ran for 22.5 hours to simulate 83 seconds of shaking during a magnitude 7.8 quake on the southern San Andreas fault. More accurate simulations allow communities to plan better to withstand these large earthquakes, thus saving lives and property.
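Those figures put a concrete price on fidelity: roughly a thousand seconds of wall-clock time per simulated second of shaking. A quick back-of-the-envelope check, using only the node count and runtimes quoted above:

```python
# Wall-clock cost of the San Andreas run (figures from the article:
# 7,680 nodes for 22.5 hours to simulate 83 seconds of shaking).
wallclock_s = 22.5 * 3600        # 22.5 hours in seconds
simulated_s = 83.0               # seconds of ground shaking simulated
ratio = wallclock_s / simulated_s
node_hours = 7680 * 22.5         # total node-hours consumed
print(f"~{ratio:.0f}x slower than real time, {node_hours:,.0f} node-hours")
# → ~976x slower than real time, 172,800 node-hours
```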

Child Brain Development

Jessica Church-Lang, an associate professor in the Department of Psychology at UT Austin, is using Frontera to analyze anonymized fMRI data of brain activity in children to find connections among its various systems, including control, visual, motor, auditory, and more. Frontera has helped her construct 3D brain models from the fMRI images. “It takes about five hours, per child, on Frontera to run the analysis,” Church-Lang said. “It used to take three days on older computers. And this is just one step of our processing pipeline.”
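The quoted runtimes imply roughly a 14x per-child speedup; the arithmetic, using only the numbers in the quote above:

```python
# Speedup implied by the quoted fMRI analysis times:
# three days on older machines vs. five hours per child on Frontera.
old_hours = 3 * 24
new_hours = 5
speedup = old_hours / new_hours
print(f"{speedup:.1f}x faster")  # → 14.4x faster
```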

Brain Bubbles

Frontera is helping scientists probe the mysteries of how the brain forms thoughts in research led by Jose Rizo-Rey, a professor of biophysics at UT Southwestern Medical Center. His research, using all-atom molecular dynamics simulations on Frontera, investigates tiny bubbles called “vesicles” that shuttle neurotransmitters across the gap between neurons, carrying the signal the brain uses to communicate with itself and other parts of the body.

Simulation of vesicle-flat bilayer interface of membrane fusion. Credit: Jose Rizo-Rey, University of Texas Southwestern Medical Center.

“The process of fusion can happen in just a few microseconds,” Rizo-Rey said. “That’s why we hope that we can simulate this with Frontera.”

Memories, Models, and Optimizations

Research Engineer Ivan Raikov, Department of Neurosurgery at Stanford University, presented his progress on developing a large-scale model of the rodent hippocampus, a region of the brain associated with short-term memory and spatial navigation. The project is creating a first-of-its-kind, biophysically detailed, full-scale model of the hippocampal formation, with as close as possible to a one-to-one representation of every neuron. “We start with a full-scale hippocampal model with one million neurons,” Raikov said. “It takes about six hours to simulate 10 seconds of hippocampal activity on 1,024 nodes of Frontera.”
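Raikov’s figures translate into a cost per simulated second of activity; a simple conversion of the numbers in the quote above:

```python
# Cost of the full-scale hippocampal model: 6 hours on 1,024 Frontera
# nodes buys 10 seconds of simulated activity (figures from the article).
nodes, hours, sim_seconds = 1024, 6, 10
node_hours_per_sim_s = nodes * hours / sim_seconds   # node-hours per simulated second
slowdown = hours * 3600 / sim_seconds                # wall-clock seconds per simulated second
print(f"{node_hours_per_sim_s} node-hours and {slowdown:.0f}x real time per simulated second")
# → 614.4 node-hours and 2160x real time per simulated second
```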

Turbulent Times

P.K. Yeung, professor of aerospace engineering at Georgia Tech, presented his work using Frontera to study turbulent dispersion, an example of which is the spread of a candle’s smoke or how far disease agents travel through the atmosphere. Yeung’s simulations on Frontera track the motion of systems of more than a billion particles, calculating the trajectory and acceleration of each fluid element passing through a turbulent, high-rotation zone in what is known as Lagrangian intermittency in turbulence.
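The Lagrangian viewpoint follows individual fluid particles through the flow rather than watching fixed grid points. A minimal sketch of that bookkeeping, using a toy solid-body rotation in place of Yeung’s turbulence simulations (the velocity field, particle count, and integrator here are illustrative assumptions, not details of his code):

```python
import numpy as np

# Toy Lagrangian particle tracking: advect tracer particles through a
# prescribed 2D velocity field with forward-Euler steps. Production DNS
# codes use spectral solvers, billions of particles, and higher-order
# integrators; this only illustrates the Lagrangian approach.
def velocity(p):
    x, y = p[:, 0], p[:, 1]
    return np.stack([-y, x], axis=1)      # solid-body rotation about the origin

rng = np.random.default_rng(0)
tracers = rng.standard_normal((1000, 2))  # initial tracer positions
r0 = np.linalg.norm(tracers, axis=1)      # initial distances from the axis

dt, steps = 1e-3, 1000
for _ in range(steps):                    # integrate one time unit
    tracers = tracers + dt * velocity(tracers)

# In pure rotation each tracer's distance from the axis is conserved;
# the small drift measured here is the forward-Euler discretization error.
drift = np.abs(np.linalg.norm(tracers, axis=1) / r0 - 1).max()
print(f"max relative radius drift after {steps} steps: {drift:.2e}")
```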

Star Turnover

Paul Woodward, the director of the Laboratory for Computational Science & Engineering and a professor in the School of Physics and Astronomy at the University of Minnesota, performed 3D hydrodynamical simulations of rotating, massive, main-sequence stars on runs of up to 3,510 Frontera compute nodes to study convection in stellar interiors. “Frontera is powerful enough to permit us to run our non-rotating simulation forward in time for about three years, which is an amazing thing to have done,” Woodward said.

Black Hole Cosmology

The PRIYA cosmological suite developed on Frontera incorporates multiple models with different parameters to form some of the largest cosmological simulations to date. Credit: Simeon Bird, UC Riverside.

Simeon Bird, an assistant professor in the Department of Physics & Astronomy, UC Riverside, presented a new suite of cosmological simulations called PRIYA (Sanskrit for ‘beloved’). The PRIYA simulations performed on Frontera are among the largest cosmological simulations to date, requiring over 100,000 core-hours to simulate a system of 3072^3 (about 29 billion) particles in a ‘box’ 120 megaparsecs on a side, or about 391 million light years across. “We run multiple models, interpolate them together and compare them to observational data of the real universe such as from the Sloan Digital Sky Survey and the Dark Energy Spectroscopic Instrument,” Bird said.
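The quoted scales are easy to verify: the particle count from 3072^3, and the box size converted from megaparsecs to light years (using 1 Mpc ≈ 3.26 million light years):

```python
# Check PRIYA's quoted scales: particle count and the comoving box
# size converted from megaparsecs to light years.
MPC_IN_LY = 3.2616e6                 # one megaparsec in light years
particles = 3072 ** 3
box_ly = 120 * MPC_IN_LY
print(f"{particles / 1e9:.1f} billion particles, box ~{box_ly / 1e6:.0f} million light years")
# → 29.0 billion particles, box ~391 million light years
```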

Space Plasma

Half of the universe’s ordinary matter, composed of protons and neutrons, resides in space as plasma. The solar wind from stars such as our sun shapes clouds of space plasma, and on a much larger scale, cosmic magnetic fields knead space plasma across galaxies. “Some of our recently published work has made use of Frontera to study the turbulent dynamos in conducting plasma, which amplify cosmic magnetic fields and could help answer the question of the origin of magnetic fields in the universe,” said graduate student Michael Zhang, Princeton Program in Plasma Physics, Princeton University.

Tight Junctions

Tight junctions are multiprotein complexes in cells that control the permeability of ions and small molecules between cells, as well as supporting transport of nutrients, ions, and water. Sarah McGuinness, a PhD candidate in biomedical engineering at the University of Illinois, Chicago, presented progress using molecular dynamics simulations on Frontera to study Claudin-15, a protein that polymerizes into strands forming the backbone of tight junctions. “Computational simulations allow investigators to observe protein dynamics at atomic resolution with resources like Frontera,” McGuinness said.

Sarah McGuinness, University of Illinois, Chicago, presented at the 2023 Frontera User Meeting on using molecular dynamics simulations to research the ion channel Claudin-15, important in polymerization of molecular strands that form the backbone of tight junctions. Credit: TACC.

Protein Sequencing

Behzad Mehrafrooz, a PhD student at the Center for Biophysics and Quantitative Biology, University of Illinois at Urbana-Champaign, outlined his group’s latest work extending the reach of nanopores to sequence entire proteins, which are much larger and more complex than DNA. “Thanks to Frontera, it was one of the longest, if not the longest molecular dynamics simulations for nanopore sequencing yet made,” Mehrafrooz said. “And it confirmed the rapid, unidirectional translocation induced by guanidinium chloride and helped unravel the molecular mechanism behind it.”

Viral Packaging

Kush Coshic, a PhD student in the Aksimentiev Lab at the University of Illinois at Urbana-Champaign, described simulations that took more than four months to perform using Frontera’s GPU nodes to simulate the genomic packaging of a model herpes-like virus, applicable to developing new therapeutics. “Frontera enables us to perform unprecedented high throughput analysis of a 27 million atom system,” Coshic said.

Spectral Function

“We’ve developed a new algorithm for calculating spectral functions with continuous momentum resolution that complements existing many-body techniques,” said Edwin Huang, an assistant professor in the Department of Physics & Astronomy at the University of Notre Dame. His team’s determinantal quantum Monte Carlo solver for computing the spectral function of fermionic models with local interactions required sampling over a billion state configurations on Frontera.

Path to Horizon

Planning is underway for a massive new system as part of the NSF-funded Leadership Class Computing Facility (LCCF), projected to deliver 10X the capabilities of Frontera. The new system, called Horizon, is expected to open to early users in the second half of 2025 and enter full production in 2026.

“There are still opportunities to talk about what goes into Horizon,” Stanzione said. “One of the points of this meeting is to continue requirement gathering.”

To unlock Horizon’s potential, the future system will need to provide robust support for both CPU- and GPU-based codes. Among the software performance directions being explored are mixed-precision matrix operations on GPUs, which can offer a 30X performance advantage over single-precision vector units.

“Software enables science and it will drive our decisions about future systems. The most important thing for TACC is that we get to hear from users about what is working with Frontera, what can be improved, and what needs to change to meet their needs in future systems,” Stanzione concluded.


Source: Jorge Salazar, TACC
