Nov. 18, 2019 — The Frontera supercomputer at the Texas Advanced Computing Center (TACC) earned the #5 ranking on the November 2019 Top500 list, with 23.5 petaflops of performance on the High Performance LINPACK (HPL) benchmark, and was again named the most powerful university system in the world.

Frontera is the most powerful supercomputer at any university in the world, and the #5 fastest overall. Image courtesy of TACC.

The ranking was announced at SC19, the International Conference for High Performance Computing, Networking, Storage, and Analysis, held this week in Denver, Colorado.

Frontera was the fastest non-accelerated system on the list and the most powerful system built with Dell and Intel technologies. Supported by $120 million in awards from the National Science Foundation (NSF), Frontera is the leadership-class academic supercomputer for the U.S., enabling the nation's most experienced computational researchers to address Grand Challenges in science and engineering, such as cancer studies, climate simulations, and fundamental particle physics.

Stampede2 — TACC's second-fastest supercomputer and another NSF-supported machine — was ranked #18, and Longhorn, a new IBM/NVIDIA system launched at TACC this fall, made its first appearance on the list at #120. The three systems are the first, second, and sixth most powerful academic supercomputers in the U.S., respectively.

Frontera was dedicated on Sept. 3, 2019, and entered full production on Oct. 1. Longhorn began enabling research in November, as did Frontera's four-petaflop, liquid-immersion-cooled NVIDIA subsystem, built to accelerate artificial intelligence and machine learning research.

Results of First Frontera Large-Scale System Runs

In addition to the Top500 results, TACC is proud to share the results of Frontera’s first massive science runs. In October, five teams from across the U.S. successfully performed large-scale calculations that, in many cases, were the largest ever in their field of science. The efforts, akin to Gordon Bell Prize computations, included:


Frontiers of Coarse-Graining
Principal Investigator (PI): Gregory Voth, University of Chicago

Still from a simulation of an HIV capsid computed on Frontera by Gregory Voth, from the University of Chicago. Image courtesy of TACC.

The project studies how mature HIV-1 capsid proteins self-assemble into large fullerene-cone structures. The researchers combined simulations of atomic-resolution models with coarse-grained representations. The team computed on 4,000 Frontera nodes (more than 200,000 processing cores) and simulated viral capsids containing RNA and stabilizing cellular factors in full atomic detail for over 500 nanoseconds.

These were the first molecular simulations of HIV capsids that contain biological components of the virus within the capsid, including genetic cargo.
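
As a quick sanity check on the scale quoted above, here is a minimal Python sketch of the node-to-core arithmetic. It assumes Frontera's standard compute nodes carry 56 cores each (two 28-core Intel Xeon processors), a detail not stated in the text above.

```python
# Sanity check of the core count quoted above.
# Assumption (not stated in the article): each Frontera compute node
# has 56 cores (two 28-core Intel Xeon CPUs).
nodes_used = 4_000
cores_per_node = 56

total_cores = nodes_used * cores_per_node
print(f"{nodes_used:,} nodes x {cores_per_node} cores/node = {total_cores:,} cores")
# -> 224,000 cores, consistent with "more than 200,000 processing cores"
```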

“State-of-the-art supercomputing resources like Frontera are an invaluable resource for researchers,” said Alvin Yu, a postdoctoral scholar in the Voth Group. “Molecular processes that determine the chemistry of life are often interconnected and difficult to probe in isolation. Frontera enables large-scale simulations that examine these processes, and this type of science simply cannot be performed on smaller supercomputing resources.”


3-D Stellar Hydrodynamics
PI: Paul Woodward, University of Minnesota

The project studies the process of convective boundary mixing and shell mergers in massive stars.

Woodward's team computed on more than 7,300 Frontera nodes (out of a total of 8,008 nodes) for more than 80 consecutive hours without failures. The run sustained 588 gigaflops per node, or roughly four petaflops in aggregate, for more than three days straight.
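
A minimal back-of-the-envelope check of that aggregate figure, assuming the 588 gigaflops per node held uniformly across all 7,300 nodes:

```python
# Back-of-the-envelope check of the aggregate sustained performance quoted above.
# Assumes the 588 gigaflops/node rate applied uniformly to all 7,300 nodes.
nodes = 7_300
gflops_per_node = 588

total_gflops = nodes * gflops_per_node
total_pflops = total_gflops / 1e6   # 1 petaflop = 1,000,000 gigaflops
print(f"Aggregate sustained performance: {total_pflops:.2f} petaflops")
# -> about 4.3 petaflops, in line with "roughly four petaflops"
```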


Center for the Physics of Living Cells
PI: Aleksei Aksimentiev, University of Illinois at Urbana-Champaign

Aksimentiev’s team explores the mechanism of selective transport in the nuclear pore complex, which regulates the transport of molecules in and out of the nucleus of a biological cell.

The team simulated their computational model using up to 7,780 nodes on Frontera. It was one of the largest biomolecular simulations ever performed and exhibited close to linear scaling on up to half of the machine. The team plans to build a new molecular system twice as large to take advantage of future large-scale runs on Frontera.


Lattice Gauge Theory at the Intensity Frontier
PI: Carleton DeTar, University of Utah

DeTar’s team ran ab initio numerical simulations of quantum chromodynamics (QCD) that help obtain precise predictions for the decays of mesons that contain a heavy bottom quark. They compare numerical predictions with results of experimental measurements to look for discrepancies that point to new fundamental particles and interactions.

The researchers carried out the initial steps in generating an exascale-size lattice using more than 3,400 nodes on Frontera — a problem 16 times larger than any they had previously calculated.

The computations showed that, given sufficient resources, the team can run an exascale-level calculation on Frontera. “In addition to demonstrating feasibility, we obtained a useful result,” DeTar said. “We are now in a good position for a future exascale run. We have working code and a working starting gauge configuration file.”


Prediction and Control of Turbulence-Generated Sound
PI: Dan Bodony, University of Illinois at Urbana-Champaign

Bodony's team simulated fluid-structure interactions relevant to hypersonic vehicle designs. The simulations replicated a companion experiment performed in NASA Langley's 20-inch Mach 6 tunnel.

The team saw superlinear speedup on up to 2,000 nodes and linear speedup on up to 4,000 nodes.
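
To illustrate what "superlinear" and "linear" speedup mean in practice, here is a small Python sketch of the usual strong-scaling bookkeeping. The wall-clock times are hypothetical placeholders, not measurements from the Bodony team's runs.

```python
# Illustrative strong-scaling calculation: speedup and parallel efficiency
# relative to a baseline node count. All timings are hypothetical examples,
# not data from the runs described above.
baseline_nodes = 500
baseline_time_s = 8000.0   # hypothetical wall-clock time at the baseline

runs = {                   # node count -> hypothetical wall-clock time (seconds)
    1000: 3800.0,
    2000: 1850.0,          # faster than the ideal halving -> superlinear
    4000: 1000.0,          # matches the ideal scaling -> linear
}

for nodes, time_s in runs.items():
    speedup = baseline_time_s / time_s
    ideal = nodes / baseline_nodes
    efficiency = speedup / ideal      # > 1.0 indicates superlinear speedup
    print(f"{nodes:5d} nodes: speedup {speedup:5.2f}x (ideal {ideal:.0f}x), "
          f"efficiency {efficiency:.2f}")
```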


“On Frontera, we’re running some of the largest science problems ever,” said Dan Stanzione, TACC executive director. “TACC worked hard to support calculations that use at least half the system, and in some cases up to 97 percent of the entire system — on the order of a couple of hundred thousand cores.

“This is an unprecedented scale for us and for our users. From molecular dynamics to particle physics to cosmological stellar dynamics, this is the scale we built Frontera for – the biggest problems in the world. It’s what differentiates how we use Frontera from every other system.”

Frontera Allocations Process

Recently, TACC announced the new allocation process that will determine who will be able to compute on Frontera over the next five years. As outlined in a “Dear Colleague Letter” from NSF, the process includes four tracks to accommodate a range of research needs for large-scale discovery science:

  • Leadership Resource Allocation (LRAC) – Large allocations to science teams with a strong scientific justification for access to a leadership-class computing resource to enable research that would otherwise not be possible.
  • Pathways – Small allocations to science teams with a strong scientific justification for access to a leadership-class computing resource, but who have not yet demonstrated code readiness to effectively use such a resource.
  • Large-Scale Community Partnerships (LSCP) – Extended allocations of up to three years to support long-lived science and engineering experiments.
  • Director Discretionary Allocations (DD) – Allocations for projects that don’t fit well into the three tracks described above, such as areas of urgent need, educational usage, and industrial collaborations/research. Submissions will be accepted on a rolling basis.

“The new allocation process will allow for a range of uses, from single projects that consume five percent of the system’s total time, to collaborations with large international science efforts, to on-ramps to new large-scale users,” Stanzione said. “It will ensure that Frontera is fully maximized for science.”

[Read more about the allocation process at: https://fronteraweb.tacc.utexas.edu/allocations/]

Frontera Fellowship

TACC also announced the launch of the Frontera Computational Science Fellowships, a year-long opportunity for talented Ph.D. students to compute on the most powerful academic supercomputer in the world.

Fellowship at a Glance:

  • 50,000 node-hours on Frontera.
  • Paid summer residence at TACC.
  • Training on the latest tools, topics, and trends in advanced computing.
  • Collaboration with highly motivated researchers and graduate students.
  • Networking with academic and industry professionals.
  • Presentation and publication opportunities.
  • A $34,000 stipend.
  • Up to $12,000 in tuition allowance throughout the year.
  • Travel support to present research results at a Frontera user community event and/or professional conference.

Nominations open on Nov. 18, 2019, and close on Feb. 7, 2020. Applications will be accepted only from students studying at a U.S. institution in the United States or its territories.

For more information on program details, eligibility, and how to apply, visit: https://fronteraweb.tacc.utexas.edu/fellowship/

Future Leadership-Class Facility Planning

At SC19, TACC continues its efforts to engage the community in planning the design and construction of the next major supercomputer for academic research: a system 10 times more capable than Frontera, to be deployed in the 2025 timeframe.

TACC leadership and members of the Frontera science team will host a Birds of a Feather event (Thursday, Nov. 21, 2019, from 12:15–1:15pm in Room 702) explaining the planning process for a national-scale HPC facility that would provide roughly 0.5–1.0 exaflops of computing capability in the 2025 timeframe. TACC is seeking input from both the science and technology communities regarding the key capabilities this facility must provide to effectively support large-scale open science.

TACC Presents…

Other TACC-led events at SC19 include workshops on “Tools and Best Practices for Distributed Deep Learning on Supercomputers”; “Aggregating Local Storage for Scalable Deep Learning I/O”; and “Tools for Monitoring CPU Usage and Affinity in Multicore Supercomputers”; and BoFs on “Accelerated Ray Tracing for Scientific Simulations” and “Getting Scientific Software Installed.”

Presentations in the TACC booth highlight software development efforts at TACC, including XALT, a lightweight tool that tracks how researchers use HPC systems; IPT, a high-productivity tool that can semi-automatically parallelize certain types of serial C/C++ programs; and Tapis, TACC’s new NSF-funded API development project.

For a full list of TACC-led SC19 events and booth presentations visit: https://www.tacc.utexas.edu/sc19.


Source: Aaron Dubrow, TACC