The National Science Foundation and the Texas Advanced Computing Center announced today that a new system, called Frontera, will overtake Stampede 2 as the fastest university supercomputer in the United States and one of the most powerful HPC systems in the world. A month ago we learned that TACC had won the latest “track-1” NSF award, the successor to the Blue Waters machine at the National Center for Supercomputing Applications, and now we have the details of TACC’s winning proposal.
The $60 million NSF award is the first step in a multi-phase process to provide researchers with a “leadership-class” computing resource for open science and engineering research. Expected to enter production in 2019 and to operate for five years, Frontera will provide extreme-scale computing capabilities to support discoveries in all fields of science, enabling researchers to address pressing challenges in medicine, materials design, natural disasters and climate change.
The primary computing system will be supplied by Dell EMC and powered by more than 16,000 Intel Xeon processors. Expected peak performance is between 35 and 40 petaflops, pending finalized Cascade Lake SKUs from Intel. The x86 cluster gets one more crank out of Moore’s law, leveraging the higher clock rates of the next-gen Xeon chips to deliver roughly a 3x speedup over Blue Waters at about one-third the cost. Compared with TACC’s current flagship system, Stampede 2, deployed last summer, Frontera will offer double the performance at half the cost.
In addition to the ~8,064 dual-socket Xeon nodes that make up the primary system, Frontera will also include a small “single-precision GPU subsystem” to support molecular dynamics and machine learning applications. The subsystem will be powered by Nvidia technology, with additional details expected ahead of SC18.
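As a rough sanity check on that peak figure, a back-of-envelope estimate can be built from the published node count. The per-socket core count, AVX-512 clock, and FLOPs-per-cycle values below are assumptions (the Cascade Lake SKUs were not final at announcement time), so this is a sketch rather than a spec:

```python
# Back-of-envelope peak estimate for Frontera's primary Xeon partition.
# Core count and clock are placeholders; Intel had not finalized the
# Cascade Lake SKUs at announcement time.

NODES = 8_064             # dual-socket Xeon nodes (from the announcement)
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 28     # assumption: a high-core-count Cascade Lake part
AVX512_CLOCK_GHZ = 2.7    # assumption: sustained AVX-512 clock rate
FLOPS_PER_CYCLE = 32      # 2 AVX-512 FMA units x 8 DP lanes x 2 ops per FMA

peak_pflops = (NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
               * AVX512_CLOCK_GHZ * 1e9 * FLOPS_PER_CYCLE) / 1e15
print(f"Estimated peak: {peak_pflops:.1f} petaflops")  # ~39, within the quoted 35-40 range
```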
DataDirect Networks will contribute the primary storage system (50+ PB of disk, 3 PB of flash, and 1.5 TB/sec of I/O capability), and Mellanox will provide its high-performance HDR InfiniBand technology in a fat-tree topology (200 Gb/s links between switches). Direct water cooling of the primary compute racks will be supplied by CoolIT, while the GPU nodes will rely on oil immersion cooling from GRC (formerly Green Revolution Cooling).
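To put those storage numbers in context, a quick calculation shows how long the quoted aggregate I/O rate would need to make one full pass over the disk tier. This is purely illustrative arithmetic on the figures above, assuming decimal (base-10) units:

```python
# Quick context for the storage figures: time for the quoted aggregate I/O
# rate to sweep the full disk tier once (decimal units assumed).

DISK_PB = 50            # primary disk capacity, petabytes
AGG_BW_TB_PER_S = 1.5   # aggregate I/O capability, terabytes per second

seconds = DISK_PB * 1_000 / AGG_BW_TB_PER_S   # 1 PB = 1,000 TB
print(f"One pass over {DISK_PB} PB at {AGG_BW_TB_PER_S} TB/s: "
      f"~{seconds / 3_600:.1f} hours")        # ~9.3 hours
```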
At peak operation, Frontera will consume almost 6 MW of power. TACC covers about 30 percent of its power usage through credits from wind farms in West Texas and also draws on solar power from panels over its parking lot.
Cloud providers Amazon, Google, and Microsoft will have roles in the project, both as a repository for long-term data and as a resource for the newest technologies. As TACC Director Dan Stanzione noted in a pre-briefing, “they give us access to the newest architectures because they’re deploying all the time.” This will be helpful as TACC goes through the five-year planning process for a phase 2 system (more on this below).
Partner institutions include the California Institute of Technology, Cornell University, Princeton University, Stanford University, the University of Chicago, the University of Utah, the University of California, Davis, Ohio State University, Georgia Institute of Technology, and Texas A&M University.
The $60 million NSF award – Towards a Leadership-Class Computing Facility Phase 1 – funds the acquisition and deployment of Frontera. A second award covering operations over the next five years is still to come. As mentioned, there’s also a planned phase 2 NSF award in the 2023-2024 timeframe that will fund a successor capable of solving computational science problems 10 times faster than the phase 1 system. It is not clear at this time whether the phase 2 selection process will be opened up to other sites.
Frontera is the third computer in a row at TACC to earn the distinction of being the fastest at any U.S. university. The university’s Stampede 2 machine currently sits at number 15 on the Top500 list, delivering 10.7 Linpack petaflops (18.3 peak petaflops). With an expected Linpack number in the high 20s (according to Stanzione, who acknowledged the limitations of the linear algebra benchmark), Frontera, if built today, would rank fifth on the global list of top computers.
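For a sense of where a high-20s Linpack number would come from, one can apply a range of assumed Rmax/Rpeak efficiencies to Frontera’s projected peak. The efficiency values and the 38-petaflop midpoint below are assumptions for illustration, not figures from TACC:

```python
# Illustrative HPL projection: apply assumed Rmax/Rpeak efficiencies to
# Frontera's expected peak. Efficiencies and the 38 PF midpoint are assumptions.

STAMPEDE2_RMAX, STAMPEDE2_RPEAK = 10.7, 18.3   # Linpack and peak petaflops (Top500)
stampede2_eff = STAMPEDE2_RMAX / STAMPEDE2_RPEAK
print(f"Stampede 2 HPL efficiency: {stampede2_eff:.0%}")      # ~58%

FRONTERA_RPEAK_PF = 38.0   # assumed midpoint of the 35-40 petaflop range
for eff in (stampede2_eff, 0.65, 0.70, 0.75):
    print(f"  {eff:.0%} efficiency -> {FRONTERA_RPEAK_PF * eff:.1f} Linpack petaflops")
```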
The next-gen system is expected to be deployed and operational by next summer. “By this time next year, I certainly hope to be in full production and accepted,” Stanzione shared.
Leadership science and engineering
NSF is proud of its role advancing open science and engineering through the petascale-class science program started under Blue Waters. “Cyberinfrastructure is incredibly important for pushing forward the boundaries of science and engineering research,” said NSF’s Assistant Director for Computer and Information Science and Engineering (CISE) Jim Kurose in an interview with HPCwire. Referencing a sampling of the standout science conducted on Blue Waters, Kurose noted the critical role of leadership-class computing and all the other facets of cyberinfrastructure. “For a certain class of problems — capsid problems, astrophysics and galaxy dynamics problems, arctic mapping — they are at such a scale that you need a petascale type of capability to solve them,” said Kurose.
The allocation process for NSF leadership-class computing facility systems (formerly called track-1) is managed by PRAC (pronounced P-RACK), the Petascale Computing Resource Allocations committee. As with NSF’s first track-1 machine, Blue Waters, 80 percent of Frontera’s cycles will go through the NSF allocations process and 20 percent will be discretionary. Of that discretionary share, Stanzione said about 15 percent of total cycles will be reserved for national science work and about 5 percent for Texas and local users. He would also like Frontera to be “a little more tightly coupled with XSEDE than the past system was.” [Note: Allocations for XSEDE resources — known as innovative HPC resources in NSF parlance — are managed by the XSEDE Resource Allocations Committee (XRAC).]
According to NSF, early projects on Frontera will explore fundamental open questions in many areas of physics, ranging from the structure of elementary objects to the structure of the entire universe. Other key areas of investigation include environmental modeling, improved hurricane forecasting and the new area of multi-messenger astronomy.