TACC’s ‘Frontera’ Supercomputer Expands Horizon for Extreme-Scale Science

By Tiffany Trader

August 29, 2018

The National Science Foundation and the Texas Advanced Computing Center announced today that a new system, called Frontera, will overtake Stampede 2 as the fastest university supercomputer in the United States and one of the most powerful HPC systems in the world. A month ago we learned that TACC had won the latest “track-1” NSF award, the successor to the Blue Waters machine at the National Center for Supercomputing Applications, and now we have the details of TACC’s winning proposal.

The $60 million NSF award is the first step in a multi-phase process to provide researchers with a “leadership-class” computing resource for open science and engineering research. Expected to enter production in 2019 and to operate for five years, Frontera will provide extreme-scale computing capabilities to support discoveries in all fields of science, enabling researchers to address pressing challenges in medicine, materials design, natural disasters and climate change.

The primary computing system will be supplied by Dell EMC and powered by more than 16,000 Intel Xeon processors. Expected peak performance is between 35 and 40 petaflops, pending finalized Cascade Lake SKUs from Intel. The x86 cluster is getting one more crank out of Moore’s law, leveraging the higher clock rates of the next-gen Xeon chips to deliver a 3x speedup over Blue Waters at about one-third the cost. Compared with TACC’s flagship system, Stampede 2, deployed last summer, Frontera will offer double the performance at half the cost.

TACC’s chilled water plant, capable of producing 160,000 gallons/hour of 42°F water (source: TACC presentation slide)

In addition to the ~8,064 dual-socket Xeon nodes that make up the primary system, Frontera will include a small “single-precision GPU subsystem” to support molecular dynamics and machine learning applications. The subsystem will be powered by Nvidia technology, and we expect to learn additional details ahead of SC18.
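
For a rough sense of where the 35-40 petaflop estimate could come from, here is a back-of-envelope calculation. The node and socket counts are from the article; the core count, clock speed, and FLOPs-per-cycle figures are placeholder assumptions for illustration only, since Intel had not finalized the Cascade Lake SKUs at the time of writing.

```python
# Back-of-envelope peak-flops estimate for Frontera's primary system.
# Node and socket counts come from the article; cores, clock, and
# FLOPs/cycle are ASSUMED placeholders (Cascade Lake SKUs not final).

nodes = 8064              # ~8,064 dual-socket Xeon nodes (from the article)
sockets_per_node = 2
cores_per_socket = 28     # assumed
base_clock_hz = 2.7e9     # assumed
flops_per_cycle = 32      # assumed: two AVX-512 FMA units x 8 doubles x 2 ops

peak_flops = nodes * sockets_per_node * cores_per_socket * base_clock_hz * flops_per_cycle
print(f"Estimated peak: {peak_flops / 1e15:.1f} petaflops")  # ~39.0 with these inputs
```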

DataDirect Networks will contribute the primary storage system (50+ PB of disk, 3 PB of flash, and 1.5 TB/sec of I/O capability), and Mellanox will provide its high-performance HDR InfiniBand technology in a fat-tree topology (200 Gb/s links between switches). Direct water cooling of the primary compute racks will be supplied by CoolIT, while the GPU nodes will rely on oil immersion cooling from GRC (formerly Green Revolution Cooling).
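
As a quick sanity check on the quoted I/O figure, a simplistic sketch (using only the numbers above, and ignoring encoding and protocol overhead) shows how many 200 Gb/s HDR links the storage system would need at minimum:

```python
# Minimum number of HDR InfiniBand links needed to sustain the quoted
# 1.5 TB/sec aggregate I/O rate. Simplistic: ignores encoding/protocol
# overhead and assumes perfect load balancing across links.

link_rate_gbps = 200                    # HDR InfiniBand, from the article
link_rate_gbytes = link_rate_gbps / 8   # 25 GB/s per link

target_tbytes_per_s = 1.5               # quoted aggregate I/O capability
min_links = target_tbytes_per_s * 1000 / link_rate_gbytes
print(f"Minimum HDR links into storage: {min_links:.0f}")  # 60
```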

At peak operation, Frontera will consume almost 6 MW of power. TACC purchases wind credits from West Texas wind farms covering about 30 percent of its power and also draws solar power from panels in its parking lot.

Cloud providers Amazon, Google, and Microsoft will have roles in the project, both as a repository for long-term data and as a resource for the newest technologies. As TACC Director Dan Stanzione noted in a pre-briefing, “they give us access to the newest architectures because they’re deploying all the time.” This will be helpful as TACC goes through the five-year planning process for a phase 2 system (more on this below).

Partner institutions include the California Institute of Technology; Cornell University; Princeton University; Stanford University; the University of Chicago; the University of Utah; the University of California, Davis; Ohio State University; Georgia Institute of Technology; and Texas A&M University.

The $60 million NSF award – Towards a Leadership-Class Computing Facility Phase 1 – funds the acquisition and deployment of Frontera. A second award to cover operations for the next five years is still to come. As mentioned, there’s also a planned phase 2 NSF award in the 2023-2024 timeframe that will fund a successor capable of solving computational science problems 10 times faster than the phase 1 system. It is not clear at this time whether the phase 2 selection process will be opened up to other sites.

Frontera is the third computer in a row at TACC to earn the distinction of being the fastest at any U.S. university. TACC’s Stampede 2 machine is currently number 15 on the Top500 list, delivering 10.7 Linpack petaflops (18.3 peak petaflops). With an expected Linpack number in the high 20s (according to Stanzione, who acknowledged the limitations of the linear algebra benchmark), Frontera, if built today, would rank fifth on the global listing of top computers.
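
To put those figures in context, the gap between Linpack (Rmax) and theoretical peak (Rpeak) is easy to compute from the numbers quoted here. The "high 20s" Rmax values below are hypothetical placeholders for Stanzione's estimate, not measured results.

```python
# Linpack (Rmax) vs. theoretical peak (Rpeak) efficiency, in petaflops,
# using only figures quoted in the article.

stampede2_rmax, stampede2_rpeak = 10.7, 18.3
print(f"Stampede 2 Linpack efficiency: {stampede2_rmax / stampede2_rpeak:.0%}")  # ~58%

# Hypothetical "high 20s" Linpack results against the 35-40 PF peak estimate.
for rpeak in (35.0, 40.0):
    for rmax in (27.0, 29.0):  # placeholder values, not measurements
        print(f"Rpeak={rpeak} PF, Rmax={rmax} PF -> {rmax / rpeak:.0%} efficiency")
```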

The next-gen system is expected to be deployed and operational by next summer. “By this time next year, I certainly hope to be in full production and accepted,” Stanzione shared.

Leadership science and engineering

NSF is proud of its role in advancing open science and engineering through the petascale-class science program started under Blue Waters. “Cyberinfrastructure is incredibly important for pushing forward the boundaries of science and engineering research,” said NSF’s Assistant Director for Computer and Information Science and Engineering (CISE) Jim Kurose in an interview with HPCwire. Referencing a sampling of the standout science conducted on Blue Waters, Kurose noted the critical role of leadership-class computing and all the other facets of cyberinfrastructure. “For a certain class of problems — capsid problems, astrophysics and galaxy dynamics problems, arctic mapping — they are at such a scale that you need a petascale type of capability to solve them,” said Kurose.

The allocation process for NSF leadership-class computing facility systems (formerly called track-1) is managed by PRAC (pronounced P-RACK), the Petascale Computing Resource Allocations committee. As with NSF’s first track-1 machine, Blue Waters, 80 percent of Frontera’s cycles will go through the NSF allocations process and 20 percent will be discretionary. Of that 20 percent, Stanzione said about 15 percent of the machine will be reserved for discretionary national science work and about 5 percent for Texas and local users. He would also like Frontera to be “a little more tightly coupled with XSEDE than the past system was.” [Note: Allocations for XSEDE resources — known as innovative HPC resources in NSF parlance — are managed by the XSEDE Resource Allocations Committee (XRAC).]

According to NSF, early projects on Frontera will explore fundamental open questions in many areas of physics, ranging from the structure of elementary objects to the structure of the entire universe. Other key areas of investigation include environmental modeling, improved hurricane forecasting and the new area of multi-messenger astronomy.
