DOE to Field Pre-Exascale Supercomputers Within Four Years

By Michael Feldman

January 16, 2013

The national labs at Oak Ridge (ORNL), Argonne (ANL) and Lawrence Livermore (LLNL) are banding together for their next refresh of supercomputers. In late 2016 or early 2017, all three Department of Energy (DOE) centers are looking to deploy their first 100-plus-petaflop systems, which will serve as precursors to their exascale machines further down the line. The labs will issue a request for proposal (RFP) later this year with the goal of awarding the work to two prime contractors.

The trio of lab partners, known as CORAL (Collaboration of Oak Ridge, Argonne, and Livermore), sent out a request for information (RFI) in December 2012 to gather input for the upcoming RFP. It’s possible three separate RFPs will be issued, corresponding to the systems hosted at each lab, but according to the RFI addendum, the DOE is “strongly considering” wrapping the multiple acquisitions under a single RFP.

The CORAL partnership between ORNL, ANL and LLNL to secure these pre-exascale machines mirrors the approach of their DOE siblings, NERSC, Los Alamos, and Sandia, in acquiring their next round of supercomputers. In the latter case, those centers are teaming up to deploy two new machines (NERSC-8 and Trinity) before the end of 2015, about a year ahead of their CORAL counterparts. Because of the time difference and the somewhat different user bases, NERSC-8 and Trinity are almost certain to be sub-100-petaflop systems.

The CORAL supercomputers are initially spec’d at 100 to 300 petaflops, along with 5 to 10 petabytes of memory and 70 to 150 PB of storage. “The expectation is that the proposed 2016-2017 system will be roughly an order of magnitude less in time-to-solution than today’s systems at our facilities,” states the RFI. If everything goes as planned, that means the top supercomputer at ORNL in four years will be about 10 times as powerful as its current top machine, Titan, which delivers about 27 peak petaflops and currently holds the title of most powerful computer on the planet.
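As a rough sanity check (treating the RFI’s time-to-solution target as a loose proxy for peak performance, which it is only approximately), scaling Titan’s peak tenfold lands inside the RFI’s stated range:

$$10 \times 27\ \mathrm{PF} = 270\ \mathrm{PF} \in [100, 300]\ \mathrm{PF}$$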

Of course, the labs’ focus on “time to solution” centers on the traditional DOE application domains, such as molecular dynamics, cosmology, and combustion CFD, that map to the agency’s Office of Science and NNSA missions. Since these are all Fortran and C/C++ codes, which mostly employ MPI and OpenMP to extract parallelism, the new platforms must be designed to support legacy codes as well as any future frameworks for exascale computing.
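For readers unfamiliar with that programming pattern, here is a minimal, illustrative sketch in C of the hybrid MPI+OpenMP style these legacy codes rely on: MPI ranks distribute work across nodes, OpenMP threads parallelize within a node. It is not drawn from any actual DOE code; the problem size and per-element work are placeholders.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Request thread support so OpenMP threads can coexist with MPI. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank sums its slice of a global array using OpenMP threads. */
    const long N = 1000000;          /* placeholder problem size */
    long chunk = N / nranks;         /* ignores remainder; fine for a sketch */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = 0; i < chunk; i++)
        local += 1.0;                /* stand-in for real per-element work */

    /* MPI then combines the per-rank partial sums into a global result. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (ranks=%d, threads=%d)\n",
               global, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

Built with something like `mpicc -fopenmp`, this two-level decomposition is exactly what the new platforms must keep running even as exascale-oriented frameworks emerge.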

Although the CORAL lab acquisitions have been combined, two distinct solutions will be chosen. One will be delivered as separate systems to both ORNL and ANL, while LLNL will pick one of the two solutions for its own use. In theory, all three labs could end up with the same machine, but since the feds like to spread the supercomputing love around, it most likely means two system vendors will get the opportunity to deliver these pre-exascale machines.

More than likely, we’re talking about IBM and Cray as the primes here, although SGI could also make a reasonable case for a leading-edge supercomputer. None of these vendors has revealed a platform topping 100 petaflops yet. Cray’s latest supercomputer, the XC30, maxes out at 100 petaflops, and even at that level of performance it would rely on GPUs or Intel coprocessors that are still under development. IBM is no doubt working on a successor to Blue Gene/Q, but whether Big Blue’s exascale roadmap continues to follow that architecture, incorporates its Power server technology, or comes up with something entirely novel remains to be seen.

To help foster some of this development, part of the CORAL effort will fund the non-recurring engineering (NRE) costs associated with these pre-exascale supercomputers. The intent is to pour up to $100 million into these NRE activities, with the money split between the two prime contractors. Some of this could certainly filter down to processor vendors, memory makers, and interconnect providers as well.

It’s up to the bidding vendors to impress the labs with proposals for how best to apply the NRE funding: better programmability, improved memory performance, embedded network controllers, faster data transfers between heterogeneous components, more efficient power management, and so on. Alternatively, the NRE could be directed at accelerating delivery schedules or improving system cost or total cost of ownership (TCO). The idea is to fund technologies or processes that the IT market would not be expected to deliver on its own.

Both the CORAL and NERSC-8/Trinity efforts are very much in the tradition of the “swim lanes” procurement approach, which encourages the development of competing supercomputing architectures by different labs and vendors. The DOE has simplified the process somewhat by splitting its six leading centers into two teams, each of which will seed money into exascale research via its preferred industry players.

Since these systems will pave the way for exascale technologies, there’s a lot at stake here for the vendors. This isn’t, however, just restricted to a few elite machines for a handful of labs. Petascale supercomputers will become increasingly commonplace during the second half of this decade, and they will be based on many of the same technologies that will drive exascale systems. Those companies tapped by the DOE to develop these next-generation supercomputers will be in a prime position to build not just the first exaflop-capable platforms, but also a whole array of HPC products for a much wider market.
