US Plans $1.8 Billion Spend on DOE Exascale Supercomputing

By Tiffany Trader

April 11, 2018

On Monday, the United States Department of Energy announced its intention to procure up to three exascale supercomputers at a cost of up to $1.8 billion with the release of the much-anticipated CORAL-2 request for proposals (RFP). Although funding is not yet secured, the anticipated budget range for each system is significant: $400 million to $600 million per machine including associated non-recurring engineering (NRE).

CORAL of course refers to the Collaboration of Oak Ridge, Argonne and Livermore, the joint effort to procure next-generation supercomputers for the Department of Energy’s national laboratories at those three sites. The fruits of the original CORAL RFP include Summit and Sierra, ~200-petaflops systems being built by IBM in partnership with Nvidia and Mellanox for Oak Ridge and Livermore, respectively, and “A21,” the retooled Aurora contract with prime Intel (and partner Cray), destined for Argonne in 2021 and slated to be the United States’ first exascale machine.

The heavyweight supercomputers are required to meet the mission needs of the Advanced Scientific Computing Research (ASCR) Program within the DOE’s Office of Science and the Advanced Simulation and Computing (ASC) Program within the National Nuclear Security Administration.

The CORAL-2 collaboration specifically seeks to fund non-recurring engineering and up to three exascale-class systems: one at Oak Ridge, one at Livermore and a potential third system at Argonne, if it chooses to make an award under the RFP and if funding is available. The Exascale Computing Project (ECP), a joint DOE-NNSA effort, has been organizing and leading R&D across the software stack, applications, and hardware to ensure “capable,” i.e., productively usable, exascale machines that can solve science problems 50x faster (or at greater complexity) than today’s ~20-petaflops DOE systems (i.e., Sequoia and Titan). In terms of peak Linpack, 1.3 exaflops is the “desirable” target set by the DOE.
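For a rough sense of how those figures relate, here is a minimal back-of-the-envelope sketch using only the numbers cited above; it is illustrative only and not drawn from the RFP itself.

```python
# Illustrative sketch using the figures cited above (not from the RFP itself).
peak_today_pflops = 20      # today's ~20-petaflops DOE systems (Sequoia, Titan)
peak_target_pflops = 1300   # 1.3 exaflops "desirable" peak Linpack target

# The raw peak ratio works out to roughly 65x...
print(f"Peak-Linpack ratio: {peak_target_pflops / peak_today_pflops:.0f}x")

# ...while ECP's "capable exascale" goal is stated in application terms:
# science problems solved 50x faster (or at greater complexity), a different
# yardstick, since real applications rarely approach peak Linpack.
print("Application-performance goal: 50x")
```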

Like the original CORAL program, which kicked off in 2012, CORAL-2 has a mandate to field architecturally diverse machines in a way that manages risk during a period of rapid technological evolution. “Regardless of which system or systems are being discussed, the systems residing at or planned to reside at ORNL and ANL must be diverse from one another,” notes the CORAL-2 RFP cover letter [PDF]. Sharpening the point, that means the Oak Ridge system must be distinct from A21 and from a potential CORAL-2 machine at Argonne. It is conceivable, then, that this RFP may result in one, two or three different architectures, depending of course on the selections made by the labs and whether Argonne’s CORAL-2 machine comes to fruition.

“Diversity,” according to the RFP documents, “will be evaluated by how much the proposed system(s) promotes a competition of ideas and technologies; how much the proposed system(s) reduces risk that may be caused by delays or failure of a particular technology or shifts in vendor business focus, staff, or financial health; and how much the proposed system(s) diversity promotes a rich and healthy HPC ecosystem.”

Here is a listing of current and future CORAL machines:

- Summit (Oak Ridge): ~200-petaflops system built by IBM in partnership with Nvidia and Mellanox, from the original CORAL RFP.
- Sierra (Livermore): ~200-petaflops system built by IBM in partnership with Nvidia and Mellanox, from the original CORAL RFP.
- A21/Aurora (Argonne): retooled Aurora contract with prime Intel and partner Cray, slated for delivery in 2021 as the United States’ first exascale machine.
- CORAL-2 systems (Oak Ridge, Livermore, and potentially Argonne): exascale-class machines to be awarded under this RFP.

Proposals for CORAL-2 are due in May with bidders to be selected later this year. Acquisition contracts are anticipated for 2019.

If Argonne takes delivery of A21 in 2021 and deploys an additional machine (or upgrade) in the third quarter of 2022, it would be fielding two exascale machines/builds in less than two years.

“Whether CORAL-2 winds up being two systems or three may come down to funding, which is ‘expected’ at this point, but not committed,” commented HPC veteran and market watcher Addison Snell, CEO of Intersect360 Research. “If ANL does not fund an exascale system as part of CORAL-2, I would nevertheless expect an exascale system there in a similar timeframe, just possibly funded separately.”

Several HPC community leaders we spoke with shared more pointed speculation on what the overture for a second exascale machine at Argonne, so soon on the heels of A21, may indicate: namely, that there may be doubt about whether Intel’s “novel architecture” will satisfy the full scope of DOE’s needs. Given the close timing and the reality of lengthy procurement cycles, the decision on a follow-on will have to be made without the benefit of experience with A21.

Argonne’s Associate Laboratory Director for Computing, Environment and Life Sciences Rick Stevens, commenting for this piece, underscored the importance of technology diversity and shined a light on Argonne’s thinking. “We are very interested in getting as broad a range of responses as possible to consider for our planning. We would love to have multiple choices to consider for the DOE landscape, including exciting options for potential upgrades to Aurora,” he said.

If Intel, working with Cray, is able to fulfill the requirements for a 1-exaflops A21 machine in 2021, the pair may be in a favorable position to fulfill the more rigorous “capable exascale” requirements outlined by ECP and CORAL-2.

The overall bidding pool for CORAL-2 is likely to include IBM, Intel, Cray and Hewlett Packard Enterprise (HPE); upstart system-maker Nvidia may also have a hand to play. HPE could come in with a GPU-based machine or an implementation of its memory-centric architecture, known as The Machine. In IBM’s court, the successor architectures to Power9 are no doubt being looked at as candidates.

And while it’s always fun dishing over the sexy processing elements (with flavors from Intel, Nvidia, AMD and IBM on the tasting menu), Snell pointed out it is perhaps more interesting to prospect the interconnect topologies in the field. “Will we be looking at systems based on an upcoming version of a current technology, such as InfiniBand or OmniPath, or a future technology like Gen-Z, or something else proprietary?” he pondered.

Stevens weighed in on the many technological challenges still at hand, including memory capacity, power consumption, and system balance, but noted that, fortunately, the DOE has been investing in these areas for many years through the PathForward program and its predecessors, created to foster the technology pipeline needed for extreme-scale computing. It’s no accident that we’ve already run through all the names in the current “Forward” program: AMD, Cray, HPE, IBM, Intel, and Nvidia.

“Hopefully the vendors will have some good options for us to consider,” said Stevens, adding that Argonne is seeking a broad set of responses from as many vendors as possible. “This RFP is really about opening up the aperture to new architectural concepts and to enable new partnerships in the vendor landscape. I think it’s particularly important to notice that we are interested in systems that can support the integration of simulation, data and machine learning. This is reflected in both the technology specifications as well as the benchmarks outlined in the RFP.”

Other community members also shared their reactions.

“It is good to see a commitment to high-end computing by DOE, though I note that the funding has not yet been secured,” said Bill Gropp, director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (home to the roughly 13-petaflops Cray “Blue Waters” supercomputer). “What is needed is a holistic approach to HEC; this addresses the next+1 generation of systems but not work on applications or algorithms.”

“What stands out [about the CORAL-2 RFP] is that it doesn’t take advantage of the diversity of systems to encourage specialization in the hardware to different data structure/algorithm choices,” Gropp added. “Once you decide to acquire several systems, you can consider specialization. Frankly, for example, GPU-based systems are specialized; they run some important algorithms very well, but are less effective at others. Rather than deny that, make it into a strength. There are hints of this in the way the different classes of benchmarks are described and the priorities placed on them [see page 23 of the RFP’s Proposal Evaluation and Proposal Preparation Instructions], but it could be much more explicit.

“Also, this line on page 23 stands out: ‘The technology must have potential commercial viability.’ I understand the reasoning behind this, but it is an additional constraint that may limit the innovation that is possible. In any case, this is an indirect requirement. DOE is looking for viable technologies that it can support at reasonable cost. But this misses the point that using commodity (which is how commercial viability is often interpreted) technology has its own costs, in the part of the environment that I mentioned above and that is not covered by this RFP.”

Gropp, who is awaiting the results of the NSF Track 1 RFP that will award the follow-on to Blue Waters, also pointed out that NSF has only found $60 million for the next-generation system, and has (as of November 2017) cut the number of future Track 2 systems to one. “I hope that DOE can convince Congress to not only appropriate the funds for these systems, but also for the other science agencies,” he said.

Adding further valuable insight into the United States’ strategy to field next-generation leadership-class supercomputers, especially with regard to the “commercial viability” precept, is NNSA Chief Scientist Dimitri Kusnezov. Interviewed at the Supercomputing Frontiers Europe 2018 conference in Warsaw, Poland, last month, Kusnezov characterized DOE and NNSA’s $258 million funding of the PathForward program as “an investment with the private sector to buy down risk in next-generation technologies.”

“We would love to simply buy commercial,” he said. “It would be more cost-effective for us. We’d run in the cloud if that was the answer for us, if that was the most cost-effective way, because it’s not about the computer, it’s about the outcomes. The $250 million [spent on PathForward] was just a piece of ongoing and much larger investments we are making to try and steer, on the sides, vendor roadmaps. We have a sense where companies are going. They share with us their technology investments, and we ask them if there are things we can build on those to help modify it so they can be more broadly serviceable to large scalable architectures.

“$250 million dollars is not a lot of money in the computer world. A billion dollars is not a lot of money in the computer world, so you have to have measured expectations on what you think you can actually impact. We look at impacting the high-end next-generation roadmaps of companies where we can, to have the best output. The best outcome for us is we invest in modifications, lower-power processors, memory closer to the processor, AI-injected into the CPUs in some way, and, in the best case, it becomes commercial, and there’s a market for it, a global market ideally because then the price point comes down and when we build something there, it’s more cost-effective for us. We’re trying to avoid buying special-purpose, single-use systems because they’re too expensive and it doesn’t make a lot of sense. If we can piggyback on where companies want to go by having a sense of what might ultimately have market value for them, we leverage a lot of their R&D and production for our value as well.

“This investment we are doing buys down risk. If other people did it for us that would even be better. If they felt the urgency and invested in the areas we care about, we’d be really happy. So we fill in the gaps where we can. …But ultimately it’s not about the computer, it’s really about the purpose…the problems you are solving and do they make a difference.”
