DOE Sets Exascale Pricetag

By Tiffany Trader

September 16, 2013

The United States Department of Energy has announced a plan to field an exascale system by 2022, but says that meeting this objective will require an investment of $1 billion to $1.4 billion in targeted research and development. The DOE’s June 2013 “Exascale Strategy” report to Congress was recently obtained by FierceGovernmentIT.

The report makes it clear that exascale systems, one hundred to one thousand times faster than today’s petascale supercomputers, are needed to maintain a competitive advantage in both the science and security domains. The DOE notes that exascale computing will be essential to processing future datasets in areas like combustion, climate and astrophysics, and claims that there is “significant leverage in addressing the challenges of large scale simulations and large scale data analysis together.”

But before practical exascale machines can become a reality, several major obstacles need to be addressed. Among these are the energy issue; system balance and the memory wall; resiliency and coping with run-time errors; and exploiting massive parallelism. All of these issues require focused research and development.
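To see why resiliency looms so large, consider how a system’s mean time between failures (MTBF) shrinks as component counts grow. The toy calculation below makes the point; the node counts and the five-year per-node MTBF are illustrative assumptions on our part, not figures from the DOE report.

```python
# Toy illustration of the exascale resiliency challenge: system-level
# mean time between failures (MTBF) shrinks as component count grows.
# The per-node MTBF and node counts are illustrative assumptions,
# not figures from the DOE report.
NODE_MTBF_HOURS = 5 * 8760  # assume each node fails once every 5 years

for nodes in (10_000, 100_000, 1_000_000):
    system_mtbf_hours = NODE_MTBF_HOURS / nodes
    print(f"{nodes:>9,} nodes -> system MTBF ~ {system_mtbf_hours:.2f} hours")

#    10,000 nodes -> system MTBF ~ 4.38 hours
#   100,000 nodes -> system MTBF ~ 0.44 hours
# 1,000,000 nodes -> system MTBF ~ 0.04 hours
```

At the component counts contemplated for exascale machines, in other words, failures become routine events that applications and system software must survive, not rare exceptions.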

Reducing power requirements is one of the foremost objectives of any exascale endeavor. The report points out that an exascale supercomputer built with current technology would consume almost a gigawatt of power, approximately half the output of Hoover Dam. With a standard technology progression over the next decade, experts estimate that an exascale supercomputer could be constructed with power requirements in the 200 megawatt range, at an estimated operating cost of $200-$300 million per year. Whether funding bodies will be willing to spend this much money remains to be seen, but the DOE would like to see that power requirement cut by a factor of 10, down to the 20 megawatt neighborhood where current best-in-class systems reside.
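That annual figure is essentially the electricity bill. A quick back-of-the-envelope sketch shows how a constant 200 MW draw maps onto the $200-$300 million range; the $0.11-$0.17 per kilowatt-hour rates are our assumptions for illustration, not figures from the report.

```python
# Back-of-the-envelope: annual electricity cost of a constant 200 MW load.
# The $/kWh rates below are assumed for illustration, not from the report.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_power_cost(megawatts: float, dollars_per_kwh: float) -> float:
    """Annual electricity cost in dollars for a constant load."""
    return megawatts * 1_000 * HOURS_PER_YEAR * dollars_per_kwh

for rate in (0.11, 0.17):  # assumed industrial electricity rates, $/kWh
    cost = annual_power_cost(200, rate)
    print(f"200 MW at ${rate:.2f}/kWh ~ ${cost / 1e6:.0f}M per year")

# 200 MW at $0.11/kWh ~ $193M per year
# 200 MW at $0.17/kWh ~ $298M per year
```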

As a point of comparison, the largest US supercomputer, Titan, installed at Oak Ridge National Laboratory, requires 8.2 MW to reach 17.59 petaflops. The world’s fastest system, China’s 33.86 petaflop Tianhe-2, has a peak power load of 17.8 MW, but that figure goes up to 24 MW when cooling is added.
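Expressed as energy efficiency, the gap is stark. Here is a short sketch using only the figures cited above; the gigaflops-per-watt framing is ours, not the report’s.

```python
# Energy efficiency implied by the figures cited above, in gigaflops/watt.
# Values are (petaflops, megawatts); the last entry is the DOE's 20 MW goal.
systems = {
    "Titan":             (17.59, 8.2),
    "Tianhe-2":          (33.86, 17.8),
    "Exascale at 20 MW": (1000.0, 20.0),
}

for name, (petaflops, megawatts) in systems.items():
    gflops_per_watt = (petaflops * 1e6) / (megawatts * 1e6)
    print(f"{name}: {gflops_per_watt:.1f} GF/W")

# Titan: 2.1 GF/W
# Tianhe-2: 1.9 GF/W
# Exascale at 20 MW: 50.0 GF/W
```

In other words, hitting an exaflop inside a 20 MW envelope requires more than a 20-fold improvement in flops per watt over Titan, the most efficient system in this comparison.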

The DOE report recommends five main areas of focus that together form a comprehensive exascale roadmap, with the goal of fielding such a system by the beginning of the next decade (circa 2022).

  • Provide computational capabilities that are 50 to 100 times greater than today’s systems at DOE’s Leadership Computing Facilities.
  • Have power requirements that are a factor of 10 below the 2010 industry projections for such systems, which assumed incremental efficiency improvements.
  • Execute simulations and data analysis applications that require advanced computing capabilities such as performing accurate full reactor core calculations, validating and improving combustion models for mixed combustion regimes with strong turbulence-chemistry interactions, designing enzymes for conversion of biomass, and incorporating more realistic decisions based on available energy sources into the energy grid.
  • Provide the capacity and capability needed to analyze ever-growing data streams.
  • Advance state-of-the-art hardware and software information security capabilities.

The plan described in the report covers the research, development and engineering needed to achieve an exascale computing system by 2022, but the acquisition of such a system would be separate from this effort. The suggested approach is to continue fielding systems at intermediate stages of performance, for example 100 petaflops, 250 petaflops, 500 petaflops, and so on, up to exascale. Currently, the US invests between $180 million and $200 million annually to acquire and operate HPC machines through the NNSA Advanced Simulation and Computing (ASC) and Office of Science Advanced Scientific Computing Research (ASCR) programs.

The R&D required to prepare the way for an exascale supercomputer comes with a price tag of $1 billion to $1.4 billion, a figure arrived at by surveying key stakeholders in the computing industry. This is the cost to the DOE, with the expectation that there will be some “cost-share contribution” from vendors and that some software component development will be left to the broader software ecosystem. Responsibility for the program will be shared jointly by the DOE’s Office of Science and the National Nuclear Security Administration (NNSA).

Related Items

Consolidating HPC’s Gains

Some Like IT Cold: Intelligence Agencies Pursue Low-Power Exascale

Senator Says US Congress Doesn’t ‘Get’ Supercomputers

Green500 Founder on Getting to Exascale: ‘Something’s Gotta Change’
