Revisiting the 2008 Exascale Computing Study at SC18

By Scott Gibson

November 29, 2018

Jeffrey Vetter, Distinguished R&D Staff Member at Oak Ridge National Laboratory, led the SC18 Birds of a Feather session “Revisiting the 2008 ExaScale Computing Study and Venturing Predictions for 2028.”

A report published a decade ago conveyed the results of a study aimed at determining whether it was possible to achieve 1000X the computational power of the then-emerging petascale systems within a system power budget of no more than 20 MW. On November 14 at the SC18 supercomputing conference in Dallas, some of the original contributors to the report participated in a Birds of a Feather (BoF) session in which they reflected on the document, sharing what they deemed its hits and misses and making predictions for 2028.

Session leader Jeffrey Vetter of Oak Ridge National Laboratory said the 2008 report, titled “Exascale Computing Study: Technology Challenges in Achieving Exascale Systems,” has been cited more than 1,000 times, and that many people look to it to decide what research agendas to undertake and to weigh the most salient challenges facing high-performance computing.

The study was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Processing Techniques Office (IPTO), with Bill Harrod as program manager. The report represents the ideas of people from universities, industry, and research labs, collected during periodic meetings held over the course of more than a year.

Harrod, who is now a program manager for the Intelligence Advanced Research Projects Activity (IARPA), told the BoF audience that consideration of petascale system specifications as they existed at the time informed the study group members’ assumptions about exascale. Petascale systems operated at about 13 MW with several hundred cabinets. Thus, the anticipated parameters for exascale were 10^18 operations per second at 20 MW and fewer than 500 cabinets: a thousandfold increase in performance for less than twice the power, implying a roughly 650-fold improvement in energy efficiency per operation. The pivotal big-picture questions, Harrod said, were whether an exascale system was needed and whether it could be used for scientific discovery and other practical purposes.

Two other studies, on software and resiliency, respectively, followed the study upon which the 2008 report was based. The resounding, overarching comment concerning the findings of the three studies, Harrod said, was that co-design would be essential. He added that although the co-design concept was not revolutionary, it was determined to be critical for ensuring hardware design would correspond properly with the intended uses for the system, and it became an integral aspect of the US Department of Energy’s Exascale Computing Initiative (ECI) and Exascale Computing Project (ECP).

Peter Kogge of the University of Notre Dame led the Exascale Computing Study and served as editor of the 2008 report. In his presentation for the BoF, he outlined four key challenges that surfaced from the study: energy and power, memory, concurrency, and resiliency. He also summarized the 2008 computing environment and what it was anticipated to look like by 2015, noting that the study team did not focus on application needs or the Roofline model. For matrix-multiply workloads such as the High-Performance Linpack (HPL) benchmark, he said, a large enough cache would supersede concerns about memory speed; and to reach a peak of 1 exaflops within the 20 MW budget, the goal was to hit 20 pJ/flop.
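As a quick sanity check (a back-of-the-envelope sketch using only the figures quoted above, not a calculation taken from the report), the 20 pJ/flop target falls directly out of dividing the 20 MW power budget by 10^18 operations per second:

```python
# Sanity check: the per-operation energy target implied by the study's goals.
power_budget_w = 20e6  # 20 MW system power budget
peak_flops = 1e18      # 1 exaflops = 10^18 operations/second

joules_per_flop = power_budget_w / peak_flops
print(f"{joules_per_flop * 1e12:.0f} pJ/flop")  # prints: 20 pJ/flop
```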

The team assembled what Kogge referred to as an aggressive strawman, with an architecture largely influenced by study contributor Bill Dally (then with Stanford University, now with Nvidia), who participated in the BoF. The architecture was characterized by multicore chips, no cache coherency, and a shared global address space. Reaching the 1 exaflops peak meant 68 MW of power across 583 racks. On the programming side, about 1 billion threads would need to be maintained. A wire interconnect was assumed.
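The same back-of-the-envelope treatment (again a sketch, using only the numbers quoted above) puts the strawman's derived metrics in perspective: about 68 pJ/flop, roughly 1.7 petaflops per rack, and on the order of 1 gigaflop/s sustained per thread:

```python
# Derived metrics for the 2008 "aggressive strawman," using only the
# figures quoted above: 1 exaflops peak, 68 MW, 583 racks, ~1e9 threads.
peak_flops = 1e18
power_w = 68e6
racks = 583
threads = 1e9

print(f"{power_w / peak_flops * 1e12:.0f} pJ/flop")         # ~68 pJ/flop
print(f"{peak_flops / racks / 1e15:.2f} PF per rack")       # ~1.72 PF/rack
print(f"{peak_flops / threads / 1e9:.1f} GF/s per thread")  # ~1.0 GF/s/thread
```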

Kogge provided details from the report on the aggressive strawman system, which he said he considered to be “remarkably prescient” with respect to what ultimately materialized in the evolution toward exascale.

A 2015 paper by Kogge for the International Supercomputing Conference (ISC), titled “Updating Energy Model for Future Exascale Systems,” examined an update of the models the Exascale Computing Study team had built to project performance for the heavyweight (Xeon-class) sockets only. The paper received the Gauss Award.

The study group’s final analysis showed that an exaflops could be reached by 2020, but at a power draw of 180 MW to 430 MW.

The Study Contributors’ Assessments of Hits and Misses

Bill Harrod

At the inception of the DARPA studies, the target year for reaching exascale was 2015, but based on the results of the software study it was adjusted to 2018. Today, projections are focused on the 2021–2023 time frame. Harrod said that although the projections have evolved, the studies paved the way for DARPA’s Ubiquitous High-Performance Computing (UHPC) Exascale Projects and laid the foundation for DOE’s ECI and ECP. They have, he added, greatly enhanced the environment for exascale development.

In terms of hits and misses, the importance of co-design has played out at DOE and many other places, including the FastForward and PathForward programs, Harrod said. As a key miss of the study, he highlighted the fact that it did not foresee the impact of artificial intelligence (AI).

Peter Kogge

The study group’s focus on heavyweight systems was dead-on through 2015, and the aggressive strawman the group developed greatly resembles today’s GPUs, Kogge said. The group was also right, he added, to point out that some form of memory stacking would be necessary and that interconnects, at least locally within racks, would still largely be copper. Among the misses, he highlighted heterogeneous systems and the SIMT threading model that underpins what is done with GPUs today.

Keren Bergman (Columbia University)

Bergman said that, as someone whose background is in optical networks, she considered the study’s close examination of interconnect energy consumption to be enlightening. Among the study’s hits, in her view, were the deep discussions that captured the growing challenge of data movement. One of its sizable misses, however, was the cost associated with manufacturability: substantial innovation, she said, would be required to integrate photonics into chips and remedy one of the last real bottlenecks.

Dean Klein (Micron, now retired)

Klein, who was vice president of memory system development at Micron at the time of the study and who now, in retirement, mentors and motivates engineering students, highlighted as a hit the study group’s recognition that the energy of memory subsystems would force compromises in system memory design, and as a miss the idea of NAND flash playing a role in supercomputing.

Bill Dally

The prescience of the study’s aggressive silicon strawman made it a hit, Dally said. Conversely, he viewed as shortcomings the paucity of capable networks, which he attributed to funding constraints; the failure to anticipate AI; and an overly conservative approach to software.

Exascale Study Contributors’ Predictions for 2028

As the BoF contributors offered diverse predictions for 2028 from the perspectives of their areas of expertise, one recurring notion was the belief that complementary metal-oxide-semiconductor (CMOS) technology would remain the predominant basis for integrated circuits.

The contributors also responded to comments and questions from the audience.

Scott Gibson is a science writer and communications specialist with Oak Ridge National Laboratory.
