Revisiting the 2008 Exascale Computing Study at SC18

By Scott Gibson

November 29, 2018

Jeffrey Vetter, Distinguished R&D Staff Member at Oak Ridge National Laboratory, led the SC18 Birds of a Feather session “Revisiting the 2008 ExaScale Computing Study and Venturing Predictions for 2028.”

A report published a decade ago conveyed the results of a study aimed at determining whether it was possible to achieve 1,000 times the computational power of the then-emerging petascale systems within a system power budget of no more than 20 MW. On November 14 at the SC18 supercomputing conference in Dallas, some of the original contributors to the report participated in a Birds of a Feather (BoF) session in which they reflected on the document, sharing what they deemed to be its hits and misses and making predictions for 2028.

Session leader Jeffrey Vetter of Oak Ridge National Laboratory said the 2008 report, titled “Exascale Computing Study: Technology Challenges in Achieving Exascale Systems,” has been cited more than 1,000 times and that many people look to it to understand what research agendas they should undertake and to consider the most salient challenges facing high-performance computing.

The study was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Processing Techniques Office (IPTO), with Bill Harrod as program manager. The report represents the ideas of people from universities, industry, and research labs, collected during periodic meetings held over the course of more than a year.

Harrod, who is now a program manager for the Intelligence Advanced Research Projects Activity (IARPA), told the BoF audience that consideration of petascale system specifications as they existed at the time informed the study group members’ assumptions about exascale. Petascale systems operated at about 13 MW with several hundred cabinets. Thus, the anticipated parameters for exascale were 10^18 operations per second at 20 MW with fewer than 500 cabinets. The pivotal big-picture questions, Harrod said, were whether an exascale system was needed and whether it could be used for scientific discovery and other practical purposes.

Two other studies, on software and resiliency, respectively, followed the study upon which the 2008 report was based. The resounding, overarching comment concerning the findings of the three studies, Harrod said, was that co-design would be essential. He added that although the co-design concept was not revolutionary, it was determined to be critical for ensuring hardware design would correspond properly with the intended uses for the system, and it became an integral aspect of the US Department of Energy’s Exascale Computing Initiative (ECI) and Exascale Computing Project (ECP).

Peter Kogge of the University of Notre Dame led the Exascale Computing Study and served as editor of the 2008 report. In his presentation for the BoF, he outlined four key challenges that surfaced from the study: energy and power, memory, concurrency, and resiliency. He also summarized the 2008 computing environment and what it was anticipated to look like by 2015, noting that the study team did not focus on application needs and the Roofline model. For matrix multiply, as in the High-Performance Linpack (HPL) benchmark, he said, having a large enough cache would supersede concerns about memory speed; and to reach a peak of 1 exaflops, the goal was to hit 20 pJ/flop.
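As a rough sanity check of that figure (an illustrative sketch, not something presented in the session), a 20 MW power budget spread across a sustained exaflops works out to exactly the 20 pJ/flop target:

```python
# Back-of-the-envelope check: a 20 MW budget at 1 exaflops implies 20 pJ/flop.
power_budget_w = 20e6      # 20 MW system power budget
peak_flops = 1e18          # 1 exaflops (10^18 flop/s)

energy_per_flop_j = power_budget_w / peak_flops
print(f"{energy_per_flop_j * 1e12:.0f} pJ/flop")   # -> 20 pJ/flop
```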

The team assembled what Kogge referred to as an aggressive strawman with an architecture that was largely influenced by study contributor Bill Dally (then with Stanford University, now with Nvidia), who participated in the BoF. The architecture was characterized by multicore processors, no cache coherency, and a shared global address space. Reaching the 1 exaflops peak meant 68 MW of power across 583 racks. On the programming side, about 1 billion threads would need to be maintained. A wire interconnect was assumed.
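For scale, the strawman’s numbers can be reproduced with simple arithmetic (again an illustrative sketch; the 68 MW, 583-rack, and billion-thread figures come from the session, while the per-thread flop rate is an assumed value):

```python
# Rough per-rack power and concurrency arithmetic for the aggressive strawman.
strawman_power_w = 68e6    # 68 MW total system power (from the session)
racks = 583                # rack count from the strawman design
peak_flops = 1e18          # 1 exaflops peak
flops_per_thread = 1e9     # assumption: ~1 Gflop/s sustained per thread

print(f"{strawman_power_w / racks / 1e3:.0f} kW per rack")              # ~117 kW/rack
print(f"{peak_flops / flops_per_thread:,.0f} threads to sustain peak")  # ~1,000,000,000
```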

Kogge provided details from the report on the aggressive strawman system, which he said he considered to be “remarkably prescient” with respect to what ultimately materialized in the evolution toward exascale.

A 2015 paper by Kogge for the International Supercomputing Conference (ISC), titled “Updating Energy Model for Future Exascale Systems,” examined an update of the models that the Exascale Computing Study team had built to project performance, covering only the heavyweight (Xeon-chip) sockets. The paper received the Gauss Award.

The study group’s final analysis showed that an exaflops could be reached by 2020, but with a peak power of 180 MW to 430 MW.

The Study Contributors’ Assessments of Hits and Misses

Bill Harrod

At the inception of the DARPA studies, the target year for reaching exascale was 2015, but based on the results of the software study it was adjusted to 2018. Today, projections are focused on the 2021–2023 time frame. Harrod said that although the projections have evolved, the studies paved the way for DARPA’s Ubiquitous High-Performance Computing (UHPC) Exascale Projects and laid the foundation for DOE’s ECI and ECP. They have, he added, greatly enhanced the environment for exascale development.

In terms of hits and misses, the importance of co-design has played out at DOE and many other places, including in the FastForward and PathForward programs, Harrod said. As a key miss, he highlighted the fact that the study did not foresee the impact of artificial intelligence (AI).

Peter Kogge

The study group’s approach of focusing on the heavyweight systems was dead-on through 2015, and the aggressive strawman they developed greatly resembles today’s GPU, Kogge said. In addition, he said the study group was right to point out that some form of memory stacking would be necessary and that interconnects, at least locally within racks, would still largely be copper. Among the misses, he highlighted heterogeneous systems and the SIMT threading model, which is how GPUs are programmed today.

Keren Bergman (Columbia University)

Bergman said that as someone whose background is in optical networks, she considered the close examination of the energy consumption of the interconnects in this study to be enlightening. With respect to the study’s hits, she opined that the deep discussions captured the growing challenge of data movement. However, in her view, one of the study’s sizable misses was the cost associated with manufacturability. She said substantial innovations would be required to integrate photonics into chips and remedy one of the last real bottlenecks.

Dean Klein (Micron/now retired)

Klein, who was vice president of memory system development at Micron at the time of the study and who now mentors and motivates engineering students in retirement, highlighted as a hit the study group’s awareness that the energy of memory subsystems would drive compromises in system memory, and as a miss the idea of NAND flash playing a role in supercomputing.

Bill Dally

The prescience of the study’s aggressive silicon strawman made it a hit, Dally said. Conversely, he viewed as shortcomings the paucity of capable networks due to funding, the failure to anticipate AI, and an overly conservative approach to software.

Exascale Study Contributors’ Predictions for 2028

As the BoF contributors offered diverse predictions for 2028 from the perspectives of their areas of expertise, one recurring notion was the belief that complementary metal-oxide-semiconductor (CMOS) technology would remain the predominant way of constructing integrated circuits.

The contributors also responded to comments and questions from the audience.

Scott Gibson is a science writer and communications specialist with Oak Ridge National Laboratory.
