The Exascale Revolution

By Tiffany Trader

October 23, 2014

The post-petascale era is marked by systems with far greater parallelism and architectural complexity. Barring some game-changing innovation, crossing the next 1000x performance barrier will be more challenging than previous efforts. At the 2014 Argonne National Laboratory Training Program on Extreme Scale Computing (ATPESC), held in August, Professor Pete Beckman delivered a talk on “Exascale Architecture Trends” and their impact on how computational science and engineering applications are programmed and executed.

It’s a unique point in time, says Beckman, director of the Exascale Technology and Computing Institute. While we can’t completely future-proof code, there are trends that will impact programming best practices.

When it comes to the current state of HPC, Beckman shares a chart from Peter Kogge of Notre Dame detailing three major trends, all of which trace back to 2004:

  • The power ceiling.
  • The clock ceiling.
  • The growth in sockets and cores.

As Kogge illustrates, there was a fundamental shift in 2004: chips couldn’t get any hotter, clocks stopped scaling, and the free performance lunch was over.

“Now the parallelism in your application is increasing dramatically with every generation,” says Beckman. “We have this problem, we can’t make things take much more power per package, we’ve hit the clock ceiling, we’re now scaling by adding parallelism, and there’s a power problem at the heart of this, which translates into all sorts of other problems, with memory and so on.”

To illustrate the power issue, Beckman compares the IBM Blue Gene/Q system to its predecessor, Blue Gene/P. Blue Gene/Q is about 20 times faster and uses four times more power, making it five times more power efficient. That seems like good progress, but extrapolating the same 5x trajectory forward, an exascale system would consume 64MW. For added perspective, a megawatt costs about $1 million a year in electricity, putting the power bill for such a system at $64 million a year.

[Slide: the Blue Gene power problem, from Beckman’s talk]
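
The arithmetic behind that projection is simple to check. Below is a minimal back-of-envelope sketch in Python that uses only the figures reported from the talk; the variable names and the framing are ours, not Beckman’s.

```python
# Back-of-envelope check of the power argument, using only the
# figures reported above; variable names are illustrative.
speedup = 20.0                 # Blue Gene/Q vs. Blue Gene/P performance
power_ratio = 4.0              # Blue Gene/Q vs. Blue Gene/P power draw
efficiency_gain = speedup / power_ratio  # -> 5x more power efficient

exascale_power_mw = 64.0       # projected draw on this 5x trajectory
cost_per_mw_year = 1_000_000   # ~$1 million per MW per year in electricity

annual_bill = exascale_power_mw * cost_per_mw_year
print(f"Efficiency gain per generation: {efficiency_gain:.0f}x")
print(f"Projected annual electricity bill: ${annual_bill:,.0f}")
```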

Beckman emphasizes the international nature of this problem. Japan, for example, has set an ambitious 2020 target for its exascale computing strategy, an effort led by the RIKEN Advanced Institute for Computational Science. Although the necessary funding is not yet fully secured, the project cost is estimated at nearly $1.3 billion.

Regions around the world have concluded that the exascale finish line is unlike previous 1000x efforts and will require international collaboration. Beckman points to stagnation in the TOP500 list as indicative of the difficulty of the challenge. In light of this, Japan and the US have signed a formal agreement, announced at ISC, to collaborate on HPC system software development.

Europe is pursuing similar agreements with the US and Japan. As part of its Horizon 2020 program, Europe plans to invest 700 million euros between 2014 and 2020 to fund next-generation systems, with a particular interest in establishing a European HPC vendor base.

No discussion of the global exascale race would be complete without mentioning China, whose Tianhe-2 has topped the last three iterations of the TOP500 list. Tianhe-2 is energy-efficient for its size, drawing 24MW including cooling, but even that expense means the system is not kept powered on all the time.

Principally an Intel-powered system, Tianhe-2 also contains homegrown elements developed by China’s National University of Defense Technology (NUDT), including SPARC-derived CPUs, a high-speed interconnect, and a Linux-variant operating system. China continues to invest heavily in HPC technology, and Beckman says we can expect one of the next machines from China – likely in the top 10 – to be built entirely from native technology.

Can the exponential progress continue?

Looking at the classic history-of-supercomputing chart, systems appear set to keep hitting their performance marks, provided their massive power footprints can be tolerated. At the device level, feature sizes are nearing fundamental limits. “Unless there is a revolution of some sort, we really can’t get off the curve that is heading towards a 64MW supercomputer,” says Beckman. “It’s about power, both in the number of chips and the total dissipation of each of the chips.”

Beckman cites several forces of change on the software side, including memory, threads, messaging, resilience, and power. At the level of the programming model and the OS interface, he suggests the need for coherence islands as well as persistence.

With increased parallelism, the notion that equal work means equal time is going away; variability (noise, jitter) is the new norm. “The architecture will begin to show even more variability between components, and your algorithms and your approaches, whether it’s tasks or threads, will address that in the future,” Beckman tells his audience, “and as we look toward exascale, the programmer who can master this feature well will do well.”
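
As a concrete, if simplified, illustration of why task-based approaches help, here is a toy Python sketch of our own, not from the talk: it compares a static, equal-count division of work against dynamic scheduling when per-task runtimes are noisy. The worker count and jitter distribution are arbitrary assumptions.

```python
# Toy illustration (not from the talk): under runtime jitter, a static
# equal-count split finishes at the speed of its unluckiest worker, while
# dynamic scheduling lets idle workers pull the next task immediately.
import random
import time
from concurrent.futures import ThreadPoolExecutor

random.seed(1)
N_WORKERS = 8
# Nominal 5 ms tasks with assumed random jitter
costs = [0.005 * random.uniform(0.2, 4.0) for _ in range(128)]

def run_task(cost):
    time.sleep(cost)  # stand-in for a compute kernel

def static_split():
    # Pre-assign a fixed slice to each worker ("equal work is equal time")
    slices = [costs[i::N_WORKERS] for i in range(N_WORKERS)]
    t0 = time.perf_counter()
    with ThreadPoolExecutor(N_WORKERS) as pool:
        list(pool.map(lambda s: [run_task(c) for c in s], slices))
    return time.perf_counter() - t0

def dynamic_schedule():
    # Workers pull one task at a time from a shared queue
    t0 = time.perf_counter()
    with ThreadPoolExecutor(N_WORKERS) as pool:
        list(pool.map(run_task, costs))
    return time.perf_counter() - t0

print(f"static split:     {static_split():.3f} s")
print(f"dynamic schedule: {dynamic_schedule():.3f} s")
```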

Attracting and training the next generation of HPC users is a top priority for premier HPC centers like Argonne National Laboratory. One way Argonne tackles this challenge is with an intensive summer school in extreme-scale computing. The program traces its roots to the 1980s, and the presentations are worthwhile not just for the target audience – a select group of mainly PhD students and postdocs – but for anyone keenly interested in the state of HPC: where it’s come from and where it’s going.
