Meet the Exascale Apps

By Gary Johnson

April 12, 2012

In what will be a three-decade span between gigascale and exascale computing, HPC capability will have increased by a factor of one billion, but the apps that are projected to use this enormous increase in capability look pretty much like the gigascale ones. Are we missing opportunities as we push the apex of HPC higher?

Gigascale to Terascale

In February of 1991, the Office of Science and Technology Policy released the first “Blue Book” supplement to the President’s FY 1992 Budget Request for the new High Performance Computing and Communications Program. It was entitled “Grand Challenges: High Performance Computing and Communications” and contained a listing of the computational science and engineering challenges then seen as drivers for federal expenditures on HPC. Figure 2 from that report is reproduced below.

Petascale to Exascale

In preparation for the current attempt to secure federal funding for exascale computing, the Department of Energy conducted a series of workshops entitled “Scientific Grand Challenges Workshop Series”. While the series focused only on science and engineering areas of importance to DOE’s mission, that mission is broad enough that the grand challenges discussed there can be viewed as typical of the application areas foreseen as drivers for the move to exascale.

With a bit of poetic license, to keep the reader’s eyes from glazing over, the table below attempts to convey the general character of the early-1990s gigascale-to-terascale applications and of the exascale applications considered for the 2018-2025 timeframe (depending on whose guess about the arrival of exascale computing one chooses).

We see that over a span of 28 to 35 years, depending on how you count, the applications list remains substantially the same. A few of the 90s applications have dropped off the list, either through success or loss of interest. A couple of well-established applications, Nuclear Physics and Nuclear Energy Systems, have been added in response to renewed interest in nuclear energy. To be sure, the other areas listed – the ones that have survived multiple decades – have grown in complexity and broadened in applicability. What seems to be missing is the addition of any fundamentally new applications.

Over the decades since the publication of that first Blue Book, “apexscale” HPC has grown in capability by a factor of 1,000,000. In another decade, when exascale machines occupy the apex, they will be a factor of 1,000,000,000 more capable than those early 90s machines. Certainly, this enormous increase must present the opportunity to do a few fundamentally new things.
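These factors follow directly from the standard SI prefixes (giga = 10^9, peta = 10^15, exa = 10^18); a trivial sketch makes the arithmetic explicit:

```python
# Apex HPC capability by scale, in operations per second.
gigascale = 1e9    # early-1990s apex machines
petascale = 1e15   # apex machines at the time of writing
exascale = 1e18    # projected next apex

print(f"giga -> peta factor: {petascale / gigascale:.0e}")  # 1e+06
print(f"giga -> exa factor:  {exascale / gigascale:.0e}")   # 1e+09
```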

Capability Computing Usage Modes

In general, as HPC grows in capability, it can be used in three distinct ways:

  • Do what we’re currently doing, but faster or cheaper;
  • Undertake the logical extension of what we’re currently doing to use additional computing capabilities; or
  • Use the new and vastly more capable resource to do something we hadn’t seriously considered trying before.

Clearly and justifiably, we are using apexscale HPC in the first two ways. But what about the third? Have we run out of new ideas? Certainly not. But getting new apps on the agenda seems to have been either remarkably hard or of surprisingly little interest.

Exascale Readiness

Whether any new application candidate is, from inception, “exascale ready” seems considerably less important than its potential scalability. We are, after all, living in an age of scalable computing. Observe that many of the gigascale apps of the early 90s have readily survived – and thrived on – the transition to petascale and (soon) exascale. Did we coincidentally choose the complete collection of applications with this sort of scaling potential back then, or could there be others lurking in the wings?

Opportunities

Thinking of what we hadn’t thought of is always difficult and fraught with peril (you don’t know what you don’t know). However, the commercial and open science worlds have provided us with a few possibilities.

Big Data

Although several federally-funded applications areas have well-established needs for data crunching (e.g., high-energy physics, bioinformatics, and national security), the current opportunity in “Big Data” comes from the commercial world. Think: Social Data Analysis, Personal Analytics, Biobank, the Quantified Self, 23andMe, Healthrageous, Integrated Personal Omics, MyLifeBits. These are probably just the tip of the big data iceberg.

IBM has already launched Watson, with (beyond Jeopardy) foci on health care and financial services. Cray and Sandia National Laboratories have started a Supercomputing Institute for Learning and Knowledge Systems. NeuStar and the University of Illinois at Urbana-Champaign have created a Big Data Research Facility. The federal government is also getting on board with its recently announced Big Data Initiative. In fact, it’s interesting to note that the “Blue Book” accompanying the President’s FY 2013 budget request is strongly focused on big data rather than the grand challenges of earlier Blue Books.

So, Big Data is probably a “no-brainer” for the new applications category. Some of it may not be exascale yet, but there’s lots of room to grow.

Brain in a Box

This new application candidate has been advocated by Henry Markram at the Swiss Federal Institute of Technology in Lausanne (EPFL). Its official title is the Human Brain Project (HBP).

As described in a recent Nature article, it’s “an effort to build a supercomputer simulation that integrates everything known about the human brain, from the structures of ion channels in neural cell membranes up to mechanisms behind conscious decision-making.” Markram’s precursor Blue Brain Project at EPFL estimates that this is an exascale application (see figure below).

IBM is also a player in the activity, with its cognitive computing project called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE). This project claims that “By reproducing the structure and architecture of the brain—the way its elements receive sensory input, connect to each other, adapt these connections, and transmit motor output—the SyNAPSE project models computing systems that emulate the brain’s computing efficiency, size and power usage without being programmed.”

Thus, some form of simulation of the complete human brain seems like a keeper for our new applications short list.

Global-scale Systems

Under this heading, a couple of systems immediately come to mind: the global energy system and the global social system. Each seems worthy of a modeling effort.

In this vein, the European Commission has recently funded a “Big Science” pilot project, called FuturICT, “to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience.” FuturICT intends to accomplish these goals “by developing new scientific approaches and combining these with the best established methods in areas like multi-scale computer modeling, social supercomputing, large-scale data mining and participatory platforms.” Sounds like there’s potential for an exascale application here.

To the best of our knowledge, there is no current effort to simulate the complete global energy system. However, given the critical nature of energy, from resource discovery and recovery, through transportation of energy materials, to production and distribution of energy, and disposition of by-products, it seems like having one or more full-scale, high-fidelity simulation tools on hand might be a good idea. Perhaps this will be part of the FuturICT project.

The Whole Planet

Thanks to a concerted international effort spanning a couple of decades, we now have some pretty good global climate models. This community effort has also set a shining example for “team science.”

Lately, the climate modeling community has begun using the term “Earth systems science,” as more phenomenology is added to the basic coupled ocean-atmosphere simulations. Laudable and valuable as these efforts may be, they still leave most of the planet out of the models. So, maybe we should model the whole planet.

The opportunity for such a whole planet model is made visible when one looks at the imagery of our Blue Marble. One immediately notices how thin the shell of the atmosphere is in comparison to the dimensions of our planet. The Earth’s volumetric mean radius is 6371 km. Current climate models reach about 30 km above the surface. The deepest point any ocean model needs to reach is about 12 km below the surface. So, our current modeling efforts are focused on a shell that is, at best, about 0.66 percent of the Earth’s radius. This shell represents about 1.96 percent of the Earth’s volume and 0.02 percent of its mass.
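The percentages above are easy to reproduce. Here is a minimal sketch using the figures quoted in the text; note that matching the 1.96 percent volume figure appears to assume the 42 km shell is measured inward from the mean surface:

```python
# Quick check of the "thin shell" arithmetic above.
R = 6371.0       # Earth's volumetric mean radius, km
top = 30.0       # reach of current climate models above the surface, km
bottom = 12.0    # deepest point an ocean model needs to reach, km

shell = top + bottom                                # 42 km total thickness
radius_fraction = shell / R                         # fraction of the radius
volume_fraction = (R**3 - (R - shell)**3) / R**3    # fraction of the volume

print(f"shell/radius: {radius_fraction:.2%}")   # 0.66%
print(f"shell/volume: {volume_fraction:.2%}")   # 1.96%
```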

Note that the sort of whole planet model proposed here represents an extreme example of a multi-physics, multi-scale problem. The relevant temporal and spatial scales range from sub-millisecond molecular interactions to multi-millennia ice sheet models to million cubic kilometer modeling of the ionosphere.

The advantages of a fully integrated whole planet model are readily apparent and include applications for:

  • Disaster management and mitigation
  • Energy exploitation
  • Minerals exploration and recovery
  • Siting of critical facilities (e.g., nuclear power plants and waste repositories)
  • Understanding the impact of climate change on built infrastructure
  • Understanding the interactions among human, ecological and physical systems

The availability of such models would also serve to advance fundamental scientific understanding of our planet and its dynamics. Furthermore, undertaking to build such models would provide researchers in all of the relevant disciplines with a clear context for thinking about their research activities and how they contribute to the overall planet modeling effort.

Since the earth system models already in development will require trans-petascale computing capabilities, it is clear that exascale capability will be a bare minimum requirement for whole planet models.

The idea of building the sort of top-down whole planet model suggested here has also occurred to others. See, for example, the agenda of the Geneva-based International Centre for Earth Simulation (ICES). Furthermore, no discussion of this topic would be complete without paying homage to the ground-breaking efforts of Japan’s Earth Simulator Center.

Thinking outside the box

Making the case for new applications is a game that anyone can play. Here we have attempted to make the point that there may be worthwhile candidates lurking out there, beyond the view of our current exascale effort and its list of drivers.

If you don’t like these examples, please feel free to critique and improve them. If you have additional applications candidates, please make them known. The more frank and constructive discussion we have on this topic, the better and richer the future of HPC will be.

About the author

Gary M. Johnson is the founder of Computational Science Solutions, LLC, whose mission is to develop, advocate, and implement solutions for the global computational science and engineering community.

Dr. Johnson specializes in management of high performance computing, applied mathematics, and computational science research activities; advocacy, development, and management of high performance computing centers; development of national science and technology policy; and creation of education and research programs in computational engineering and science.

He has worked in Academia, Industry and Government. He has held full professorships at Colorado State University and George Mason University, been a researcher at United Technologies Research Center, and worked for the Department of Defense, NASA, and the Department of Energy.

He is a graduate of the U.S. Air Force Academy; holds advanced degrees from Caltech and the von Karman Institute; and has a Ph.D. in applied sciences from the University of Brussels.
