Meet the Exascale Apps

By Gary Johnson

April 12, 2012

In what will be a three-decade span between gigascale and exascale computing, HPC capability will have increased by a factor of one billion, but the apps that are projected to use this enormous increase in capability look pretty much like the gigascale ones. Are we missing opportunities as we push the apex of HPC higher?

Gigascale to Terascale

In February of 1991, the Office of Science and Technology Policy released the first “Blue Book” supplement to the President’s FY 1992 Budget Request for the new High Performance Computing and Communications Program. It was entitled “Grand Challenges: High Performance Computing and Communications” and contained a listing of the computational science and engineering challenges then seen as drivers for federal expenditures on HPC. Figure 2 from that report is reproduced below.

Petascale to Exascale

In preparation for the current attempt to secure federal funding for exascale computing, the Department of Energy conducted a series of workshops entitled “Scientific Grand Challenges Workshop Series”. While this series only focused on science and engineering areas of importance to DOE’s mission, that mission is broad enough to view the grand challenges discussed there as typical of the applications areas foreseen as drivers for the move to exascale.

With a bit of poetic license, to keep the reader's eyes from glazing over, the table below attempts to convey the general character of the early-1990s gigascale-to-terascale applications and of the exascale applications considered for the 2018-2025 timeframe (depending on whose guess about the arrival of exascale computing one chooses).

We see that over a span of 28 to 35 years, depending on how you count, the applications list remains substantially the same. A few of the 90s applications have dropped off the list, either through success or loss of interest. A couple of well-established applications, Nuclear Physics and Nuclear Energy Systems, have been added in response to renewed interest in nuclear energy. To be sure, the other areas listed, the ones surviving multiple decades, have grown in complexity and broadened in applicability. What seems to be missing is the addition of any fundamentally new applications.

Over the decades since the publication of that first Blue Book, “apexscale” HPC has grown in capability by a factor of 1,000,000. In another decade, when exascale machines occupy the apex, they will be a factor of 1,000,000,000 more capable than those early 90s machines. Certainly, this enormous increase must present the opportunity to do a few fundamentally new things.

Capability Computing Usage Modes

In general, as HPC grows in capability, it can be used in three distinct ways:

  • Do what we’re currently doing, but faster or cheaper;
  • Undertake the logical extension of what we’re currently doing to use additional computing capabilities; or
  • Use the new and vastly more capable resource to do something we hadn’t seriously considered trying before.

Clearly and justifiably, we are using apexscale HPC in the first two ways. But what about the third? Have we run out of new ideas? Certainly not. But getting new apps on the agenda seems to have been either remarkably hard or of surprisingly little interest.

Exascale Readiness

Whether any new application candidate is, from inception, “exascale ready” seems considerably less important than its potential scalability. We are, after all, living in an age of scalable computing. Observe that many of the gigascale apps of the early 90s have readily survived, and thrived on, the transition to petascale and (soon) exascale. Did we coincidentally choose the complete collection of applications with this sort of potential for scalability back then or could there be others lurking in the wings?

Opportunities

Thinking of what we hadn’t thought of is always difficult and fraught with peril (you don’t know what you don’t know). However, the commercial and open science worlds have provided us with a few possibilities.

Big Data

Although several federally-funded applications areas have well-established needs for data crunching (e.g., high-energy physics, bioinformatics, and national security), the current opportunity in “Big Data” comes from the commercial world. Think: Social Data Analysis, Personal Analytics, Biobank, the Quantified Self, 23andMe, Healthrageous, Integrated Personal Omics, MyLifeBits. These are probably just the tip of the big data iceberg.

IBM has already launched Watson, with (beyond Jeopardy) foci on health care and financial services. Cray and Sandia National Laboratories have started a Supercomputing Institute for Learning and Knowledge Systems. NeuStar and the University of Illinois at Urbana-Champaign have created a Big Data Research Facility. The federal government is also getting onboard with its recently announced Big Data Initiative. In fact, it’s interesting to note that the “Blue Book” accompanying the President’s FY 2013 budget request is strongly focused on big data and not the grand challenges of earlier blue books.

So, Big Data is probably a “no brainer” for the new applications category. Some of it may not be exascale yet, but there’s lots of room to grow.

Brain in a Box

This new application candidate has been advocated by Henry Markram at the Swiss Federal Institute of Technology in Lausanne (EPFL). Its official title is the Human Brain Project (HBP).

As described in a recent Nature article, it’s “an effort to build a supercomputer simulation that integrates everything known about the human brain, from the structures of ion channels in neural cell membranes up to mechanisms behind conscious decision-making.” Markram’s precursor Blue Brain Project at EPFL estimates that this is an exascale application (see figure below).

IBM is also a player in the activity, with its cognitive computing project called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE). This project claims that “By reproducing the structure and architecture of the brain—the way its elements receive sensory input, connect to each other, adapt these connections, and transmit motor output—the SyNAPSE project models computing systems that emulate the brain’s computing efficiency, size and power usage without being programmed.”

Thus, some form of simulation of the complete human brain seems like a keeper for our new applications short list.

Global-scale Systems

Under this heading, a couple of systems immediately come to mind: the global energy system and the global social system. Each seems worthy of a modeling effort.

In this vein, the European Commission has recently funded a “Big Science” pilot project, called FuturICT, “to understand and manage complex, global, socially interactive systems, with a focus on sustainability and resilience.” FuturICT intends to accomplish these goals “by developing new scientific approaches and combining these with the best established methods in areas like multi-scale computer modeling, social supercomputing, large-scale data mining and participatory platforms.” Sounds like there’s potential for an exascale application here.

To the best of our knowledge, there is no current effort to simulate the complete global energy system. However, given the critical nature of energy, from resource discovery and recovery, through transportation of energy materials, to production and distribution of energy, and disposition of by-products, it seems like having one or more full-scale, high-fidelity simulation tools on hand might be a good idea. Perhaps this will be part of the FuturICT project.

The Whole Planet

Thanks to a concerted international effort spanning a couple of decades, we now have some pretty good global climate models. This community effort has also set a shining example for “team science.”

Lately, the climate modeling community has begun using the term “Earth systems science,” as more phenomenology is added to the basic coupled ocean-atmosphere simulations. Laudable and valuable as these efforts may be, they still leave most of the planet out of the models. So, maybe we should model the whole planet.

The opportunity for such a whole planet model is made visible when one looks at the imagery of our Blue Marble. One immediately notices how thin the shell of the atmosphere is in comparison to the dimensions of our planet. The Earth’s volumetric mean radius is 6371 km. Current climate models reach about 30 km above the surface. The deepest point any ocean model needs to reach is about 12 km below the surface. So, our current modeling efforts are focused on a shell that is, at best, about 0.66 percent of the Earth’s radius. This shell represents about 1.96 percent of the Earth’s volume and 0.02 percent of its mass.
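The thin-shell figures above are easy to verify. The following back-of-the-envelope sketch recomputes them from the numbers given in the text (the 6371 km mean radius, a 30 km model ceiling, and a 12 km ocean depth); the variable names are our own, and small differences from the quoted percentages simply reflect rounding in the source.

```python
# Back-of-the-envelope check of the "thin shell" figures above.
R = 6371.0    # Earth's volumetric mean radius, km
top = 30.0    # approximate ceiling of current climate models, km
bottom = 12.0 # approximate deepest point an ocean model must reach, km

# The modeled shell spans from 12 km below the surface to 30 km above it.
shell_thickness = top + bottom            # 42 km
radius_fraction = shell_thickness / R     # roughly 0.66 percent

# Volume of the spherical shell between (R - bottom) and (R + top),
# as a fraction of the Earth's total volume; the 4/3*pi factors cancel.
volume_fraction = ((R + top) ** 3 - (R - bottom) ** 3) / R ** 3  # roughly 2 percent

print(f"shell is {radius_fraction:.2%} of the radius "
      f"and {volume_fraction:.2%} of the volume")
```

Running this reproduces the order of magnitude claimed in the text: the entire domain of today's climate and ocean models is well under one percent of the Earth's radius and about two percent of its volume.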

Note that the sort of whole planet model proposed here represents an extreme example of a multi-physics, multi-scale problem. The relevant temporal and spatial scales range from sub-millisecond molecular interactions to multi-millennia ice sheet models to million cubic kilometer modeling of the ionosphere.

The advantages of a fully integrated whole planet model are readily apparent and include applications for:

  • Disaster management and mitigation
  • Energy exploitation
  • Minerals exploration and recovery
  • Siting of critical facilities (e.g., nuclear power plants and waste repositories)
  • Understanding the impact of climate change on built infrastructure
  • Understanding the interactions among human, ecological and physical systems

The availability of such models would also serve to advance fundamental scientific understanding of our planet and its dynamics. Furthermore, undertaking to build such models would provide researchers in all of the relevant disciplines with a clear context for thinking about their research activities and how they contribute to the overall planet modeling effort.

Since the earth system models already in development will require trans-petascale computing capabilities, it is clear that exascale capability will be a bare minimum requirement for whole planet models.

The idea of building the sort of top-down whole planet model suggested here has also occurred to others. See, for example, the agenda of the Geneva-based International Centre for Earth Simulation (ICES). Furthermore, no discussion of this topic would be complete without paying homage to the ground-breaking efforts of Japan’s Earth Simulator Center.

Thinking Outside the Box

Making the case for new applications is a game that anyone can play. Here we have attempted to make the point that there may be worthwhile candidates lurking out there, beyond the view of our current exascale effort and its list of drivers.

If you don’t like these examples, please feel free to critique and improve them. If you have additional applications candidates, please make them known. The more frank and constructive discussion we have on this topic, the better and richer the future of HPC will be.

About the author

Gary M. Johnson is the founder of Computational Science Solutions, LLC, whose mission is to develop, advocate, and implement solutions for the global computational science and engineering community.

Dr. Johnson specializes in management of high performance computing, applied mathematics, and computational science research activities; advocacy, development, and management of high performance computing centers; development of national science and technology policy; and creation of education and research programs in computational engineering and science.

He has worked in academia, industry, and government. He has held full professorships at Colorado State University and George Mason University, been a researcher at United Technologies Research Center, and worked for the Department of Defense, NASA, and the Department of Energy.

He is a graduate of the U.S. Air Force Academy; holds advanced degrees from Caltech and the von Karman Institute; and has a Ph.D. in applied sciences from the University of Brussels.
