Where Has HPC’s Math Gone?

By Gary Johnson

August 19, 2013

When we think about progress in HPC, most of us use hardware speed, as reported in listings like the Top500, as our yardstick.  But is that the whole story – or even its most important component?  HPC hardware and the attendant systems software and tools suites are certainly necessary for progress.  But to harness HPC for practical problem solving, we also need the right math, as expressed in our solvers and application algorithms.  Hardware is tangible and visible, while math is seen only through the mind's eye – and is easily overlooked.  Lately, there hasn't been much public discussion of HPC's math.  Where has it gone?  Has it matured to the point of invisibility – or is it still a vibrant and dynamic part of HPC?  Let's take a look.

“Unglamorous but Critical”

From the early days of HPC, math was clearly seen as a vital element.  In December of 1982, the Report of the Panel on Large Scale Computing in Science and Engineering, also known as the Lax Report, was published. One of its recommendations called for (emphasis mine):

Increased research in computational mathematics, software, and algorithms necessary to the effective and efficient use of supercomputer systems

Twenty years later, in July of 2003, the Department of Energy (DOE)’s Office of Science published: A Science-Based Case for Large-Scale Simulation, also known as the SCaLeS Report (Volume 1, Volume 2).  Among other things, it reiterated the critical role of solvers in HPC (emphasis mine):

Like the engine hidden beneath the hood of a car, the solver is an unglamorous but critical component of a scientific code, upon which the function of the whole critically depends. As an engine needs to be properly matched to avoid overheating and failure when the vehicle’s performance requirements are pushed, so a solver appropriate to the simulation at hand is required as the computational burden gets heavier with new physics or as the distance across the data structures increases with enhanced resolution.

Solvers & Speedup

When improvements in HPC hardware performance are discussed, mention is often made of Moore’s Law and the desire to keep pace with it.  Perhaps less well known is the observation that algorithm speedups have historically matched hardware speedups due to Moore’s Law.  For example, consider this excerpt from the SCaLeS Report:

The choice of appropriate mathematical tools can make or break a simulation code. For example, over a four-decade period of our brief simulation era, algorithms alone have brought a speed increase of a factor of more than a million to computing the electrostatic potential induced by a charge distribution, typical of a computational kernel found in a wide variety of problems in the sciences. The improvement resulting from this algorithmic speedup is comparable to that resulting from the hardware speedup due to Moore’s Law over the same length of time (see Figure 13).

Figure 13.  Top: a table of the scaling of memory and processing requirements for the solution of the electrostatic potential equation on a uniform cubic grid of n × n × n cells.  Bottom: the relative gains of some solution algorithms for this problem, and of Moore's Law improvement in processing rates, over the same period (illustrated for the case n = 64).

Algorithms yield a factor comparable to that of the hardware, and the gains typically can be combined (that is, multiplied together). The algorithmic gains become more important than the hardware gains for larger problems. If adaptivity is exploited in the discretization, algorithms may do better still, though combining all of the gains becomes more subtle in this case.
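To make the scale of those algorithmic gains concrete, here is a rough Python sketch using the classic textbook complexity estimates for these solver families on an n × n × n grid.  The exponents are standard estimates, not numbers taken from the report itself, so treat this as a back-of-the-envelope illustration:

    # Back-of-the-envelope operation counts for solving the 3D electrostatic
    # (Poisson) potential equation on an n x n x n grid.  The complexity
    # exponents are classic textbook estimates for these solver families;
    # they are illustrative, not data taken from the SCaLeS Report.
    import math

    n = 64              # grid cells per dimension (the case shown in Figure 13)
    N = n ** 3          # total unknowns

    # Approximate work (floating-point operations, up to constant factors):
    solvers = {
        "banded Gaussian elimination": N ** (7 / 3),   # ~ n^7
        "optimal SOR":                 N ** (4 / 3),   # ~ n^4
        "FFT-based fast solver":       N * math.log2(N),
        "full multigrid":              N,              # asymptotically optimal
    }

    base = solvers["banded Gaussian elimination"]
    for name, work in solvers.items():
        print(f"{name:28s} work ~ {work:11.2e}   gain vs. banded GE: {base / work:12,.0f}x")

Run as-is, the multigrid line shows a gain of over sixteen million relative to banded elimination – the "factor of more than a million" the report describes, before any hardware improvement is counted.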

Time to Solution

So, if hardware gains and algorithmic gains could be "multiplied together," what would that imply?  If we are currently targeting a 1,000-fold increase in hardware speed over the present decade, and if algorithmic gains keep pace, then in ten years we'll have improved our problem-solving capability by a factor of 1,000,000.  Thus we'd be able to solve today's problems in one millionth of their current solution time, or use today's time to solution to tackle problems a million times harder.  Sounds pretty impressive.  Is the necessary math on track to make this happen?
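In code, the idealized multiplicative model amounts to a few lines (the 1,000x gains and the one-month baseline below are hypothetical, chosen only to make the arithmetic tangible):

    # Idealized multiplicative model from the paragraph above: hypothetical
    # 1,000x hardware and 1,000x algorithmic gains over a decade.
    hardware_gain = 1_000
    algorithm_gain = 1_000
    combined = hardware_gain * algorithm_gain      # 1,000,000

    month_in_seconds = 30 * 24 * 3600
    print(f"combined speedup: {combined:,}x")
    print(f"today's one-month run would finish in ~{month_in_seconds / combined:.1f} s,")
    print(f"or the same month could tackle a problem {combined:,} times harder")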

Obviously, things aren't as simple as I've made them out to be.  To get the multiplicative effect, algorithmic gains and hardware gains would have to be independent of one another.  But in real HPC life, algorithms and hardware architectures interact.  Fast algorithms are usually "complicated," and complicated algorithms are best implemented on "simple," uncomplicated architectures.  Historically, when new, more complicated hardware architectures are introduced, we revert to simpler and slower solvers.  Consequently, the optimistic estimates of improvement in time to solution may not materialize.  In fact, time to solution could go up.  This effect can go largely unnoticed by the general community, because simple solvers perform lots of mathematical operations and faster architectures spit out more operations per second.  In this situation, application codes can run "fast" but produce solutions slowly.
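A minimal sketch of that last effect, using classic complexity estimates and made-up machine numbers (the 1 PF/s peak and the efficiency fractions are my assumptions, not measurements): the simple solver below sustains fifteen times the FLOP rate of the fast algorithm on the same hypothetical machine, yet delivers the solution nearly ninety times later.

    # "Runs fast but solves slowly": a simple Jacobi-like solver does far more
    # arithmetic but maps well onto the hardware, so it posts the higher FLOP/s
    # number while finishing later.  All constants here are illustrative
    # assumptions, not measurements.
    n = 256
    N = n ** 3                 # unknowns on an n x n x n grid
    peak = 1e15                # hypothetical machine peak: 1 petaflop/s

    # (total work in flops, fraction of peak sustained):
    candidates = {
        "simple Jacobi-like solver": (N ** (5 / 3), 0.30),  # regular sweeps
        "multigrid-like solver":     (50 * N,       0.02),  # little work, but
                                                            # irregular access
    }

    for name, (work, eff) in candidates.items():
        rate = peak * eff
        print(f"{name}: sustains {rate:.1e} flop/s, "
              f"time to solution ~ {work / rate:.1e} s")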

As we move toward extreme scale HPC hardware, the interaction of algorithms and hardware architectures is becoming more important than ever.  Last year, DOE’s Office of Advanced Scientific Computing Research (ASCR) published a Report on the Workshop on Extreme-Scale Solvers: Transition to Future Architectures.  In it, the following observation is made (emphasis mine):

The needs of extreme-scale science are expected to drive a hundredfold increase in computational capabilities by mid-decade and a factor of 1,000 increase within ten years. These 100 PF (and larger) supercomputers will change the way scientific discoveries are made; moreover, the technology developed for those systems will provide desktop performance on par with the fastest systems from just a few years ago. Since numerical solvers are at the heart of the codes that enable these discoveries, the development of efficient, robust, high-performance, portable solvers has a tremendous impact on harnessing these computers to achieve new science. But future architectures present major challenges to the research and development of such solvers. These architectural challenges include extreme parallelism, data placement and movement, resilience, and heterogeneity. 

Solver Dominance

The extreme-scale solver report goes on to address the issue of solver dominance:

Increasing the efficiency of numerical solvers will significantly improve the ability of computational scientists to make scientific discoveries, because such solvers account for so much of the computation underlying scientific applications. 

A figure in the extreme-scale solver report makes the point graphically: for a typical application, as processor count and problem size increase, the time spent in the application's solver grows relative to the time spent in the rest of the application's code and soon dominates the total execution time.
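The shape of that curve is easy to model.  A minimal sketch, with assumed scalings (the N^(4/3) solver exponent and the constant 50 are illustrative choices of mine, not data from the report):

    # Illustrative model of solver dominance: the non-solver part of an
    # application scales linearly with problem size N, while a sub-optimal
    # solver scales like N^(4/3).  The solver's share of the runtime then
    # grows toward 100% as the problem is scaled up.
    for n in (32, 64, 128, 256, 512):        # grid cells per dimension
        N = n ** 3                           # unknowns
        solver_work = N ** (4 / 3)           # e.g., an SOR-like iterative solver
        rest_work = 50 * N                   # physics, I/O, etc.; constant chosen
                                             # so the solver starts as a minority
        share = solver_work / (solver_work + rest_work)
        print(f"n = {n:4d}: solver share of runtime ~ {share:5.1%}")

Under these assumptions the solver's share climbs from roughly 40 percent at n = 32 to over 90 percent at n = 512, which is the qualitative behavior the report's figure depicts.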

What’s to be done about this – especially as we anticipate the move to exascale architectures? 

ExaMath

In an attempt to find some answers, ASCR has formed an Exascale Mathematics Working Group (EMWG) “for the purpose of identifying mathematics and algorithms research opportunities that will enable scientific applications to harness the potential of exascale computing.”

At ASCR's request, the EMWG has organized a DOE Workshop on Applied Mathematics Research for Exascale Computing (ExaMath13).  ExaMath13 is taking place on 21-22 August and encompasses 40 presentations, selected on the basis of two-page position papers submitted to the EMWG a few months ago.  About two-thirds of the presenters are from the DOE labs, with the rest coming from universities.  The seventy-five submitted position papers from which the 40 presentations were selected may be found at the EMWG website.  They make interesting reading and reinforce one's optimism about the applied math community's commitment to meeting the challenges posed by exascale architectures.

As the ExaMath problem is complex, it's not surprising that most of the position papers deal with intricate mathematics.  However, a few also address the bigger picture.  To mention just one of those, Ulrich Ruede's paper, entitled New Mathematics for Exascale Computational Science?, summarizes the challenges faced by the applied math community particularly well:

I believe that the advent of exascale forces mathematics to address the performance abyss that widens increasingly between existing math theory and the practical use of HPC systems. Tweaking codes is not enough – we must turn back and analyze where we have not yet thought deeply enough, developing a new interdisciplinary algorithm and performance engineering methodology. Beyond this, exascale opens fascinating new opportunities in fundamental research that go far beyond just increasing the mesh resolution.

So, it looks like HPC’s math is back in the foreground.  There are lots of bright folks in the applied math community.  Let’s see what they come up with to address the difficulties posed by ExaMath.
