Sandia-Developed Benchmark Re-ranks Top Computers

February 27, 2018

ALBUQUERQUE, N.M., Feb. 27 — A Sandia National Laboratories software program now installed as an additional test for the TOP500 supercomputer rankings has become increasingly prominent. The program’s full name — High Performance Conjugate Gradients, or HPCG — doesn’t come trippingly to the tongue, but word is seeping out that this relatively new benchmarking program is becoming as valuable as its venerable partner — the High Performance LINPACK program — which some say has become less than satisfactory in measuring many of today’s computational challenges.

TOP500 LINPACK and HPCG charts of the fastest supercomputers of 2017. The rearranged order and drastic reduction in estimated speed for the HPCG benchmarks are the result of a different method of testing modern supercomputer programs. (Image courtesy of Sandia National Laboratories)

“The LINPACK program used to represent a broad spectrum of the core computations that needed to be performed, but things have changed,” said Sandia researcher Mike Heroux, who created and developed the HPCG program. “The LINPACK program performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today’s applications often use sparse data structures, and computations are leaner.”

The term “sparse” means that a matrix under consideration has mostly zero values. “The world is really sparse at large sizes,” said Heroux. “Think about your social media connections: there may be millions of people represented in a matrix, but your row — the people who influence you — are few. So, the effective matrix is sparse. Do other people on the planet still influence you? Yes, but through people close to you.”
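
In code, that sparsity is exploited by storing only the nonzero entries of a matrix rather than every coefficient. The toy example below is a minimal sketch of the idea, not part of HPCG; it uses SciPy's compressed sparse row (CSR) format and an invented 6-by-6 "influence" matrix to make the point.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy "influence" matrix: 6 people, each directly influenced by only a couple of others.
dense = np.array([
    [0, 1, 0, 0, 0, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],
], dtype=float)

sparse = csr_matrix(dense)   # compressed sparse row: keep only the nonzero entries
print(sparse.nnz, "nonzeros stored instead of", dense.size, "entries")
# At 6x6 the savings are trivial; at millions of rows they are decisive.
```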

Similarly, for a scientific problem whose solution requires billions of equations, most of the matrix coefficients are zero. For example, when measuring pressure differentials in a 3-D mesh, the pressure on each node is directly dependent on its neighbors’ pressures. The pressure in faraway places is represented through the node’s near neighbors. “The cost of storing all matrix terms, as the LINPACK program does, becomes prohibitive, and the computational cost even more so,” said Heroux. A computer may be very fast in computing with dense matrices, and thus score highly on the LINPACK test, but in practical terms the HPCG test is more realistic.
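
Some rough arithmetic shows why. Assuming double-precision values and roughly 27 nonzeros per row (a 27-point stencil on a 3-D mesh), storing every term of a billion-by-billion matrix is out of the question, while the sparse version fits on a large machine:

```python
n = 10**9                    # one billion equations and unknowns
bytes_per_value = 8          # double-precision floating point

dense_bytes = n * n * bytes_per_value     # store every coefficient, zero or not
sparse_bytes = n * 27 * bytes_per_value   # ~27 nonzeros per row (assumed 27-point stencil)

print(dense_bytes / 1e18, "exabytes needed for dense storage")   # 8.0 exabytes
print(sparse_bytes / 1e9, "gigabytes for the sparse values")     # 216.0 GB (index arrays extra)
```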

To better reflect the practical elements of current supercomputing application programs, Heroux developed HPCG’s preconditioned iterative method for solving systems containing billions of linear equations and billions of unknowns. “Iterative” means the program starts with an initial guess to the solution, and then computes a sequence of improved answers. Preconditioning uses other properties of the problem to quickly converge to an acceptably close answer.
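
A minimal sketch of that guess-and-improve loop, written as a plain conjugate gradient iteration in Python, is shown below. It is illustrative only; the HPCG reference implementation is a structured C++ code.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b iteratively, assuming A is symmetric positive definite."""
    x = np.zeros_like(b)            # initial guess: all zeros
    r = b - A @ x                   # residual: how far the guess is from satisfying the equations
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p           # the improved answer
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # acceptably close: stop iterating
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```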

“To solve the problems we need to for our mission, which might range from a full weapons simulation to a wind farm, we need to describe physical phenomena to high fidelity, such as the pressure differential of a fluid flow simulation,” said Heroux. “For a mesh in a 3-D domain, you need to know at each node on the grid the relations to values at all the other nodes. A preconditioner makes the iterative method converge more quickly, so a multigrid preconditioner is applied to the method at each iteration.”
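
In code, the preconditioner is an extra operation applied to the residual inside every iteration. The sketch below uses a simple diagonal (Jacobi) preconditioner as a stand-in for illustration; HPCG itself applies a multigrid cycle at that step, which is what makes the method converge quickly on mesh problems.

```python
import numpy as np

def preconditioned_cg(A, b, apply_M_inv, tol=1e-8, max_iter=1000):
    """CG where apply_M_inv(r) cheaply approximates solving A z = r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M_inv(r)              # preconditioner applied to the residual
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M_inv(r)          # HPCG applies a multigrid cycle at this step
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x

# Stand-in preconditioner for illustration: divide by A's diagonal (Jacobi).
# apply_jacobi = lambda r: r / np.diag(A)
```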

Supercomputer vendors such as NVIDIA Corp., Fujitsu Ltd., IBM, Intel Corp. and Chinese companies write versions of the HPCG program that are optimal for their platforms. That might seem as odd as students modifying a test to suit themselves, but it is clearly desirable for supercomputers of various designs to personalize the test, as long as each competitor touches all the agreed-upon calculation bases.

“We have checks in the code to detect optimizations that are not permitted under published benchmark policy,” said Heroux.
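
One way such a check could work is sketched hypothetically below: compare the residual produced by a vendor-optimized run against a reference run, rejecting results that quietly change the answer. The checks actually built into HPCG are its own and more extensive than this toy version.

```python
import numpy as np

def passes_policy_check(A, b, x_optimized, x_reference, rel_tol=1e-6):
    """Hypothetical check: an optimized run must not quietly change the answer.

    Compares the residual of the optimized solution against a reference solution;
    HPCG's real policy checks are more extensive than this illustration.
    """
    res_opt = np.linalg.norm(b - A @ x_optimized)
    res_ref = np.linalg.norm(b - A @ x_reference)
    return res_opt <= res_ref * (1.0 + rel_tol)
```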

On the HPCG TOP500 list, the Sandia and Los Alamos National Laboratory supercomputer Trinity has risen to No. 3, and is the top Department of Energy system. Trinity is No. 7 overall in the LINPACK ranking. HPCG better reflects the Trinity design choices.

Sandia National Laboratories computational researcher Mike Heroux created the HPCG program that re-arranges supercomputer rankings. (Photo courtesy of Sandia National Laboratories)

Heroux says he wrote the base HPCG code 15 years ago, originally as a teaching code for students and colleagues who wanted to learn the anatomy of an application that uses scalable sparse solvers. Jack Dongarra and Piotr Luszczek of the University of Tennessee have been essential collaborators on the HPCG project. In particular, Dongarra, whose visibility in the high-performance computing community is unrivaled, has been a strong promoter of HPCG.

“His promotional contributions are essential,” said Heroux. “People respect Jack’s knowledge and it helped immensely in spreading the word. But if the program wasn’t solid, promotion alone wouldn’t be enough.”

Heroux invested his time in developing HPCG because he had a strong desire to better assure the U.S. stockpile’s safety and effectiveness. The supercomputing community needed a new benchmark that better reflected the needs of the national security scientific computing community.

“I had worked at Cray Inc. for 10 years before joining Sandia in ’98,” he said, “when I saw the algorithmic work I cared about moving to the labs for the Accelerated Strategic Computing Initiative (ASCI). When the U.S. decided to observe the Comprehensive Nuclear Test Ban Treaty, we needed high-end computing to better ensure the nuclear stockpile’s safety and effectiveness. I thought it was a noble thing, that I would be happy to be part of it, and that my expertise could be applied to develop next-generation simulation capabilities. ASCI was the big new project in the late 1990s if I wanted to do something meaningful in my area of research and development.”

Heroux is now director of software technology for the Department of Energy’s Exascale Computing Project. There, he works to harmonize the computing work of the DOE national labs — Oak Ridge, Argonne, Lawrence Berkeley, Pacific Northwest, Brookhaven and Fermilab, along with the three National Nuclear Security Administration labs.

“Today, we have an opportunity to create an integrated effort among the national labs,” said Heroux. “We now have daily forums at the project level, and the people I work with most closely are people from the other labs. Because the Exascale Computing Project is integrated, we have to deliver software to the applications and the hardware at all labs. The Department of Energy’s attempt at a multi-lab, multi-university project gives an organizational structure for us to work together as a cohesive unit so that software is delivered to fit the key applications.”

Among Heroux’s achievements is six years of service as editor-in-chief of ACM’s Transactions on Mathematical Software. He is a senior scientist at Sandia.

About Sandia National Laboratories

Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.


Source: Sandia National Laboratories
