Researchers Measure Impact of ‘Meltdown’ and ‘Spectre’ Patches on HPC Workloads

By Tiffany Trader

January 17, 2018

Computer scientists from the Center for Computational Research at the University at Buffalo, State University of New York (SUNY), have examined the effect of the Meltdown and Spectre security updates on the performance of popular HPC applications and benchmarks, and are sharing their results in a paper available on arXiv.org.

Their method was to use the application kernel module of the XD Metrics on Demand (XDMoD) tool to run tests before and after installation of the vulnerability patches. They recorded the performance difference for the following applications and benchmarks: NWChem; NAMD; the HPC Challenge benchmark suite (HPCC), which includes the memory bandwidth micro-benchmark STREAM; the NAS Parallel Benchmarks (NPB); IOR; MDTest; and the Intel MPI Benchmarks (IMB).
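The comparison itself comes down to simple before/after arithmetic on recorded run times. As a rough illustration only, and not XDMoD code, the sketch below computes the percent slowdown from two hypothetical files of wall-clock times; the file names, format and benchmark choice are assumptions made for the example.

```python
# Minimal sketch of a before/after slowdown calculation (illustrative, not XDMoD).
# Assumes two hypothetical text files, one wall-clock time in seconds per line,
# recorded before and after the Meltdown/Spectre patches were applied.

def mean(values):
    return sum(values) / len(values)

def load_times(path):
    """Read one wall-clock time (seconds) per line from a plain-text file."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

before = load_times("namd_times_prepatch.txt")    # hypothetical file name
after = load_times("namd_times_postpatch.txt")    # hypothetical file name

# Percent change in mean runtime relative to the unpatched baseline;
# a positive value means the patched system is slower.
slowdown = (mean(after) - mean(before)) / mean(before) * 100.0
print(f"Mean runtime change after patching: {slowdown:+.1f}%")
```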

Most of the application kernels were executed on one or two nodes (8 and 16 cores, respectively) of a development cluster at the Center for Computational Research. Each node has two Intel Xeon L5520 (Nehalem EP) CPUs; the nodes are connected by Mellanox QDR InfiniBand and have access to a 3 PB IBM GPFS shared storage system. The operating system is CentOS Linux release 7.4.1708.

The worst-case performance hit was as high as 54 percent for select functions (e.g., MPI random access, memory copying and file metadata operations), while real-world applications showed a 2-3 percent decrease in performance for single-node jobs and a 5-11 percent decrease for parallel two-node jobs. The authors indicate that some of this loss may be recouped via the compiler and MPI libraries.

Also notable: Fourier transformation (FFT), matrix multiplication and matrix transposition slowed by 6.4 percent, 2 percent and 10 percent, respectively (on two nodes).

The findings of the SUNY team align with those of Red Hat, which earlier this month released the results from benchmark tests it conducted specifically to measure the impact of the kernel patches. Red Hat found that CPU-intensive HPC workloads suffered only a 2-5 percent hit “because jobs run mostly in user space and are scheduled using CPU-pinning or NUMA control.” In comparison, database analytics were found to take a modest 3-7 percent hit and OLTP database workloads suffered the most (8-19 percent degradation).

The SUNY researchers have plans to conduct additional testing “with a larger number of nodes and for more application kernels” once the updates are applied to their production system.

The XD Metrics on Demand (XDMoD) tool employed for the testing was originally developed to provide independent audit capability for the XSEDE program. It was later open-sourced and is now used widely across research and commercial HPC sites. The tool includes an application kernel performance monitoring module that “allows automatic performance monitoring of HPC resources through the periodic execution of application kernels, which are based on benchmarks or real-world applications implemented with sensible input parameters.”
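For readers unfamiliar with that approach, the sketch below illustrates the general idea of periodically executing an application kernel and logging its runtime. It is not XDMoD's implementation; the benchmark command, schedule and log format are assumptions made purely for illustration.

```python
# Illustrative sketch of periodic application-kernel monitoring (not XDMoD code).
import subprocess, time, datetime, csv

KERNEL_CMD = ["./stream_benchmark"]   # hypothetical benchmark executable
INTERVAL_S = 24 * 3600                # run once a day (illustrative schedule)

def run_kernel():
    """Run the kernel once and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(KERNEL_CMD, check=True)
    return time.perf_counter() - start

# Append each run's timestamp and runtime so regressions (e.g., after a
# security patch) show up as a shift in the recorded history.
with open("kernel_history.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        elapsed = run_kernel()
        writer.writerow([datetime.datetime.now().isoformat(), elapsed])
        log.flush()
        time.sleep(INTERVAL_S)
```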

The paper was authored by Nikolay A. Simakov, Martins D. Innus, Matthew D. Jones, Joseph P. White, Steven M. Gallo, Robert L. DeLeon and Thomas R. Furlani. It is available on arXiv.org.
