What’s New in HPC Research: Air Pollution Prediction, nOS-V, cuHARM, Quantum Ray Tracing & More

By Mariana Iriarte

May 19, 2022

In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.


Parallel space-time likelihood optimization for air pollution prediction on large-scale systems

A team of researchers from the Extreme Computing Research Center at the King Abdullah University of Science and Technology in Saudi Arabia presents “a parallel implementation of geostatistical space-time modeling that can predict air pollution using observations in a specific space-time domain, illustrating the importance of relaxing the assumption of independence of space and time.” In this conference paper for the Platform for Advanced Scientific Computing Conference, the researchers “use the proposed implementation to model two air pollution datasets from the Middle East and US regions with 550 spatial locations × 730 time slots and 945 spatial locations × 500 time slots, respectively.” They demonstrated that the “approach satisfies high prediction accuracy on both synthetic datasets and real particulate matter (PM) datasets in the context of the air pollution problem.” In addition, they achieved “up to 757.16 TFLOPS using 1024 nodes (75% of the peak performance) using 490 geospatial locations on a Cray XC40 system.”
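The computational heart of this kind of geostatistical modeling is evaluating a Gaussian log-likelihood under a space-time covariance model. The toy sketch below (plain NumPy, with a deliberately simple combined-distance exponential kernel and made-up problem sizes, not the authors’ large-scale implementation) shows that relaxing space-time separability amounts to choosing a joint covariance function, while the cost is dominated by the Cholesky factorization that the paper parallelizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grid: 5 spatial locations x 4 time slots (illustrative sizes only).
n_s, n_t = 5, 4
locs = rng.random((n_s, 2))
S = np.repeat(locs, n_t, axis=0)               # space coordinate per point
T = np.tile(np.arange(n_t, dtype=float), n_s)  # time slot per point

def cov_matrix(S, T, sigma2=1.0, ls=0.7, lt=2.0):
    """Exponential covariance on a combined space-time distance
    (a simple non-separable kernel, not the paper's model)."""
    ds = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    dt = np.abs(T[:, None] - T[None, :])
    return sigma2 * np.exp(-np.sqrt((ds / ls) ** 2 + (dt / lt) ** 2))

def neg_log_likelihood(z, C, nugget=1e-6):
    """Gaussian NLL; the O(n^3) Cholesky here is what dominates at scale."""
    C = C + nugget * np.eye(len(z))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    return 0.5 * z @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(z) * np.log(2 * np.pi)

# Simulate one observation vector and score two candidate parameter sets.
n = n_s * n_t
C_true = cov_matrix(S, T)
z = np.linalg.cholesky(C_true + 1e-6 * np.eye(n)) @ rng.standard_normal(n)
nll_a = neg_log_likelihood(z, cov_matrix(S, T, ls=0.7, lt=2.0))
nll_b = neg_log_likelihood(z, cov_matrix(S, T, ls=0.05, lt=0.1))
```

Maximum-likelihood fitting searches the parameter space for the lowest NLL; at 550 spatial locations × 730 time slots the covariance matrix has roughly 4 × 10⁵ rows, which is why the paper distributes the factorization across nodes.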

Authors: Mary Lai O. Salvaña, Sameh Abdulah, Hatem Ltaief, Ying Sun, Marc G. Genton, and David E. Keyes

nOS-V: co-executing HPC applications using system-wide task scheduling 

Spanish researchers from the Barcelona Supercomputing Center believe the future of exascale supercomputers is one of massive parallelism, manycore processors and heterogeneous architectures. In cases like these, “it is increasingly difficult for HPC applications to fully and efficiently utilize the resources in system nodes. Moreover, the increased parallelism exacerbates the effects of existing inefficiencies in current applications,” they write. To address the problem, the researchers introduce “nOS-V, a lightweight tasking library that supports application co-execution using node-wide scheduling.” Co-execution is “a novel fine-grained technique to execute multiple HPC applications simultaneously on the same node, outperforming current state-of-the-art approaches.” The authors demonstrated “how co-execution with nOS-V significantly reduces schedule makespan for several applications on single node and distributed environments, outperforming prior node-sharing techniques.”
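The intuition behind co-execution can be captured with a toy makespan calculation: if each application alone leaves cores idle, a node-wide scheduler that interleaves both task streams finishes sooner. The sketch below is an illustrative list-scheduling simulation, not nOS-V’s actual runtime; the task counts and durations are made up.

```python
import heapq

def makespan(task_durs, cores):
    """Greedy list scheduling of independent tasks on identical cores."""
    workers = [0.0] * cores
    heapq.heapify(workers)
    for d in sorted(task_durs, reverse=True):
        t = heapq.heappop(workers)      # earliest-free core
        heapq.heappush(workers, t + d)
    return max(workers)

CORES = 4
app_a = [1.0] * 6   # 6 unit-time tasks: alone, the last wave leaves 2 cores idle
app_b = [1.0] * 6

exclusive = makespan(app_a, CORES) + makespan(app_b, CORES)  # one app at a time
co_exec = makespan(app_a + app_b, CORES)                     # node-wide scheduling
print(exclusive, co_exec)  # 4.0 3.0
```

The co-scheduled makespan is shorter because the second application’s tasks fill the cores the first one would otherwise leave idle, which is the effect nOS-V exploits at much finer granularity.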

Authors: David Álvarez, Kevin Sala, and Vicenç Beltran

Figure 2: The relationship between the general structure of this program and the heterogeneous platform.

IMEXLBM 1.0: a proxy application based on the Lattice Boltzmann Method for solving computational fluid dynamic problems on GPUs

A multi-institutional team of researchers from the City College of the City University of New York and Argonne National Laboratory describes a proxy application, IMEXLBM, developed for the Exascale Proxy Applications Project. The Project was created within the Exascale Computing Project (ECP) to “improve the quality of proxies created by the ECP, provide small, simplified codes which share important features of large applications, and capture programming methods and styles that drive requirements for compilers and other elements of the toolchain.” IMEXLBM is “an open-source, self-contained code unit, with minimal dependencies, that is capable of running on heterogeneous platforms like those with graphic processing units for accelerating the calculation.” Using the ThetaGPU machine at the Argonne Leadership Computing Facility, the researchers demonstrated the code’s “functionality by solving a benchmark problem in computational fluid dynamics.” In addition, the authors point out that “the code-unit is designed to be versatile and enable new physical models that can capture complex phenomena such as two-phase flow with interface capture.”
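For readers unfamiliar with the method, the Lattice Boltzmann core is a simple collide-and-stream update on a grid of particle distribution functions, and that regularity is what makes it a natural GPU proxy. Below is a minimal D2Q9 BGK step in NumPy, a generic textbook kernel rather than IMEXLBM’s implicit-explicit scheme:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their standard weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

def step(f, tau=0.6):
    """One collide-and-stream BGK update on a periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau        # collision
    for i in range(9):                                  # streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

nx = ny = 16
rho0 = 1.0 + 0.01 * np.sin(2 * np.pi * np.arange(nx) / nx)[:, None] * np.ones((nx, ny))
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()
for _ in range(10):
    f = step(f)
```

Every grid point performs the same local arithmetic and only nearest-neighbor data moves during streaming, which is why the kernel maps well onto GPU thread blocks.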

Authors: Geng Liu, Saumil Patel, Ramesh Balakrishnan, and Taehun Lee

Simulation-based optimization and sensibility analysis of MPI applications: variability matters

In this paper, Tom Cornebize and Arnaud Legrand from the Université Grenoble Alpes in France argue that “finely tuning MPI applications and understanding the influence of key parameters (number of processes, granularity, collective operation algorithms, virtual topology, and process placement) is critical to obtain good performance on supercomputers.” The researchers present “an extensive validation study which covers the whole parameter space of High-Performance Linpack.” Performing all experiments using the Dahu Cluster from the Grid’5000 testbed, the researchers demonstrate “how the open-source version of HPL can be slightly modified to allow a fast emulation on a single commodity server at the scale of a supercomputer.” In addition, they show “an extensive (in)validation study that compares simulation with real experiments and demonstrates our ability to predict the performance of HPL within a few percent consistently.” Lastly, they demonstrate that their ‘surrogate’ “allows studying several subtle HPL parameter optimization problems while accounting for uncertainty on the platform.”
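To give a feel for the scale being modeled: HPL’s runtime is dominated by the roughly (2/3)N³ floating-point operations of LU factorization, so even a crude analytic model predicts how run time grows with problem size. The sketch below is a back-of-the-envelope estimate with hypothetical machine numbers; the paper’s simulation-based approach captures far more, including granularity, collective algorithms, topology, and platform variability.

```python
def hpl_time_seconds(n, peak_flops, efficiency=0.75):
    """HPL performs ~(2/3)N^3 + 2N^2 floating-point operations."""
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return flops / (peak_flops * efficiency)

# Hypothetical machine: 32 nodes at 1 TFLOPS peak each.
peak = 32 * 1e12
t_small = hpl_time_seconds(100_000, peak)
t_large = hpl_time_seconds(200_000, peak)
print(f"N=100k: {t_small:.0f} s, N=200k: {t_large:.0f} s")  # doubling N costs ~8x
```

The cubic growth is exactly why emulating a full-scale run faithfully on a single commodity server, as the authors do, is valuable for exploring the parameter space cheaply.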

Authors: Tom Cornebize and Arnaud Legrand

Lifetime-based method for quantum simulation on a new Sunway supercomputer 

A multi-institutional team of researchers from the National Supercomputing Center Wuxi, Tsinghua University, Zhejiang Lab, Shanghai Research Center for Quantum Sciences, and the Information Engineering University in Zhengzhou, China, introduced “lifetime-based methods to reduce the slicing overhead and improve the computing efficiency.” The researchers demonstrated that their “in-place slicing strategy reduces the slicing overhead to less than 1.2 and obtains 100-200 times speedups over related efforts. The resulting simulation time is reduced from 304s (2021 Gordon Bell Prize) to 149.2s on Sycamore RQC, with a sustained mixed-precision performance of 416.5 Pflops using over 41M cores to simulate 1M correlated samples.”
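Slicing is the standard trick that makes such tensor-network contractions fit in memory and parallelize: fix one shared index at a time, contract the smaller sub-networks independently, and sum the results. The NumPy toy below (tiny made-up tensors, nothing like the Sycamore-scale networks in the paper) shows that the sliced sum reproduces the direct contraction exactly; the paper’s contribution is keeping the extra work this introduces, the slicing overhead, small.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8  # bond dimension of the index being sliced (toy scale)
A = rng.standard_normal((D, 6, 5))
B = rng.standard_normal((D, 5, 4))

# Direct contraction over both shared indices.
direct = np.einsum('kij,kjl->il', A, B)

# Sliced contraction: fix index k one value at a time and accumulate.
# Each slice is an independent, smaller contraction -- on a real machine
# the slices can run on different nodes.
sliced = np.zeros_like(direct)
for k in range(D):
    sliced += np.einsum('ij,jl->il', A[k], B[k])
```

Slicing trades peak memory (each slice is D times smaller here) for repeated work across slices, and a poor choice of which index to slice can multiply the total flop count, which is what the lifetime-based strategy is designed to avoid.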

Authors: Yaojian Chen, Yong Liu, Xinmin Shi, Jiawei Song, Xin Liu, Lin Gan, Chu Guo, Haohuan Fu, Dexun Chen, and Guangwen Yang

cuHARM: a new GPU accelerated GR-MHD code and its application to ADAF disks 

“A new GPU-accelerated general-relativistic magneto-hydrodynamic (GR-MHD) code based on HARM” is introduced by a multi-institutional team of researchers from Bar-Ilan University in Israel and from the School of Astronomy and Space Science and the Key Laboratory of Modern Astronomy and Astrophysics at Nanjing University in China. cuHARM is “written in CUDA-C and uses OpenMP to parallelize multi-GPU setups.” The researchers note that “a 256³ simulation is well within the reach of an Nvidia DGX-V100 server,” with the computation about a factor of 10 faster than if only the CPU were used. Using this code, the researchers “examine several disk structures all in the ‘Standard And Normal Evolution’ (SANE) state.” Their experiments found that “(i) increasing the magnetic field, while in the SANE state does not affect the mass accretion rate; (ii) simultaneous increase of the disk size and the magnetic field, while keeping the ratio of energies fixed, lead[s] to the destruction of the jet once the magnetic flux through the horizon decrease[s] below a certain limit… [and] (iii) the structure of the jet is a weak function of the adiabatic index of the gas, with relativistic gas tend[ing] to have a wider jet.”

Authors: Damien Bégué, Asaf Pe’er, Guoqiang Zhang, BinBin Zhang, Benjamin Pevzner

Towards quantum ray tracing

A multi-institutional team of researchers from the University of Minho and the Institute for Systems and Computer Engineering, Technology and Science in Portugal, the University of the West of England in the UK, and the Texas Advanced Computing Center at the University of Texas at Austin is working toward the goal of developing “a fully quantum rendering system.” In this preprint, the researchers investigate “hybrid quantum-classical algorithms for ray tracing, a core component of most rendering techniques.” The authors “propose algorithms to significantly reduce the computation required for quantum ray tracing through exploiting image space coherence and a principled termination criteria for quantum searching.” They demonstrate “results for both Whitted style ray tracing, and for accelerating ray tracing operations when performing classical Monte Carlo integration for area lights and indirect illumination.”
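The quantum advantage being chased here comes from Grover search: finding one of M intersected primitives among N candidates takes on the order of √(N/M) oracle calls rather than ~N/M classical intersection tests. The numbers below use the textbook iteration-count formula with hypothetical scene sizes; the termination question the authors tackle arises because, in practice, M (how many primitives a ray actually hits) is not known in advance.

```python
import math

def grover_iterations(n_items, n_marked):
    """Near-optimal Grover iteration count, ~ (pi/4) * sqrt(N/M)."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return max(1, round(math.pi / (4 * theta) - 0.5))

N = 4096  # hypothetical number of primitives tested per ray
for M in (1, 16, 256):
    q = grover_iterations(N, M)
    print(f"M={M:>3}: ~{q:>2} oracle calls vs ~{N // M} expected classical tests")
```

With M unknown, running too few iterations misses the hit and too many “overshoots” the amplified state, which is why a principled stopping rule matters for making quantum ray tracing practical.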

Authors: Luís Paulo Santos, Thomas Bashford-Rogers, João Barbosa, Paul Navrátil


Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.
