In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Evaluating the water and air cooling systems of the K computer
Eight years ago, the RIKEN Advanced Institute for Computational Science’s K supercomputer debuted on the Top500 list as the world’s most powerful publicly ranked supercomputer. Now it has been decommissioned, with some of its subsystems (power supply and water/air cooling) slated for reuse in RIKEN’s next supercomputer, Fugaku. This paper, written by a team from RIKEN and Kobe University, evaluates the K supercomputer’s cooling systems with a focus on the possible benefits of low water and air temperatures. The authors expect the study to inform Fugaku’s planning process.
Authors: Jorji Nonaka, Keiji Yamamoto, Akiyoshi Kuroda, Toshiyuki Tsukamoto, Kazuki Koiso and Naohisa Sakamoto.
Studying the natural gas market using HPC
With natural gas quickly emerging as a dominant global fuel, understanding the dynamics of its market is more critical than ever. In this paper, a team from the Melentiev Energy Systems Institute and the Matrosov Institute for System Dynamics and Control Theory in Russia proposes a new model for natural gas market simulation. The researchers run the scalable application in a heterogeneous distributed computing framework, increasing the number of runs and their accuracy relative to prior methods.
Authors: V. I. Zorkalzev, A. V. Edelev, S. M. Perzhabinsky, I. A. Sidorov and A. G. Feoktistov.
Facing the challenges of fluid flow simulations at exascale
The author, from the Indian Institute of Technology Kanpur, discusses the challenges of porting hydrodynamic codes to exascale systems, including the “complexities of finite difference method, pseudospectral method, and Fast Fourier Transform (FFT).” The author suggests that finite difference and finite volume codes scale well, making them likely candidates for use in exascale systems.
Author: Mahendra K. Verma
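To illustrate why FFT performance matters here: in a pseudospectral code, spatial derivatives are computed in Fourier space, so every timestep hinges on fast transforms. The sketch below (a generic illustration, not code from the paper) differentiates a periodic signal spectrally with NumPy.

```python
import numpy as np

def spectral_derivative(u, length):
    """First derivative of a periodic signal u sampled uniformly on [0, length)."""
    n = len(u)
    # Wavenumbers corresponding to the FFT ordering
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    # Differentiate in Fourier space: multiply by ik, then transform back
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Example: the spectral derivative of sin(x) recovers cos(x) to machine precision
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
du = spectral_derivative(np.sin(x), 2 * np.pi)
```

At scale, the global, all-to-all communication pattern of distributed FFTs is exactly what makes this method hard to port to exascale machines, in contrast to the mostly nearest-neighbor communication of finite difference stencils.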
Learning to learn for neuroscience on HPC
“The simulation of biological neural networks (BNN) is essential for neuroscience,” write these authors (a team from the Jülich Supercomputing Center). In this paper, they discuss the computational models that allow them to decompose, analyze and understand the brain’s structure and activity – models, they say, that are largely advanced by “learning-to-learn” or meta-learning methods. They describe an implementation of learning-to-learn on HPC, demonstrating performance improvements.
Authors: S. Diaz, W. Klijn, A. Peyser, A. Subramoney, W. Maas, G. Visconti and M. Herty.
Assessing performance portability across diverse computer architectures
In contrast to previous studies of performance portability, which typically assessed one application, these authors — a team from the University of Bristol — “explore the wider landscape of performance portability” by evaluating a number of applications using rigorous performance portability metrics. They present their results, which span 12 computer architectures, including six server CPUs (from five vendors), five GPUs (from two vendors) and one vector architecture.
Authors: Tom Deakin, Simon McIntosh-Smith, James Price, Andrei Poenaru, Patrick Atkinson, Codrin Popa and Justin Salmon.
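One widely used formulation of such a metric (our illustration of the general idea, not necessarily the paper’s exact definition) scores an application as the harmonic mean of its efficiency across a platform set, dropping to zero if any platform is unsupported:

```python
def perf_portability(efficiencies):
    """Harmonic-mean performance portability over a platform set.

    `efficiencies` holds the application's efficiency (0.0-1.0) on each
    platform; an unsupported platform is recorded as 0.0 and forces the
    metric to zero, penalizing any portability gap.
    """
    if not efficiencies or any(e == 0.0 for e in efficiencies):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies)

# An app at 50% efficiency on two platforms scores 0.5;
# failing on even one platform scores 0.0.
balanced = perf_portability([0.5, 0.5])
broken = perf_portability([1.0, 1.0, 0.0])
```

The harmonic mean is a deliberate design choice: it is dominated by the worst-performing platform, so an application cannot score well by excelling on one architecture while limping on the rest.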
Fostering precision agriculture and livestock farming using HPC
A third of global food production is lost or wasted every year – nearly a trillion dollars thrown away. These authors, hailing from Greece, Cyprus, France, Serbia and Belgium, discuss precision agriculture and precision livestock farming, data-driven approaches that aim to reduce those inefficiencies. They introduce CYBELE, a platform for integrating vast amounts of farm sensor data and analyzing that data using large-scale HPC infrastructures.
Authors: Konstantinos Perakis, Fenareti Lampathaki, Konstantinos Nikas, Yiannis Georgiou, Oskar Marko and Jarissa Maselyne.
Harnessing HPC capabilities for large-scale geospatial modeling using R
Large-scale simulations and parallel computing techniques are becoming more commonplace in many Gaussian process applications. This paper, by a team from the King Abdullah University of Science and Technology (KAUST), presents “ExaGeoStatR,” a package for large-scale geostatistics in R. The authors assess ExaGeoStatR’s accuracy and performance using both synthetic datasets and a real-world sea surface temperature dataset.
Authors: Sameh Abdulah, Yuxiao Li, Jian Cao, Hatem Ltaief, David E. Keyes, Marc G. Genton and Ying Sun.
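The computational heart of large-scale geostatistics of this kind is evaluating a Gaussian log-likelihood over a dense spatial covariance matrix, whose Cholesky factorization scales cubically with the number of sites – the bottleneck such HPC packages target. Below is a minimal, hypothetical sketch of that computation in NumPy (an exponential covariance, a special case of the Matérn family); the names and parameters are illustrative and do not reflect ExaGeoStatR’s actual API.

```python
import numpy as np

def exp_cov(coords, variance, length_scale):
    """Exponential covariance matrix for 2-D spatial coordinates."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return variance * np.exp(-dists / length_scale)

def gaussian_loglik(z, cov):
    """Gaussian log-likelihood of observations z under covariance cov."""
    chol = np.linalg.cholesky(cov)          # O(n^3): the step HPC libraries parallelize
    alpha = np.linalg.solve(chol, z)        # whitened residuals
    logdet = 2.0 * np.sum(np.log(np.diag(chol)))
    n = len(z)
    return -0.5 * (alpha @ alpha + logdet + n * np.log(2.0 * np.pi))

# Simulate a small Gaussian random field at 50 random sites and score it
rng = np.random.default_rng(0)
coords = rng.random((50, 2))
cov = exp_cov(coords, variance=1.0, length_scale=0.3)
z = np.linalg.cholesky(cov) @ rng.standard_normal(50)
loglik = gaussian_loglik(z, cov)
```

At 50 sites this is trivial; at the hundreds of thousands of sites typical of sea surface temperature data, the dense factorization is what demands the parallel, distributed backends these packages provide.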
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.