In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
“Parametric computational modeling of galaxies is a process with a high computational cost,” write these researchers from Brazil’s National Institute for Space Research and Universidade Cruzeiro do Sul. The paper outlines their work to optimize the GALPHAT (Galaxy Photometric Attributes) data processing tool so that it runs more efficiently on modern HPC platforms, enabling better processing of galaxy data from the Sloan Digital Sky Survey.
Authors: Igor Kolesnikov, Celso Mendes, Reinaldo de Carvalho and Reinaldo Rosa.
The Energy Exascale Earth System Model (E3SM) is used by the Department of Energy to produce climate predictions for the energy sector. In this paper, a team of researchers from Sandia National Laboratories works to port the nonhydrostatic atmospheric dynamical core so that it runs efficiently on a variety of architectures. “When using the GPUs, our implementation is able to achieve 0.97 Simulated Years Per Day, running on the full Summit supercomputer,” they write. “To the best of our knowledge, this is the most achieved to date by any global atmosphere dynamical core running at such resolutions.”
Authors: Luca Bertagna, Oksana Guba, Mark A. Taylor, James G. Foucar, Jeff Larkin, Andrew M. Bradley, Sivasankaran Rajamanickam and Andrew G. Salinger.
In this paper, a trio from Riken and the University of Tokyo challenge what they say is a “widely accepted” conclusion that hot water cooling is a good technique for energy-efficient HPC datacenters. Rather than looking solely at the power consumption of the cooling system, they argue, attention should also be paid to the power consumption and performance impacts on the HPC system itself. Testing on the Oakforest-PACS supercomputer, they find that higher cooling water temperatures result in an increased number of nodes suffering from performance degradation.
Authors: Jorji Nonaka, Toshihiro Hanawa and Fumiyoshi Shoji.
These researchers, hailing from Argonne National Laboratory, the University of Notre Dame and the University of Chicago, outline the need for a streamlined pipeline to handle the massive amounts of electron microscopy data produced by many laboratories. To that end, they introduce their modular pipeline, HAPPYNeurons, which they say “paves the way” for both “the deluge of data anticipated from faster next-generation microscopes” and exascale supercomputers.
Authors: Rafael Vescovi, Hanyu Li, Jeffery Kinnison, Murat Keçeli, Misha Salim, Narayanan Kasthuri, Thomas D. Uram and Nicola Ferrier.
Last year, Riken’s Fugaku supercomputer took the Top500 by storm as the first Arm-based supercomputer to top the list. At the heart of Fugaku is the A64FX, a 64-bit Arm chip designed by Fujitsu. In this paper, researchers from Riken present a preliminary performance evaluation of the A64FX across seven HPC applications and benchmarks, comparing it to the Marvell ThunderX2 and Intel Xeon Skylake processors. They find that “the A64FX achieved higher performance in a memory bandwidth-intensive application[.]”
Authors: Tetsuya Odajima, Yuetsu Kodama, Miwako Tsuji, Motohiko Matsuda, Yutaka Maruyama and Mitsuhisa Sato.
“MPI has been ubiquitously deployed in flagship HPC systems,” these authors from the University of California, Merced, and Lawrence Livermore National Laboratory write, “aiming to accelerate distributed scientific applications running on tens of hundreds of processes and compute nodes.” In this paper, they introduce MATCH, a suite for characterizing and comparing different MPI fault tolerance designs, aimed at helping MPI applications resume efficiently after system failures.
Authors: Luanzheng Guo, Giorgis Georgakoudis, Konstantinos Parasyris, Ignacio Laguna and Dong Li.
“Understanding the status of high-performance computing platforms and correlating applications to resource usage provide insight into the interactions among platform components,” write these researchers from Texas Tech University and Dell EMC. To that end, they introduce MonSTer, an out-of-the-box HPC system monitoring tool that “correlates applications to resource usage and reveals insightful knowledge without having additional overhead on the application and computing nodes.” The researchers discuss deploying MonSTer on Texas Tech’s 467-node Quanah cluster over the past year.
Authors: Jie Li, Ghazanfar Ali, Ngan Nguyen, Jon Hass, Alan Sill, Tommy Dang and Yong Chen.
Do you know about research that should be included in next month’s list? If so, send us an email at firstname.lastname@example.org. We look forward to hearing from you.