In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Running skin modeling simulations on large distributed-memory clusters
Realistic simulation of human skin is crucial for a number of medical and scientific applications (such as drug development). These authors, a team from Ruhr-Universität Bochum and Universität Hamburg, outline how skin modeling tasks – which benefit, they say, from modern parallel algorithms – scale when tested on a large distributed-memory cluster. The researchers, who ran the simulations on the Hazel Hen cluster at the High-Performance Computing Center Stuttgart (HLRS), present their results along with a study on scaling the simulations up to 12,288 cores.
Authors: Jose Pinzon, Martin Siebenborn and Andreas Vogel.
Scaling computational fluid dynamics for supersonic jet flow analysis
New regulations have placed limits on the noise created by airplanes, spurring research into aeroacoustics. These researchers (a team from France and Brazil) used an in-house computational fluid dynamics tool to simulate supersonic jet flows for aeroacoustic analysis. They ran the simulations on the Santos Dumont system (1.85 Linpack petaflops) at the Laboratório Nacional de Computação Científica. The researchers describe how they scaled the code to run on Santos Dumont and present their results.
Authors: Carlos Junqueira-Junior, João Luiz F. Azevedo, Jairo Panetta, William R. Wolf and Sami Yamouni.
Optimizing cosmological simulations on supercomputers
Astrophysicists use “N-body” simulations to examine clusters, galaxies and other cosmological phenomena – but the simulations can be very memory-intensive to run. This paper, written by a team from Shanghai Jiao Tong University, Xiamen University and New York University, outlines a new algorithm the authors built to optimize memory performance for N-body simulations. They scaled the algorithm to 512 nodes on a supercomputer, achieving what they claim is the “largest completed cosmological N-body simulation.”
Authors: Shenggan Cheng, Hao-Ran Yu, Derek Inman, Qiucheng Liao, Qiaoya Wu and James Lin.
Working with exascale-oriented code for hypersonic aerothermodynamics
This paper, written by a trio from the Center for Turbulence Research at Stanford University, discusses the open-source Hypersonics Task-based Research (HTR) solver for tackling hypersonic aerothermodynamics problems. The researchers outline how the solver scales well on GPU-based supercomputers, testing it on a series of use cases such as supersonic turbulent channel flows and hypersonic transitional boundary layers.
Authors: Mario Di Renzo, Lin Fu and Javier Urzay.
Evaluating the energy efficiency of the Marvell ThunderX2 Arm processor for HPC workloads
As the exascale era quickly approaches, energy efficiency is becoming an even more serious concern for increasingly massive computing systems. These authors – a team from Italy – discuss how Arm CPU architectures could help to ameliorate energy efficiency concerns. They specifically evaluate the Marvell ThunderX2 Arm processor, comparing its performance and energy use with those of other processors commonly used in large HPC installations. Their results show energy-efficiency gains for the ThunderX2 across a number of the comparisons.
Authors: Enrico Calore, Alessandro Gabbana, Sebastiano Fabio Schifano and Raffaele Tripiccione.
Developing enterprise resource planning for an HPC environment
Enterprise resource planning (ERP) systems connect logistics, production operations, IoT and other enterprise data sources. This paper, written by a trio from ITMO University in Russia, outlines a method for processing ERP data in a GPU-enabled HPC environment. The authors suggest that handling ERP data in this way could help data scientists build AI models faster and interact with ERP datasets more easily.
Authors: Artem N. Sisyukov, Vlad K. Bondarev and Olga S. Yulmetova.
Footprinting FPGAs and GPUs on an astrophysics application
As the computational needs of the astrophysics community reach astronomical proportions, researchers are turning their eyes to the exascale era. This paper, written by a team from Italy and Greece, examines the performance and energy footprints of a state-of-the-art astrophysics application optimized for heterogeneous architectures. The authors evaluate energy consumption on four different platforms, including an exascale-analogous platform equipped with Arm processors and FPGAs. The authors find that the application suits this architecture well.
Authors: David Goz, Georgios Ieronymakis, Vassilis Papaefstathiou, Nikolaos Dimou, Sara Bertocco, Giuliano Taffoni, Francesco Simula, Antonio Ragagnin, Luca Tornatore and Igor Coretti.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.