In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Using HPC to simulate cardiac arrhythmia
Studies show that virtual arrhythmia risk prediction is safer and more accurate than clinical procedures. However, the simulations required for virtual risk prediction are computationally demanding, often taking hours or days, which makes them unsuitable for time-sensitive use cases. This paper, written by a team from Simula Research Laboratory and the University of Oslo, explores several numerical schemes and hardware configurations (particularly GPUs) for accelerating these simulations.
Authors: Johannes Langguth, Hermenegild Arevalo, Kristian Gregorius Hustad and Xing Cai.
Enabling cloud HPC for Earth science modeling on over a thousand cores
Earth science is a popular application for high-performance computing, and many HPC users have turned to the cloud to reduce costs. However, previous research has cast doubt on the suitability of cloud HPC for large-scale simulations. This paper, written by researchers from Harvard and MIT, argues that recent advances in cloud network performance have reduced this barrier, allowing cloud platforms to serve as a “viable alternative to local clusters for simulations at large scale.”
Authors: Jiawei Zhuang, Daniel J. Jacob, Haipeng Lin, Elizabeth W. Lundgren, Robert M. Yantosca, Judit Flo Gaya, Melissa P. Sulprizio and Sebastian D. Eastham.
Conducting million- and billion-atom simulations
As computing power increases, larger and larger atomistic simulations become possible. This paper, by Karissa Y. Sanbonmatsu, discusses explicit solvent molecular dynamics simulations of large macromolecular complexes, which help to integrate disparate experimental data. Sanbonmatsu runs these simulations on the Trinity supercomputer to explore ribosomes and chromatin, including the first explicit solvent simulation of an entire gene locus.
Author: Karissa Y. Sanbonmatsu.
Teaching HPC systems administrators
HPC education and training is often difficult because it requires learners to have extensive access to expensive, highly secure systems. These researchers – a duo from Purdue University – describe the teaching methods and hardware platforms that Purdue Research Computing used to train undergraduates for HPC careers. They discuss a virtual machine-based approach, best practices and the barriers students face.
Authors: Alex Younts and Stephen Lien Harrell.
Understanding HPC benchmark performance on Intel Broadwell and Cascade Lake processors
As processors grow more complex, accurately estimating and benchmarking performance even on multicore CPUs is increasingly difficult. In this paper, written by a team from three German institutions and Los Alamos National Laboratory, the authors discuss microbenchmarks, which isolate and measure the performance of a specific hardware component. In particular, they focus on HPC microbenchmarks for two Intel x86 server CPU architectures (Broadwell and Cascade Lake), highlighting hardware configuration concerns.
Authors: Christie L. Alappat, Johannes Hofmann, Georg Hager, Holger Fehske, Alan R. Bishop and Gerhard Wellein.
Optimizing HPC software for dark matter experiments
The LUX-ZEPLIN experiment is searching for dark matter – a computationally demanding task. This paper (written by a team from Lawrence Berkeley National Laboratory, SLAC National Accelerator Laboratory and the University of Sheffield) discusses the unique HPC challenges posed by the search for dark matter. The authors go on to discuss strategies for mitigating memory bottlenecks that they believe may apply beyond the hunt for dark matter.
Authors: Venkitesh Ayyar, Wahid Bhimji, Maria Elena Monzani, Andrew Naylor, Simon Patton and Craig E. Tull.
Using exascale computing and explainable AI to meet UN sustainable development goals
The 17 United Nations sustainable development goals set ambitious objectives for major areas of global growth such as poverty, clean water, education and climate change. These authors, a team from Oak Ridge National Laboratory and five universities, explore how exascale computing and explainable AI could be leveraged to meet those goals. They specifically highlight potential advances in crop development, global climate applications and benefits for food and energy plant breeding programs.
Authors: Jared Streich, Jonathon Romero, João Gabriel Felipe Machado Gazolla, David Kainer, Ashley Cliff, Erica Teixeira Prates, James B. Brown, Sacha Khoury, Gerald A. Tuskan, Michael Garvin, Daniel Jacobson and Antoine L. Harfouche.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.