In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
CERN’s Large Hadron Collider (LHC) produces massive amounts of data, distributed worldwide by the Worldwide LHC Computing Grid (WLCG) and the workload management system PanDA. However, these tools are now insufficient to handle the quantity of data produced by LHC experiments, a problem these authors say will only worsen if left unaddressed. In this paper, the team, a group from laboratories around the world, highlights recent R&D projects such as data lake prototypes, federated data storage and data carousels that may help to close this gap.
Authors: Alexei Klimentov, Douglas Benjamin, Alessandro Di Girolamo, Kaushik De, Johannes Elmsheuser, Andrej Filipcic, Andrey Kiryanov, Danila Oleynik, Jack C. Wells, Andrey Zarochentsev and Xin Zhao.
With bushfires raging across Australia, better understanding of how, when and why wildfires propagate is more crucial than ever. This paper, written by a team from Colorado, Utah and the Czech Republic, presents an interactive HPC framework for coupled fire and weather simulations. The framework, which the authors say is “suitable for urgent simulations and forecasts,” automates many processes and does not require expert knowledge.
Authors: Jan Mandel, Martin Vejmelka, Adam Kochanski, Angel Farguell, James Haley, Derek Mallia and Kyle Hilburn.
This paper, written by a team spanning seven European nations, argues that HPC systems “need ultra-efficient heterogeneous compute nodes and hardware accelerators with a high degree of specialization” to meet the stringent requirements of exascale-class applications. To that end, they introduce a flexible exploration platform for developing reconfigurable HPC architectures, design tools and applications with built-in run-time reconfiguration. “Ultimately,” they write, “this open platform will enable groundbreaking research towards new exascale computing platforms.”
Authors: Dirk Stroobandt, Cătălin Bogdan Ciobanu, Marco D. Santambrogio, Gabriël Figueiredo, Andreas Brokalakis, Dionisios Pnevmatikatos, Michael Huebner, Tobias Becker and Alex J. W. Thom.
Tsunamis can be disastrous and deadly, but forecasting techniques are typically limited in their ability to provide timely warnings, since simulations must be completed extremely quickly to be useful. A research team from the National Institute of Geophysics and Volcanology in Italy, the University of Malaga in Spain and the Norwegian Geotechnical Institute explains how GPUs can be used to produce “faster than real time” (FTRT) simulations. They discuss the need for these “urgent simulations,” which would include probabilistic tsunami forecasting.
Authors: Finn Løvholt, Stefano Lorito, Jorge Macias, Manuela Volpe, Jacopo Selva and Steven Gibbons.
Sediment flow analysis is useful for a range of environmental activities, such as assessing silt deposits in estuaries. This dissertation from a student at the University of Grenoble in France explores high-resolution numerical modeling of sediment flows and implementing the corresponding algorithms on HPC systems. The author concludes that using HPC, it is possible to “accurately, and at a very reasonable cost,” establish variables crucial to sediment flow analysis.
Author: Jean-Baptiste Keck.
Rotorcraft aerodynamic calculations are used to establish the performance characteristics of an aircraft in various modes of flight, but the aggregate computing needs of these calculations can stretch into the millions of hours. These researchers from the U.S. Army Corps of Engineers introduce a new plugin that monitors key variables in the aerodynamic calculations, leading to reduced computational expenses when determining hover performance. Further, the authors discuss automating the calculations using an HPC workflow management tool.
Authors: Robert B. Haehnel, Andrew M. Wissink, Glover George, Deanna Hardin and John Fegyveresi.
Chemists use nuclear magnetic resonance (NMR) analysis of chemical shifts to examine individual amino acids within proteins or protein groups; however, determining structures from NMR data alone is computationally intensive. In this paper, a team from the University of Delaware and Nvidia presents a hardware-accelerated strategy for estimating those chemical shifts. Using Nvidia V100 GPUs, the researchers were able to reduce computing time for the largest test dataset from 14 hours to under 47 seconds.
Authors: Eric Wright, Mauricio Ferrato, Alex Bryer, Robert Searles, Juan Perilla and Sunita Chandrasekaran.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.