In this new biweekly feature, HPCwire will highlight newly published research in the high-performance computing community and related domains. From exascale to quantum computing, the details are here. Check back every other week for more!

Integrating simulation and data analysis with SciDISC
Data scientists are often faced with the challenging task of integrating HPC and data-intensive scalable computing (DISC) – paradigms designed for different purposes and with different requirements. In this paper, a team of researchers from Brazil, France, and the U.S. outlines the SciDISC project, which seeks to combine simulation and data analysis activities effectively, and discusses its first results. The authors call the results “quite encouraging” and plan to “improve … dataflow monitoring, debugging and extend … support for adaptation at runtime like parameter fine-tuning and data reduction.”
Authors: Patrick Valduriez, Marta Mattoso, Reza Akbarinia, Heraldo Borges, José Camata, Alvaro Coutinho, Daniel Gaspar, Noel Lemus, Ji Liu, Hermano Lustosa, Florent Masseglia, Fabricio Nogueira da Silva, Vítor Silva, Renan Souza, Kary Ocaña, Eduardo Ogasawara, Daniel de Oliveira, Esther Pacitti, Fabio Porto, and Dennis Shasha.
Matching HPC hardware and software
The mismatch between hardware capabilities and programming software is the predominant challenge facing exaflop computing. This article, written by a team of researchers from France and the U.S., examines key performance enablers at the software level, outlining limitations and promising approaches for remedying the mismatch. The authors conclude by recommending a way forward for codesigned hardware and software.
Authors: William Jalby, David Kuck, Allen D. Malony, Michel Masella, Abdelhafid Mazouz, and Mihail Popov.
Analyzing magnetic fields in the exaflop regime

Radio interferometers are collecting data from galaxy clusters at unprecedented resolutions, allowing researchers to analyze intra-cluster magnetic fields at small scales for the first time. The authors of this paper – a team from Italy, Korea, and the U.S. – present a new numerical approach to simulating these magnetic fields for future cosmological simulations. Their new code – called ‘WOMBAT’ and developed in collaboration with Cray – will allow researchers to scale magnetic field simulations to the exaflop regime.
Authors: Julius Donnert, Hanbyul Jang, Peter Mendygral, Gianfranco Brunetti, Dongsu Ryu, and Thomas Jones.
Overlapping network communications and computation
Exascale HPC will increase the pressure on systems to effectively overlap network communications with computation. In this paper, a team of French researchers examines how the MPI standard supports asynchronous communication progress. Specifically, they discuss dedicated progress threads (PTs), an approach that struggles to balance communication progress against the compute resources those threads consume. The authors propose “a solution inspired from the PT approach which benefits from idle time of compute threads to make MPI communication progress in background” and claim a performance gain on unbalanced workloads.
Authors: Marc Sergent, Mario Dagrada, Patrick Carribault, Julien Jaeger, and Marc Pérache.
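To make the overlap problem concrete, here is a minimal sketch in C of one common idiom applications use today: a compute loop periodically calls MPI_Test so the MPI library gets opportunities to progress a pending transfer. The compute_chunk kernel is a hypothetical stand-in for real work, and this illustrates the general technique, not the authors' mechanism.

    #include <mpi.h>
    #include <stddef.h>

    /* Hypothetical compute kernel standing in for real application work. */
    static void compute_chunk(double *d, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            d[i] *= 1.000001;
    }

    /* Post a nonblocking send, then interleave computation with MPI_Test
     * calls so the MPI library can progress the transfer in the background. */
    static void overlapped_send(double *buf, int count, int dest,
                                double *work, size_t chunks, size_t chunk_size)
    {
        MPI_Request req;
        int done = 0;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
        for (size_t i = 0; i < chunks; i++) {
            compute_chunk(work + i * chunk_size, chunk_size);
            if (!done)
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }
        if (!done)
            MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

Without the MPI_Test calls or a helper thread, many MPI implementations make little progress on the transfer until MPI_Wait is reached – the gap that dedicated progress threads, and the authors' idle-time variant, aim to close.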
Isolating resilience to silent errors
Silent errors – errors that corrupt data while bypassing detection – are a growing threat as HPC systems increase in size and power. This paper, written by researchers from UC Merced, LLNL, and the Technical University of Munich, examines resilience to silent errors. The authors present a framework called ‘FlipTracker’ designed to isolate the properties that allow some applications to tolerate silent errors naturally, and they distill these properties into a set of resilience patterns.
Authors: Luanzheng Guo, Dong Li, Ignacio Laguna, and Martin Schulz.
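As a rough illustration of natural resilience (our example, not the paper's): numerical methods that contract toward a fixed point can absorb a transient corruption. In the small C program below, a Newton iteration for sqrt(2) recovers fully from an error injected mid-run.

    #include <stdio.h>
    #include <math.h>

    /* Illustration only: a self-correcting iteration absorbs a transient
     * "silent" corruption because every step contracts toward the answer. */
    int main(void)
    {
        double x = 1.0;                  /* initial guess for sqrt(2) */
        for (int i = 0; i < 60; i++) {
            x = 0.5 * (x + 2.0 / x);     /* Newton step */
            if (i == 5)
                x += 1.0e6;              /* injected silent corruption */
        }
        printf("result = %.15f (error = %g)\n", x, fabs(x - sqrt(2.0)));
        return 0;
    }

FlipTracker's aim, per the paper, is to track how corruptions propagate and isolate which code and data properties produce this kind of forgiveness.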
Optimizing data center energy use for HPC applications
Data centers typically optimize energy use by profiling the energy consumption of an application over a full execution – an approach that is impractical for HPC applications with long execution times. In this paper, researchers from the Complutense University of Madrid and the Technical University of Madrid “present a methodology to estimate the dynamic CPU and memory energy consumption of an application without executing it completely.” They report that their methodology “shows an overall error below 8.0% when compared to the dynamic energy of the whole execution of the application.”
Authors: Juan Carlos Salinas-Hilburg, Marina Zapater, Jose M. Moya, and Jose L. Ayala.
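The arithmetic behind such an estimate can be sketched in a few lines (a toy extrapolation with hypothetical numbers, much simpler than the paper's methodology): profile part of a run, compute the average dynamic power over that window, and scale by the predicted full runtime.

    #include <stdio.h>

    /* Toy model: extrapolate dynamic energy from a partial profile. */
    static double estimate_dynamic_energy(double profiled_energy_j,
                                          double profiled_seconds,
                                          double predicted_total_seconds)
    {
        double avg_power_w = profiled_energy_j / profiled_seconds;
        return avg_power_w * predicted_total_seconds;   /* joules */
    }

    int main(void)
    {
        /* Hypothetical numbers: 4.2 kJ of dynamic energy over a 300 s
         * profiling window, with a full run predicted to last 2 hours. */
        printf("estimated dynamic energy: %.0f J\n",
               estimate_dynamic_energy(4200.0, 300.0, 7200.0));
        return 0;
    }

The hard part, and the paper's contribution, is producing accurate estimates without running the application to completion; the sketch shows only the final extrapolation step.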
Increasing the efficiency of VMs for HPC
Virtualized environments have become a common tool for HPC researchers and analysts. The authors of this paper – a team from Germany and the UK – examine the use of virtual machines for HPC and discuss how the checkpoint size of a virtualized environment can be minimized through a memory-zeroing technique, increasing efficiency in several areas.
Authors: Ramy Gad, Simon Pickartz, Tim Süß, Lars Nagel, Stefan Lankes, Antonello Monti, and André Brinkmann.
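A minimal sketch of why zeroing helps (our illustration, not the authors' code): if a checkpoint writer omits all-zero pages, then guest memory that has been freed, and would otherwise still hold stale data, stops inflating the image once it is zeroed.

    #include <stdio.h>
    #include <stddef.h>

    #define PAGE 4096

    /* Returns 1 if the page contains only zero bytes. */
    static int page_is_zero(const unsigned char *p)
    {
        for (size_t i = 0; i < PAGE; i++)
            if (p[i])
                return 0;
        return 1;
    }

    /* Write only non-zero pages (index + contents) to the checkpoint,
     * so zeroed pages cost nothing. Returns the number of pages stored. */
    static size_t checkpoint(FILE *out, const unsigned char *mem, size_t npages)
    {
        size_t stored = 0;
        for (size_t n = 0; n < npages; n++) {
            const unsigned char *p = mem + n * PAGE;
            if (page_is_zero(p))
                continue;                 /* zero page: omit from image */
            fwrite(&n, sizeof n, 1, out); /* page index */
            fwrite(p, 1, PAGE, out);      /* page contents */
            stored++;
        }
        return stored;
    }

Production checkpointing layers compression and incremental techniques on top; the sketch isolates the zero-page effect that the zeroing technique exploits.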
Do you know about research that should be included in the next edition of this list? If so, send us an email at [email protected]communications.com. We look forward to hearing from you.