In this twice-monthly feature, HPCwire will highlight newly published research in the high-performance computing community and related domains. From exascale to quantum computing, the details are here. Check back on the first and third Mondays of each month for more!
Terminating failed HPC jobs through machine and deep learning
Allowing jobs that are destined to fail to keep running on a supercomputer wastes compute power and reduces system efficiency. These researchers – a team from Spain and Germany – examined a dataset from the petascale Mistral supercomputer in an attempt to develop a framework for early termination of jobs whose software failures can be predicted. They trained a neural network to predict job evolution and evaluated the resulting CPU-time savings.
Authors: Michał Zasadziński, Victor Muntés-Mulero, Marc Solé, David Carrera, and Thomas Ludwig
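To give a flavor of this kind of approach (not the authors' own code or data), the sketch below trains a small feed-forward classifier on synthetic, hypothetical per-job metrics to flag jobs likely to fail; such a predictor is the kind of component that could gate an early-termination policy.

```python
# Illustrative sketch only -- synthetic job metrics stand in for real
# scheduler/log data, and the feature names are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_jobs = 5000

# Hypothetical features sampled early in each job's lifetime:
# nodes requested, fraction of wall-clock limit elapsed,
# mean CPU utilization, and I/O wait fraction.
X = np.column_stack([
    rng.integers(1, 512, n_jobs),
    rng.uniform(0.0, 0.3, n_jobs),
    rng.uniform(0.0, 1.0, n_jobs),
    rng.uniform(0.0, 0.5, n_jobs),
])
# Toy label: pretend jobs with low CPU use and high I/O wait tend to fail.
y = ((X[:, 2] < 0.3) & (X[:, 3] > 0.25)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network that predicts "will this job fail?"
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Jobs flagged with a high failure probability would be candidates for
# early termination, freeing their CPU hours for other work.
```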
Optimizing hybrid parallel application execution in heterogeneous HPC systems
Heterogeneous HPC systems are important for many key computational problems. Exploiting them requires careful application programming so that codes use parallelism at multiple levels simultaneously and can combine programming interfaces for several types of devices. Increasingly, programming models and algorithms must also optimize for energy efficiency as nations set peak power limits for their exascale targets.
For his doctoral dissertation, Paweł Rościszewski of Gdańsk University of Technology extracts a general model of hybrid parallel application execution in heterogeneous HPC systems based on a synthesis of existing approaches. He further develops “an optimization methodology for such execution aiming for minimization of the contradicting objectives of application execution time and power consumption of the utilized computing hardware.”
Author: Paweł Rościszewski
Using Jupyter notebooks to teach HPC to undergraduates
The learning curve for HPC can be steep. In this paper, published in the Journal of Computing Sciences in Colleges, the authors created an open-access course for undergraduates using Jupyter notebooks, which combine text, live code, output, and visualizations in a single document. The paper describes their process and their results.
Authors: Ben Glick and Jens Mache
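For readers unfamiliar with the format, a notebook cell is ordinary Python whose output appears inline beneath the text that explains it. The sketch below is not taken from the course; it is a hypothetical example of the kind of cell such a notebook might contain, contrasting a serial and a multiprocess Monte Carlo estimate of pi so students can see parallel speedup for themselves.

```python
# Hypothetical teaching example: estimate pi serially and in parallel.
import random
import time
from multiprocessing import Pool

def count_hits(n_samples: int) -> int:
    """Count random points that land inside the unit quarter circle."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total = 4_000_000
    workers = 4

    start = time.perf_counter()
    serial_pi = 4 * count_hits(total) / total
    print(f"serial:   pi ~ {serial_pi:.4f}  ({time.perf_counter() - start:.2f}s)")

    start = time.perf_counter()
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [total // workers] * workers))
    parallel_pi = 4 * hits / total
    print(f"parallel: pi ~ {parallel_pi:.4f}  ({time.perf_counter() - start:.2f}s)")
```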
Using HPC to manage petroleum reservoirs
Managing oil and gas reservoirs requires thorough evaluation of a range of development conditions and scenarios – everything from well placement to flooding strategy to history matching. This usually requires specialized hardware – but in this paper, published by the Society of Petroleum Engineers, the authors propose networking locally available workstations into a grid cluster that serves as a makeshift HPC system. They also discuss previous successful applications of the grid-cluster approach in the oil and gas industry.
Authors: Ramil Yaubatyrov, Vladimir Babin, and Liya Akmadieva
Stress testing 3D textile composites using HPC
3D textile composites offer increased toughness and impact resistance and are used in intensive applications like body armor and blade containment systems. Their complex geometry, however, makes them difficult to analyze. Two researchers from the aerospace department of Texas A&M University created the geometry of a 3D textile and used a finite element analysis framework running on a high-performance system to assess the material’s response to severe stresses.
Authors: M. Keith Ballard and John D. Whitcomb
Bringing HPC to population-wide data assets
These authors set out to leverage Canada’s massive healthcare data system – afforded by its publicly funded health services – by creating a secure HPC cloud within the Hospital for Sick Children in Toronto. This Ontario Data Safe Haven, or ODSH, “will allow research teams to post, access and analyze individual datasets over which they have authority, and enable linkage to Ontario administrative and other data.” The researchers discuss their implementation process and preliminary results, including architectural choices, privacy and security measures, and documentation.
Authors: J. Charles Victor, P. Alison Paprica, Michael Brudno, Carl Virtanen, Walter Wodchis, Anna Goldenberg, and Michael Schull
Proposing an HPC framework for desynchronized information propagation in simulations
Implementing simulation systems – such as traffic or social simulations – can often require supercomputing resources. However, it is difficult to develop software that scales to thousands of nodes, in large part because the system needs to synchronize state across nodes. The authors of this paper – a team from AGH University of Science and Technology – propose “a framework based on a desynchronized method for the distribution of information inspired by the propagation of smell.” They test the framework on three simulations inspired by real-world scenarios, demonstrating scalability up to almost 3,500 cores.
Authors: Jakub Bujas, Dawid Dworak, Wojciech Turek, and Aleksander Byrski
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.