In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Blockchain has the potential to integrate well with certain services on HPC platforms, but it currently lacks elements that would enable that integration. In this paper, researchers from the University of Nevada, Reno, and the University of California, Davis, discuss what they consider the “two missing pieces”: new consensus protocols for shared storage in HPC and new fault-tolerant mechanisms that compensate for MPI, which can be vulnerable when handling blockchain workloads. The authors introduce their solutions for these missing pieces, showing performance improvements relative to comparable solutions.
Authors: Abdullah Al-Mamun and Dongfang Zhao.
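The paper's actual consensus protocol is not reproduced here, but the general shape of a consensus round among replicated storage nodes can be illustrated with a deliberately simplified majority-vote sketch in Python (the class, function, and block names below are invented for illustration and are not from the paper):

```python
from collections import Counter

class StorageNode:
    """A toy replica that proposes which block to append next."""
    def __init__(self, name, proposed_block):
        self.name = name
        self.proposed_block = proposed_block

def majority_consensus(nodes):
    """Return the block proposed by a strict majority, or None.

    HPC-oriented protocols like the one in the paper must also cope
    with shared parallel file systems and node failures; this sketch
    shows only the vote-counting step.
    """
    votes = Counter(node.proposed_block for node in nodes)
    block, count = votes.most_common(1)[0]
    return block if count > len(nodes) // 2 else None

nodes = [StorageNode("n0", "blk-A"),
         StorageNode("n1", "blk-A"),
         StorageNode("n2", "blk-B")]
print(majority_consensus(nodes))  # prints blk-A: a 2-of-3 strict majority
```

The point of the sketch is only that agreement is a counting problem over proposals; everything that makes the problem hard on HPC platforms (shared storage semantics, MPI fault handling) is exactly what the paper contributes.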
AI has managed to tackle the operation of fully autonomous self-driving cars with reasonable success in test environments. However, these researchers from Oak Ridge National Laboratory (ORNL) and the National Renewable Energy Laboratory (NREL) argue that machine-trained driving remains unable to generalize to a wide range of scenarios and often lacks effective training data. To that end, the authors introduce an approach that uses a combination of conditional imitation learning, reinforcement learning and HPC to train a neural network for autonomous driving.
Authors: Robert Patton, Shang Gao, Spencer Paulissen, Nicholas Haas, Brian Jewell, Xiangyu Zhang and Peter Graf.
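The key idea behind conditional imitation learning is that a high-level command (e.g., "turn left") selects which branch of the policy predicts the low-level action. The paper's networks and training pipeline are far more involved; the toy below (expert behavior, branch structure, and learning rate all invented) only illustrates the command-conditioned branching:

```python
import random

random.seed(0)

# Toy demonstrations: (observation, high-level command, expert action).
# The invented "expert" steers proportionally to the observation, with
# a different gain per command.
def expert_action(obs, command):
    return 2.0 * obs if command == "left" else -1.0 * obs

demos = [(random.uniform(-1, 1), cmd) for cmd in ("left", "right") for _ in range(50)]
demos = [(obs, cmd, expert_action(obs, cmd)) for obs, cmd in demos]

# One tiny linear "branch" per command; the command selects which
# branch is trained and used, mirroring the branched architecture
# used in conditional imitation learning.
params = {"left": [0.0, 0.0], "right": [0.0, 0.0]}  # [weight, bias]

lr = 0.1
for _ in range(200):
    for obs, cmd, act in demos:
        w, b = params[cmd]
        err = (w * obs + b) - act       # squared-error gradient step
        params[cmd] = [w - lr * err * obs, b - lr * err]

# Each branch recovers its command's expert gain (about 2.0 and -1.0).
print(round(params["left"][0], 2), round(params["right"][0], 2))
```

The branch weights converge toward the per-command expert gains, which is the behavior-cloning half of the paper's approach; the reinforcement-learning and HPC-scaling pieces are what let it go beyond the demonstrations.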
HPC has enabled a wide variety of new, high-performance data services that complement the increased computing power. This paper (written by a team from Argonne National Laboratory, Carnegie Mellon University, the Vector Institute for Artificial Intelligence, Los Alamos National Laboratory and The HDF Group) introduces Mochi, a framework that “enables composition of specialized distributed data services from a collection of connectable modules and subservices.” The authors argue that Mochi provides more specialized data services that will prove useful to HPC users.
Authors: Robert B. Ross, George Amvrosiadis, Philip Carns, Charles D. Cranor, Matthieu Dorier, Kevin Harms, Greg Ganger, Garth Gibson, Samuel K. Gutierrez, Robert Latham, Bob Robey, Dana Robinson, Bradley Settlemyer, Galen Shipman, Shane Snyder, Jerome Soumagne and Qing Zheng.
Supercomputers are now allowing researchers to tackle seismology problems relevant to fossil fuel exploration and earthquake prediction in new and improved ways. In this paper, written by a team from Saudi Aramco, the authors introduce “GeoDRIVE,” an HPC framework tailored for “massive seismic applications.” Touting the framework’s “versatile design,” the authors highlight how GeoDRIVE will unlock new capabilities for seismic applications, reducing uncertainty in seismic modeling and lowering drilling risks.
Authors: Suha N. Kayum, Thierry Tonellot, Vincent Etienne, Ali Momin, Ghada Sindi, Maxim Dmitriev and Hussain Salim.
Researchers from INSEMEX Petrosani in Romania highlight their efforts to improve HPC-based simulations of flammable air-gas explosions using computational fluid dynamics (CFD). By applying ANSYS Fluent and running the simulations on an HPC cluster, the team improved the scalability and speed of the simulations. They highlight how the improved simulations will help the Romanian mining industry understand and prevent air-methane explosions.
Authors: Laurenţiu Munteanu, Marius Cornel Şuvar and Ligia Ioana Tuhuţ.
This paper introduces “BioinfoPortal,” a gateway designed to enable bioinformatics for the Brazilian National High Performance Computing System. The authors – a team from Brazil’s National Laboratory of Scientific Computing and Fluminense Federal University – highlight the capabilities of the new framework and outline the challenges of integrating BioinfoPortal with the system’s CSGrid middleware. They present their findings on BioinfoPortal’s performance, which they optimized to achieve up to 75 percent performance efficiency.
Authors: Kary A.C.S. Ocaña, Marcelo Galheigo, Carla Osthoff, Luiz M.R. Gadelha Jr., Fabio Porto, Antônio Tadeu A. Gomes, Daniel de Oliveira and Ana Tereza Vasconcelos.
HPC education can often prove difficult due to the necessity of large-scale clusters as a testbed environment for students. In this paper, written by a team from Sun Yat-sen University, the Guangdong Key Laboratory of Big Data Analysis and Processing and the National Supercomputer Center in Guangzhou, the authors outline how they designed, developed and implemented a lightweight, container-based experimental platform that provides students with a functional, customizable practice environment for HPC learning.
Authors: Zelong Wang, Di Wu, Zhenxiao Luo and Yunfei Du.
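Without access to the platform itself, the per-student isolation idea behind such container-based teaching environments can be sketched in Python: generate one container specification per student so that everyone gets an identical, disposable practice environment. The image name, resource limits, and port scheme below are all hypothetical, not taken from the paper:

```python
def student_container_spec(student_id, base_port=22000):
    """Build a hypothetical container spec for one student.

    A real platform would hand a spec like this to a container
    runtime; here we only construct the configuration dictionary.
    """
    return {
        "name": f"hpc-lab-{student_id}",
        "image": "hpc-teaching/mpi-sandbox:latest",   # invented image name
        "cpus": 2,                                    # small slice of the host
        "memory_mb": 1024,
        "ssh_port": base_port + student_id,           # unique login port per student
        "volumes": {f"/data/students/{student_id}": "/home/student"},
    }

specs = [student_container_spec(i) for i in range(3)]
for spec in specs:
    print(spec["name"], spec["ssh_port"])
```

Because each student's environment is just a lightweight container rather than a cluster allocation, environments can be created, broken, and recreated freely, which is the pedagogical advantage the authors pursue.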
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.