In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Soon, one of the Large Hadron Collider experiments will need to cope with data from one billion particles per second – roughly 40 terabits per second – all of which must be processed in real time to decide which data are kept for storage. A team of researchers from Switzerland, Spain and France presents its particle tracking algorithm and parallel raw input decoding tool, “Compass,” in this paper. The authors discuss the performance tradeoffs of different configurations and test the algorithm against simulated data.
Authors: Placido Fernandez Declara, Daniel Hugo Campora Perez, Javier Garcia-Blas, Dorothea vom Bruch, J. Daniel Garcia and Niko Neufeld.
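For a rough sense of scale, the back-of-the-envelope Python sketch below (not drawn from the paper) estimates the per-event time budget such a real-time system has to meet at 40 terabits per second; the average event size and worker count are illustrative assumptions rather than figures from the authors.

# Back-of-the-envelope budget for real-time processing of a 40 Tb/s readout.
# The event size and worker count below are assumed for illustration only.
TOTAL_RATE_BITS = 40e12               # 40 terabits of raw detector data per second
bytes_per_second = TOTAL_RATE_BITS / 8

avg_event_size_bytes = 100_000        # assumed average size of one raw event
events_per_second = bytes_per_second / avg_event_size_bytes

num_parallel_workers = 500            # assumed number of processing nodes/GPUs
events_per_worker = events_per_second / num_parallel_workers
time_budget_us = 1e6 / events_per_worker

print(f"{bytes_per_second / 1e12:.0f} TB/s of raw data to decode")
print(f"{events_per_second / 1e6:.0f} million events per second to reconstruct")
print(f"~{time_budget_us:.0f} microseconds per event on each worker")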
As HPC systems grow, operational data volumes grow with them, necessitating new mechanisms to gather, store and analyze that data. This paper (written by a team from Lawrence Berkeley National Laboratory) describes the authors’ experiences designing and implementing an infrastructure (OMNI) for extreme-scale operational data collection. The authors present a number of real-world case studies that benefited from OMNI’s data capabilities.
Authors: Elizabeth Bautista, Melissa Romanus, Thomas Davis, Cary Whitney and Theodore Kubaska.
The “explosion” of scientific data from high-performance simulations and sensors is complicating scientific workflows. In this paper, written by researchers from several Chinese institutions, the authors propose the “Tiered Data Management System” (TDMS) to accelerate those workflows on HPC systems, along with a data-aware task scheduling module. They evaluate TDMS on realistic workflows, reporting speedups of up to 1.54x over Lustre for data-intensive workflows.
Authors: Peng Cheng, Yutong Lu, Yunfei Du and Zhiguang Chen.
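The data-aware scheduling idea at the heart of such a system can be illustrated with a minimal Python sketch (illustrative only, not the authors’ TDMS code): each task is preferentially placed on a node whose fast storage tier already holds its input file, falling back to the least-loaded node otherwise.

# Minimal sketch of data-aware task placement; not TDMS itself.
# Prefer nodes whose fast tier (burst buffer/SSD) already caches a task's input,
# otherwise fall back to the least-loaded node.
from collections import defaultdict

def schedule(tasks, node_cache, node_load):
    """tasks: list of (task_id, input_file); node_cache: node -> set of cached files."""
    placement = {}
    for task_id, input_file in tasks:
        candidates = [n for n, files in node_cache.items() if input_file in files]
        if not candidates:
            candidates = list(node_load)              # no node holds the data
        chosen = min(candidates, key=lambda n: node_load[n])
        placement[task_id] = chosen
        node_load[chosen] += 1
    return placement

# Example: node-a caches a simulation output locally; node-b does not.
cache = {"node-a": {"sim_0001.h5"}, "node-b": set()}
load = defaultdict(int, {"node-a": 0, "node-b": 0})
print(schedule([("t1", "sim_0001.h5"), ("t2", "obs_0042.h5")], cache, load))
# -> {'t1': 'node-a', 't2': 'node-b'}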
Public clouds have made bare metal servers widely available for big data and HPC applications. This paper – written by Hyungro Lee and Geoffrey C. Fox of Indiana University – evaluates the performance of big data processing on dedicated bare metal servers. The authors present benchmark results and system performance tests that demonstrate the servers’ suitability as data storage for large-scale applications.
Authors: Hyungro Lee and Geoffrey C. Fox.
MPI, the Message Passing Interface, has been a popular standard for inter-process communication in the HPC community for over twenty years. A team of researchers from RIKEN R-CCS, the University of Tennessee and Inria set out to survey MPI users around the world. In the process, they found that Japanese MPI users are distinct from MPI users in the rest of the world, highlighting challenges that MPI-enabled HPC may face in the near future.
Authors: Atsushi Hori, George Bosilca, Emmanuel Jeannot, Takahiro Ogura and Yutaka Ishikawa.
Railway alignment optimization – that is, identifying the lowest-cost configuration for new rail construction – continues to impose a high computational burden on transit planners. In this paper, two researchers from the University of São Paulo propose a new, HPC-enabled framework to solve the optimization problem. They apply the framework to new connections between Brazilian cities and find it to be accurate.
Authors: Cassiano A. Isler and Joao A. Widmer.
Medical imaging is a booming field with many different modalities: ultrasound, X-ray, CT and more. These researchers – a team from Pakistan and France – propose “HPMIS,” a high-performance hardware architecture and programming toolkit for medical imaging. The authors explain how HPMIS “can perform medical image registration, storage, and processing in hardware” and is “easy to program[.]”
Authors: Tassadaq Hussain, Amna Haider, Muhammad Shafique and Abdelmalik Taleb-Ahmed.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.