In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
As HPC becomes more widespread, training a skilled HPC workforce is more important than ever. This paper – written by a team from the University of Tennessee, Knoxville – details a program offering hands-on research experiences in high-performance data sciences, data analytics and machine learning. The authors discuss the Research Experiences in Computational Science, Engineering, and Mathematics (RECSEM) program, which is supported by the U.S. National Science Foundation (NSF), highlighting experiences and lessons learned.
Authors: Kwai Wong, Stanimire Tomov and Jack Dongarra.
Like most fields involving genomic data, phylogenomics – the intersection of genetics and evolution – is struggling to keep pace with the volume and diversity of its data collection. This paper – written by an international research team from the U.S., Belgium and Scotland – provides an in-depth look at “BEAGLE,” a high-performance evaluator for evolutionary likelihood calculation, said to enable “substantially reduce[d] computation time in phylogenomic and phylodynamic analyses.” The team discusses computational issues surrounding BEAGLE, with an emphasis on comparing performance between multicore CPUs and a range of GPUs.
Authors: Guy Baele, Daniel L. Ayres, Andrew Rambaut, Marc A. Suchard and Philippe Lemey.
Heterogeneous HPC systems are popular due in large part to their flexibility – however, that flexibility (and the accompanying complexity) places a heavy burden on resource management. In this paper, a team from the IBM T.J. Watson Research Center describes “CuSH,” a cognitive scheduler that leverages deep neural networks and reinforcement learning. The researchers evaluated CuSH using a simulator, finding that it outperforms traditional approaches in all tested use cases.
Authors: Giacomo Domeniconi, Eun Kyung Lee and Alessandro Morari.
Collecting data from HPC systems efficiently, reliably and resiliently remains a major challenge. A team of Chinese researchers has set out to create an optimized framework for improving the efficiency of data collection in petascale systems, with an eye toward scalability into the exascale era. Their paper describes the framework, which includes a data collection acceleration layer within H2FS (the Tianhe-2 file system), a performance analysis tool, and a new method for log template extraction. The research team tested the framework and found it to be both effective and scalable.
Authors: Huang Huang, Li-Qian Zhou, YuTong Lu, Tong Xiao, Can Leng, Chuanying Li and Zhe Quan.
Modern vehicles – especially smart vehicles and self-driving cars – collect large amounts of data from multiple cameras. That data can be used to train the vehicles’ neural networks to detect objects, but it must first be annotated. In this paper, a team of researchers from Intel and Ericsson Nikola Tesla investigates the possibility of using HPC inside vehicles to add initial annotations at the edge rather than transferring the data to a datacenter for training.
Authors: Branimir Malnar, Alexander Unnervik and Neslihan Köse.
The European Commission’s Centre of Excellence for Global Systems Science (CoeGSS) provides computer-aided decision support for energy, water, food and pandemic issues, among others. In this paper, a team from the High-Performance Computing Center Stuttgart and the Poznan Supercomputing and Networking Center proposes a benchmark that evaluates HPC architectures for use in global systems science. The researchers also attempt to identify the best HPC system for typical global systems science software environments.
Authors: Damian Kaliszan, Norbert Meyer, Sebastian Petruczynik, Michael Gienger and Sergiy Gogolenko.
Advances in remote sensing instruments are allowing geoscientists to perform increasingly robust quantitative retrieval applications, which let them observe the characteristics of land, atmosphere and ocean areas. In this paper, written by a team from China and the United Kingdom, the authors design and implement a high-performance framework on a GPU cluster to reduce the execution time of quantitative retrieval applications.
Authors: Jia Liu, Yong Xue, Kaijun Ren, Junqiang Song, Christopher Windmill and Patrick Merritt.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.