In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Developing high-performance function as a service for science
Modern scientific computing requires a myriad of approaches with various advantages – triggers, accelerators, mobility – suited to specific uses. These authors (a team from the University of Chicago and Argonne National Laboratory) propose funcX: “a high-performance function-as-a-service (FaaS) platform that enables intuitive, flexible, efficient, scalable, and performant remote function execution on existing infrastructure[.]” The authors call this approach, which allows researchers to execute commands without specifying a physical resource, “serverless supercomputing.” They demonstrate results across experiments and deployments.
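The register-submit-retrieve workflow the authors describe can be illustrated with a minimal local mock. This is not the funcX API – the class and method names here (`MockFaaS`, `register_function`, `run`, `get_result`) are illustrative assumptions that mirror the FaaS pattern only, executing locally rather than on remote infrastructure.

```python
import uuid

# Minimal local mock of the FaaS pattern: register a function, submit
# it by ID, and fetch the result by task ID. Illustrative only -- a
# real platform like funcX routes execution to remote endpoints.

class MockFaaS:
    def __init__(self):
        self._functions = {}   # function_id -> callable
        self._results = {}     # task_id -> computed result

    def register_function(self, fn):
        function_id = str(uuid.uuid4())
        self._functions[function_id] = fn
        return function_id

    def run(self, *args, function_id, **kwargs):
        # Here we execute synchronously; a real platform would dispatch
        # the call to a chosen endpoint and return immediately.
        task_id = str(uuid.uuid4())
        self._results[task_id] = self._functions[function_id](*args, **kwargs)
        return task_id

    def get_result(self, task_id):
        return self._results[task_id]

faas = MockFaaS()
fid = faas.register_function(lambda x, y: x + y)
tid = faas.run(2, 3, function_id=fid)
print(faas.get_result(tid))  # 5
```

The key idea of "serverless supercomputing" is visible even in this toy: the caller names a function and arguments, never a physical resource.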
Authors: Ryan Chard, Tyler J. Skluzacek, Zhuozhao Li, Yadu Babuji, Anna Woodard, Ben Blaiszik, Steven Tuecke, Ian Foster and Kyle Chard.
Using adaptive sparse matrix-vector multiplication on heterogeneous architectures
Sparse matrix-vector multiplication (SpMV) is the core kernel for solving sparse systems of linear equations – a key tool across research and engineering fields. In this paper, the authors – a team from the Naval University of Engineering and the National University of Defense Technology in China – describe the design and implementation of an adaptive SpMV on a CPU-GPU heterogeneous architecture, reporting significant performance gains.
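For readers unfamiliar with the kernel, here is a sketch of SpMV over the common CSR (compressed sparse row) storage format. This is a plain reference version for illustration, not the paper's adaptive CPU-GPU implementation.

```python
# SpMV (y = A @ x) for a matrix A stored in CSR format:
#   values  -- the nonzero entries, row by row
#   col_idx -- the column index of each nonzero
#   row_ptr -- for each row, where its nonzeros start in `values`

def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Accumulate the dot product of row i's nonzeros with x.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[10, 0, 2],
#      [ 0, 3, 0],
#      [ 0, 0, 4]]
values  = [10.0, 2.0, 3.0, 4.0]
col_idx = [0, 2, 1, 2]
row_ptr = [0, 2, 3, 4]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [12.0, 3.0, 4.0]
```

The irregular, data-dependent memory access pattern visible in the inner loop is exactly what makes SpMV hard to map efficiently onto GPUs, motivating adaptive designs like the one described.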
Authors: Jing Nie, Chunlei Zhang, Dan Zou, Fei Xia, Lina Lu, Xiang Wang and Fei Zhao.
Highlighting high-throughput computing use cases
This paper discusses a number of common ways that researchers use high-throughput computing (also called “capacity computing”), which allows users to run very large numbers of tasks (e.g. simulations or data analyses) in a short amount of time. The use cases, drawn from researchers’ own experiences, include independent jobs, interrelated jobs and the use of several high-throughput resources.
Author: Lee Liming.
Cataloging galaxy-galaxy strong gravitational lenses using convolutional neural networks
In this collaboration among over 50 researchers from more than three dozen institutions, the authors discuss their work to search Dark Energy Survey (DES) imaging for galaxy-galaxy strong gravitational lenses using convolutional neural networks. After training the neural networks using simulated lens images, the researchers used the networks to score nearly eight million images, then examined the best candidates. They identify 152 probable or definite lenses using this method.
Authors: C. Jacobs, T. Collett, K. Glazebrook, E. Buckley-Geer, H.T. Diehl, H. Lin, C. McCarthy, A.K. Qin, et al.
Building an earthquake and tsunami workflow using HPC
Early (and accurate) tsunami and earthquake analysis is critical for effective emergency response. In this paper, a team of researchers from France, the Czech Republic, Italy, and Germany discuss their “LEXIS project,” which seeks to “enhance the workflow of rapid loss assessment and emergency decision support systems by leveraging an orchestrated heterogeneous environment combining [HPC] resources and cloud infrastructure.” The paper outlines the project’s workflow and its computational model.
Authors: Thierry Goubier, Andrea Ajmar, Carmine D’Amico, Paul Dubrulle, Susanna Grita, Stéphane Louise, Jan Martinovič, Tomáš Martinovič, Natalja Rakowsky, Paolo Savio, Danijel Schorlemmer, Alberto Scionti and Olivier Terzo.
Using convolutional neural networks for brain recognition and the “internet of medical things”
The medical field produces a wide variety of complex medical images whose features are difficult to extract and analyze. In this paper – written by a duo from Australia – the researchers propose an adaptive convolutional neural network model (CNN-BN-PReLU) combining batch normalization with the PReLU activation function. The model, they say, can extract image features without human intervention, shortening training time and improving the image recognition rate.
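The two components named in the model's acronym are standard building blocks, sketched below in plain Python for illustration (the paper's actual architecture and learned parameters are not reproduced here; the default slope `a=0.25` is an assumption).

```python
import math

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, learnable slope a for negative.
    Unlike ReLU, negative inputs still pass a small gradient through."""
    return x if x > 0 else a * x

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Inference-style batch normalization over a list of activations:
    normalize to zero mean / unit variance, then scale and shift."""
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in xs]

print(prelu(2.0))   # 2.0
print(prelu(-2.0))  # -0.5
```

In a CNN, batch normalization stabilizes the distribution of layer inputs and PReLU avoids "dead" units, which is consistent with the shorter training times the authors report.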
Authors: Yuxi Liu and Jun Xiong.
Paving the way for Chinese exascale computing
The race to exascale is well underway. In this paper, Yutong Lu, deputy chief designer of the Tianhe supercomputers, explores the major technical challenges facing China on the path to exascale computing. Lu also assesses China’s ongoing R&D activities in the supercomputing field and possible pathways for achieving exascale computing (such as co-design and convergence computing).
Author: Yutong Lu.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.