In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Addressing computational challenges in the LSST sky survey
“The Large Synoptic Survey Telescope [LSST] will cover the sky deeply every week for ten years,” writes J. Anthony Tyson of the University of California, Davis. In this paper, Tyson discusses the vastness of the LSST’s data – hundreds of petabytes – and how high-performance computing can help solve the challenges of turning that data into informative simulations, products and other analyses.
Author: J. Anthony Tyson
Analyzing “first contact” with HPC
As researchers leverage HPC in more and more fields, understanding how new users interact with HPC tools is increasingly important. This paper, written by a team from Hungary and Spain, examines a team of ethologists (animal behavior researchers) at Eötvös Loránd University who were interacting with HPC systems for the first time. The authors discuss the results of the experiment, highlighting issues that non-experts may face when learning to use HPC systems.
Authors: Bence Ferdinandy, Ángel Manuel Guerrero-Higueras, Éva Verderber, Ádám Miklósi and Vicente Matellán
Advancing HIV vaccine research with low-cost HPC
“Next-generation sequencing,” these authors (a team from South Africa and Johns Hopkins) write, “has revolutionized biological research.” The new technology allows easier access to genome sequencing, but it requires powerful (and often expensive) computing resources. The authors describe a use case for low-cost HPC in a genomic analysis setting: a three-node cluster of ordinary desktop computers applied to the analysis of large antibody sequence and virus datasets, in service of HIV vaccine development.
Authors: Batsirai M. Mabvakure, Raymond Rott, Leslie Dobrowsky, Peter Van Heusden, Lynn Morris, Cathrine Scheepers and Penny L. Moore
Making the case for FPGA-based HPC
In this article, a team from the University of Manchester discusses the current state of FPGAs in HPC systems, highlighting challenges and opportunities. They focus on the requirements for system architectures and interconnects, arguing that “this model requires a reliable, connectionless, hardware-offloaded transport supporting a global memory space.” They report a 25 percent latency improvement over a software-based transport, arguing that their FPGA-based solution can outperform state-of-the-art HPC systems in their test cases.
Authors: Joshua Lant, Javier Navaridas Palma, Mikel Lujan and John Goodacre
Examining the case of Spark-DIY (and the HPC/Big Data convergence)
HPC and big data analytics are converging. In this paper, a trio from the University Carlos III of Madrid and Argonne National Laboratory discuss this convergence, highlighting the case of the Spark-DIY platform – a prototype implementation of a new architectural model that allows for interoperable HPC and big data execution models. The authors discuss performance results of Spark-DIY, concluding that it is a “clear example of how current HPC simulations are evolving toward hybrid … applications.”
Authors: Silvina Caino-Lores, Jesus Carretero, Bogdan Nicolae, Orcun Yildiz and Tom Peterka
Analyzing HPC architectures for gene network inference
Inferring the regulatory networks and transition functions of genes is an important but computationally intensive task. These authors – a team from Brazil – discuss ways to speed up the inference of gene networks, introducing a benchmark spanning CPUs, GPUs and FPGAs and assessing cost, processing time, energy use and complexity. They discuss their results, highlighting the performance of the Titan XP GPU and the cost-benefit strengths of the R9 Nano GPU.
Authors: Anderson G. Marco, Mario A. Gazziro and David C. Martins, Jr
Applying high-performance mesoscale simulations for microfluidics
These authors – a group from ETH Zurich – discuss applications of HPC in microfluidics, which involves transporting particles and cells for applications like manufacturing or drug design. They present “a computational tool for large scale, efficient and high throughput mesoscale simulations of fluids and deformable objects at complex microscale geometries.” Using this tool, the researchers achieve a 10x speedup relative to other state-of-the-art solutions when running on Piz Daint, a supercomputer at the Swiss National Supercomputing Centre that currently places 6th among publicly ranked supercomputers.
Authors: Dmitry Alexeev, Lucas Amoudruz, Sergey Litvinov and Petros Koumoutsakos
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.