In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Molecular docking is a computationally intensive task in drug discovery, involving the screening of a large library of molecules and typically requiring HPC platforms. In this paper, a team of researchers from Italy discusses porting and optimizing a molecular docking application for a heterogeneous system with GPU accelerators. The authors demonstrate that their approach exploits the node better than pure CPU or pure GPU approaches.
Authors: Emanuele Vitali, Davide Gadioli, Gianluca Palermo, Andrea Beccari, Carlo Cavazzoni and Cristina Silvano.
Quantum computing simulations can be difficult due to exponential runtime and memory requirements. To date, this has been addressed by using GPUs and multi-node computers. This paper, written by a team from IBM Research, proposes a heterogeneous parallelization approach combining GPUs and CPUs to simultaneously accelerate simulation and enlarge the total memory space. The authors show empirical performance evaluations of their approach.
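As a back-of-the-envelope illustration of that exponential memory growth (not the authors' method), a full statevector simulation of n qubits must store 2^n complex amplitudes, each typically a 16-byte complex double; a minimal sketch:

```python
# Memory needed to hold a full statevector of n qubits:
# 2**n amplitudes, each a complex double (16 bytes).
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits already needs 16 GiB; 40 qubits needs 16 TiB,
# which is why simulators spill across GPUs and multiple nodes.
```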
Authors: Jun Doi, Hitomi Takahashi, Rudy Raymond, Takashi Imamichi and Hiroshi Horii.
The exascale era’s looming power requirements have researchers racing to ensure that systems operate within a maximum power budget of 20 megawatts. With Arm processors becoming a popular approach to increasing efficiency, these authors – a team from the Technical University of Valencia – analyze the combined use of Arm processors and the rCUDA remote GPU virtualization middleware, aiming to further increase efficiency. The authors conclude that applications could be sped up almost 8x while reducing energy consumption by up to 35 percent.
Authors: Carlos Reaño, Javier Prades and Federico Silla.
Traditional cooling methods are proving insufficient in the face of rapidly increasing computing needs and datacenter density. Cold plate cooling, which shows better results with high-density systems, is a burgeoning alternative. In this paper, written by a team from the Center for Development of Advanced Computing and the Vishwakarma Institute of Technology, the authors discuss their development of a high-accuracy computational fluid dynamics model for cold plate cooling, allowing for easier optimization of cold plate design.
Authors: Mohan Labade, Vikas Kumar and Mangesh Chaudhari.
This paper – written by a team from Government College Women University in Pakistan – also assesses how performance can be optimized in the exascale era under power limitations. The researchers review a number of existing strategies for enhancing performance and reducing power, as well as combinations of those strategies, concluding by suggesting a massively parallel programming mechanism.
Authors: Muhammad Usman Ashraf, Amna Arshad and Rabia Aslam.
Next-generation genomic sequencing has a wide variety of applications, but it requires systems that can process petabytes of genomic data. In this paper, Jitao Yang of Beijing Language and Culture University proposes a design and implementation plan for a genomics cloud. The cloud, which would be able to scale storage and computing capacity and provide easy access to genomics analytics, could feasibly allow scientists to avoid building HPC clusters and managing petabytes of genomic data themselves.
Author: Jitao Yang.
In this article, a team from Peking University, Purdue ECE, and Microsoft Research presents an empirical study of the deep learning functions in 16,500 of the most popular Android apps. The authors aim to answer three questions: which apps were early adopters of deep learning, what they use deep learning for, and how their deep learning models can be characterized. The authors paint a broad picture of deep learning on smartphones and conclude with recommendations for optimization and security.
Authors: Mengwei Xu, Jiawei Liu, Yuanqiang Liu, Felix Xiaozhu Lin, Yunxin Liu and Xuanzhe Liu.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.