In this new twice-monthly feature, HPCwire will highlight newly published research in the high-performance computing community and related domains. From exascale to quantum computing, the details are here. Check back on the second and fourth Mondays of each month for more!
Exploring the nature and patterns of fatal events in IBM Blue Gene/Q Mira
With supercomputers costing many millions of dollars and scaling to thousands of nodes, reliability is a first-class concern. In this paper, researchers from Argonne National Laboratory “explore potential correlations of fatal system events for one of the most powerful supercomputers – IBM Blue Gene/Q Mira, which is deployed at Argonne National Laboratory, based on its 5-year reliability, availability, and serviceability (RAS) log.” They “summarize six important ‘takeaways’ which can help system vendors and administrators better understand an extreme-scale system’s fatal events” and believe that their work will be useful to “large-scale HPC system administrators and vendors and to fault tolerance researchers, enabling them to better understand fatal events and mitigate such events accordingly[.]”
Authors: Sheng Di, Hanqi Guo, and Rinku Gupta
Evaluating the TensorFlow programming model for solving HPC problems
TensorFlow is the super-star framework of the AI world, but what about using TensorFlow for HPC? This paper, written by a team of researchers from the KTH Royal Institute of Technology in Stockholm, “attempts to evaluate the usability and expressiveness of the TensorFlow programming model for traditional HPC problems.” The researchers “prototyped a distributed block matrix multiplication for large dense matrices which cannot be co-located on a single device and a Conjugate Gradient (CG) solver” and “[evaluated] the difficulty of expressing traditional HPC algorithms using computational graphs and study the scalability of distributed TensorFlow on accelerated systems.” They found that “TensorFlow is extremely scalable.”
Authors: Steven Wei Der Chien, Stefano Markidis, Ivy Bo Peng, and Erwin Laure
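For a flavor of what expressing a traditional solver as TensorFlow operations looks like, here is a minimal, single-device Conjugate Gradient sketch. This is our illustration in eager-mode TensorFlow 2.x, not the authors' distributed prototype; the matrix A, right-hand side b, and tolerance are placeholder inputs.

```python
import tensorflow as tf

def conjugate_gradient(A, b, max_iter=100, tol=1e-6):
    """Solve A x = b for a symmetric positive-definite A using CG,
    expressed entirely with TensorFlow tensor operations."""
    x = tf.zeros_like(b)                 # initial guess x0 = 0
    r = b - tf.linalg.matvec(A, x)       # residual r0 = b - A x0
    p = r                                # initial search direction
    rs_old = tf.reduce_sum(r * r)
    for _ in range(max_iter):
        Ap = tf.linalg.matvec(A, p)
        alpha = rs_old / tf.reduce_sum(p * Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = tf.reduce_sum(r * r)
        if tf.sqrt(rs_new) < tol:        # converged
            break
        p = r + (rs_new / rs_old) * p    # update search direction
        rs_old = rs_new
    return x

# Hypothetical usage on a small dense system
A = tf.constant([[4.0, 1.0], [1.0, 3.0]])
b = tf.constant([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The paper's point is that once the algorithm is written as tensor operations like these, TensorFlow's runtime can place and scale the resulting computational graph across devices.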
Investigating the potential for FPGAs to feature in future exascale platforms
With FPGAs gaining momentum, a group of researchers from the University of Cambridge set out to “investigate the potential for [FPGAs] to feature in future exascale platforms, and their capacity to improve performance per unit power measurements for the purposes of scientific computing.” They “[focused their] efforts on Variational Monte Carlo, and report on the benefits of co-processing with an FPGA relative to a purely multicore system.” They “established that [their] implementation offers significant benefits in terms of raw compute performance and reduced power consumption.”
Authors: Salvatore Cardamone, Jonathan R. Kimmitt, Hugh G. A. Burton, and Alex J. W. Thom
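As a rough illustration of the kind of sampling kernel Variational Monte Carlo spends its time in (and hence what an FPGA co-processor would accelerate), here is a toy Metropolis loop for a one-dimensional harmonic oscillator with trial wavefunction psi(x) = exp(-alpha * x^2). This is our simplification, not the quantum-chemistry kernels benchmarked in the paper.

```python
import numpy as np

def vmc_energy(alpha, n_samples=100_000, step=1.0, rng=None):
    """Estimate the variational energy of a 1D harmonic oscillator with
    trial wavefunction psi(x) = exp(-alpha * x**2) via Metropolis sampling."""
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    energies = []
    for _ in range(n_samples):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # Metropolis acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        ratio = np.exp(-2.0 * alpha * (x_new**2 - x**2))
        if rng.uniform() < ratio:
            x = x_new
        # Local energy for this trial wavefunction
        energies.append(alpha + x**2 * (0.5 - 2.0 * alpha**2))
    return np.mean(energies)

print(vmc_energy(alpha=0.5))  # exact ground state -> energy 0.5
```

The inner loop is embarrassingly repetitive arithmetic on a small state, which is why it maps naturally onto reconfigurable hardware.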
Bringing reconfigurable hardware to future high-performance applications
As computer architecture trends toward heterogeneous platforms, programming these machines poses unique difficulties. This paper, written by Alessandro Cilardo of the University of Naples, “describes the main outcomes of the HtComp project, a two-year research programme aimed at exploring methodologies and tools allowing the automated generation of FPGA-based accelerators from high-level applications written in traditional software languages.” Specifically, the researchers focus on “the main contributions brought by the project, covering the generation of hardware systems from high-level parallel code, the performance-oriented optimisation of memory architectures tailored on the application access patterns, as well as the automated definition of application-driven special-purpose on-chip interconnects.” They conclude that “the above innovations contributed to creating a viable path allowing generic software developers to access tomorrow’s hardware-accelerated high-performance platforms with minimum development overheads.”
Author: Alessandro Cilardo
Analyzing neural network states for the classical simulation of quantum computing
The simulation of quantum algorithms can impose exponential resource requirements. In an attempt to improve on efficiency for simulating certain circuit structures, the authors of this paper “introduce a classical approach to the simulation of general quantum circuits based on neural-network quantum states (NQS) representations.” They “derive rules for exactly applying single-qubit and two-qubit Z rotations to NQS” and “provide a learning scheme to approximate the action of Hadamard gates.” They conclude that “[the] overall accuracy obtained by the neural-network states based on Restricted Boltzmann machines is satisfactory, and offers a classical route to simulating highly-entangled circuits[.]”
Authors: Bjarni Jónsson, Bela Bauer, and Giuseppe Carleo
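For context, the restricted Boltzmann machine ansatz underlying these neural-network quantum states assigns each computational-basis configuration an amplitude by tracing out binary hidden units; in the standard (paper-independent) notation:

```latex
\psi(\sigma_1,\dots,\sigma_N)
  = \sum_{\{h_j = \pm 1\}} \exp\!\Big(\sum_i a_i \sigma_i + \sum_j b_j h_j + \sum_{i,j} W_{ij}\,\sigma_i h_j\Big)
  = e^{\sum_i a_i \sigma_i} \prod_j 2\cosh\!\Big(b_j + \sum_i W_{ij}\,\sigma_i\Big)
```

Simulating a circuit then amounts to updating the parameters a, b, and W so the state tracks each gate, which the authors do exactly for single- and two-qubit Z rotations and approximately, via a learning scheme, for Hadamards.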
Using physics-informed machine learning for DRAM error modeling
With high-performance computing facilities accelerating into the exascale era, addressing hardware failures is increasingly important. These researchers — a team from Los Alamos National Laboratory, Sandia National Laboratories, and AMD — “investigate the predictability of DRAM errors using field data from two recently decommissioned supercomputers: Cielo, at Los Alamos National Laboratory, and Hopper, at Lawrence Berkeley National Laboratory.” They “apply statistical machine learning to predict the probability of DRAM errors at previously un-accessed locations” and “compare the predictive performance of six machine learning algorithms,” finding that “a model incorporating physical knowledge of DRAM spatial structure outperforms purely statistical methods.”
Authors: Elisabeth Baseman, Nathan DeBardeleben, Sean Blanchard, Juston Moore, Olena Tkachenko, Kurt Ferreira, Taniya Siddiqua, and Vilas Sridharan
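To make the “physics-informed” idea concrete, a sketch along the following lines exposes DRAM's spatial hierarchy (bank, row, column, rank) to an off-the-shelf learner. The feature names, synthetic data, and the choice of a random forest are our assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-DRAM-location records: the physics-informed idea is to
# expose the chip's spatial hierarchy as features, rather than treating each
# location as an opaque identifier.
rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.integers(0, 8, n),      # bank index
    rng.integers(0, 2**15, n),  # row index
    rng.integers(0, 2**10, n),  # column index
    rng.integers(0, 4, n),      # rank
    rng.poisson(0.5, n),        # prior error count on the same row (assumed feature)
])
y = rng.integers(0, 2, n)       # 1 = error observed at this location (synthetic labels)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
p_error = model.predict_proba(X[:5])[:, 1]  # predicted error probabilities
```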
Establishing hybrid entanglement of three quantum memories with three photons
In this paper, researchers from the Hefei National Laboratory for Physical Sciences, the University of Science and Technology of China, and the CAS-Alibaba Quantum Computing Laboratory “report an experiment realizing hybrid entanglement between three photons and three atomic-ensemble quantum memories.” They “make use of three similar setups, in each of which one pair of photon-memory entanglement with high overall efficiency is created via cavity enhancement.” They believe that this work “demonstrates the largest size of hybrid memory-photon entanglement, which may be employed as a build[ing] block to construct larger and [more] complex quantum network[s].”
Authors: Bo Jing, Xu-Jie Wang, Yong Yu, Peng-Fei Sun, Yan Jiang, Sheng-Jun Yang, Wen-Hao Jiang, Xi-Yu Luo, Jun Zhang, Xiao Jiang, Xiao-Hui Bao, and Jian-Wei Pan
Improving efficiency and resilience in HPC through analytics and data-driven management
As more and more facets of our day-to-day lives rely on large-scale computing systems, efficiently managing those systems is crucial. In this paper, a researcher from Boston University proposes “novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management.” The author shows that “by training machine learning models on resource usage and performance data collected from servers, [the] approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead.”
Integrating low-latency analysis into HPC system monitoring
While system monitoring data is increasingly available from HPC systems, analysis of that data is often too slow to be meaningfully actionable. These researchers — from UCF, Sandia National Laboratories, and Open Grid Computing — “enhance the architecture of a monitoring system used on large-scale computational platforms, to integrate streaming analysis capabilities at arbitrary locations within its data collection, transport, and aggregation facilities.” They “leverage the flexible communication topology of the monitoring system to enable placement of transformations based on overhead concerns, while still enabling low-latency exposure on node.” Finally, they “show the viability of [their] implementation for a case with production-relevance: run-time determination of the relative per-node file system demands.”
Authors: Ramin Izadpanah, Nichamon Naksinehaboon, Jim Brandt, Ann Gentile, and Damian Dechev
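As a back-of-the-envelope illustration of that final use case, a streaming transformation for relative per-node file-system demand could reduce each window of I/O counters to per-node shares. The field names here are our assumptions, not the monitoring system's actual schema.

```python
from collections import defaultdict

def relative_fs_demand(samples):
    """Given a window of per-node I/O counter samples, return each node's
    share of total file-system traffic. `samples` is an iterable of
    (node_id, bytes_read, bytes_written) tuples (assumed fields)."""
    totals = defaultdict(int)
    for node, rd, wr in samples:
        totals[node] += rd + wr
    grand_total = sum(totals.values()) or 1
    return {node: t / grand_total for node, t in totals.items()}

# Hypothetical window of samples from three nodes
window = [("n01", 4_000, 1_000), ("n02", 500, 250), ("n03", 8_000, 2_250)]
print(relative_fs_demand(window))
```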
Scaling HPC benchmarking and looking beyond the average
Creating an efficient, balanced high-performance system requires an understanding of major bottlenecks. In this paper, researchers from the Barcelona Supercomputing Center and Universitat Politècnica de Catalunya “execute seven production HPC applications on a production HPC platform, and analyse the key performance bottlenecks: FLOPS performance and memory bandwidth congestion, and the implications on scaling out.” They find that “results depend significantly on the number of execution processes and granularity of measurements” and “advocate for guidance in the application suites, on selecting the representative scale of the experiments,” proposing that “the FLOPS performance and memory bandwidth should be represented in terms of the proportions of time with low, moderate and severe utilization[.]”
Authors: Milan Radulovic, Kazi Asifuzzaman, Paul Carpenter, Petar Radojković, and Eduard Ayguadé
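To illustrate the proposed reporting style, one can bucket a utilization time series into low, moderate, and severe regimes and report the proportion of time spent in each. The thresholds below are illustrative placeholders, not values from the paper.

```python
def utilization_profile(samples, low=0.4, severe=0.8):
    """Classify each sample of utilization (fraction of peak) as low,
    moderate, or severe, and return the proportion of time in each regime.
    The 40% / 80% thresholds are assumed for illustration."""
    buckets = {"low": 0, "moderate": 0, "severe": 0}
    for u in samples:
        if u < low:
            buckets["low"] += 1
        elif u < severe:
            buckets["moderate"] += 1
        else:
            buckets["severe"] += 1
    n = len(samples) or 1
    return {k: v / n for k, v in buckets.items()}

# Hypothetical memory-bandwidth utilization trace (fraction of sustained peak)
trace = [0.15, 0.35, 0.55, 0.92, 0.88, 0.47, 0.10, 0.95]
print(utilization_profile(trace))
# -> {'low': 0.375, 'moderate': 0.25, 'severe': 0.375}
```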
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.