In this twice-monthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From exascale to quantum computing, the details are here. Check back on the first and third Mondays of each month for more!
Predicting properties of complex materials with supercomputers
Hybrid organic-inorganic perovskites (HOIPs) are popular semiconductors for solar cells, LEDs, and other light-based devices. Though the physics behind how the properties of HOIPs emerge was previously poorly understood, a team of materials scientists from Duke University has successfully used a supercomputer to predict the electrical and optical properties of HOIPs. This research opens the door to accurate models of these materials, allowing for a more rigorous exploration of new designs.
Authors: Chi Liu, William Huhn, Ke-Zhao Du, Alvaro Vazquez-Mayagoitia, David Dirkes, Wei You, Yosuke Kanai, David B. Mitzi, and Volker Blum
Supporting high-performance and high-throughput computing for experimental science
With the rise of large experimental science facilities like the Large Hadron Collider and the Laser Interferometer Gravitational Wave Observatory, there is a strong need for both high-performance computing (HPC) and high-throughput computing (HTC). In this paper, written by a team from the National Center for Supercomputing Applications and Rutgers University, the researchers argue that the traditionally separate HPC and HTC infrastructures must be integrated and unified.
Authors: E.A. Huerta, Roland Haas, Shantenu Jha, Mark Neubauer, and Daniel S. Katz
Building an HPC certification program
Training new HPC practitioners is crucial to the growth of the HPC community. This paper, written by researchers from the University of Reading, argues that an HPC certification program would help employers identify and overcome knowledge gaps and help users identify relevant skills. The researchers outline a first version of an HPC certification program that would categorize, define, and examine competencies.
Authors: J. Kunkel, K. Himstedt, N. Hübbe, H. Stüben, S. Schröder, M. Kuhn, M. Riebisch, S. Olbrich, T. Ludwig, W. Filinger, J.-T. Acquaviva, A. Gerbes, and L. Lafayette.
Using exascale deep learning for climate analytics
In this paper, researchers from Nvidia, Berkeley Lab, and Oak Ridge National Laboratory extracted pixel-level masks of extreme weather patterns using neural networks running on the Piz Daint and Summit supercomputers. The authors outline the improvements to the software, input pipeline, and network training algorithms that allowed the neural networks to scale on the supercomputers.
This research is the first demonstration of a deep learning application breaking the exaop barrier, and the researchers are finalists for the Gordon Bell Prize.
Authors: Thorsten Kurth, Sean Treichler, Joshua Romero, Mayur Mudigonda, Nathan Luehr, Everett Phillips, Ankur Mahesh, Michael Matheson, Jack Deslippe, Massimiliano Fatica, Prabhat, and Michael Houston.
Improving the training speed of ranking methods with HPC
The learning to rank (LTR) method for solving ranking problems is generally effective and has applications in anti-spam, search engines, data mining, and more. However, training is a significant bottleneck in the process. Researchers from Xidian University applied parallel computing to accelerate the training, achieving improved results and good portability.
Authors: Huming Zhu, Pei Li, Peng Zhang, and Zheng Luo.
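To give a sense of why LTR training parallelizes well, here is a minimal pairwise sketch using a RankNet-style logistic loss on document pairs. It illustrates the general LTR training loop only; the feature vectors, learning rate, and loss are illustrative assumptions, not the specific algorithm or data the Xidian team accelerated.

```python
import math

def score(w, x):
    """Linear ranking score: dot product of weights and features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise(pairs, dim, lr=0.1, epochs=100):
    """Train on (x_pos, x_neg) pairs where x_pos should rank above x_neg.

    Minimizes the logistic pairwise loss log(1 + exp(-(s_pos - s_neg)))
    by gradient descent.
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for x_pos, x_neg in pairs:
            diff = score(w, x_pos) - score(w, x_neg)
            # Gradient of log(1 + exp(-diff)) with respect to diff
            g = -1.0 / (1.0 + math.exp(diff))
            for i in range(dim):
                w[i] -= lr * g * (x_pos[i] - x_neg[i])
    return w

# Toy pairs: the first vector in each pair should outrank the second.
pairs = [([1.0, 0.0], [0.0, 1.0]), ([0.8, 0.1], [0.2, 0.9])]
w = train_pairwise(pairs, dim=2)
```

Because each pair's gradient is computed independently, the inner loop is naturally data-parallel, which is the kind of structure that makes GPU or multicore acceleration of LTR training attractive.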
Using HPC to create a weather prediction model secured by DNA cryptography
Weather prediction systems require processing large amounts of past data. In this paper, researchers from the Institute of Engineering and Management in India use a Markov chain model to develop a weather prediction model for implementation on an HPC system. They also protect the model with a DNA cryptography-based algorithm for secure data transmission between systems. The model reportedly achieved 85-95% accuracy in tests.
Authors: Animesh Kairi, Suruchi Gagan, Tania Bera, and Mohuya Chakraborty.
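The core idea of a Markov chain weather model is that tomorrow's weather depends only on today's, with transition probabilities estimated from historical data. A minimal first-order sketch follows; the states and sample history are illustrative placeholders, not the paper's actual data or implementation.

```python
from collections import defaultdict

def train(history):
    """Estimate transition probabilities from a sequence of past states."""
    counts = defaultdict(lambda: defaultdict(int))
    for today, tomorrow in zip(history, history[1:]):
        counts[today][tomorrow] += 1
    probs = {}
    for state, nxt in counts.items():
        total = sum(nxt.values())
        probs[state] = {s: c / total for s, c in nxt.items()}
    return probs

def predict(probs, today):
    """Return the most likely next state given today's weather."""
    return max(probs[today], key=probs[today].get)

# Toy historical sequence of daily observations
history = ["sunny", "sunny", "cloudy", "rainy", "cloudy", "sunny", "sunny"]
model = train(history)
forecast = predict(model, "sunny")
```

On an HPC system, the counting step over years of observations is embarrassingly parallel, since transition counts from disjoint chunks of the history can be accumulated independently and merged.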
Assessing the energy efficiency of HPC datacenters
Energy consumption is a major constraint on HPC datacenters. In this dissertation, Torsten Wilde of the Technical University of Munich introduces a “common frame of reference for the datacenter energy efficiency research domain” and uses the framework to identify gaps and generate a new metric for datacenter energy efficiency. The author goes on to show the presence of node power variability in homogeneous HPC systems and discuss remedies.
Author: Torsten Wilde
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.