In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Exploring transfer learning to reduce training overhead of HPC data in machine learning
HPC scientific simulations often generate terabytes (or even petabytes) of data per run, and processing that amount of data for machine learning training can be taxing, taking days or even weeks. This paper, written by a team from Temple University, Shanghai Jiao Tong University and the New Jersey Institute of Technology, discusses the use of transfer learning to reduce this training overhead. The researchers find that transfer learning can reduce training time without (in most cases) significantly increasing the error rate.
Authors: Tong Liu, Shakeel Alibhai, Jinzhen Wang, Qing Liu, Xubin He and Chentao Wu.
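The core idea behind transfer learning as a way to cut training cost can be illustrated with a toy sketch: reuse a feature extractor learned on an abundant source task, and fit only a small output head on the scarce target task. This is a minimal illustration of the general technique, not the authors' method; the data, dimensions and frozen-projection setup below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup (illustrative only, not from the paper):
# d input features, h hidden features, n_tgt target-task samples.
n_tgt, d, h = 50, 10, 32

# Stand-in for a feature extractor pretrained on a large source dataset.
W_feat = rng.normal(size=(d, h))

def features(X):
    # Frozen "early layers": no training cost on the target task.
    return np.tanh(X @ W_feat)

# Target task with little data.
X_tgt = rng.normal(size=(n_tgt, d))
y_tgt = np.sin(X_tgt[:, 0]) + 0.1 * rng.normal(size=n_tgt)

# Transfer learning: train only the linear head on frozen features
# (here in closed form via least squares).
Phi = features(X_tgt)
head, *_ = np.linalg.lstsq(Phi, y_tgt, rcond=None)

mse = np.mean((Phi @ head - y_tgt) ** 2)
print(f"trainable parameters: {head.size} "
      f"(vs {W_feat.size + head.size} if trained from scratch)")
print(f"target-task training MSE: {mse:.3f}")
```

Only the head's parameters are optimized, which is why transfer learning shrinks training time: the expensive representation-learning step is amortized over the source run rather than repeated per target dataset.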
Moving toward exascale simulations of the magnetic universe
Cosmic structure simulations, these astrophysicists say, “are presently at the forefront of today’s use of supercomputers” – but the field “requires the development of new numerical methods that excel in accuracy, robustness, parallel scalability, and physical fidelity[.]” The authors describe the EXAMAG project, in which they worked on improving and applying AREPO, an astrophysical moving-mesh code. They discuss their efforts to release two major community coding projects, which they say represent the state of the art in the field.
Authors: Volker Springel, Christian Klingenberg, Rüdiger Pakmor, Thomas Guillet and Praveen Chandrashekar.
Using an FPGA-based HPC platform for cryptanalysis
Cryptanalysis is used to test the strength of cryptographic algorithms, but certain cryptanalytic strategies have – until now – been exclusively theoretical due to computational requirements. In this paper, written by a duo from the College of Engineering, Pune in India, the authors propose an FPGA-based HPC platform designed to meet those high requirements, achieving a computational complexity of 2^124 for an AES algorithm attack.
Authors: Harshali Zodpe and Ashok Sapkal.
Reflecting on HPC education and training in Australia
The Pawsey Supercomputing Centre in Kensington, Australia, is home to radio astronomy telescopes and supercomputers, but it’s also home to a variety of education, training and outreach activities aimed at Australian researchers. In this paper, a team from Pawsey highlights their efforts to match different learning methods and tools to specific education and training purposes.
Authors: Maciej Cytowski, Luke Edwards, Mark Gray, Christopher Harris, Karina Nunez and Aditi Subramanya.
Developing energy-efficient algorithms for exascale weather prediction
Weather prediction at exascale is anticipated to be an extraordinarily energy-intensive task. These researchers – hailing from over a dozen weather and research institutions – here discuss the Energy-efficient Scalable Algorithms for Weather Prediction at Exascale (ESCAPE) project, which aimed to “develop a sustainable strategy to evolve weather and climate prediction models to next-generation computing technologies.”
Authors: Andreas Müller, Willem Deconinck, Christian Kühnlein, Gianmarco Mengaldo, Michael Lange, Nils Wedi, Peter Bauer, Piotr K. Smolarkiewicz, Michail Diamantakis, Sarah-Jane Lock, et al.
Conducting earthquake simulations on Sierra and Lassen
Earthquakes can cause hundreds of millions of dollars in damage. In this paper, written by Arthur Rodgers of Lawrence Livermore National Laboratory (LLNL), Rodgers describes physics-based 3D numerical simulations of earthquake ground motions along the hazardous Hayward Fault in California. “These simulations using the newly enhanced … code allow us to run higher resolution seismic simulations with shorter run times, providing a new capability for seismic hazard and risk studies,” Rodgers wrote.
Author: Arthur Rodgers.
Using performance-driven analysis for adaptive car navigation services on HPC systems
With the advent of self-driving cars and real-time traffic data, larger and more powerful computing systems are needed to process the vast volumes of data being produced. In this paper, a team from the Polytechnic University of Milan introduces an adaptive car navigation system and a performance model used to adjust the scale of the computing infrastructure as necessary. The researchers present their model validation process, which used data from the urban area of Milan.
Authors: Leonardo Arcari, Marco Gribaudo, Gianluca Palermo and Giuseppe Serazzi.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.