In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Cluster-scale gravitational lenses are a valuable probe for cosmology and dark matter studies. In this paper, written by a team from France and Switzerland, the authors present Lenstool-HPC, a highly parallel gravitational lens modeling and map generation tool built on HPC design principles. They describe testing of the software on the Swiss National Supercomputing Centre’s Piz Daint cluster and outline the results.
Authors: C. Schäfer, G. Fourestey and J.-P. Kneib.
Rendering intensive 3D models is a growing use case for HPC systems, but energy consumption remains a major cost consideration for most companies weighing HPC adoption. In this paper, a team from the Technical University of Ostrava in the Czech Republic describes its efforts to improve the energy efficiency of a popular rendering tool, Blender, on a typical HPC system. Using a tuner of their own design, the researchers achieve energy consumption reductions of 9 percent, albeit at the cost of a longer runtime.
Authors: M. Jaros, O. Vysocky, P. Strakos and M. Spetko.
HPC demand is outpacing on-premises HPC supply, but cloud-based HPC is rapidly growing. In this paper, a team of researchers from Russia and China presents “MC2E,” an environment that aggregates heterogeneous HPC resources (such as public and private clouds) for multidisciplinary academic research. MC2E allows users to schedule parallel applications between clouds and supercomputers based on performance and resource usage.
Authors: V. Antonenko, I. Petrov, R. Smeliansky, Z. Huang, M. Chen, D. Cao and X. Chen.
The Human Brain Project is a massive, decade-long project to better understand the human brain and turn that knowledge into valuable treatments and technology. These researchers – a team from ten different institutions – outline the synergy between the Human Brain Project’s neuroscientific aims and high-performance computing, highlighting HPC’s interdisciplinary role in conjunction with fields like neuroinformatics and neurorobotics.
Authors: Katrin Amunts, Alois C. Knoll, Thomas Lippert, Cyriel M. A. Pennartz, Philippe Ryvlin, Alain Destexhe, Viktor K. Jirsa, Egidio D’Angelo and Jan G. Bjaalie.
As the world moves away from fossil fuels, research into biofuels is ramping up. This doctoral thesis, written by Veeraraghava Raju Hasti, uses a high-performance computing model based on large eddy simulation to capture the combustion behavior of different biofuels. Using this model, the author developed AI-based models for early detection of blowout in a realistic combustor.
Author: Veeraraghava Raju Hasti.
Extreme-scale cosmology simulations are crucial to understanding our universe, but the data they produce can be unwieldy, necessitating compression. In this paper, researchers from the University of Alabama and Los Alamos National Laboratory propose the use of GPU-based lossy compression for these simulations, demonstrating that this compression can provide the necessary accuracy for post-analysis while achieving a high compression ratio.
Authors: Sian Jin, Pascal Grosset, Christopher M. Biwer, Jesus Pulido, Jiannan Tian, Dingwen Tao and James Ahrens.
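The key property of the error-bounded lossy compression the authors rely on is that every reconstructed value is guaranteed to land within a user-chosen error bound of the original. The toy sketch below illustrates that idea only — it is not the paper's GPU implementation, and the error bound, data, and function names are all illustrative: values are quantized into integer steps of twice the bound (so reconstruction can never stray further than the bound), then the highly regular integer stream is entropy-compressed.

```python
import random
import struct
import zlib

ERROR_BOUND = 1e-3  # absolute error bound (hypothetical choice)

def compress(values, bound):
    # Quantize each value to an integer count of 2*bound steps, so the
    # reconstructed value stays within +/- bound of the original.
    codes = [round(v / (2 * bound)) for v in values]
    raw = struct.pack(f"{len(codes)}q", *codes)
    # The quantized integers are small and repetitive, so a generic
    # lossless stage (here zlib) shrinks them substantially.
    return zlib.compress(raw)

def decompress(blob, count, bound):
    codes = struct.unpack(f"{count}q", zlib.decompress(blob))
    return [c * 2 * bound for c in codes]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

blob = compress(data, ERROR_BOUND)
restored = decompress(blob, len(data), ERROR_BOUND)

max_err = max(abs(a - b) for a, b in zip(data, restored))
ratio = (len(data) * 8) / len(blob)  # versus 8-byte doubles
print(f"max error {max_err:.2e} (bound {ERROR_BOUND}), ratio {ratio:.1f}x")
```

Loosening the bound raises the compression ratio at the cost of accuracy — the trade-off the paper evaluates for cosmological post-analysis.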
HPC systems collect enormous amounts of data during operation, which can be leveraged to improve operations over time. These German researchers worked to derive a model with supervised learning that helps to select the optimal CPU frequency when executing a job, with the aim of minimizing energy usage. The researchers demonstrate “good prediction” of CPU power draw with a low margin of error.
Authors: Gence Ozer, Sarthak Garg, Neda Davoudi, Gabrielle Poerwawinata, Matthias Maiterth, Alessio Netti and Daniele Tafani.
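The approach described above — learn a power model from past jobs, then pick the CPU frequency that minimizes predicted energy — can be sketched in miniature. This is not the authors' model; the measurements, the linear power fit, and the assumption that runtime scales inversely with frequency are all illustrative simplifications.

```python
# Hypothetical training data: (CPU frequency in GHz, measured power in W).
samples = [(1.2, 45.0), (1.6, 58.0), (2.0, 74.0), (2.4, 93.0), (2.8, 115.0)]

def fit_line(points):
    # Ordinary least squares for power ~ a * freq + b.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line(samples)

def predicted_energy(freq, base_runtime=100.0, base_freq=2.8):
    # Assume runtime scales inversely with frequency -- a simplification
    # that ignores memory-bound phases of real jobs.
    runtime = base_runtime * base_freq / freq
    power = a * freq + b
    return power * runtime  # joules = watts * seconds

candidates = [1.2, 1.6, 2.0, 2.4, 2.8]
best = min(candidates, key=predicted_energy)
print(f"lowest predicted energy at {best} GHz")
```

A production system would use a richer feature set (per-job hardware counters rather than frequency alone) and a validated runtime model, but the selection logic — predict, then minimize over available frequencies — is the same.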
Do you know about research that should be included in next month’s list? If so, send us an email at email@example.com. We look forward to hearing from you.