In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Exploiting hardware events to reduce HPC energy consumption
As Moore’s law slows and HPC systems continue to expand, energy consumption is a growing concern for system operators. In this article, five researchers from South Korea’s Sungkyunkwan University explain how they developed the Event-driven Uncore Frequency Scaler (eUFS), a mechanism that improves energy efficiency by adapting the frequency of the CPU’s non-core (“uncore”) hardware in response to events such as cache accesses and clock cycles, reducing energy use by an average of six percent.
Authors: Yongho Lee, Osang Kwon, Kwangeun Byeon, Yongjun Kim and Seokin Hong.
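The event-driven idea can be illustrated with a simplified governor loop: sample hardware event counters, derive a memory-traffic metric, and pick an uncore frequency step accordingly. This is a minimal sketch with hypothetical frequency steps and thresholds — the authors’ actual eUFS policy is described in their paper.

```python
# Illustrative event-driven uncore frequency selection.
# Frequency steps and thresholds below are hypothetical examples,
# not the values used by eUFS.

UNCORE_FREQS_MHZ = [1200, 1800, 2400]  # assumed low/mid/high uncore steps

def choose_uncore_freq(llc_accesses: int, cycles: int) -> int:
    """Pick an uncore frequency from last-level-cache accesses per kilo-cycle.

    Low memory traffic -> a slow uncore wastes little performance but
    saves energy; memory-bound phases keep the uncore fast.
    """
    if cycles == 0:
        return UNCORE_FREQS_MHZ[0]
    apkc = 1000 * llc_accesses / cycles  # LLC accesses per kilo-cycle
    if apkc < 1.0:       # compute-bound phase: save uncore energy
        return UNCORE_FREQS_MHZ[0]
    elif apkc < 10.0:    # moderate memory traffic
        return UNCORE_FREQS_MHZ[1]
    else:                # memory-bound phase: keep the uncore fast
        return UNCORE_FREQS_MHZ[2]
```

A real implementation would read these counters from the performance monitoring unit each sampling interval and write the chosen step to the platform’s uncore frequency interface.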
Assessing the use cases of persistent memory in scientific HPC
These researchers from Google and Israeli research institutions explore the applications of Intel’s Optane persistent memory in HPC-driven science: specifically, by replacing standard storage devices and by replacing or augmenting DRAM. To evaluate these configurations, they set up an HPC system for testing and ran a series of scientific codes, concluding that the persistent memory “allows scientific applications to fully utilize nodes’ locality by providing them sufficiently-large main memory. Moreover, it can also be used for providing a high-performance replacement for persistent storage.”
Authors: Yehonatan Fridman, Yaniv Snir, Matan Rusanovsky, Kfir Zvi, Harel Levin, Danny Hendler, Hagit Attiya and Gal Oren.
Improving concentrating solar power through HPC-powered simulations
Concentrating solar power (CSP) generates energy by using large mirrors to concentrate sunlight onto molten mixtures that are then used to produce steam for turbines. In this dissertation from Stanford University, Hilario Cardenas Torres recounts using HPC-powered multi-physics simulations to represent particle-laden turbulent flows in the radiation environments of CSP plants. Naming the solver Soleil-X, Cardenas Torres explains the development process and a test case on a particle jet heated with 10 kW.
Author: Hilario Cardenas Torres.
Developing the ExaWorks project to enable exascale workflows
In preparation for the arrival of exascale supercomputing, these researchers – a team from Rutgers, Lawrence Livermore National Laboratory, Argonne National Laboratory, the University of Chicago and Brookhaven National Laboratory – present ExaWorks, a project operating under the umbrella of the Exascale Computing Project (ECP). The new project, they say, can address many of the challenges in managing deeply heterogeneous exascale systems and software through a variety of workflow management tools. The authors elaborate on the development of the system and early partnerships with the community and large computing facilities.
Authors: Aymen Al-Saadi, Dong H. Ahn, Yadu Babuji, Kyle Chard, James Corbett, Mihael Hategan, Stephen Herbein, Shantenu Jha, Daniel Laney, Andre Merzky, et al.
Moving towards a codesign approach for Europe’s next supercomputers
The EuroHPC Joint Undertaking recently launched its first four supercomputers – but its stewards are already looking far into the future. In this paper, dozens of authors from more than a dozen institutions across Europe introduce the TEXTAROSSA project. The project, led by Italy, consists of 17 institutions and companies and aims to bridge key technology gaps for European supercomputing hardware and software.
Authors: Giovanni Agosta, Daniele Cattaneo, William Fornaciari, Andrea Galimberti, Giuseppe Massari, Federico Reghenzani, Federico Terraneo, Davide Zoni, Carlo Brandolese, Massimo Celino, et al.
Developing an HPC framework for searching mass spectrometry data
“There has been substantial effort in improving” the computational efficiency of peptide database search algorithms, explain these authors from Florida International University, but “modern serial and [HPC] algorithms exhibit suboptimal performance mainly due to their ineffective parallel designs … and high overhead costs.” In their paper, they present an HPC framework called HiCOPs that efficiently accelerates these peptide search algorithms on distributed-memory supercomputers.
Authors: Muhammad Haseeb and Fahad Saeed.
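A distributed-memory peptide search typically partitions the theoretical peptide database so that each node scores incoming spectra against only its own shard, then reduces the partial results to a global best hit. The toy, single-process sketch below illustrates that partition-and-reduce pattern only — it is not HiCOPS’s actual design, and the mass-error “score” is a stand-in for real spectral scoring.

```python
# Toy partition-and-reduce peptide search (illustrative only).
# Database entries are (peptide_sequence, peptide_mass) pairs.

def partition(db, n_ranks):
    """Split the peptide database into roughly equal round-robin shards."""
    return [db[r::n_ranks] for r in range(n_ranks)]

def search(db, spectrum_mass, n_ranks=4):
    """Each 'rank' scores its shard; partial results are reduced to the
    peptide with the smallest mass error (a stand-in scoring function)."""
    best_pep, best_err = None, float("inf")
    for shard in partition(db, n_ranks):      # one loop iteration per rank
        for pep, mass in shard:               # local scoring on that shard
            err = abs(spectrum_mass - mass)
            if err < best_err:                # reduction to the global best
                best_pep, best_err = pep, err
    return best_pep
```

In a real distributed run, the outer loop would be replaced by concurrent ranks (e.g., MPI processes), with the reduction performed across nodes.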
Accommodating deadline-driven jobs on grid-based HPC platforms
“Grid computing is a connected computing infrastructure that furnishes reliable, stable, ubiquitous, and economic access to high-end computational power,” write these authors from Turkey, West Virginia and Malaysia. However, they write, “[the] dynamic nature of the grid brings several challenges to scheduling algorithms that operate in queuing-based scheduling[.]” In their paper, they explain how a proposed “swift gap” mechanism could use backfilling and local search optimization to reduce delays by placing jobs in the gaps that guarantee the best start times and fastest resources.
Authors: Omar Dakkak, Yousef Fazea, Shahrudin Awang Nor and Suki Arif.
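The gap-placement idea behind backfilling can be sketched in a few lines: given the idle gaps in a schedule, place an incoming job into the gap that yields the earliest feasible start time. This is a minimal illustration of backfilling in general, not the authors’ “swift gap” algorithm, which adds local search optimization over resources.

```python
from typing import List, Optional, Tuple

# Each gap is an (start_time, end_time) interval of idle capacity.
Gap = Tuple[float, float]

def backfill(gaps: List[Gap], runtime: float) -> Optional[float]:
    """Return the earliest start time of a gap large enough to hold a job
    of the given runtime, or None if the job must wait in the queue."""
    best: Optional[float] = None
    for start, end in gaps:
        fits = (end - start) >= runtime
        if fits and (best is None or start < best):
            best = start
    return best
```

For example, with gaps at (10, 20) and (30, 100), an 8-unit job backfills into the first gap and starts at time 10 instead of waiting behind the queue.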
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.