In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
Micro-HPC (µHPC) may seem like an oxymoron, but such clusters are important for building HPC experience among students and trainees in developing countries with few HPC resources. Here, a duo from the University of Eastern Finland and the University of Warwick investigate the use of credit card-sized µHPC systems to train students in HPC skills and knowledge. They conclude that µHPC systems are “easy to use … for managing, deploying, installing, downloading, and running program systems except for writing parallel programs.”
Authors: Nkundwe Moses Mwasaga and Mike Joy.
Remote Direct Memory Access (RDMA) network interface controllers (NICs) allow system operators to use network offload to alleviate CPU loads. In this paper, a team from the KTH Royal Institute of Technology, KAUST and UT Austin present RedN, “a principled, practical approach to implementing complex RNIC offloads, without requiring any hardware modifications.” The authors demonstrate that RedN “can outperform one and two-sided RDMA implementations by up to 3x and 7.8x for key-value get operations and performance isolation, respectively[.]”
Authors: Waleed Reda, Marco Canini, Dejan Kostić and Simon Peter.
As the exascale era approaches, stumbling blocks await. In this paper, a researcher from Forschungszentrum Jülich discusses one such “pothole”: the parallelization difficulties imposed by increasingly complex interactions between hardware and system software. The paper examines this question in the context of HemeLB, the exascale flagship application code from the EU’s Centre of Excellence for Computational Biomedicine (CompBioMed), which runs on the SuperMUC-NG supercomputer.
Author: Brian J. N. Wylie.
“Permafrost thaw has been observed at several locations across the Arctic tundra in recent decades,” write these authors – a team from the University of Connecticut and the Woods Hole Research Center. “However, the pan-Arctic extent and spatiotemporal dynamics of thaw remains poorly explained.” To help explain it, they engaged in “knowledge discovery through artificial intelligence, big imagery and high-performance computing,” developing a tool called Mapping Application for Permafrost Land Environment (MAPLE). The researchers report “robust performances” of the tool when applied to diverse tundra landscapes.
Authors: Chandi Witharana, Md Abul Ehsan Bhuiyan and Anna K. Liljedahl.
“With growing demands in terms of aggregated bandwidth, scalability, transceiver form factor and cost, silicon photonics is expected to play a growing role,” write these authors from France, Germany and Israel. In the paper, they argue that silicon photonics may “pave the way to terabit-scale communications in datacenters and HPC systems,” stressing that “this new paradigm will be possible only with an evolution of existing silicon photonics manufacturing platforms, in order to solve the challenges of 3D packaging, laser integration, reflow-compatible optical connectors and high efficiency, low footprint modulators.”
Authors: S. Bernabé, Q. Wilmart, K. Hasharoni, K. Hassan, Y. Thonnart, P. Tissier, Y. Désières, S. Olivier, T. Tekin and B. Szelag.
In this article, nine authors from Fujitsu and RIKEN discuss the development of AI technology for the world-leading Fugaku supercomputer. They discuss deep learning initiatives for the Arm-based system, including the DL4Fugaku program and its resulting AI framework. They also address future initiatives, including future collaborations with the Arm community and the application of technologies such as content-aware computing.
Authors: Atsushi Nukariya, Kazutoshi Akao, Jin Takahashi, Naoto Fukumoto, Kentaro Kawakami, Akiyoshi Kuroda, Kazuo Minami, Kento Sato and Satoshi Matsuoka.
Here, a quintet from Oak Ridge National Laboratory (ORNL) discuss version 2.0 of the Oak Ridge Leadership Computing Facility (OLCF) Test Harness (OTH), a tool first introduced in 2007 for acceptance testing of the Jaguar supercomputer. The authors describe the design of the OTH and how it was improved for the acceptance of the Summit system. Looking toward the exascale era, they also evaluate challenges and lessons learned from the acceptance of the last three flagship systems at the OLCF.
Authors: Veronica G. Vergara Larrea, Michael J. Brim, Arnold Tharrington, Reuben Budiardja and Wayne Joubert.
Do you know about research that should be included in next month’s list? If so, send us an email at email@example.com. We look forward to hearing from you.