In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
These four authors – each from a different national laboratory – argue that the HPC software stack has been allowed to “stagnate, relying on incremental changes to tried-and-true designs” in the face of a much more rapidly shifting hardware environment. They further contend that “a modern system software stack that focuses on manageability, scalability, security and modern methods will benefit the entire HPC community,” and go on to enumerate the characteristics of such a stack.
Authors: Benjamin S. Allen, Matthew A. Ezell, Doug Jacobsen and Cory Lueninghoener.
The Large Hadron Collider (LHC) in Switzerland remains the world’s largest particle collider (and, indeed, the world’s largest machine). These authors – a trio from CERN and the NRC Kurchatov Institute – describe the work undertaken to integrate workflows from the LHCb experiment (which is investigating the beauty quark) on the Marconi-A2 supercomputer at CINECA in Italy. The authors discuss the challenges involved in the process, including optimization for a many-core architecture.
Authors: Federico Stagni, Andrea Valassi and Vladimir Romanovskiy.
Simulating energy flows is a common application of high-performance computing, whether in engines or – as in this case – in power generation. With the exascale era swiftly approaching, these authors (from Pennsylvania State University, Argonne National Laboratory and Texas A&M University) review “landmark simulations” of nuclear reactor components made possible by unprecedented computing resources, discussing the results and their implications for the future of nuclear reactor simulations.
Authors: Elia Merzari, Paul Fischer, Misun Min, Stefan Kerkemeier, Aleksandr Obabko, Dillon Shaver, Haomin Yuan, Yiqi Yu, Javier Martinez, Landon Brockmeyer, Lambert Fick, Giacomo Busco, Alper Yildiz and Yassin Hassan.
Modeling astrophysical explosions like supernovae requires powerful computing resources. These authors – a team from Nvidia, Lawrence Berkeley National Laboratory, Stony Brook University and the University of California, Berkeley – describe recent changes made to nuclear astrophysics codes to prepare them for exascale by ensuring compatibility with current pre-exascale systems. The authors also provide an overview of the science in this area that is now possible thanks to new, more powerful supercomputers.
Authors: Max P. Katz, Ann Almgren, Maria Barrios Sazo, Kiran Eiden, Kevin Gott, Alice Harpole, Jean M. Sexton, Don E. Willcox, Weiqun Zhang and Michael Zingale.
As HPC systems scale up, power consumption is becoming an ever-more-important issue. In this paper, researchers from the State University of New York at Buffalo and the Roswell Park Comprehensive Cancer Center discuss a new suite of tools for analyzing HPC jobs, developed as part of the NSF-funded XMS project. The tools enable system operators to conduct energy usage analysis and communicate that information to end users. The authors discuss the roadblocks encountered when developing and implementing this system on a 1,400-node academic HPC cluster.
Authors: Joseph P. White, Martins Innus, Robert L. DeLeon, Matthew D. Jones and Thomas R. Furlani.
Computational fluid dynamics (CFD) simulations, these authors (a team from Spain and France) write, “require a huge amount of computational power. As such, it is of paramount importance to carefully assess the performance of CFD codes and to study them in depth for enabling optimization and portability.” In this paper, they study three CFD codes covering two numerical methods, applying a generic performance analysis tool to identify critical points that limit the codes’ scalability.
Authors: Marta Garcia-Gasulla, Fabio Banchelli, Kilian Peiro, Guillem Ramirez-Gargallo, Guillaume Houzeaux, Ismail Ben Hassan Saidi, Christian Tenaud, Ivan Spisso and Filippo Mantovani.
In a field as specialized as HPC, training the next generation of specialists is an ongoing challenge. These authors – a duo from the Holland Computing Center at the University of Nebraska-Lincoln – highlight the center’s outreach work, including “Legion,” a Raspberry Pi cluster created for training and outreach. The authors discuss how Legion has helped the center deliver education and outreach through a variety of learning opportunities.
Authors: Caughlin Bohn and Carrie Brown.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.