What’s New in HPC Research: EXA2PRO, DQRA, and HiCMA-PaRSE Frameworks & More

By Mariana Iriarte

June 28, 2022

In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.


EXA2PRO: a framework for high development productivity on heterogeneous computing systems

In this paper, a team of international researchers (National Technical University of Athens, Greece; Center for Research and Technology Hellas, Chemical Process and Energy Resources Institute and Information Technologies Institute, Greece; Linköping University, Sweden; Maison de la Simulation, CEA, CNRS, France; Université de Pau et des Pays de l’Adour, France; Bordeaux University, France) showcased the key components of the EXA2PRO framework, which aims to improve “developers’ productivity for applications that target heterogeneous computing systems.” The framework is “based on advanced programming models and abstractions that encapsulate low-level platform-specific optimizations and it is supported by a runtime that handles application deployment on heterogeneous nodes.” The researchers “applied the EXA2PRO framework to four HPC applications and demonstrated how it can be used to automatically deploy and evaluate applications to a wide variety of heterogeneous clusters.”
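EXA2PRO builds on skeleton-programming abstractions of the kind developed at Linköping (e.g., SkePU): the application is written against portable high-level patterns, and the backend that executes them is chosen at deployment time. A minimal Python sketch of that idea (our illustration, not the EXA2PRO API; the backend names are hypothetical):

```python
def make_map_skeleton(backends):
    """Return a data-parallel 'map' whose backend is selected at
    deployment time -- the application code itself never changes."""
    def map_skeleton(func, data, backend="cpu"):
        return backends[backend](func, data)
    return map_skeleton

# Two stand-in backends; a real framework would dispatch to OpenMP,
# CUDA, or StarPU tasks behind the same interface.
backends = {
    "cpu": lambda f, xs: [f(x) for x in xs],
    "accelerator": lambda f, xs: list(map(f, xs)),  # placeholder
}

pmap = make_map_skeleton(backends)
squares = pmap(lambda x: x * x, [1, 2, 3], backend="cpu")
# Switching backend="accelerator" yields the same result -- the
# portability contract that skeleton abstractions provide.
```

The point of the pattern is that platform-specific optimization lives inside the backend table, so retargeting an application to a new heterogeneous node is a deployment decision rather than a rewrite.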

Authors: Lazaros Papadopoulos, Dimitrios Soudris, Christoph Kessler, August Ernstsson, Johan Ahlqvist, Nikos Vasilas, Athanasios I. Papadopoulos, Panos Seferlis, Charles Prouveur, Matthieu Haefele, Samuel Thibault, Athanasios Salamanis, Theodoros Ioakimidis, and Dionysios Kehagias

Deep reinforcement learning for computational fluid dynamics on HPC systems 

Researchers from the Institute of Aerodynamics and Gas Dynamics at the University of Stuttgart, Hewlett Packard Enterprise (HPE), the High Performance Computing Center Stuttgart at the University of Stuttgart, and the Laboratory of Fluid Dynamics and Technical Flows at the University of Magdeburg “Otto von Guericke” describe the Relexi framework that they have developed. Relexi “bridges the gap between machine learning workflows and modern computational fluid dynamics (CFD) solvers on HPC systems providing both components with its specialized hardware.” It is “a scalable reinforcement learning (RL) framework… built with modularity in mind and allows easy integration of various HPC solvers by means of the in-memory data transfer provided by the SmartSim library.” In this paper, the researchers demonstrated that the “Relexi framework can scale up to hundreds of parallel environments on thousands of cores.” According to the researchers, this capability will make it possible for HPC resources to tackle massive problems or shorten the turnaround times of projects. Lastly, the researchers demonstrated “the potential of an RL-augmented CFD solver by finding a control strategy for optimal eddy viscosity selection in large eddy simulations.”
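Relexi itself couples TensorFlow agents to full CFD solvers through SmartSim’s in-memory data exchange; as a hedged stand-in, the control loop it orchestrates looks roughly like the sketch below. The toy “solver” and reward are ours, not Relexi’s:

```python
def solver_step(state, eddy_viscosity):
    """Toy stand-in for one CFD solver step: the flow quantity relaxes
    toward a target value at a rate set by the chosen eddy viscosity."""
    target = 1.0
    return state + eddy_viscosity * (target - state)

def run_episode(policy, steps=50):
    """Generic RL episode loop: observe state, act, accumulate reward.
    In Relexi the solver runs as a separate HPC job, and states and
    actions move through SmartSim's in-memory database rather than a
    direct function call."""
    state, total_reward = 5.0, 0.0
    for _ in range(steps):
        action = policy(state)              # agent picks an eddy viscosity
        state = solver_step(state, action)  # solver advances the flow
        total_reward += -abs(state - 1.0)   # closer to target = better
    return total_reward

# A fixed policy that always applies viscosity 0.2 outperforms a
# sluggish one -- the kind of signal an RL agent learns to exploit.
reward = run_episode(lambda s: 0.2)
```

Scaling the framework amounts to running many such environments (solver instances) in parallel and batching their states to the agent, which is where the hundreds-of-environments result comes from.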

Authors: Marius Kurz, Philipp Offenhauser, Dominic Viola, Oleksandr Shcherbakov, Michael Resch, and Andrea Beck

DQRA: deep quantum routing agent for entanglement routing in quantum networks

Researchers from the College of Computing and Software Engineering at Kennesaw State University in Marietta, Georgia, tackle routing in quantum networks with a “machine-learning-powered quantum routing model for quantum networks” named Deep Quantum Routing Agent (DQRA). In this paper, the authors detail the deep reinforcement routing scheme DQRA, which uses an “empirically designed deep neural network that observes the current network states to accommodate the network’s demands, which are then connected by a qubit-preserved shortest path algorithm.” According to the research team, the “training process of DQRA is guided by a reward function that aims toward maximizing the number of accommodated requests in each routing window.” They demonstrate that on average “DQRA is able to maintain a rate of successfully routed requests at above 80 percent in a qubit-limited grid network and approximately 60 percent in extreme conditions.”
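The paper’s neural network decides which requests to serve; the “qubit-preserved shortest path” step then routes each accepted request over links that still have free qubits. A minimal sketch of that routing step (our own illustration with made-up function names, not the authors’ code):

```python
from heapq import heappush, heappop

def qubit_preserved_shortest_path(capacity, source, target):
    """Route one request over links with remaining qubits, preferring
    the fewest-hop path, and consume one qubit per link used.

    capacity: dict mapping undirected edges (u, v) to remaining qubits.
    Returns the path as a list of nodes, or None if no feasible route."""
    # Build adjacency from edges that still have at least one qubit.
    adj = {}
    for (u, v), q in capacity.items():
        if q > 0:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    # Dijkstra with unit edge weights == BFS for fewest hops.
    heap = [(0, source, [source])]
    seen = set()
    while heap:
        hops, node, path = heappop(heap)
        if node == target:
            # Commit the route: consume one qubit on every link used.
            for u, v in zip(path, path[1:]):
                key = (u, v) if (u, v) in capacity else (v, u)
                capacity[key] -= 1
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                heappush(heap, (hops + 1, nxt, path + [nxt]))
    return None
```

Because each routed request depletes link qubits, later requests in the same routing window may find no feasible path, which is why the accommodation rate drops under qubit-limited and extreme conditions.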

Authors: Linh Le and Tu N. Nguyen

A framework to exploit data sparsity in tile low-rank Cholesky factorization

This paper by a multi-institutional team of researchers from the Innovative Computing Laboratory at the University of Tennessee, the Extreme Computing Research Center, Division of Computer, Electrical, and Mathematical Sciences and Engineering at King Abdullah University of Science and Technology, Oak Ridge National Laboratory, and the University of Manchester proposes a software “framework that couples the PaRSEC runtime system and the HiCMA numerical library to solve challenging 3D data-sparse problems.” In this paper, the researchers demonstrate “the efficiency and scalability of HiCMA-PaRSE.” Their performance results show “up to 7-fold on Shaheen II and 9-fold on Fugaku performance superiority in situations where the 3D unstructured mesh deformation application renders a matrix operator with low density.” In addition, the software framework “solves a formally dense 3D problem with 52M mesh points on 65K cores in about half an hour.”
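Tile low-rank methods like HiCMA exploit the fact that off-diagonal tiles of many formally dense operators are numerically low rank, so each tile can be stored as two thin factors instead of a full block. A NumPy sketch of that compression step (ours, not HiCMA’s API):

```python
import numpy as np

def compress_tile(tile, tol=1e-8):
    """Truncated-SVD compression of one off-diagonal tile.
    Returns thin factors (U, V) with tile ~= U @ V, keeping only the
    singular values needed to reach the requested relative accuracy."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    rank = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :rank] * s[:rank], Vt[:rank]

# A smooth kernel evaluated between two well-separated point clusters
# yields a numerically low-rank tile -- the data sparsity TLR exploits.
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(10.0, 11.0, 64)
tile = 1.0 / np.abs(x[:, None] - y[None, :])   # far-field interaction
U, V = compress_tile(tile, tol=1e-10)
err = np.linalg.norm(tile - U @ V) / np.linalg.norm(tile)
# The factors store far fewer entries than the 64x64 dense tile.
```

A TLR Cholesky factorization applies this compression tile by tile, and a runtime such as PaRSEC schedules the resulting irregular, rank-dependent task graph across the machine.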

Authors: Qinglei Cao, Rabab Alomairy, Yu Pei, George Bosilca, Hatem Ltaief, David Keyes, and Jack Dongarra

The Summit supercomputer.

Modeling pre-Exascale AMR parallel I/O workloads via proxy applications

In this paper, computer scientists from Oak Ridge National Laboratory in Tennessee and the Georgia Institute of Technology dive into “the modeling of pre-exascale input/output (I/O) workloads of Adaptive Mesh Refinement (AMR) simulations through a simple proxy application.” According to the authors, the ultimate goal of this study is “to provide an initial level of understanding of AMR I/O workloads via lightweight proxy applications models to facilitate autotune data management strategies in anticipation of exascale systems.” Using the Summit supercomputer, the scientists collected data from the AMReX Castro framework “for a wide range of scales and mesh partitions for the hydrodynamic Sedov case as a baseline to provide sufficient coverage to the formulated proxy model.” The results from this study demonstrated that “MACSio can simulate actual AMReX non-linear ‘static’ I/O workloads to a certain degree of confidence on the Summit supercomputer using the present methodology.”
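The study drives the MACSio proxy with parameters measured from real AMReX runs. The underlying idea, characterizing the workload as non-uniform per-rank write sizes and replaying them cheaply, can be sketched as follows (the distribution below is purely illustrative, not Castro’s measured one):

```python
import random

def amr_write_sizes(ranks, base_kb=512, levels=3, refine_ratio=2, seed=0):
    """Model per-rank I/O sizes for one AMR checkpoint: every rank
    writes the coarse level, but only some ranks own refined patches,
    so write sizes are non-uniform -- the property a proxy app must
    reproduce to stand in for the real simulation's I/O."""
    rng = random.Random(seed)
    sizes = []
    for _ in range(ranks):
        kb = base_kb                       # coarse-level contribution
        for lvl in range(1, levels):
            if rng.random() < 0.5 ** lvl:  # fewer ranks own finer patches
                kb += base_kb * refine_ratio ** lvl
        sizes.append(kb)
    return sizes

sizes = amr_write_sizes(ranks=128)
# Non-uniformity is the point: some ranks write far more than others,
# which is what makes AMR I/O harder to model than uniform dumps.
```

A proxy parameterized this way lets I/O tuning experiments run in minutes instead of requiring full-scale AMReX jobs on Summit.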

Authors: William F. Godoy, Jenna Delozier, and Gregory R. Watson

HPC extensions to the OpenKIM processing pipeline

A team of researchers from the Department of Aerospace Engineering and Mechanics at the University of Minnesota and the San Diego Supercomputer Center at the University of California, San Diego, developed extensions to the OpenKIM processing pipeline to efficiently enable the use of high performance computing resources with the Open Knowledgebase of Interatomic Models (OpenKIM). OpenKIM is “an NSF Science Gateway that archives fully functional computer implementations of interatomic models (potentials and force fields) and simulation codes that use them to compute material properties. Interatomic models are coupled with compatible simulation codes and executed in a fully automated manner by the OpenKIM processing pipeline, a cloud-based computation platform.” The existing pipeline, however, was not designed to support large-scale computations. In this paper, the researchers therefore detail the extensions to the OpenKIM processing pipeline that bring high performance computing resources within its reach.

Authors: Daniel S. Karls, Steven M. Clark, Brendon A. Waters, Ryan S. Elliott, and Ellad B. Tadmor

A Taxonomy of Error Sources in HPC I/O Machine Learning Models

In this paper, a multi-institute research team analyzes datasets from the ALCF Theta supercomputer and the NERSC Cori supercomputer to better understand I/O throughput modeling. “We look at why ML models of I/O throughput can wildly mispredict HPC jobs performance. Most of the time, [it’s on account of] bad models or insufficient training, but we found that several other effects are in play — undiagnosed overfitting, contention between jobs, system noise, etc.,” said lead author Mihailo Isakov (Arizona State University) in a tweet. Isakov also reported that the paper has been accepted to the Supercomputing Conference (SC22). It is currently accessible as an arXiv preprint.

Authors: Mihailo Isakov, Mikaela Currier, Eliakin del Rosario, Sandeep Madireddy, Prasanna Balaprakash, Philip Carns, Robert B. Ross, Glenn K. Lockwood, Michel A. Kinsy


Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.
