Institutions and academic researchers all over the world are working to redesign codes and algorithms to meet the demands of advanced research. In the UK, the Exascale Computing ALgorithms and Infrastructures Benefiting UK Research (ExCALIBUR) program is one such effort, aiming to deliver the next generation of high performance simulation software for the highest-priority fields in UK research. These fields involve extremely compute-intensive workloads, such as simulating the entire evolution of the universe, understanding seismic and gravitational waves, assessing tsunamis, and modeling the fundamental structure of matter.
New compute options, including smart and programmable interconnect solutions, also referred to as data processing units (DPUs), are providing researchers with unprecedented flexibility for modern high performance systems. NVIDIA BlueField DPUs combine powerful In-Network Computing engines, high speed networking, and extensive programmability to deliver software-defined, hardware-accelerated solutions for the most demanding workloads.
Working in conjunction with the ExCALIBUR program, Distributed Research utilising Advanced Computing (DiRAC), whose computing resources are distributed across four university sites at Cambridge, Leicester, Durham, and Edinburgh, is leveraging the breadth of programmable capabilities in BlueField DPUs in innovative ways to enable breakthrough science.
One example is the Institute for Computational Cosmology and Department of Computer Science at Durham University, which run large-scale simulations of wave propagation, such as gravitational and seismic waves, on adaptive Cartesian meshes with a code called ExaHyPE that phrases most of its computations as tasks. Over the past few years, extensive effort has gone into identifying and labeling the critical tasks: those that require synchronization with other nodes, or those that arise where the mesh changes. Combined with the much larger number of lower-priority tasks, this makes load balancing across the system a significant undertaking. Even once the load is optimally balanced, rebalancing becomes difficult and costly in time whenever the domain changes and the system consequently falls out of balance. To react more quickly, researchers must shift lightweight, idle tasks to other resources; however, orchestrating this is problematic, and tweaking the MPI runtime to make progress at the right time, and at the right pace, can take a toll on compute resources.
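To make that cost concrete, here is a minimal sketch, in C++ with MPI, of the kind of task loop described above, in which compute work has to be interleaved with manual calls into the MPI library to keep communication progressing. This is not ExaHyPE code; the task queue and the outstanding halo-exchange request are hypothetical stand-ins.

```cpp
// Minimal sketch (not ExaHyPE code): a task loop that interleaves its own
// MPI progress calls with compute work. Task and request names are hypothetical.
#include <mpi.h>
#include <functional>
#include <queue>

void run_tasks(std::queue<std::function<void()>>& local_tasks,
               MPI_Request& halo_exchange)   // an outstanding non-blocking exchange
{
    int done = 0;
    while (!local_tasks.empty()) {
        // Run one compute task ...
        local_tasks.front()();
        local_tasks.pop();

        // ... then poke the MPI library so the halo exchange keeps moving.
        // Every call here is CPU time taken away from compute: poll too rarely
        // and communication stalls, poll too often and the tasks slow down.
        if (!done) MPI_Test(&halo_exchange, &done, MPI_STATUS_IGNORE);
    }
    if (!done) MPI_Wait(&halo_exchange, MPI_STATUS_IGNORE);
}
```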
By utilizing the BlueField DPU compute cores, Durham University can save host compute resources by using the DPU as an MPI progression engine: observing and routing tasks, as well as caching and accepting tasks as they come in. The work extends beyond previous research collaborations within the ExaHyPE consortium, notably with the group of Michael Bader at TUM, and introduces a clear separation of concerns: the CPU focuses on compute tasks, while an intelligent network starts to own the data responsibilities.
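As an illustration of the idea (not Durham's actual implementation), the following sketch shows what a progression loop of this kind might look like once moved onto the DPU's Arm cores: it polls for incoming task messages, accepts and caches them, and leaves the host CPU free to compute. The message tag and the TaskCache type are assumptions for the example.

```cpp
// Sketch of a progression/task-caching loop of the kind that can run on the
// DPU's Arm cores. The tag and TaskCache type are hypothetical; this is not
// the actual Durham implementation.
#include <mpi.h>
#include <deque>
#include <vector>

constexpr int TASK_TAG = 42;    // hypothetical tag used for offloaded tasks

struct TaskCache { std::deque<std::vector<char>> tasks; };

void progression_loop(TaskCache& cache, volatile bool& running)
{
    while (running) {
        MPI_Status status;
        int pending = 0;
        // Check for an incoming task message without blocking.
        MPI_Iprobe(MPI_ANY_SOURCE, TASK_TAG, MPI_COMM_WORLD, &pending, &status);
        if (pending) {
            int bytes = 0;
            MPI_Get_count(&status, MPI_BYTE, &bytes);
            std::vector<char> payload(bytes);
            // Accept the task and cache it until the host asks for work.
            MPI_Recv(payload.data(), bytes, MPI_BYTE, status.MPI_SOURCE,
                     TASK_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            cache.tasks.push_back(std::move(payload));
        }
    }
}
```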
Over at University College London, graduate student James Legg is working with BlueField DPUs to accelerate computational codes that use task-based scheduling. In James' approach, the BlueField DPU, in particular its Arm processor subsystem, acts as the task scheduler, while the main host processor runs the computational tasks, or kernels. This inverts the traditional relationship, in which the accelerator card runs the kernels and the host orchestrates them. Previously, when the scheduler and kernels both ran on the host, they competed for processing resources, which forced the scheduler design to stay lean. On the BlueField DPU, the scheduler can easily have several dedicated threads, allowing the schedule to be processed in parallel with the kernels on the host and enabling more sophisticated handling of the schedule itself. Additional research is underway into allowing the schedulers on the BlueField DPUs to move computation data between the hosts' RAM without involving the host processors at all.
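A minimal sketch of the inverted arrangement is shown below, assuming a scheduler rank running on the DPU's Arm cores that dispatches task identifiers to a host rank, which executes the kernels and reports completion. The ranks, tags, and task list are hypothetical; in James' design the scheduler would additionally run several dedicated threads rather than the single serial loop shown here.

```cpp
// Sketch only: a scheduler rank (on the DPU's Arm cores) hands out task IDs,
// and a host rank executes the kernels and reports back. Ranks, tags, and the
// task list are hypothetical placeholders.
#include <mpi.h>
#include <cstdio>

constexpr int DISPATCH_TAG = 1, RESULT_TAG = 2, STOP = -1;

void scheduler(int host_rank, int num_tasks)       // runs on the DPU rank
{
    for (int task = 0; task < num_tasks; ++task) {
        // Dispatch the next ready task to the host ...
        MPI_Send(&task, 1, MPI_INT, host_rank, DISPATCH_TAG, MPI_COMM_WORLD);
        // ... and wait for its completion report.
        int done_task;
        MPI_Recv(&done_task, 1, MPI_INT, host_rank, RESULT_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    int stop = STOP;
    MPI_Send(&stop, 1, MPI_INT, host_rank, DISPATCH_TAG, MPI_COMM_WORLD);
}

void worker(int scheduler_rank)                    // runs on the host rank
{
    for (;;) {
        int task;
        MPI_Recv(&task, 1, MPI_INT, scheduler_rank, DISPATCH_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (task == STOP) break;
        std::printf("running kernel for task %d\n", task);   // compute kernel here
        MPI_Send(&task, 1, MPI_INT, scheduler_rank, RESULT_TAG, MPI_COMM_WORLD);
    }
}
```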
The Cambridge Service for Data Driven Discovery, or CSD3 for short, is a UK National Research Cloud and one of the world's most powerful academic cloud-native supercomputers. CSD3 uses BlueField DPUs to offload infrastructure management, such as security policies and storage frameworks, from the host, while providing acceleration and isolation for workloads to maximize I/O performance. This enables secure, bare-metal performance so that researchers can pursue exploration like never before.
These are just a few examples of researchers exploring innovative ways to take advantage of DPU performance and programmability. To ease programmability even further and fuel more advancements, the NVIDIA DOCA SDK enables infrastructure developers to rapidly create network, storage, security, management, AI, and high performance computing (HPC) applications and services on top of the BlueField DPU, leveraging industry-standard APIs. With DOCA, developers can program the supercomputing infrastructure of tomorrow by creating high-performance, software-defined, cloud-native, DPU-accelerated services. Developers can get started today by registering for early access to the program.
Brian Sparks, Sr. Director, HPC and InfiniBand Marketing, NVIDIA
Brian Sparks is a senior marketing and corporate communications executive with over 20 years of experience in the HPC, hyperscale and cloud data center markets. Brian has previously held Marketing Working Group Chair positions in the InfiniBand Trade Association (IBTA) and the OpenFabrics Alliance (OFA) and is the current Marketing Working Group Chair for the Unified Communication Framework (UCF) Consortium. Brian holds a B.A. degree in Communications from San Jose State University.