Strategy Behind Virtualizing GPUs

August 20, 2018

Overview

The use of graphics processing units (GPUs) to accelerate portions of general-purpose scientific and engineering applications is widespread today. However, adoption of GPUs for high performance computing (HPC) and artificial intelligence (AI) jobs remains limited due to their high acquisition cost, high power consumption and low utilization. Typically, applications can only access GPUs located within the node where they are being executed, which limits GPU usage. In addition, GPU sharing is not considered by job schedulers such as SLURM that may be used for HPC or AI compute runs.

In today’s datacenter environment, the ability to leverage system resources, especially GPUs, needs to be more flexible. An ideal solution is to share these notably expensive, power-hungry GPUs among several nodes in a cluster as part of a virtualization solution. Virtualizing and sharing GPUs efficiently addresses several concerns, including maximizing utilization of remote GPU resources for both HPC and AI workloads.

The rCUDA (remote CUDA) GPU virtualization middleware solves these issues by turning GPUs into virtual compute resources on networks built on high-performance interconnect technologies such as Mellanox InfiniBand®. According to Dr. Federico Silla, Associate Professor at the Department of Computer Engineering and rCUDA team leader at Universitat Politècnica de València (Technical University of Valencia) in Spain, “Sharing GPUs among nodes in the cluster by remotely virtualizing them is a powerful mechanism that can provide important energy savings while at the same time the overall cluster throughput (in terms of jobs completed per time unit) is noticeably increased. Furthermore, it is possible to provide differentiated quality service levels to customers paying different fees. rCUDA is a modern tool that adds value to the GPUs in your cluster.”

Introducing rCUDA

The rCUDA framework was developed at Universitat Politècnica de València (Spain). rCUDA is a middleware product that enables remote virtualization of GPUs: physical GPUs are installed in only some of the nodes of the cluster, and those GPU-equipped nodes transparently provide GPU services to all of the other nodes.

Benefits of running rCUDA in the data center

  • Energy savings
  • rCUDA improves GPU utilization and makes GPU usage more efficient and flexible, allowing up to 100% of the available GPU capacity to be used
  • More GPUs are available for a single application
  • Using rCUDA does not mean reduced performance (on average, the overhead of rCUDA is negligible when InfiniBand or RDMA over Converged Ethernet (RoCE) is leveraged)
  • Same fabric: no special network is needed
  • rCUDA is transparent to applications (source code of NVIDIA CUDA® applications does not need to be modified)
  • rCUDA is not tied to a specific processor architecture
  • GPU virtualization enables cluster configurations with a reduced number of GPUs, lowering the costs associated with GPU usage

How does rCUDA work?

While NVIDIA’s CUDA® platform can only interact with GPUs that are physically installed in the node where the application is being executed, remote GPU virtualization frameworks follow a client-server distributed approach. With rCUDA, applications are not limited to local GPUs and can leverage any GPU in the cluster; this is known as remote GPU virtualization. The client part of the rCUDA middleware is installed in the cluster node that executes the application requesting GPU services, while the server side runs on the node that owns the actual GPU. When the client receives a CUDA request from an accelerated application, it processes it and forwards it to the remote server. The server receives the request and forwards it to the GPU, which executes it; the server then returns the associated results to the client node running the application, as shown in Figure 1.

Figure 1. rCUDA architecture allows applications to use GPUs across the network
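To make the call-forwarding flow above concrete, the sketch below mimics the client-server pattern in a few lines of C++ host code. It is only an illustration of the idea described in this section, not rCUDA's actual implementation: the names (RpcChannel, CallRecord, client_cudaMalloc and so on) are hypothetical stand-ins, and the "network" is a simple in-memory queue standing in for the real InfiniBand or RoCE transport.

// Simplified illustration of remote GPU virtualization: a client-side library
// intercepts CUDA-style API calls, serializes them, and ships them to a server
// that owns the physical GPU. All names here are hypothetical; rCUDA's real
// wire protocol and RDMA transport are not shown.
#include <cstdio>
#include <string>
#include <vector>

// A CUDA-style call captured by the client-side interception library.
struct CallRecord {
    std::string api;        // e.g. "cudaMalloc"
    size_t      arg_bytes;  // payload size associated with the call
};

// Stand-in for the network transport (InfiniBand RDMA / RoCE in a real deployment).
struct RpcChannel {
    std::vector<CallRecord> wire;                      // pretend this is the fabric
    void send(const CallRecord& c) { wire.push_back(c); }
};

// Server side: receives forwarded calls and executes them on its local, physical GPU.
void server_process(const RpcChannel& ch) {
    for (const auto& c : ch.wire) {
        // A real server would invoke the native CUDA runtime/driver here and return
        // results (device pointers, error codes, output buffers) to the client.
        std::printf("[server] executing %s (%zu bytes) on local GPU\n",
                    c.api.c_str(), c.arg_bytes);
    }
}

// Client side: what the application sees as "cudaMalloc"/"cudaMemcpy" is actually
// a thin wrapper that forwards the request to the remote server.
void client_cudaMalloc(RpcChannel& ch, size_t bytes) { ch.send({"cudaMalloc", bytes}); }
void client_cudaMemcpy(RpcChannel& ch, size_t bytes) { ch.send({"cudaMemcpy", bytes}); }

int main() {
    RpcChannel channel;
    client_cudaMalloc(channel, 1 << 20);  // application asks for 1 MiB on the "GPU"
    client_cudaMemcpy(channel, 1 << 20);  // and copies data to it
    server_process(channel);              // the node that owns the GPU does the work
    return 0;
}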

Applications do not need to be modified to use rCUDA; however, they must be linked to the rCUDA libraries instead of the CUDA libraries. rCUDA thus decouples GPUs from the nodes where they are installed and creates a GPU clustering environment in which multiple GPUs provide services to multiple local or remote compute systems. Clustered GPUs can be transparently shared by any of the nodes in the facility.
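As a concrete example, consider a minimal CUDA program such as the sketch below. Nothing in the source refers to rCUDA: under a native setup the runtime calls go to a local GPU, while under rCUDA the same source, relinked against rCUDA's runtime replacement rather than NVIDIA's libcudart, has those calls forwarded to a remote GPU. (Exact library names, build flags and environment variables depend on the rCUDA release and are not shown here.)

// Ordinary CUDA code: nothing here is rCUDA-specific. Whether the GPU that
// services these runtime calls is local or remote is decided by which runtime
// library the application is linked against, not by the source code.
#include <cstdio>
#include <cuda_runtime.h>

// A trivial kernel; under rCUDA the launch itself is forwarded to the remote server.
__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));            // allocated on whichever GPU serves the request
    cudaMemset(d, 0, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // forwarded like any other CUDA call
    cudaDeviceSynchronize();
    cudaFree(d);
    std::printf("done\n");
    return 0;
}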

rCUDA runs on many systems and applications

Mellanox InfiniBand and RoCE-enabled solutions have native RDMA engines that are supported across system architectures and can easily implement rCUDA functionality. Because rCUDA is not tied to a specific processor architecture, it can run on a variety of systems, including x86, ARM, and IBM POWER processors, as shown in Figure 2.

Figure 2. rCUDA can run on all major system architectures

“In addition, rCUDA has successfully run on popular GPU and HPC applications such as BARRACUDA, CUDAmeme, GPUBlast, GPU-LIBSVM, Gromacs, LAMMPS, MAGMA and NAMD. Deep learning frameworks are also supported. rCUDA has been successfully run with TensorFlow version 1.7, Caffe, Torch, Theano, PyTorch and MXNET. Finally, renderers such as Blender and Octane are also supported,” states Silla.

How Mellanox integrates with rCUDA

Mellanox Technologies is a leading supplier of end-to-end Ethernet and Mellanox InfiniBand® intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Because Mellanox InfiniBand is based on open standards, its non-proprietary solutions easily integrate with all middleware technologies, including rCUDA’s innovative GPU virtualization technology.

Previously, virtualized GPUs were impaired by the low bandwidth of the underlying network. However, there is a negligible performance impact when running rCUDA on a high-performance network fabric such as Mellanox InfiniBand: execution time usually increases by less than 4%. “By taking full advantage of the native RDMA engines, the high bandwidth and ultra-low latency of Mellanox InfiniBand, rCUDA provides near-native performance to applications using any remote GPU,” states Scot Schultz, Sr. Director, HPC / Artificial Intelligence and Technical Computing, Mellanox Technologies.

Summary

Until recently, GPU usage for HPC and AI processing has been limited because native CUDA software could only use GPUs physically installed in the node where the application executes. rCUDA's virtualization middleware, running over Mellanox's high-bandwidth networking architecture, allows GPUs to be shared among all the nodes of the cluster rather than limiting an application to a single node's local GPUs. rCUDA also provides significant energy and cost savings with negligible impact on application performance.

According to Silla, “Different remote GPU virtualization solutions provide varying performance values when used across clusters. Therefore, you have to try remote GPU virtualization by yourself in your cluster to draw your own conclusions. Do not accept demos carried out in clusters other than your own. Unlike other solutions, you can try rCUDA in your cluster to prove its value in your system.”

References

rCUDA slides: http://www.rcuda.net/pub/rCUDA_isc18.pdf

rCUDA technical paper: https://dl.acm.org/citation.cfm?id=2830015

Mellanox: http://www.mellanox.com
