Strategy Behind Virtualizing GPUs

August 20, 2018

Overview

The use of graphics processing units (GPUs) to accelerate portions of general-purpose scientific and engineering applications is widespread today. However, adoption of GPUs for high performance computing (HPC) and artificial intelligence (AI) jobs is limited by their high acquisition cost, high power consumption, and low utilization. Typically, applications can only access GPUs located in the node where they are executing, which limits GPU usage. In addition, job schedulers such as SLURM, which may be used to manage HPC or AI compute runs, do not consider GPU sharing.

In today’s datacenter environment, the ability to leverage system resources, especially GPUs, needs to be more flexible. An ideal solution is to share these notably expensive, power-hungry GPUs among several nodes in a cluster as part of a virtualization solution. Virtualizing and sharing GPUs efficiently addresses several concerns, including maximizing utilization of remote GPU resources for both HPC and AI workloads.

The rCUDA (remote CUDA) GPU virtualization middleware solves these issues by turning GPUs into virtual compute resources on networks built with high-performance technologies such as Mellanox InfiniBand®. According to Dr. Federico Silla, Associate Professor at the Department of Computer Engineering and rCUDA team leader at Universitat Politècnica de València (Technical University of Valencia) in Spain, “Sharing GPUs among nodes in the cluster by remotely virtualizing them is a powerful mechanism that can provide important energy savings while at the same time the overall cluster throughput (in terms of jobs completed per time unit) is noticeably increased. Furthermore, it is possible to provide differentiated quality of service levels to customers paying different fees. rCUDA is a modern tool that adds value to the GPUs in your cluster.”

Introducing rCUDA

The rCUDA framework was developed at Universitat Politècnica de València (Spain). rCUDA is middleware that enables remote virtualization of GPUs: physical GPUs are installed in only some of the nodes of the cluster, yet they are transparently shared among all of them. Nodes equipped with GPUs provide GPU services to every node in the cluster.

Benefits of running rCUDA in the data center

  • Energy savings
  • rCUDA improves GPU utilization and makes GPU usage more efficient and flexible, allowing up to 100% of available GPU capacity to be used
  • More GPUs are available for a single application
  • Using rCUDA does not reduce performance (on average, the overhead of using rCUDA is negligible when InfiniBand or RDMA over Converged Ethernet (RoCE) is leveraged)
  • Runs over the same fabric; no special network is needed
  • rCUDA is transparent to applications (source code of NVIDIA CUDA® applications does not need to be modified)
  • rCUDA is not tied to a specific processor architecture
  • GPU virtualization enables cluster configurations with fewer GPUs, lowering the costs associated with GPU usage

How does rCUDA work?

While NVIDIA’s CUDA® platform is limited to interacting with GPUs that are physically installed in the node where the application is executing, remote GPU virtualization frameworks follow a client-server distributed approach. With rCUDA, applications are not limited to local GPUs and can leverage any GPU in the cluster; this is known as remote GPU virtualization. The client part of the rCUDA middleware is installed on the cluster node executing the application that requests GPU services, while the server part runs on the node owning the actual GPU. When the client receives a CUDA request from an accelerated application, it processes the request and forwards it to the remote server. The server receives the request and passes it to the GPU, which executes it and returns the results to the server, which in turn sends them back to the client node running the application, as shown in Figure 1.

Figure 1. rCUDA architecture allows applications to use GPUs across the network
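The flow in Figure 1 can be made concrete with a small sketch of what a client-side interposer might look like. rCUDA’s actual implementation and wire protocol are not described in this article; the opcode, message layout, and the rpc_call() helper below are invented purely to illustrate how a CUDA runtime call can be intercepted and forwarded to a remote GPU server.

```cuda
// Hypothetical sketch of the client side of a remote GPU virtualization
// middleware in the style of rCUDA. Built as a shared library that stands
// in for libcudart, so an unmodified CUDA application picks it up at load
// time. All protocol details here are invented for illustration only.
#include <cstddef>
#include <cstdint>

typedef int cudaError_t;              // stand-in for the CUDA runtime type
enum { cudaSuccess = 0 };
enum { OP_MALLOC = 1 };               // invented protocol opcode

// rpc_call(): in a real system this would serialize the request, send it
// to the GPU server over InfiniBand/RoCE (ideally using RDMA), and block
// for the reply. Stubbed out here so the sketch compiles.
static cudaError_t rpc_call(int opcode, const void *req, size_t req_len,
                            void *reply, size_t reply_len) {
    (void)opcode; (void)req; (void)req_len; (void)reply; (void)reply_len;
    return cudaSuccess;               // network transport omitted
}

// Interposed cudaMalloc: instead of touching a local GPU, forward the
// request. The server invokes the real cudaMalloc on the node that owns
// the GPU and sends back the resulting device pointer.
extern "C" cudaError_t cudaMalloc(void **devPtr, size_t size) {
    uint64_t remote_ptr = 0;
    cudaError_t err = rpc_call(OP_MALLOC, &size, sizeof size,
                               &remote_ptr, sizeof remote_ptr);
    // The application receives an opaque handle that is only meaningful
    // on the server owning the GPU. Unmodified applications keep working
    // because they never dereference device pointers on the host.
    *devPtr = reinterpret_cast<void *>(static_cast<uintptr_t>(remote_ptr));
    return err;
}
```

The design point this illustrates is that the CUDA runtime API is a narrow, well-defined boundary, which is what makes transparent interception possible without source changes.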

Applications do not need to be modified to use rCUDA; however, they must be linked against the rCUDA libraries instead of the CUDA libraries. rCUDA thus decouples GPUs from the nodes where they are installed, creating a GPU clustering environment in which multiple GPUs provide services to multiple local or remote compute systems. Clustered GPUs can be transparently shared by any node in the facility.
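As a concrete illustration, the following is an ordinary CUDA vector-add program that needs no source changes to run under rCUDA; only the build and environment differ. The environment-variable names in the comments (RCUDA_DEVICE_COUNT, RCUDA_DEVICE_0) and the server:GPU address format follow the rCUDA user guide as an assumption and should be verified against the documentation for your rCUDA version.

```cuda
// A standard, unmodified CUDA program. Under rCUDA the same source runs,
// but the CUDA runtime calls are serviced by a GPU on a remote node.
// Assumed setup (verify against your rCUDA version's documentation):
//   export LD_LIBRARY_PATH=<rCUDA>/lib:$LD_LIBRARY_PATH  # rCUDA's libcudart
//   export RCUDA_DEVICE_COUNT=1                 # GPUs visible to the app
//   export RCUDA_DEVICE_0=gpu-server:0          # node:GPU providing them
// Build, linking the CUDA runtime dynamically so it can be substituted:
//   nvcc -cudart shared vadd.cu -o vadd
#include <cstdio>

__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);                 // forwarded to the remote GPU
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    vadd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);           // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}
```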

rCUDA runs on many systems and applications

Mellanox InfiniBand and RoCE solutions have native RDMA engines that are supported across system architectures and readily support rCUDA functionality. Because rCUDA is also not tied to a specific processor architecture, it can run on a variety of systems, including x86, ARM, and IBM POWER processors, as shown in Figure 2.

Figure 2. rCUDA can run on all major system architectures
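Because rCUDA’s low overhead depends on the native RDMA engines described above, it can be useful to confirm that a node actually exposes one. The short host-side program below uses the standard libibverbs API, which both InfiniBand and RoCE adapters implement, to list RDMA-capable devices; it is an illustrative check and is not part of rCUDA itself.

```cuda
// Plain host-side check (no CUDA required) that the node exposes a
// native RDMA engine via libibverbs. Compile with:
//   g++ check_rdma.cpp -libverbs -o check_rdma
#include <infiniband/verbs.h>
#include <cstdio>

int main() {
    int num = 0;
    // Enumerate RDMA-capable devices (InfiniBand HCAs or RoCE NICs).
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        printf("No RDMA devices found.\n");
        return 1;
    }
    for (int i = 0; i < num; ++i)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(list[i]));
    ibv_free_device_list(list);
    return 0;
}
```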

“In addition, rCUDA has successfully run on popular GPU and HPC applications such as BARRACUDA, CUDAmeme, GPUBlast, GPU-LIBSVM, Gromacs, LAMMPS, MAGMA and NAMD. Deep learning frameworks are also supported. rCUDA has been successfully run with TensorFlow version 1.7, Caffe, Torch, Theano, PyTorch and MXNET. Finally, renderers such as Blender and Octane are also supported,” states Silla.

How Mellanox integrates with rCUDA

Mellanox Technologies is a leading supplier of end-to-end Ethernet and Mellanox InfiniBand® intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Because Mellanox InfiniBand is based on open standards, its non-proprietary solutions easily integrate with all middleware technologies, including rCUDA’s innovative GPU virtualization technology.

Previously, virtualized GPUs were hampered by the low bandwidth of the underlying network. However, the performance impact of running rCUDA on a high-performance network fabric such as Mellanox InfiniBand is negligible: execution time usually increases by less than 4%. “By taking full advantage of the native RDMA engines, the high bandwidth and ultra-low latency of Mellanox InfiniBand, rCUDA provides near-native performance to applications using any remote GPU,” states Scot Schultz, Sr. Director, HPC / Artificial Intelligence and Technical Computing, Mellanox Technologies.

Summary

Until recently, GPU usage for HPC and AI processing has been limited because native CUDA software could only use GPUs physically installed in the node where the application executes. rCUDA’s virtualization middleware, running over Mellanox’s high-bandwidth networking architecture, allows GPUs to be shared among all the nodes of the cluster rather than limiting an application to a single node’s local GPUs. rCUDA also provides significant energy and cost savings with negligible impact on application performance.

According to Silla, “Different remote GPU virtualization solutions provide varying performance values when used across clusters. Therefore, you have to try remote GPU virtualization by yourself in your cluster to draw your own conclusions. Do not accept demos carried out in clusters other than your own. Unlike other solutions, you can try rCUDA in your cluster to prove its value in your system.”

 

References

rCUDA slides: http://www.rcuda.net/pub/rCUDA_isc18.pdf

rCUDA technical paper: https://dl.acm.org/citation.cfm?id=2830015

Mellanox: http://www.mellanox.com

 
