Nvidia is touching $2 trillion in market cap largely on the brute force of its GPU sales, and software gives the company room to grow. It hopes to fill a big software gap with an agreement to acquire Run:ai for $700 million.
AI deployments are getting larger and more complex and spreading across more GPUs and accelerators. Run:ai provides the middleware to orchestrate and manage these deployments and ensure resources aren’t wasted.
The middleware includes tools to speed up workloads, manage resources, and ensure errors don’t bring down entire AI or high-performance computing operations. It runs on a Kubernetes layer to virtualize AI workloads on GPUs.
Nvidia’s GPUs are hot-ticket items in the AI rush and are available to customers through all major cloud providers.
The Run:ai acquisition will help Nvidia build out its own cloud service without building its own data centers.
Nvidia wants to create its own network of GPUs and DGX systems across all major cloud providers. Run:ai’s middleware will provide an important hook for customers to reach more GPUs, whether in the cloud or on-premises.
“Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on-premise, in the cloud, or in hybrid environments,” Nvidia said in a blog entry.
At the top of Nvidia’s software stack is AI Enterprise, which includes programming, deployment, and other tools. It spans 300 libraries and 600 models.
The stack includes the proprietary CUDA parallel programming framework, compilers, large language models, microservices, and other tools, among them container toolkits. Run:ai’s middleware also supports open-source large language model deployments.
Nvidia’s GPUs are already well served by cloud-native infrastructure: Google, Amazon, and Oracle all have strong Kubernetes stacks, and Nvidia ships its own container runtime and a Kubernetes device plugin that exposes GPUs to containers. Run:ai will bring more granular control to AI container management and orchestration, so Nvidia can rely more on its own tools rather than entirely on cloud provider configurations.
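With the device plugin installed, a Kubernetes cluster exposes GPUs as a schedulable resource named nvidia.com/gpu. Here is a minimal sketch of that flow using the official kubernetes Python client; the pod name, image tag, and GPU count are illustrative, and this is not Run:ai’s API.

```python
# Minimal sketch: requesting Nvidia GPUs for a containerized workload through
# Kubernetes. Assumes the official `kubernetes` Python client and Nvidia's
# device plugin, which publishes GPUs as the `nvidia.com/gpu` resource.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="training-job"),  # illustrative name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.03-py3",  # illustrative tag
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "2"}  # ask the scheduler for two GPUs
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Run:ai’s value sits above this layer: the stock device plugin hands out whole GPUs by default, while Run:ai layers scheduling policies, queuing, and fractional allocation on top.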
The Problem
Allocating multiple GPUs to an AI task is still not straightforward. Nvidia’s GPUs ship in its DGX server boxes, which are deployed in the data centers of all major cloud providers.
Nvidia’s Triton Inference Server can automatically distribute inferencing workloads across multiple GPUs in a configuration, but gaps remain.
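Triton’s placement decisions come from a per-model configuration file. The instance_group block below is standard Triton model-config syntax; the model name, platform, and repository path are placeholders for this sketch.

```python
# Minimal sketch: a Triton model-repository entry whose config spreads model
# instances across two GPUs. The instance_group block is standard Triton
# syntax; the model name and paths here are placeholders.
from pathlib import Path

model_dir = Path("model_repository/llm_demo")  # hypothetical repository entry
model_dir.mkdir(parents=True, exist_ok=True)

config = """
name: "llm_demo"
platform: "onnxruntime_onnx"
max_batch_size: 8
instance_group [
  {
    count: 1          # one model instance per listed GPU
    kind: KIND_GPU
    gpus: [0, 1]      # place an instance on GPU 0 and another on GPU 1
  }
]
"""
(model_dir / "config.pbtxt").write_text(config.strip() + "\n")
```

That covers the GPUs within a single server; spreading work across nodes, clusters, or clouds is the layer Run:ai’s orchestration fills in.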
AI workloads also typically need provider-specific Python code to point at each cloud operator, and only then will those workloads execute on Nvidia GPUs within the cloud service.
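As a purely illustrative example of that glue, launching an Nvidia GPU instance on AWS before any model code can run might look like the sketch below; the boto3 calls are real, but the region, AMI ID, and instance type are placeholders, and none of this is Nvidia or Run:ai tooling.

```python
# Hedged sketch: provider-specific setup code needed before an AI workload
# can touch cloud GPUs. The AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder deep learning AMI
    InstanceType="p4d.24xlarge",      # an Nvidia A100-backed instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```

Each provider has its own equivalent, which is exactly the per-cloud friction an orchestration layer aims to hide.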
Nvidia is thinking ahead with its acquisition of Run:ai. The company wants to reduce its reliance on cloud operators, and the deal is one more step toward locking customers into its software stack: customers can rent GPU time in the cloud and then turn to Nvidia for all their software needs.
At the same time, it fills a major gap in Nvidia’s push to provide a complete software stack.
Preparing for an AI Future
Currently, AI training and inferencing are mostly done on GPUs in data centers, but this will change in a few years.
Over time, AI — specifically inferencing — will be offloaded from data centers to the edge. AI PCs are already being used for inferencing.
The current state of AI processing, with power-hungry GPUs, is unsustainable. It is the same problem crypto faced: racks of hungry GPUs running complex math at full speed and power to mine results quickly.
Nvidia has tried to reduce the power consumption of its chips with Blackwell, but it is also adding software to the equation: Run:ai will help orchestrate workloads across GPUs and further down the network, on AI PCs and edge devices.
AI processing will also happen at waypoints such as telecom chips as data travels through wireless and wired networks. However, the more demanding AI workloads will remain on servers with GPUs, while less demanding workloads are offloaded to the edge.
Companies such as Rescale are already working with customers to keep high-priority tasks on GPUs in the cloud while low-priority tasks are sent to lower-end chips elsewhere. Run:ai’s orchestration can manage that trade-off through a strong combination of speed, power efficiency, and resource utilization.
The Run:ai Stack
One small error can bring down an entire AI operation. Run:ai’s stack has three operational layers to prevent such mishaps and deliver safe, efficient deployments.
The lowest layer is the AI Cluster Engine, which makes sure GPUs are well utilized and operate efficiently.
The engine provides granular insight into the entire AI stack, including the compute nodes, users, and workloads running on it. Companies can prioritize specific tasks and make sure idle resources are put to work.
If a GPU is oversubscribed, Run:ai will reallocate resources. It can also assign GPU quotas per user or break down resources within a single GPU to ensure proper allocation.
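To make the quota idea concrete, here is a toy sketch, emphatically not Run:ai’s actual algorithm: a scheduler that guarantees each team its quota while lending idle GPUs out opportunistically.

```python
# Toy sketch of quota-based GPU scheduling with opportunistic lending of idle
# capacity. Illustrates the concept only; Run:ai's real scheduler is not public.
from dataclasses import dataclass, field
from enum import Enum

class Grant(Enum):
    GUARANTEED = "guaranteed"        # within the team's quota
    OPPORTUNISTIC = "opportunistic"  # borrowed idle capacity, reclaimable
    DENIED = "denied"

@dataclass
class Team:
    quota: int       # GPUs guaranteed to this team
    in_use: int = 0

@dataclass
class ToyScheduler:
    total_gpus: int
    teams: dict[str, Team] = field(default_factory=dict)

    def request(self, name: str, gpus: int) -> Grant:
        team = self.teams[name]
        idle = self.total_gpus - sum(t.in_use for t in self.teams.values())
        if gpus > idle:
            return Grant.DENIED  # a real scheduler would queue or preempt
        team.in_use += gpus
        if team.in_use <= team.quota:
            return Grant.GUARANTEED
        return Grant.OPPORTUNISTIC  # idle GPUs put to work, reclaimable later

sched = ToyScheduler(total_gpus=8,
                     teams={"research": Team(quota=6), "prod": Team(quota=2)})
print(sched.request("research", 4))  # Grant.GUARANTEED: within quota
print(sched.request("prod", 3))      # Grant.OPPORTUNISTIC: borrows idle GPUs
print(sched.request("research", 3))  # Grant.DENIED: only one GPU still idle
```

A production scheduler adds queuing, preemption of opportunistic grants when a quota holder returns, and fractional slices within a single GPU.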
The second layer, the Control Plane Engine, provides granular visibility into the resources used by the Cluster Engine, along with cluster management tools to ensure metrics are being met. It also sets policies for access control, resource management, and workloads, and includes reporting tools.
The top layer includes the API and development tools, and the dev tools also support open-source models.
Falling in Line with Nvidia’s New GPUs
The big wild card is whether Run:ai will tap into the RAS (reliability, availability, and serviceability) features in Nvidia’s latest Blackwell GPUs. Introduced in March, Blackwell includes more fine-grained features to ensure the chip runs predictably.
The GPUs have on-chip software to flag healthy and unhealthy GPU nodes. “We’re looking at the trail of data from all those GPUs, monitoring thousands of data points every second to see how the job can get optimally done,” said Charlie Boyle, vice president and general manager of the DGX Systems unit at Nvidia, in a March interview.
Run:ai’s efficiency might improve if it could tap into metrics and telemetry from Blackwell. That kind of fine-grained reporting could go a long way toward ensuring that AI tasks run smoothly.
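Today that kind of telemetry typically flows through NVML, Nvidia’s GPU management library. Blackwell’s RAS hooks were not public at the time of writing, so the sketch below, which polls utilization, power, and temperature through the pynvml bindings, is only an assumption about how an orchestrator might watch GPU health.

```python
# Hedged sketch: polling GPU health metrics through NVML via pynvml. This is
# how orchestrators commonly watch GPUs today; whether Run:ai will consume
# Blackwell's new RAS counters this way is an open question.
import time
import pynvml

pynvml.nvmlInit()
try:
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]
    for _ in range(10):  # poll a few times for the demo
        for i, h in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(h)    # percent busy
            watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000  # mW -> W
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            print(f"gpu{i}: util={util.gpu}% mem={util.memory}% "
                  f"power={watts:.0f}W temp={temp}C")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```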
Nvidia’s Acquisition History
Nvidia pulled in revenue of $22.1 billion in the most recent quarter, up 265% from the same quarter a year earlier. Data center revenue accounted for $18.4 billion of that.
The company generates software revenue through subscriptions and ultimately hopes that business becomes a multibillion-dollar market. The Run:ai acquisition fits into that goal.
Nvidia swung big with its failed attempt to acquire Arm, well before the company became a $2 trillion behemoth. The deal was blocked over monopoly and regulatory concerns; had it succeeded, the chip maker would have dominated both the CPU and GPU markets. Arm already dominates the mobile market and is making inroads into the server and PC markets.
In 2011, the chip maker paid $367 million for Icera, a software modem maker, and the deal turned out to be a dud. Nvidia ultimately abandoned its pursuit of the mobile phone market and dropped the Icera product.