Nvidia today announced its next-generation data processing unit (DPU) – BlueField-3 – adding more substance to its evolving concept of the DPU as a full-fledged partner to CPUs and GPUs in delivering advanced computing. Nvidia is pitching the DPU as an active engine that handles security, networking, and storage management – offloading tasks usually handled by the host CPU – and speeding performance.
With typical flair, Jensen Huang, CEO of Nvidia, made the case for the DPU during his GTC21 keynote today. The rise of AI computing and virtualization has become too much for the CPUs of the datacenter, he contends.
“A simple way to think about this is that a third of the roughly 30 million datacenter servers shipped each year are consumed running the software-defined datacenter stack,” said Huang. “This workload is increasing much faster than Moore’s law. So unless we offload and accelerate this workload, datacenters will have fewer and fewer CPUs to run the applications. The time for BlueField has come.
“Today we’re announcing BlueField-3 – 22 billion transistors, the first 400 gigabits-per-second networking chip, 16 Arm CPUs to run the entire virtualization software stack, for instance, running VMware ESX. BlueField-3 takes security to a whole new level, fully offloading [and] accelerating IPsec and TLS cryptography, secret key management and regular expression processing. We’re on a pace to introduce a new BlueField generation every 18 months. BlueField-3 will do 400 gigabits per second and be 10x the processing capability of BlueField-2, and BlueField-4 will do 800 gigabits per second and add Nvidia’s AI computing technologies to get another 10x boost.”
BlueField-2, announced at last year’s fall GTC, will begin shipping this year and is expected to appear in a variety of systems including a pair of large supercomputers being announced this week as well as Nvidia SuperPOD (DGX) systems later this year. Several server vendors (see below) have also announced support for BlueField.
At the high end, the DPU’s ability to deliver secure multi-tenancy to supercomputers is a linchpin of Nvidia’s Cloud Native Supercomputing architecture and its “trusted environment” approach generally. It could also ease adoption of advanced HPC in the enterprise, says the company.
Gilad Shainer, senior vice president of marketing, Mellanox networking, Nvidia, told HPCwire in a pre-briefing, “With the DPU we’re migrating the infrastructure management – the security, the monitoring, and everything [else] – from the host to run on the DPU. Now the DPU is the entity that governs the infrastructure management. The DPU can provision the server, it’s isolated between the users and the host, it can load clean operating systems, re-provision the servers as needed, delete all residual information from previous jobs, and actually create a clean interface for a new job and so forth.”
“The second part is obviously the file system. Because as you move the file system management into the DPU, now it sits between the user and the storage. So users actually mount storage on the DPU; the DPU presents itself as a kind of a virtual local storage. The application mounts storage on the DPU or the DPU is actually mounting storage on the root storage, and all the operations from the applications to storage are governed by the DPU. You can control if a user is allowed to go to this area or not allowed to go to this area. You might have storage which is encrypted, because there is medical information. The DPU includes encryption engines, so you can protect data, move data encrypted and do key management,” said Shainer.
Nvidia clearly has big plans for the DPU. Early reaction from analysts has been positive.
“The Nvidia DPU promises to make high performance computing more efficient by handling functions in the network, freeing up CPUs, GPUs and TPUs to focus on what each of them does best. The x86 CPU is the dominant HPC processor, but unlike vector processors it was not designed specifically for HPC and left room for other processor types to fill important gaps in functionality,” said Steve Conway, Hyperion Research.
“GPUs stepped in to help address the rise of early AI and other data-intensive workflows more cost-effectively than vector CPUs or x86 processors. DPUs are designed not just to offload certain functions from other processor types, but to insert computing capability directly into datacenter and hybrid cloud networks. Assuming there’s good market uptake for this, it could be a boon for existing HPC workflows and for HPC’s emerging role in things like 5G/6G-enabled IoT and edge computing,” said Conway.
Karl Freund of Cambrian AI Research was bullish on the DPU: “Today’s announcement, from the DPU to the forthcoming Arm Grace CPU, will essentially reinvent the datacenter architecture blueprints of the future to enable vast neural networks to run efficiently and securely in the cloud.”
Shainer said the DPU can function by itself at the edge or be embedded in a larger system, whether a server or a large supercomputer. It will be interesting to watch whether the DPU, first announced in 2016, outgrows its smartNIC roots to become an important component of AI-centric computing. Nvidia today also announced the first release of the DOCA (datacenter-on-a-chip architecture) development platform for the BlueField product family. BlueField-3 will be backward compatible with earlier BlueField DPUs.
Given that BlueField-2 is only now moving into production, it is difficult to provide real-world evaluations of it or comparisons with BlueField-3. In a pre-briefing, Justin Boitano, general manager, enterprise and edge computing, echoed Huang’s comments: “Whereas BlueField-2 currently offloads an equivalent of 30 CPU cores for software-defined networking, security and storage, it would take 300 CPU cores to secure, offload and accelerate the networking traffic at a 400 gigabits-per-second line rate. That’s a 10x leap in performance that’s required, and that’s what BlueField-3 delivers.”
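The arithmetic behind the claim is straightforward. A minimal sketch, using only the two core counts quoted above (the variable names are ours, not Nvidia's):

```python
# Figures quoted by Nvidia; the division is the only arithmetic added here.
cores_offloaded_by_bluefield2 = 30   # CPU-core equivalent BlueField-2 offloads today
cores_needed_at_400gbps = 300        # CPU cores needed to secure/accelerate 400 Gbps

# The ratio is the "10x leap" BlueField-3 is claimed to deliver.
leap = cores_needed_at_400gbps / cores_offloaded_by_bluefield2
print(f"Required performance leap: {leap:.0f}x")  # → Required performance leap: 10x
```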
Nvidia reported “Dell Technologies, Inspur, Lenovo and Supermicro are integrating BlueField DPUs into their systems. Cloud service providers across the world are using BlueField DPUs to accelerate workloads, including Baidu, JD.com and UCloud. The BlueField ecosystem is also expanding with BlueField-3 support from leading hybrid cloud platform partners Canonical, Red Hat and VMware; cybersecurity leaders Fortinet and Guardicore; storage providers DDN, NetApp and WekaIO; and edge platform providers Cloudflare, F5 and Juniper Networks.”
One interesting aspect of the DPU strategy will be identifying opportunities to perform other kinds of processing. Shainer said Nvidia has been working with Ohio State University researcher Dhabaleswar K. (DK) Panda to get hybrid MPI working on the DPU and with applications.
“With DK Panda, we’re working on creating a hybrid MPI. Because the DPU has multiple cores, there are processes that can run on the DPU cores. Those processes can get information from host processes, on locations of data and permissions of data, and with that metadata movement, the DPU can actually access the host memory and can access remote memory and fully use RDMA and MPI to run on a DPU. This is what we are working on. By migrating MPI to run on the DPU, you can achieve full overlapping. Actually, it’s the first time you can do full overlapping of compute and communications,” said Shainer.
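The overlap idea Shainer describes can be illustrated without any DPU hardware. In the hypothetical sketch below (ours, not Nvidia or OSU code), a background thread stands in for the DPU cores that progress an MPI-style nonblocking transfer while the host keeps computing; in a real hybrid MPI the transfer would be driven by the DPU over RDMA rather than a host thread:

```python
import threading
import time

def communicate(buf, done):
    # Stand-in for a nonblocking transfer progressed off the host
    # (on a DPU, this work would run on the DPU's Arm cores via RDMA).
    time.sleep(0.1)              # simulated network time
    buf["received"] = [1, 2, 3]  # simulated incoming payload
    done.set()

def compute():
    # Host computation that proceeds while the transfer is in flight.
    return sum(i * i for i in range(100_000))

buf, done = {}, threading.Event()
t = threading.Thread(target=communicate, args=(buf, done))

t.start()          # post the "nonblocking" transfer (like MPI_Isend/Irecv)
result = compute() # compute overlaps with the in-flight communication
done.wait()        # complete the transfer (analogous to MPI_Wait)
t.join()
```

Because the compute loop never blocks on the transfer, total runtime approaches max(compute, communication) rather than their sum, which is the "full overlapping" benefit claimed for DPU-resident MPI.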
“We started looking at applications and one of the first applications we took is an FFT (3D FFT) because FFTs are used in many scientific simulations. We modified the FFT to be able to use the DPU as part of the FFT work and we ran that, on different grid sizes, on a cloud native supercomputer and were able to get, on average, 30 percent performance improvement. For FFTs in some sizes, it was almost 40 percent. So that means you can achieve almost 1.4x performance improvement on FFT. [With full overlapping], it means that it’s 1.4x performance for the entire datacenter. It’s 40 percent more capacity,” he said.
Shainer said he thinks DPU adoption will ramp quickly because the need is high and the ROI is also high. He also said Nvidia would work with customers who purchase DGX systems with BlueField-2 to later upgrade them to BlueField-3.
Release of DOCA 1.0 is also an important step, as was emphasized by Huang during his keynote. “There’s all kinds of great technology inside: deep packet inspection, secure boot, TLS crypto offload, regular expression acceleration, and a very exciting capability, a hardware-based, real-time clock that can be used for synchronous datacenters, 5G and video broadcast. We have great partners working with us to optimize leading platforms on BlueField infrastructure: software providers, edge and CDN providers, cybersecurity solutions and storage providers. Basically, the world’s leading companies in datacenter infrastructure.”
Along those lines, Nvidia provided testimonials from Red Hat and VMware:
- “Red Hat continues to collaborate with Nvidia as part of an open ecosystem that accelerates innovation while providing access to the latest hardware innovations for composable infrastructure,” said Chris Wright, chief technology officer of Red Hat. “We recognize the need to develop advanced solutions for network security and automation and are excited to support BlueField DPUs and the Nvidia Morpheus AI framework via Red Hat Enterprise Linux and Red Hat OpenShift, the industry-leading container and Kubernetes-powered hybrid cloud platform.”
- “Our mutual customers are racing to harness the power of AI for enterprise applications,” said Lee Caswell, vice president, marketing, Cloud Platform Business Unit at VMware. “The vision of enterprise infrastructure powered by the VMware Cloud Foundation and to be certified with the newly announced Nvidia BlueField-3 DPU shows customers a path to improved application performance, a consistent operating model across virtualized and bare metal environments, along with a new model for delivering zero-trust security without compromising performance.”
Nvidia’s security framework, Morpheus, and its DGX management framework, Base Command, both work with BlueField. Talking about Morpheus, Boitano said, “This datacenter security framework can perform real-time inspection of all packets flowing through the datacenter. Morpheus is built on Nvidia AI. It uses Nvidia BlueField-2, it uses DOCA telemetry and it runs on Nvidia-certified servers. Because BlueField is effectively a server running at the edge of every server in your datacenter, it acts as a sensor to monitor all the traffic between all the containers and VMs in your datacenter.”