Nvidia Debuts BlueField-3 – Its Next DPU with Big Plans for an Expanded Role

By John Russell

April 12, 2021

Nvidia today announced its next-generation data processing unit (DPU) – BlueField-3 – adding more substance to its evolving concept of the DPU as a full-fledged partner to CPUs and GPUs in delivering advanced computing. Nvidia is pitching the DPU as an active engine that handles security, networking and storage management, offloading tasks usually handled by the host CPU and speeding overall performance.

With typical flair, Jensen Huang, CEO of Nvidia, made the case for the DPU during his GTC21 keynote today. The rise of AI computing and virtualization, he contends, has become too much for datacenter CPUs to handle on their own.

“A simple way to think about this is that a third of the roughly 30 million datacenter servers shipped each year are consumed running the software-defined datacenter stack,” said Huang. “This workload is increasing much faster than Moore’s law. So unless we offload and accelerate this workload, datacenters will have fewer and fewer CPUs to run the applications. The time for BlueField has come.

“Today we’re announcing BlueField-3 – 22 billion transistors, the first 400 gigabits-per-second networking chip, 16 Arm CPUs to run the entire virtualization software stack, for instance, running VMware ESX. BlueField-3 takes security to a whole new level, fully offloading [and] accelerating IPsec and TLS cryptography, secret key management and regular expression processing. We’re on a pace to introduce a new BlueField generation every 18 months. BlueField-3 will do 400 gigabits per second and be 10x the processing capability of BlueField-2, and BlueField-4 will do 800 gigabits per second and add Nvidia’s AI computing technologies to get another 10x boost.”

BlueField-2, announced at last year’s fall GTC, will begin shipping this year and is expected to appear in a variety of systems, including a pair of large supercomputers being announced this week as well as Nvidia DGX SuperPOD systems later this year. Several server vendors (see below) have also announced support for BlueField.

At the high end, the DPU’s ability to deliver secure multi-tenancy to supercomputers is a linchpin of Nvidia’s Cloud Native Supercomputing architecture and its “trusted environment” concept generally. It could also ease adoption of advanced HPC in the enterprise, says the company.

Gilad Shainer, senior vice president of marketing, Mellanox networking, Nvidia, told HPCwire in a pre-briefing, “With the DPU we’re migrating the infrastructure management – the security, the monitoring, and everything [else] – from the host to run on the DPU. Now the DPU is the entity that governs the infrastructure management. The DPU can provision the server, it’s isolated between the users and the host, it can load clean operating systems, re-provision the servers as needed, delete all residual information from previous jobs, and actually create a clean interface for a new job and so forth.”

“The second part is obviously the file system. Because as you move the file system management into the DPU, now it sits between the user and the storage. So users actually mount storage on the DPU; the DPU presents itself as a kind of virtual local storage. The application mounts storage on the DPU or the DPU is actually mounting storage on the root storage, and all the operations from the applications to storage are governed by the DPU. You can control if a user is allowed to go to this area or not allowed to go to this area. You might have storage which is encrypted, because there is medical information. The DPU includes encryption engines, so you can protect data, move data encrypted and do key management,” said Shainer.
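To make the gatekeeping idea concrete, here is a minimal Python sketch of a DPU-style storage gateway that enforces a per-user access policy and keeps data encrypted with a key held only by the gateway. The DpuStorageGateway class, the policy table and the paths are hypothetical illustrations of the concept, not the BlueField or DOCA API.

```python
# Minimal sketch of the gatekeeping idea Shainer describes: the "DPU" checks a
# per-user access policy and encrypts data with a key it alone manages.
# DpuStorageGateway, the policy table and the paths are hypothetical; this is
# not the BlueField/DOCA API.
from cryptography.fernet import Fernet   # pip install cryptography


class DpuStorageGateway:
    def __init__(self, access_policy):
        self._policy = access_policy          # e.g. {"alice": {"/medical"}}
        self._key = Fernet.generate_key()     # key never leaves the "DPU"
        self._cipher = Fernet(self._key)
        self._backing_store = {}              # stands in for remote storage

    def write(self, user, path, data: bytes):
        if path not in self._policy.get(user, set()):
            raise PermissionError(f"{user} may not write {path}")
        self._backing_store[path] = self._cipher.encrypt(data)  # data at rest stays encrypted

    def read(self, user, path) -> bytes:
        if path not in self._policy.get(user, set()):
            raise PermissionError(f"{user} may not read {path}")
        return self._cipher.decrypt(self._backing_store[path])


gateway = DpuStorageGateway({"alice": {"/medical"}})
gateway.write("alice", "/medical", b"patient record")
print(gateway.read("alice", "/medical"))      # b'patient record'
# gateway.read("bob", "/medical")             # would raise PermissionError
```

The point of the arrangement Shainer describes is that neither the policy nor the key lives on the host, so a compromised application cannot simply bypass the checks.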

Nvidia clearly has big plans for the DPU. Early reaction from analysts has been positive.

“The Nvidia DPU promises to make high performance computing more efficient by handling functions in the network, freeing up CPUs, GPUs and TPUs to focus on what each of them does best. The x86 CPU is the dominant HPC processor, but unlike vector processors it was not designed specifically for HPC and left room for other processor types to fill important gaps in functionality,” said Steve Conway, Hyperion Research.

“GPUs stepped in to help address the rise of early AI and other data-intensive workflows more cost-effectively than vector CPUs or x86 processors. DPUs are designed not just to offload certain functions from other processor types, but to insert computing capability directly into datacenter and hybrid cloud networks. Assuming there’s good market uptake for this, it could be a boon for existing HPC workflows and for HPC’s emerging role in things like 5G/6G-enabled IoT and edge computing,” said Conway.

Karl Freund of Cambrian AI Research was bullish on the DPU: “Today’s announcement, from the DPU to the forthcoming Arm Grace CPU, will essentially reinvent the datacenter architecture blueprints of the future to enable vast neural networks to run efficiently and securely in the cloud.”

Shainer said the DPU can function by itself at the edge or be embedded in a larger system, whether a server or a large supercomputer. It will be interesting to watch whether the DPU, first announced in 2016, outgrows its smartNIC roots to become an important component of AI-centric computing. Nvidia today also announced the first release of the DOCA (datacenter-on-a-chip architecture) development platform for the BlueField product family. BlueField-3 will be backward compatible with earlier BlueField DPUs.

Given that BlueField-2 is only now moving into production, it is difficult to provide clear real-world evaluations of it or comparisons with BlueField-3. In a pre-briefing, Justin Boitano, general manager of enterprise and edge computing at Nvidia, echoed Huang’s comments: “Whereas BlueField-2 currently offloads the equivalent of 30 CPU cores for software-defined networking, security and storage, it would take 300 CPU cores to offload, secure and accelerate networking traffic at a 400 gigabits-per-second line rate. That’s the 10x leap in performance that’s required, and that’s what BlueField-3 delivers.”
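Boitano's comparison reduces to simple arithmetic; the short Python sketch below restates the quoted figures (the 30-core and 300-core equivalents come from the quote above, the variable names are ours).

```python
# Back-of-the-envelope check of Boitano's figures: BlueField-2 offloads work
# equivalent to ~30 CPU cores, while securing and accelerating traffic at a
# 400 Gb/s line rate would take ~300 cores, the 10x gap BlueField-3 targets.
bluefield2_core_equiv = 30     # cores' worth of work BlueField-2 offloads today
cores_needed_at_400g = 300     # host cores needed at 400 Gb/s without a DPU

required_leap = cores_needed_at_400g / bluefield2_core_equiv
print(f"Required processing leap: {required_leap:.0f}x")   # 10x
```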

Nvidia reported that “Dell Technologies, Inspur, Lenovo and Supermicro are integrating BlueField DPUs into their systems. Cloud service providers across the world are using BlueField DPUs to accelerate workloads, including Baidu, JD.com and UCloud. The BlueField ecosystem is also expanding with BlueField-3 support from leading hybrid cloud platform partners Canonical, Red Hat and VMware; cybersecurity leaders Fortinet and Guardicore; storage providers DDN, NetApp and WekaIO; and edge platform providers Cloudflare, F5 and Juniper Networks.”

One interesting aspect of the DPU strategy will be identifying opportunities to perform other kinds of processing. Shainer said Nvidia has been working with Ohio State University researcher Dhabaleswar K. (DK) Panda to get hybrid MPI working on the DPU and with applications.

“With DK Panda, we’re working on creating a hybrid MPI. Because the DPU has multiple cores, there are processes that can run on the DPU cores. Those processes can get information from host processes on locations of data and permissions of data, and with that metadata movement, the DPU can actually access the host memory and can access remote memory and fully use RDMA and MPI to run on a DPU. This is what we are working on. By migrating MPI to run on the DPU, you can achieve full overlapping. Actually, it’s the first time you can do full overlapping of compute and communications,” said Shainer.
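The overlap Shainer describes builds on standard non-blocking MPI, where communication is posted, useful compute runs while transfers are in flight, and the code waits only when the data is needed; the hybrid-MPI work moves the progression of those transfers onto the DPU cores. Below is a minimal host-side mpi4py sketch of that base pattern, not the DPU-offloaded implementation; it assumes mpi4py, NumPy and an MPI runtime are installed.

```python
# Host-side sketch of compute/communication overlap with non-blocking MPI
# (mpi4py). The hybrid-MPI work described above moves the progression of these
# pending transfers onto the DPU cores; this shows only the standard pattern
# it builds on.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
right, left = (rank + 1) % size, (rank - 1) % size

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# Post the ring exchange, then compute while the transfer is (ideally) in flight.
reqs = [comm.Isend(send_buf, dest=right), comm.Irecv(recv_buf, source=left)]
local_result = np.sum(send_buf * 2.0)   # stands in for useful local compute
MPI.Request.Waitall(reqs)               # block only when the received data is needed

print(f"rank {rank}: local sum = {local_result:.0f}, "
      f"first value from rank {left}: {recv_buf[0]:.0f}")
```

Run it under an MPI launcher, for example `mpirun -n 4 python overlap.py`; in the DPU variant, the host would ideally spend none of its cycles progressing the pending requests.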

“We started looking at applications, and one of the first applications we took is a 3D FFT, because FFTs are used in many scientific simulations. We modified the FFT to be able to use the DPU as part of the FFT work and we ran that, on different grid sizes, on a cloud-native supercomputer and were able to get, on average, 30 percent performance improvement. For FFTs at some sizes, it was almost 40 percent. So that means you can achieve almost 1.4x performance improvement on FFT. [With full overlapping], it means that it’s 1.4x performance for the entire datacenter. It’s 40 percent more capacity,” he said.
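As a back-of-the-envelope illustration of that last point, a per-job speedup translates directly into datacenter throughput; in the small sketch below the job count is made up and only the 1.4x factor comes from Shainer's figures.

```python
# Illustrative only: Shainer's ~1.4x per-job FFT speedup, applied to a made-up
# daily job count, yields the "40 percent more capacity" he cites.
per_job_speedup = 1.4
jobs_per_day_baseline = 100          # hypothetical workload
jobs_per_day_with_dpu = jobs_per_day_baseline * per_job_speedup
print(f"{jobs_per_day_with_dpu:.0f} jobs/day vs {jobs_per_day_baseline} "
      f"({per_job_speedup - 1:.0%} more capacity)")
```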

Shainer said he thinks DPU adoption will ramp quickly because the need is high and the ROI is also high. He also said Nvidia would work with customers who purchase DGX systems with BlueField-2 to later upgrade to BlueField-3.

Release of DOCA 1.0 is also an important step, as Huang emphasized during his keynote: “There’s all kinds of great technology inside: deep packet inspection, secure boot, TLS crypto offload, regular expression acceleration, and a very exciting capability, a hardware-based, real-time clock that can be used for synchronous datacenters, 5G and video broadcast. We have great partners working with us to optimize leading platforms on BlueField infrastructure: software providers, edge and CDN providers, cybersecurity solutions and storage providers. Basically, the world’s leading companies in datacenter infrastructure.”

Along those lines, Nvidia provided testimonials from Red Hat and VMware:

  • “Red Hat continues to collaborate with Nvidia as part of an open ecosystem that accelerates innovation while providing access to the latest hardware innovations for composable infrastructure,” said Chris Wright, chief technology officer of Red Hat. “We recognize the need to develop advanced solutions for network security and automation and are excited to support BlueField DPUs and the Nvidia Morpheus AI framework via Red Hat Enterprise Linux and Red Hat OpenShift, the industry-leading containers and Kubernetes-powered hybrid cloud platform.”
  • “Our mutual customers are racing to harness the power of AI for enterprise applications,” said Lee Caswell, vice president, marketing, Cloud Platform Business Unit at VMware. “The vision of enterprise infrastructure powered by VMware Cloud Foundation and certified with the newly announced Nvidia BlueField-3 DPU shows customers a path to improved application performance, a consistent operating model across virtualized and bare-metal environments, and a new model for delivering zero-trust security without compromising performance.”

Nvidia’s security framework, Morpheus, and its DGX management framework, Base Command, both work with BlueField. Talking about Morpheus, Boitano said, “This datacenter security framework can perform real-time inspection of all packets flowing through the datacenter. Morpheus is built on Nvidia AI. It uses Nvidia BlueField-2, it uses DOCA telemetry and it runs on Nvidia-certified servers. Because BlueField is effectively a server running at the edge of every server in your datacenter, it acts as a sensor to monitor all the traffic between all the containers and VMs in your datacenter.”
