Amid the stream of news from GTC22 today was Nvidia’s launch of a new Ethernet networking platform – Spectrum-4 – built around a new 51.2-terabit Ethernet switch powered by a 100-billion-transistor ASIC, which Nvidia says “is the largest switch ASIC that’s ever been done.” The Spectrum-4 (Ethernet) platform joins Quantum-2 (InfiniBand) as one of Nvidia’s two main networking platforms. Nvidia also reported growing traction for BlueField, its DPU offering.
During his GTC22 keynote, Nvidia CEO Jensen Huang said, “Today we’re introducing the Spectrum-4 switch at 51.2 terabits per second. The 100-billion transistor ASIC in Spectrum-4 is the most advanced switch ever built. Spectrum-4 introduces fair bandwidth distribution across all ports, adaptive routing and congestion control for the highest overall datacenter throughput. With CX-7 (ConnectX-7) and BlueField-3 adapters, and DOCA datacenter infrastructure software, this is the world’s first 400 gigabits-per-second, end-to-end networking platform. And Spectrum-4 can achieve timing precision to a few nanoseconds versus the many milliseconds of jitter in a typical datacenter. That is a five to six orders of magnitude improvement.”
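Huang’s “five to six orders of magnitude” figure is easy to sanity-check. A quick calculation, using illustrative values for “a few nanoseconds” and “many milliseconds” (the keynote gave no exact numbers), bears out the claim:

```python
import math

# Illustrative values only: the keynote quoted "a few nanoseconds" of
# timing precision versus "many milliseconds" of jitter, not exact figures.
precision_s = 5e-9   # a few nanoseconds
jitter_s = 5e-3      # many milliseconds

orders = math.log10(jitter_s / precision_s)
print(orders)  # 6.0 -- i.e. roughly five to six orders of magnitude
```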
Introduction of the Spectrum Ethernet platform emphasizes Nvidia’s growing push into the enterprise as well as Nvidia’s steady post-Mellanox-acquisition migration away from the Mellanox name in branding its interconnect products. In a pre-briefing yesterday, Nvidia VP for networking Kevin Deierling cited the data management and networking needs of several Nvidia AI platforms such as Riva (natural language processing), Merlin (recommender), and Omniverse (digital twins) as driving the launch of the new Ethernet-based platform.
Traditional workloads, he said, are characterized by large numbers of users and compute processes but don’t need to move as much data: “There’s lots of connections but exchanging small amounts of data. We call those mouse flows and there’s lots of mouse flows. Traditional network load balancing mechanisms like ECMP (equal-cost multi-path routing) work just fine when you have thousands and thousands of small mouse flows.”
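Deierling’s point about ECMP can be sketched in a few lines: each flow’s five-tuple is hashed onto one of several equal-cost paths, so thousands of small flows spread out statistically, while any one flow stays pinned to a single path. A minimal illustration (the hash function and tuple format here are arbitrary choices for the sketch, not what real switch silicon uses):

```python
import hashlib
from collections import Counter

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Hash a flow's five-tuple onto one of num_paths equal-cost paths.

    All packets of one flow take the same path, so many small "mouse"
    flows balance out statistically, but a large "elephant" flow is
    pinned to a single path no matter how much data it carries.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_paths

# 8,000 mouse flows land roughly evenly across 8 equal-cost paths...
counts = Counter(
    ecmp_path("10.0.0.1", "10.0.1.1", 10_000 + i, 443, "tcp", 8)
    for i in range(8_000)
)

# ...while any given flow is routed consistently (same tuple, same path).
assert ecmp_path("10.0.0.1", "10.0.1.1", 50_000, 443, "tcp", 8) == \
       ecmp_path("10.0.0.1", "10.0.1.1", 50_000, 443, "tcp", 8)
```

The weakness Deierling alludes to follows directly: the hash is blind to flow size, which is why a few elephant flows can overload one path while others sit idle.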
Newer, accelerated computing and AI workloads are changing those requirements. Simulating a factory floor, for instance, can require exchanging a huge database between nodes, said Deierling.
“These are called elephant flows and they can collide and cause congestion. Nvidia is using adaptive congestion-based routing to locate and identify elephant flows and adjust accordingly. The good thing here is we’re using industry standard technologies,” he said. “We built it for RoCE (remote direct memory access over converged Ethernet). This allows us to share data very quickly between GPUs and storage. We use technologies like GPUDirect Storage, so that we can go grab data directly from the storage nodes, and bypass the CPU and send the data directly to the GPUs. And we can even share data between GPUs and use the network hardware to move the data.”
The core of the platform, he said, is the new Spectrum-4 Ethernet switch, which is expected to be available in Q3. Its specs are impressive. Deierling noted, “Spectrum-4 delivers 12.8 terabits of MACsec crypto. This is important for the zero trust computing that you’re hearing about and is the highest performance crypto in any switch. Spectrum-4 can process almost 38 billion packets per second, again the highest performance packet switching available with 400 Gig ports. Spectrum-4 delivers four times the throughput of our previous switch and it does that both by doubling the bandwidth that we can connect per lane and then doubling the number of lanes.”
Deierling was asked if the Spectrum-4 switch uses 112G SerDes (Serializer/Deserializer) to achieve 800 gig speeds. 112G SerDes technology is powerful but has also proven tricky to implement.
“The answer is yes,” said Deierling. “Spectrum-4 uses 100 gig [lanes] to achieve 800 gig ports. So, it stripes eight lanes by 100 gig to achieve 800 gig, or four lanes to achieve 400 gig. This is the critical technology that people are saying, ‘hey, if it works we’re going to move very quickly to 400 gig and 800 gig.’ And by work, it needs to be reliable, cost effective, and power efficient. We are very confident because today, with the ConnectX-7, we are already shipping 100 gig [lanes], so the 400 gig, 4 by 100 gig, ConnectX-7 is using the same proven 112G SerDes technology that we’re incorporating into Spectrum-4.”
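The striping arithmetic Deierling describes is simply lanes times per-lane rate; a one-liner makes the 400 and 800 gig configurations explicit (the 112G SerDes carries an effective 100 Gb/s of data per lane after encoding overhead, which is the figure he uses):

```python
def port_speed_gbps(num_lanes, lane_gbps=100):
    """Port speed from SerDes lane striping: lanes x effective per-lane rate.

    lane_gbps=100 reflects the roughly 100 Gb/s of data a 112G SerDes
    lane delivers after encoding overhead.
    """
    return num_lanes * lane_gbps

print(port_speed_gbps(4))  # 400 -- a ConnectX-7-class 400 gig port
print(port_speed_gbps(8))  # 800 -- a Spectrum-4 800 gig port
```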
InfiniBand was noticeably absent from the GTC pre-briefing and keynote as all attention seemed focused on enterprise technology and use cases. In the media/analyst pre-briefing, Deierling did a nice job distinguishing Nvidia’s InfiniBand and Ethernet product lines. The Quantum-2 platform, launched at SC21 last year, is the InfiniBand line. (See HPCwire coverage of the Quantum-2 launch.)
“I’ll start with Quantum-2. [It] fits into the Nvidia networking space for our HPC and AI scale-out computing. InfiniBand is the technology that really has the highest performance, lowest latency and offers things like in-network computing, so that we can do data reductions in the network itself. So, for AI and HPC workloads, Quantum-2 is the platform that we use. The Ethernet platform is more for enterprise use cases where people have a familiar environment that they want to continue to use with Ethernet. Obviously, we’re supporting, you know, 51 terabits-per-second bandwidth. So [it’s] no slouch, but it doesn’t have all of the capabilities that InfiniBand has, such as the in-network computing,” said Deierling.
Asked to compare InfiniBand with RoCE (remote direct memory access over converged Ethernet), he said, “They both offer 400 gig connectivity today. We can run RDMA, which is effectively zero overhead transfer of data. In either case, with InfiniBand or RoCE technology, we can do GPUDirect Storage. We have storage partners so that you can go fetch the data and move it directly into the GPU memory without having the CPU involved and having to transfer it across the PCIe bus and the memory bus and CPU. So they’re very, very similar. The primary difference, again, is that capability to do the in-network computing, which is what we call our SHARP technology [that] InfiniBand has and RoCE doesn’t have.”
It’s clear Nvidia is recognizing the size of the Ethernet market. Deierling said, “InfiniBand has a great place in AI and HPC markets. It’s a nice large growing market. But obviously, Ethernet is an even broader market with sensors, enterprise use cases, databases, and in some edge use cases. If you look at 5G and aerial, often in a digital twin environment, there’ll be a ton of cameras and other sensors and robots that you want to connect to either through wired Ethernet connections or over 5G. It’s more likely that you’ll see Ethernet being used [there] because that’s where you can connect all of the different sensors.”
There wasn’t a lot of BlueField-3 news. Announced at GTC21 last spring, BlueField-3 silicon is expected sometime this year, and that seems on track. Deierling said BlueField-3 would show up in Nvidia converged cards later this year; currently those cards use BlueField-2 DPUs. Nvidia did announce the launch of DOCA 1.3 – the SDK for its DPUs. Deierling said DOCA 1.3 has been updated to take advantage of the entire Spectrum-4 Ethernet platform.
Deierling cited new partnerships seeking to leverage Nvidia DPUs, and it will bear watching how widely Nvidia DPUs and similar infrastructure-supporting chips/systems are adopted.
“We are announcing a Project Monterey beta on BlueField with LaunchPad. This is the VMware Project Monterey, where we accelerate networking and security on the BlueField and actually run the NSX firewall on our BlueField DPU. We’ll [also announce that] OpenShift, with Red Hat, is available on the BlueField DPU. You’ll see other announcements from partners like Pluribus, which is a networking company that’s unifying switch and host-based networking on BlueField. VAST is another company; it is announcing a storage platform based on BlueField.”
There does seem to be growing traction around the use of infrastructure-supporting processors to offload various housekeeping processes that currently run on host CPUs.