Hot Chips: Here Come the DPUs and IPUs from Arm, Nvidia and Intel

By John Russell

August 25, 2021

The emergence of data processing units (DPUs) and infrastructure processing units (IPUs) as potentially important pieces in cloud and datacenter architectures was on display at Hot Chips this week. Arm, Nvidia, and Intel all gave talks on the emerging class of devices, and while much of the technical detail about their latest offerings wasn’t new, it was notable that DPUs and IPUs warranted a session at all. Also interesting is that Arm technology is being used in all of the new offerings – Octeon 10 (Marvell/Arm), the BlueField line (Nvidia), and Mount Evans (Intel’s new chip).

Once discounted as little more than SmartNICs on steroids, DPU/IPUs are gaining advocates. There’s a growing consensus that DPUs and IPUs – or whatever we end up calling them – tackle a growing problem and offer significant advantages. Broadly, the idea is to offload many housekeeping chores, such as networking control, storage management, and security, that now run on host CPUs. These tasks have steadily consumed more and more CPU resources within datacenters and the cloud.

“Research from Google and Facebook has shown [infrastructure workloads] consume from 22 percent to 80 percent of CPU cycles across a variety of micro service workloads,” said Brad Burres of Intel in his talk. “You can see from that data how offloading infrastructure applications provides meaningful benefit to the cloud operators in several key areas.”
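The economics behind those percentages are easy to work through. As a rough sketch (the fractions below are illustrative, drawn from the range Burres cited, not a claim about any specific deployment): if a fraction f of host CPU cycles goes to infrastructure tasks, moving them to a DPU/IPU leaves the whole CPU for tenant workloads, a relative gain of 1/(1−f).

```python
# Back-of-envelope: how much host capacity does offloading infrastructure
# work recover? If a fraction f of CPU cycles goes to infrastructure tasks,
# offloading them to a DPU/IPU frees those cycles for tenant workloads,
# a relative capacity gain of 1 / (1 - f).
def capacity_gain(infra_fraction: float) -> float:
    """Relative increase in CPU capacity available to tenants after offload."""
    return 1.0 / (1.0 - infra_fraction)

for f in (0.22, 0.30, 0.50, 0.80):
    print(f"infra share {f:.0%} -> {capacity_gain(f):.2f}x tenant capacity")
```

At the top of the range Burres cited (80 percent), offloading would in principle quintuple the cycles a cloud operator can rent out, which is why the hyperscalers moved first.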

Intel is the most recent big player to jump onto the DPU/IPU wagon. Burres, an Intel fellow, leads IPU architecture work in Intel’s newly created networking and edge group. Mount Evans, Intel’s first ASIC IPU, was just announced a week ago at Intel’s Architecture Day. Notably, Mount Evans shed x86 cores in favor of Arm Neoverse N1 cores, not least for power consumption advantages.

Planting a flag for Intel, Burres said, “IPU stands for infrastructure processing unit and is a term Intel recently introduced. We used to call these SmartNICs. But as more and more infrastructure applications beyond networking move into the platform, especially with their associated control planes, we wanted to use a more accurate, informative name. [It] is sometimes called the DPU, but that name confuses a lot of our customers, because everything does data processing. More importantly, an IPU represents a revolution in datacenter architecture.”

Let the competitive jockeying (and device naming) begin.

One of the nice things about Hot Chips is its tech-talk bias, and the slides often provide a useful glimpse into approaches and capabilities. To varying degrees, that was the case here. Here’s a quick snapshot of the three presentations:

  • Arm Neoverse N2 and Octeon. Andrea Pellegrini, distinguished engineer at Arm, reviewed the Arm Neoverse platform roadmap and dug into Neoverse N2 advances relative to N1. Much of the material presented was familiar (see HPCwire coverage). Among N2 advances are SVE2 support, CMN-700 (coherent mesh network), an IPC lift (40 percent), and significant branch prediction improvements. Directly relevant to the session was his brief description of Marvell’s forthcoming Octeon 10 DPU family, introduced in June, which will leverage the Neoverse N2 architecture. It will be fabbed using TSMC’s 5nm process.
  • Nvidia’s BlueField Lineup. Principal architect Idan Burstein presented Nvidia’s vision, although many of the technical details were widely familiar (see HPCwire coverage). Nvidia – via its Mellanox acquisition – has pursued the evolution of smart networking into full-function DPUs the longest. BlueField-2 is now shipping, BlueField-3 is expected next year, and BlueField-4 plans are underway. Nvidia’s vision for the DPU is grand and encompasses not only commonly-cited chores (networking, storage, security) but also expansion to include other application pieces. BlueField-2 is a 7-billion-transistor SoC and BlueField-3 will have 22 billion transistors, including Arm cores and GPU cores.
  • Intel’s Mount Evans Climb. Burres broadly walked through Mount Evans’ functionality without digging deeply into implementation. Mount Evans is Intel’s first ASIC IPU; the company has previously fielded FPGA-based IPU SoCs and, broadly speaking, has leveraged assets from its 2019 acquisition of Barefoot Networks. Its early target market seems to be cloud providers, where there is a huge x86 installed base, and Burres said Mount Evans was designed in collaboration with a major cloud provider, though he didn’t say which.

Presented here are a few points from each of the presentations along with a few slides that provide a glimpse into design choices and implementations.

Arm’s Ambition for Neoverse Includes DPUs

Arm, of course, is a relative newcomer to the large datacenter market, and Pellegrini noted N2, the second generation of Neoverse, is intended to serve a wider infrastructure market. “Partners can use this platform to build optimized local systems with constrained power envelopes, like the ones needed for 5G deployments,” he said. “On the other hand, they can build high-core-count, high-frequency, high-memory-bandwidth systems for the datacenter.” N2’s efficiency profile allows users to pack many cores per socket, making it a great fit for power-efficient, specialized designs, he said.

Pellegrini singled out Marvell’s Octeon 10 DPU as an example of a special-purpose design. “This DPU relies on the Neoverse N2 platform for its general-purpose compute system and they have up to 36 N2 cores. Marvell augmented the design with several specialized IPs to enable high-speed packet processing and network connectivity. When we look at instantiation of this design and its capabilities, we see how a partner like Marvell can extract the most out of the Neoverse N2 platform to take advantage of the bleeding edge technology, such as DDR5, to deliver groundbreaking networking speeds. We’re talking about up to 400 Gigabit Ethernet here.”

He noted that Neoverse N2 introduces memory partitioning and monitoring (MPAM), which is a technology that can “help users monitor and partition shared system resources, such that they can ensure more reliable, consistent performance even in heavily contended multi-tenant systems.” He added that Neoverse N2 significantly advances support for virtualization, “introducing better hardware capabilities for handling nested virtual machines, reducing the overhead and cost of commonly used virtual machine operations.”
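The partitioning idea behind MPAM can be sketched with a toy model. MPAM itself is configured through Arm system registers; the illustration below, with an arbitrary 16-way cache and hypothetical tenant names, just shows the concept of way-based partitioning (similar in spirit to Intel's CAT): each tenant is confined to a bitmask of cache ways, so a noisy neighbor cannot evict another tenant's lines.

```python
# Toy model of way-based cache partitioning in the spirit of MPAM: each
# tenant receives a disjoint bitmask of cache ways it may allocate into.
# The 16-way geometry and tenant names are illustrative assumptions, not
# Neoverse N2 or Mount Evans specifics.
NUM_WAYS = 16

def make_partitions(shares: dict) -> dict:
    """Assign contiguous, non-overlapping way masks from requested way counts."""
    if sum(shares.values()) > NUM_WAYS:
        raise ValueError("requested ways exceed cache associativity")
    masks, next_way = {}, 0
    for tenant, ways in shares.items():
        masks[tenant] = ((1 << ways) - 1) << next_way  # contiguous run of set bits
        next_way += ways
    return masks

parts = make_partitions({"tenant_a": 8, "tenant_b": 4, "infra": 4})
for name, mask in parts.items():
    print(f"{name}: ways {mask:016b}")
```

Because the masks are disjoint, each tenant's working set occupies only its own ways; the monitoring half of MPAM (not modeled here) then lets operators attribute cache and bandwidth usage back to each partition.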

Does Nvidia’s BlueField Have a Head Start?

Echoing others, Burstein said soaring network bandwidth demands (think AI requirements) and the fact that 25-to-50 percent of CPU cycles are consumed for “infrastructure” needs is forcing the move to DPUs.

“There is a need to isolate the datacenter workloads and to accelerate them in order to support these higher bandwidth demands. The naive approach to solving this problem is moving those infrastructure-processing elements that were running on the application processor to processors embedded in the networking device (think SmartNICs). This yields no gain in performance or efficiency. It solves the isolation problem, but it is not scalable to higher bandwidth requirements, and it will require significant system modifications as CPUs are consuming more power,” said Burstein.

The BlueField-3 specs are impressive: 22 billion transistors, the first 400 gigabits-per-second networking chip, 16 Arm CPUs to run the entire virtualization software stack, for instance, running VMware ESX. “BlueField-3 takes security to a whole new level, fully offloading [and] accelerating IPsec and TLS cryptography, secret key management and regular expression processing. We’re on a pace to introduce a new BlueField generation every 18 months. BlueField-3 will do 400 gigabits per second, and be 10x the processing capability of BlueField-2, and BlueField-4 will do 800 gigabits per second, and add Nvidia’s AI computing technologies to get another 10x boost,” reported Nvidia at BlueField-3’s introduction in April.

Burstein said, “BlueField is a complicated system. It’s about 400 gigabits of Ethernet and InfiniBand, pipelined with crypto and security acceleration. It has 36 lanes of PCI. It supports two times 370 packets per second, and it supports two times 40 million packets per second, all at the scale of millions of flows.”
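Packet-rate claims like these can be sanity-checked with standard Ethernet line-rate arithmetic. The sketch below uses the usual worst case (minimum 64-byte frames plus the fixed 20 bytes of on-wire overhead per packet); these are generic Ethernet figures, not BlueField specifics.

```python
# Theoretical max packet rate on an Ethernet link. Every frame carries a
# fixed 20 bytes of on-wire overhead (preamble, start-frame delimiter,
# inter-frame gap) in addition to the frame itself.
WIRE_OVERHEAD_BYTES = 20

def max_packet_rate(link_bps: float, frame_bytes: int) -> float:
    """Packets per second at line rate for a given frame size."""
    bits_per_packet = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return link_bps / bits_per_packet

# 400 GbE with minimum-size 64-byte frames:
print(f"{max_packet_rate(400e9, 64) / 1e6:.0f} Mpps")  # ~595 Mpps
```

That ~595 million packets per second ceiling for 400 GbE is why small-packet throughput, rather than raw bandwidth, is the hard part of DPU packet processing; at typical 512-byte frames the required rate drops to under 100 Mpps.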

Broadly, the company has said BlueField is critical to its cloud-native supercomputing architecture strategy. The company has reported that Dell Technologies, Inspur, Lenovo and Supermicro are integrating BlueField DPUs into their systems and that several cloud providers, including Baidu, JD.com and UCloud, are integrating or plan to integrate BlueField DPUs. The proof will be in deployments and subsequent traction.

DOCA (Data Center Infrastructure-on-a-Chip Architecture) is Nvidia’s programming framework for BlueField: “DOCA software consists of an SDK and a runtime environment. The DOCA SDK provides industry-standard open APIs and frameworks, including Data Plane Development Kit (DPDK) and P4 for networking and security and the Storage Performance Development Kit (SPDK) for storage. The frameworks simplify application offload with integrated NVIDIA acceleration packages. The DOCA-based services are exposed in the compute nodes as industry-standard I/O interfaces, enabling infrastructure virtualization and isolation.”

Climbing Mount Evans – Intel’s IPU Aims for the Clouds

Burres broadly talked about Mount Evans’ technology and zeroed in on the case for adoption by cloud providers.

“First, the IPU allows for a separation of functions between the service provider and the tenants. This provides for greater security and isolation for all parties. It also enables important use cases like bare metal hosting to run on the same exact hardware platforms, using the same services as virtual machines. It lets tenants have full control over their CPU. They can do things like run their own hypervisor. And in that case, the cloud operators still retain full control of the infrastructure functions such as networking, storage and security because those live out in the IPU,” said Burres.

“Second, the IPU provides [an] infrastructure-optimized execution environment. This includes a significant investment in hardware accelerators in the IPU, which enable the IPU to process the infrastructure tasks very efficiently. That allows better tuning of software and cores for these types of workloads. Overall, this optimizes the performance, and the cloud operator can now rent out 100 percent of the CPU to its guests, which also maximizes revenues. Lastly, the IPU can help enable new service models for storage by abstracting the storage initiator from the tenant,” he contended.

Burres argued Mount Evans’ programmable packet-processing capability was perhaps its most impressive feature. It supports “use cases like vSwitch offload, firewalls, telemetry functions, while supporting up to 200 million packets per second performance on real-world implementations. This is enhanced with a fully featured transmit traffic shaper.” Mount Evans provides inline IPsec to secure every packet being sent across the network and supports up to 16 million secure connections.

“On the right-hand side (block diagram below), our compute complex is built on the Arm Neoverse architecture using the N1 cores, with up to 16 cores running up to 3 GHz. This is backed by a large 32-megabyte system-level cache and three dual-mode LPDDR4 controllers that can deliver a theoretical 102 gigabytes per second of memory bandwidth. Together these give us the bandwidth and horsepower to take on larger production workloads. The compute complex is tightly coupled with the network subsystem, allowing the NSS accelerators to use the system-level cache as their own last-level cache. The mesh provides high-bandwidth, low-latency connections between the two sides,” he said.
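The quoted 102 GB/s figure is consistent with three LPDDR4x-class channels. The arithmetic below assumes 4267 MT/s transfers and an effective 64-bit (8-byte) width per controller; Intel did not disclose the exact configuration, so treat these parameters as illustrative assumptions rather than Mount Evans specs.

```python
# Sanity check on the quoted ~102 GB/s theoretical memory bandwidth.
# Assumed (not disclosed) parameters: LPDDR4x-class 4267 MT/s transfers
# and an effective 64-bit width per controller.
CONTROLLERS = 3
BYTES_PER_TRANSFER = 8      # 64-bit effective width per controller
TRANSFER_RATE = 4267e6      # transfers per second (4267 MT/s)

bandwidth_gbs = CONTROLLERS * BYTES_PER_TRANSFER * TRANSFER_RATE / 1e9
print(f"theoretical peak: {bandwidth_gbs:.1f} GB/s")  # ~102.4 GB/s
```

Under those assumptions the peak works out to 102.4 GB/s, matching Burres' figure; real sustained bandwidth would of course land below the theoretical number.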

Burres also emphasized that the new Intel IPU was designed from the ground up; he said he was responding to comments he’d seen that Intel was simply gluing existing IP together. Burres said little about the choice of Arm over IA processor cores other than that it was done as part of Intel’s routine evaluation. Likely, Arm’s reduced power requirement was a factor.

Whether IPUs and DPUs win a place in the growing library of processor acronyms is likely to emerge over the next couple of years as the devices actually come to market and their functionality (performance and cost) gets tested.

Analysts have generally been friendly to the idea. Karl Freund, principal at Cambrian AI Research, told HPCwire, “This market for Smart NICS+ is just beginning to take shape. The early adopters are the hyperscalers typically using an ASIC (AWS) or FPGA (Microsoft). Nvidia however foresees needing something much more powerful with a GPU along with Arm cores on the IPU. In three years, that could be a game changer for very large composable infrastructure.”
