Remote Direct Memory Access Networking for HPC: Comparative Review of 10GbE iWARP and InfiniBand

By Saqib Jang

February 24, 2010

The Rise of HPC Cluster Computing

While the HPC market is expected to experience a revenue dip in 2009, growth is expected to resume in 2010, and HPC should remain a bright spot in the overall IT market. The most important feature of this growth trend is that it will continue to be fueled primarily by purchases of Linux cluster systems priced under $250,000. Cluster computing systems, built from separate compute nodes using standard component technologies, have caused disruptive changes in the HPC market.

As the component technologies of cluster systems have improved and buyers have become more confident running cluster systems, they have inevitably redirected capital once earmarked for large custom systems to larger cluster systems. These much larger clusters, often with thousands of processors, present opportunities for huge performance gains through improved parallel performance, resulting in an overall higher order-of-magnitude return on investment (ROI). While algorithm and application tuning is often required to obtain these benefits, the cost, bandwidth, message rate, and latency of the cluster interconnect are just as often the deciding factors.

One consequence of the range of requirements for cluster networking is that the leading interconnects in HPC are Gigabit Ethernet (based on the ubiquitous Ethernet networking standard) and InfiniBand (delivering upwards of 10X the performance of GbE). Both show significant deployment in HPC: the latest TOP500 list of HPC systems has 259 Gigabit Ethernet-based deployments compared to 181 InfiniBand-connected systems. Deployment of 10 Gigabit Ethernet (10GbE) cluster networking is emerging at this point, and the price of the interconnect has been falling as shipment volumes grow. The combination of 10X performance over GbE and the ease of deployment that comes with its Ethernet heritage positions 10GbE for a bright future as a cluster interconnect.

As cluster systems have grown, so has the total amount of data in play in the average parallel HPC application. This has significant implications for HPC storage systems, which need to offer the best possible bandwidth and latency characteristics. HPC storage systems have themselves become increasingly clustered and parallel, as well as network-attached and accessible from all nodes on the cluster through the interconnect. In this context, demand for interconnect solutions that support a converged storage and cluster interconnect fabric is expected to grow significantly.

10GbE iWARP Overview and Value Proposition

For years, Ethernet has been the de facto standard LAN for connecting users to each other and to network resources. Ethernet sales volumes make it unquestionably the most cost-effective datacenter fabric to deploy and maintain. The latest generation of Ethernet, 10 Gigabit Ethernet (10GbE), offers a 10 Gbps data rate, which simplifies growth for existing data networking applications while removing the bandwidth barriers to deployment for highest-performance HPC clustering and storage networking.

  • 10GbE end-to-end performance now compares very favorably with that of more specialized datacenter interconnects, which eliminates performance as a drawback to the adoption of an Ethernet unified data center fabric.
     
  • Off-loading cluster and storage protocol processing from the central CPU to an intelligent 10GbE NIC can also improve the power efficiency of end stations, because off-load ASIC processors are generally considerably more power-efficient than general-purpose CPUs in executing protocol workloads.
     
  • The value of implementing TCP/IP protocol processing in silicon at 10 Gbps data rates is clear: such approaches have the potential to reduce the relative bandwidth and latency overhead of TCP/IP protocol processing effectively to zero.

Achieving 10GbE performance for latency-sensitive HPC communications has required solving Ethernet’s long-standing overhead problems; problems that, in slower Ethernet generations, were adequately overcome by steadily increasing CPU clock speeds.

Enter 10GbE iWARP

The iWARP extensions to TCP/IP focus on eliminating the three major sources of networking overhead — transport (TCP/IP) processing, intermediate buffer copies, and application context switches — that collectively account for nearly 100 percent of CPU overhead related to networking. Specifically, iWARP implements a number of mechanisms to provide a low-latency means of passing RDMA over Ethernet.

The iWARP extensions use advanced techniques to reduce CPU overhead, memory bandwidth utilization, and latency through a combination of offloading TCP/IP processing from the CPU, eliminating unnecessary buffering, and dramatically reducing expensive operating system calls and context switches. Data management and network protocol processing move to a 10 Gigabit Ethernet adapter that accelerates RDMA over TCP/IP, known as an R-NIC.
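To make the copy-elimination argument concrete, the following back-of-the-envelope model (not from the original white paper; the copy counts and the read-plus-write cost per pass are simplifying assumptions) estimates the memory bandwidth a sustained 10 Gbps receive stream consumes with and without an intermediate kernel-to-user copy.

    # Illustrative model (not from the article): memory traffic generated by a
    # sustained 10 Gbps receive stream with and without an intermediate copy.
    # Assumption: each pass over the payload costs one read plus one write.

    LINK_GBPS = 10.0  # sustained network throughput

    def memory_traffic_gbps(passes_over_data: int) -> float:
        # Each pass reads and writes the payload once, i.e. ~2x the stream rate.
        return LINK_GBPS * 2 * passes_over_data

    # Conventional TCP/IP receive path (assumed): NIC DMA into a kernel buffer,
    # then a kernel-to-user copy, i.e. two passes over the data.
    conventional = memory_traffic_gbps(2)

    # RDMA (iWARP) receive path: the R-NIC places data directly into the
    # application buffer, i.e. a single pass.
    rdma = memory_traffic_gbps(1)

    print(f"Conventional path: ~{conventional:.0f} Gbps of memory bandwidth")
    print(f"Direct placement:  ~{rdma:.0f} Gbps of memory bandwidth")

Under these assumptions, direct data placement roughly halves the memory traffic per received byte, which is consistent with the efficiency argument above.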

R-NICs can reduce CPU utilization for 10 Gbps transfers to less than 10 percent and can reduce the host component of end-to-end latency to as little as 5-10 microseconds. High port-count 10GbE switches are available that deliver HPC-class latency in the hundreds of nanoseconds.
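As a rough illustration of how those host and switch figures combine, here is a hypothetical one-way latency budget; the specific values (7.5 microseconds per host endpoint, 300 nanoseconds per switch hop) are assumptions chosen from the ranges quoted above, not measured numbers.

    # Hypothetical one-way latency budget for a small message on a 10GbE iWARP
    # fabric. The per-endpoint and per-hop values below are assumptions picked
    # from the ranges quoted above (host 5-10 us, switch in the 100s of ns).

    HOST_US = 7.5      # assumed host/R-NIC contribution per endpoint, microseconds
    SWITCH_US = 0.3    # assumed latency per switch hop (300 ns)

    def one_way_latency_us(switch_hops: int) -> float:
        # Sender host + receiver host + one switch traversal per hop.
        return 2 * HOST_US + switch_hops * SWITCH_US

    for hops in (1, 2, 3):
        print(f"{hops} switch hop(s): ~{one_way_latency_us(hops):.1f} us one-way")

In a multi-tier fabric the host contribution dominates, which is why the R-NIC offload matters more than switch hop count for small-message latency.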

InfiniBand Overview and Value Proposition

InfiniBand is an I/O architecture designed to increase the communication speed between CPUs, devices within servers and subsystems located throughout a network. The original goal behind the release of the InfiniBand specification by the InfiniBand Trade Association was to address the mismatch between the speed of CPUs and the PCI I/O bus, as well as other deficiencies of the PCI bus, including bus sharing, scalability, and fault tolerance.

InfiniBand is a point-to-point, switched I/O fabric architecture. The devices at each end of a link have full access to the communication path. To traverse the network beyond a single link, switches come into play: by adding switches, multiple endpoints can be interconnected to create a fabric. As more switches are added to a network, the aggregate bandwidth of the fabric increases, and by providing multiple paths between devices, switches also offer a greater level of redundancy.

A single (1X) InfiniBand link supports 2.5 Gbps in each direction per connection. InfiniBand supports double (DDR) and quad data rate (QDR) speeds, for 5 Gbps or 10 Gbps respectively, at the same data-clock rate. InfiniBand links use 8B/10B encoding, in which every 10 bits sent carry 8 bits of data, which means the net data transmission rate is four-fifths the raw rate. Thus single, double, and quad data rate links carry 2, 4, or 8 Gbps of data respectively.

A quad data rate 12X link therefore carries 120 Gbps raw, or 96 Gbps of useful data. At present, most systems use 4X connections at 10 Gbps (SDR), 20 Gbps (DDR), or 40 Gbps (QDR). However, InfiniBand QDR performance is bounded by the roughly 26 Gbps throughput limitation of the PCIe Gen2 host interface.
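The rate arithmetic above can be captured in a few lines; the per-lane raw rates and the 8B/10B factor come directly from the text, and the ~26 Gbps value is the PCIe Gen2 host-interface limit cited above for QDR.

    # Worked example of the InfiniBand rate arithmetic described above. The
    # per-lane raw rates and the 8B/10B factor come from the text; the ~26 Gbps
    # value is the PCIe Gen2 host-interface limit cited for QDR.

    LANE_RAW_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
    ENCODING_EFFICIENCY = 8 / 10          # 8B/10B: 8 data bits per 10 line bits
    PCIE_GEN2_EFFECTIVE_GBPS = 26.0       # host-interface ceiling quoted above

    for width in (1, 4, 12):
        for name, lane_raw in LANE_RAW_GBPS.items():
            raw = lane_raw * width
            data = raw * ENCODING_EFFICIENCY
            note = "  (exceeds the ~26 Gbps PCIe Gen2 limit)" if data > PCIE_GEN2_EFFECTIVE_GBPS else ""
            print(f"{width:>2}X {name}: {raw:5.1f} Gbps raw, {data:5.1f} Gbps data{note}")

The 4X QDR case (40 Gbps raw, 32 Gbps data) is the one flagged as exceeding the PCIe Gen2 ceiling, which is the bound noted above.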

Latency performance of InfiniBand SDR and DDR switch chips is around 200 nanoseconds. InfiniBand Host Channel Adapters (HCAs) are rated at 1-3 microseconds (although effective application-level performance is a different matter).

High-end clustering architectures have provided the main opportunity for InfiniBand deployment. Using the InfiniBand fabric versus Gigabit Ethernet as the cluster inter-process communications (IPC) interconnect typically boosts cluster performance and scalability while improving application response times. InfiniBand also provides exceptional scalability and failover in comparison to Gigabit Ethernet. In short, compared to Gigabit Ethernet, InfiniBand stands out in providing the mechanisms necessary to support the demanding requirements of high-end clustering.

iWARP and InfiniBand Comparative Review

Regarding compatibility with existing datacenter infrastructure, because it is layered on top of TCP, iWARP is fully compatible with existing Ethernet switching equipment, which can carry iWARP traffic out of the box. In comparison, deploying InfiniBand requires two separate network infrastructures to be installed and managed, as well as specialized InfiniBand-to-Ethernet gateways to bridge between them.

10GbE infrastructure is available from a range of incumbent and startup vendors. Intel, Broadcom, and Chelsio provide 10GbE iWARP adapters, while 10GbE switches are available from a broad range of vendors including Cisco, HP, IBM, BLADE Network Technologies, Extreme, Force10, Arista, and Voltaire. InfiniBand host channel adapter and switch silicon is only available from two vendors (Mellanox and QLogic), who in turn have signed up a number of OEMs to carry adapter and switching systems.

Both interconnects offer equivalent operating system support. The OpenFabrics software stack, which is fully integrated into the Linux distributions from Novell and Red Hat, fully supports both 10GbE iWARP and InfiniBand.

10GbE iWARP leverages its Ethernet heritage to also support acceleration of emerging Ethernet-based storage protocols, including file storage (NFS-RDMA), which is fully supported by the Linux OFED stack. The Linux OFED stack also provides out-of-the-box support for Lustre networking (LNET) over 10GbE iWARP. In addition, 10GbE iWARP adapters can provide concurrent, native support for standard Ethernet protocols such as NFS, CIFS, and iSCSI. In comparison, InfiniBand has seen minimal deployment for server-to-storage communications, whether for file or block storage.

Regarding pricing, major server vendors are starting to add a 10 Gigabit Ethernet chip to the motherboard, known as LAN-on-Motherboard (LOM). NIC prices will continue to drop as LOM technology lets NIC vendors reach the high volumes they need to keep costs down, which in turn will drive switch port prices down as well. InfiniBand, on the other hand, has reached a mature market position, and consequently reductions in the pricing of InfiniBand products will be relatively gradual.

Large-scale clusters built using 10GbE iWARP technology and high port-count 10GbE switches are gaining ground, and cluster scalability is no longer viewed as inhibiting 10GbE deployment. InfiniBand technology is an established interconnect for building large node-count clusters.

From a roadmap standpoint, the Ethernet market is moving forward aggressively to develop and implement 40 Gigabit and 100 Gigabit Ethernet standards. These standards are expected to be ratified during 2010, and initial implementations based on them should be shipping from a range of vendors in the blade server and Ethernet networking switch markets within the next two to three years.

Converged Enhanced Ethernet

The IEEE has been developing standards collectively referred to as “Data Center Bridging” (DCB) or “Converged Enhanced Ethernet” (CEE). The term refers to high-speed Ethernet (currently 10 Gbps, with a clear path to 40 Gbps and 100 Gbps) plus a number of new features. The main new features are:

  • Priority-Based Flow Control (802.1Qbb), sometimes called “per-priority pause”
  • Enhanced Transmission Selection (802.1Qaz)
  • Congestion Notification (802.1Qau)

The first two features allow an Ethernet link to be split into multiple “virtual links” that operate independently: bandwidth can be reserved for a given virtual link, and per-virtual-link flow control ensures that certain traffic classes do not overrun their buffers, thus avoiding dropped packets. The third feature, congestion notification, allows the network to tell senders to slow down, avoiding the congestion spreading that such flow control can otherwise cause.
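A minimal sketch of the “virtual link” idea follows, assuming a hypothetical three-class allocation on a single 10GbE port; the class names and percentage shares are illustrative examples, not part of any standard profile.

    # Minimal sketch of the "virtual link" idea: Enhanced Transmission Selection
    # reserves a minimum bandwidth share per traffic class on one 10GbE port.
    # The class names and percentage shares below are hypothetical examples.

    LINK_GBPS = 10.0

    allocation = {
        "LAN (best effort)": 0.40,
        "Storage (FCoE/iSCSI, no-drop)": 0.40,
        "Cluster IPC (RDMA)": 0.20,
    }

    for traffic_class, share in allocation.items():
        print(f"{traffic_class:<32} guaranteed ~{share * LINK_GBPS:.1f} Gbps minimum")

    # With priority-based flow control (802.1Qbb), the no-drop class can be
    # paused independently without stalling or dropping the other classes.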

CEE was developed primarily for use with Fibre Channel over Ethernet (FCoE). FC requires a very reliable network — it simply does not work if packets are dropped because of congestion — so CEE provides the ability to segregate FCoE traffic onto a “no drop” virtual link.

The roadmap initiatives in the InfiniBand space consist of QDR, EDR (2011), and RDMA over CEE. However, these roadmap initiatives suffer from the same limitations that have been a traditional challenge for InfiniBand, namely, limited vendor support.

RoCEE Overview and Value Proposition

Mellanox, the leader in the InfiniBand market, is behind the emerging RDMA over Converged Enhanced Ethernet (RoCEE) protocol proposal. RoCEE is designed to allow the deployment of RDMA semantics on Converged Enhanced Ethernet fabric by running the IB transport protocol using Ethernet frames.

Mellanox’s RoCEE proposal is motivated by the goal of creating a protocol analogous to FCoE for Ethernet-based cluster networking; in other words, taking the InfiniBand transport layer and packaging it into Ethernet frames, instead of using the iWARP protocol for Ethernet-based high-performance cluster networking. But there are a number of challenges associated with this proposal:

First, one of the major motivations behind the RoCEE proposal is that it is the fastest path forward for an Ethernet-based alternative to InfiniBand. However, this ignores the fact that iWARP adapters are already shipping from multiple vendors, including Intel, Chelsio, and Broadcom. In addition, iWARP will automatically leverage the performance benefits of CEE, since CEE support will be ubiquitous across 10GbE server adapter and LOM implementations, iWARP and non-iWARP alike.

Second, the idea that an InfiniBand over Ethernet (IBoE) specification will be quick or easy to develop flies in the face of the experience with FCoE. While FCoE sounded simple in concept, the standards work took at least three years. In comparison, IBoE is more complicated to specify, and fewer resources are available for it, so a realistic view is that a true standard is very far away.

Last, RoCEE proponents point to the performance overhead of iWARP's reliance on the TCP/IP protocol. However, this does not take into account the efficiency of silicon-based implementations of 10 Gbps TCP/IP. Moreover, iWARP is positioned to automatically take advantage of CEE as that technology gains ubiquity in 10GbE server LOMs and adapters.

In summary, RoCEE is unproven and its deployment faces significant hurdles including standardization and application and upper layer adoption. In addition, RoCEE is dependent on the deployment of 10GbE CEE infrastructure; currently only one vendor (Cisco) offers CEE switches, which are at relatively high price points.

About the Author

Saqib Jang is founder and principal at Margalla Communications, a Woodside, Calif.-based strategic and technical marketing consulting firm focused on storage and server networking.

This article is an excerpt from a Margalla Communications white paper entitled High-speed Remote Direct Memory Access (RDMA) Networking for HPC: Comparative Review of 10GbE iWARP and InfiniBand available at www.margallacomm.com.
