Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

May 21, 2009

An Ethernet Protocol for InfiniBand

by Michael Feldman

It’s inevitable that lossless Ethernet will work its way into the datacenter. With the upcoming converged enhanced Ethernet (CEE) standard, called Data Center Bridging (DCB) by the IEEE and Data Center Ethernet (DCE) by Cisco, networking gear and supporting software will soon be available to allow all applications to run on a unified wire.

The catch is that it will be based on Ethernet, so performance will initially be constrained to 10 gigabits/second throughput and multi-microsecond latencies. InfiniBand, of course, already offers much better performance, which is why it continues to expand its footprint in the HPC market. But since the technology behind lossless Ethernet is coming to resemble InfiniBand, vendors like Voltaire and Mellanox are using the convergence as an opportunity to enter the Ethernet arena. “We’re not naive enough to think the entire world is going to convert to InfiniBand,” says Mellanox marketing VP John Monson, who joined the company in March.

Voltaire has announced its intention to build 10 GigE datacenter switches, which the company plans to launch later this year. Meanwhile at Interop in Las Vegas, Mellanox demonstrated a number of Ethernet-centric technologies, including an RDMA over Ethernet (RDMAoE) capability on the company’s ConnectX EN adapters.

RDMAoE is not iWARP (Internet Wide Area RDMA Protocol), which is currently the only RDMA-based Ethernet standard that has a following with NIC vendors like Chelsio Communications and NetEffect (now part of Intel). Mellanox never jumped on the iWARP bandwagon, claiming that the technology’s TCP offload model makes the design too complex and expensive to attract widespread support, and that scaling iWARP to 40 gigabits per second (datacenter Ethernet’s next speed bump) would be problematic. More importantly, for a number of reasons Linux support for TCP offload never materialized.

“We believe that there is already an RDMA transport mechanism that’s been proven and used heavily in the industry,” declares Monson. “It’s called InfiniBand.” According to him, you might as well use similar functionality inside an Ethernet wrapper if your goal is 10 GigE with lossless communication. Mellanox is calling its prototype RDMAoE implementation Low Latency Ethernet (LLE), but for all intents and purposes it’s InfiniBand over Ethernet.

Using the same TCP-free model as Fibre Channel over Ethernet (FCoE), RDMAoE replaces iWARP’s TCP transport with a much smaller RDMA-specific transport layer. Jettisoning the unwieldy TCP stack eliminates the transport context switch and reduces packet processing overhead. Also, since IEEE’s Data Center Bridging will support congestion management, it makes less sense to implement a TCP transport layer for LANs and SANs in a post-DCB world. You give up the ability to support TCP on a converged fabric, but that wasn’t the main rationale behind lossless datacenter Ethernet anyway.

At Interop, Mellanox demonstrated port-to-port latencies as low as 3 microseconds for an RDMAoE implementation running on its 10 GigE ConnectX adapters. That’s not as good as the 1 microsecond latency that can be achieved with InfiniBand, but it’s far better than iWARP at 8-10 microseconds. Throughput is much improved as well, especially at smaller message sizes. RDMAoE can hit the 10 gigabit line rate with payloads as small as 512 bytes; with iWARP, that level of throughput requires message sizes of 64K bytes and above. Since there is no TCP offload to do, power efficiency is much improved as well.
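To get a feel for why dropping TCP helps at small message sizes, consider the per-frame header overhead of the two approaches. The sketch below is a back-of-the-envelope comparison, not vendor data: all header sizes are assumptions for illustration (InfiniBand-style GRH/BTH headers carried in an Ethernet frame versus RDMA carried over MPA/DDP on TCP/IP), and it ignores the TCP processing cost that accounts for most of iWARP's latency penalty.

```python
# Illustrative per-frame overhead comparison: RDMAoE-style framing vs. iWARP.
# All header sizes below are assumptions for illustration only.

ETH_WIRE = 8 + 14 + 4 + 12   # preamble/SFD + MAC header + FCS + inter-frame gap

# RDMAoE-style framing (assumed): InfiniBand global route header (GRH) and
# base transport header (BTH) carried directly in an Ethernet frame,
# plus an end-to-end invariant CRC (ICRC).
RDMAOE_HDRS = 40 + 12 + 4

# iWARP framing (assumed): RDMA payload over MPA/DDP/RDMAP on TCP/IP.
IWARP_HDRS = 20 + 20 + 6 + 14  # IP + TCP + MPA framing/CRC + DDP/RDMAP

def wire_efficiency(payload, proto_hdrs):
    """Fraction of wire bytes in one frame that are application payload."""
    frame = ETH_WIRE + proto_hdrs + payload
    return payload / frame

for payload in (64, 512, 4096):
    roe = wire_efficiency(payload, RDMAOE_HDRS)
    iw = wire_efficiency(payload, IWARP_HDRS)
    print(f"{payload:5d} B payload: RDMAoE-style {roe:.1%}, iWARP-style {iw:.1%}")
```

As the numbers show, the framing difference alone is modest; the larger win claimed for RDMAoE comes from eliminating TCP's per-packet processing and context switching, which is what limits iWARP throughput at small payloads.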

The application space for RDMAoE is much the same as it is for iWARP: high I/O transaction workloads in the enterprise and HPC applications that can get by with sub-InfiniBand performance. It would be especially useful in Wall Street datacenters, where stock trades have to be executed in sub-millisecond timeframes. And with the advent of solid state drives (SSDs) in the enterprise, high-IOPS workloads are becoming more widespread.

According to Monson, Mellanox is seeing very high levels of interest in RDMAoE from end users and OEMs, and standards are being considered by one or more industry groups, although he declined to say which ones. The most likely suspects are the IEEE, the InfiniBand Trade Association (IBTA) and the Internet Engineering Task Force (IETF). Getting these standards bodies to work together is going to be tricky, though, since they answer to different constituencies and work at different speeds. Practically speaking, getting a standard shouldn’t be too difficult. According to a recent presentation (PDF) by System Fabric Works at the Open Fabrics workshop in March, RDMA over Ethernet would not entail modifying the emerging DCB standard, nor would it be difficult for the IBTA to extend InfiniBand to run on top of DCB.

If RDMAoE manages to edge iWARP out as the low-latency Ethernet of choice for LANs and SANs, it brings converged fabrics one step closer to the InfiniBand way of doing business. For Mellanox, in particular, it will be a validation of the company’s long-term strategy to use a virtual interconnect model to support a multi-protocol fabric for clouds and other datacenters with heterogeneous workloads. History may be on the side of Mellanox. With computing and storage becoming virtual resources in the datacenter, networking is bound to follow.