A Critique of RDMA

By Patrick Geoffray, Ph.D.

August 18, 2006

Do you remember VIA, the Virtual Interface Architecture? I do. In 1998, according to its promoters — Intel, Compaq, and Microsoft — VIA was supposed to change the face of high-performance networking. VIA was a buzzword at the time; Venture Capital was flowing, and startups multiplying. Many HPC pundits were rallying behind this low-level programming interface, which promised scalable, low-overhead, high-throughput communication, initially for HPC and eventually for the data center. The hype was on and doom was spelled for the non-believers.

It turned out that VIA, based on RDMA (Remote Direct Memory Access, or Remote DMA), was not an improvement on existing APIs to support widely used application-software interfaces such as MPI and Sockets. After a while, VIA faded away, overtaken by other developments.

VIA was eventually reborn into the RDMA programming model that is the basis of various InfiniBand Verbs implementations, as well as DAPL (Direct Access Provider Library) and iWARP (Internet Wide Area RDMA Protocol). The pundits have returned, VCs are spending their money, and RDMA is touted as an ideal solution for the efficiency of high-performance networks.

However, the evidence I'll present here shows that the revamped RDMA model is more a problem than a solution. What's more, the objective that RDMA purports to address, efficient user-level communication between computing nodes, is already solved by the two-sided Send/Recv model in products such as Quadrics QsNet, Cray SeaStar (implementing Sandia Portals), Qlogic InfiniPath, and Myricom's Myrinet Express (MX).

Send/Recv versus RDMA

The difference between these two paradigms, Send/Receive (Send/Recv) and RDMA, lies essentially in how the destination buffer of a communication is determined; a minimal sketch in C follows the list below.

  • In the Send/Recv model, the source issues a Send that describes the location of the data to be sent, the destination posts a Receive that similarly indicates where the data is going to be written, and a matching capability is used to associate a posted Recv to an incoming Send. Each side has part of the information required for the completion of the communication; this is a “two-sided” interface.
  • In the RDMA model, both origin and destination buffers must be registered prior to any operation. This memory registration returns a handle that can be used in RDMA operations, Read or Write (a.k.a. Get or Put), to describe the origin or destination buffer. Only one side needs to have all of the information required for the completion of the communication; this is a “one-sided” interface.

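To make the contrast concrete, here is a minimal sketch in C that performs the same transfer both ways, using MPI's own one-sided (MPI-2) calls as a stand-in for the RDMA model; the creation of the window plays the role of memory registration, and the buffer sizes and tag are illustrative.

    #include <mpi.h>

    #define COUNT 8
    #define TAG   42

    int main(int argc, char **argv)
    {
        int rank;
        double sbuf[COUNT] = {0}, rbuf[COUNT] = {0};
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Two-sided: each side describes only its own buffer, and the
           library matches the incoming Send to the posted Recv. */
        if (rank == 0)
            MPI_Send(sbuf, COUNT, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(rbuf, COUNT, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        /* One-sided: the target exposes ("registers") a window up front;
           the origin alone then names the remote destination. */
        MPI_Win_create(rbuf, COUNT * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Put(sbuf, COUNT, MPI_DOUBLE, 1, 0, COUNT, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);
        MPI_Win_free(&win);

        MPI_Finalize();
        return 0;
    }
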
In fact, most high-speed interconnects support both Send/Recv and RDMA (Put/Get) programming models, but the Send/Recv implementations of RDMA-based fabrics often lack a rich matching capability, which is a key element.

Matching

MPI is a two-sided interface with a large matching space: an MPI Recv is associated with an MPI Send according to several criteria such as Sender, Tag, and Context, with the first two possibly ignored (wildcarded). Matching is not necessarily in order and, worse, an MPI Send can be posted before the matching MPI Recv is ready, creating the need to handle unexpected messages.
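
For example, a single wildcard Recv can match a message from any sender with any tag, something a strictly in-order queue cannot express. A minimal sketch, with the buffer and count left abstract:

    #include <mpi.h>
    #include <stdio.h>

    /* Post one wildcard Recv; the actual sender and tag are only known
       after the match, from the returned status. */
    void recv_any(void *buf, int count)
    {
        MPI_Status status;
        MPI_Recv(buf, count, MPI_BYTE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        printf("matched sender %d, tag %d\n",
               status.MPI_SOURCE, status.MPI_TAG);
    }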

It is trivial to layer a Send/Recv interface on top of another Send/Recv interface, as long as the underlying matching space is wide enough. MPI requires 64 bits of matching information, and MX, Portals, and QsNet provide such a matching capability.

InfiniBand Verbs and other RDMA-based APIs do not support matching at all: they associate incoming Sends with outstanding Recvs in the order in which the Recvs were posted. If only one process can send to a specific receive queue, then you need to send messages in the same order you want to receive them. In the case of multiple senders (Shared Receive Queue), there is no guarantee of ordering between senders. Worse, not having a Recv posted at the time an incoming message arrives is considered a fatal error in reliable mode. This lack of support for unexpected messages requires a specific flow-control mechanism to avoid the situation.

When implementing MPI on top of an RDMA programming model, you need to do the matching explicitly by sending a message containing the matching information to the receive side.
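
In practice this means prepending a small header to each message and replaying the MPI matching rules in software on the receive side. A sketch of such a header, with hypothetical field names:

    #include <stdint.h>

    /* Hypothetical eager-message header: the matching that the RDMA API
       does not provide is carried as payload and performed in software. */
    struct eager_hdr {
        uint32_t src_rank;    /* sender rank, for Sender matching      */
        uint32_t tag;         /* MPI tag, possibly wildcarded on Recv  */
        uint32_t context_id;  /* communicator context                  */
        uint32_t length;      /* payload length in bytes               */
    };
    /* On arrival: search the posted-Recv queue for an entry matching
       (src_rank, tag, context_id), honoring wildcards; if none matches,
       queue the message as unexpected. */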

Overhead (zero-copy)

One of the key alleged benefits of RDMA is to avoid intermediate memory copies. How does that work with MPI? Not well, for two reasons: memory registration and synchronization.

Memory registration

Zero-copy requires both origin and destination buffers to be registered to allow safe access by the network device. Memory registration and deregistration are expensive operations: they involve a system call and an iteration over each virtual page in an OS-specific way. This registration overhead is comparable to, and sometimes exceeds, the cost of a simple memory copy, defeating the original objective of avoiding the overhead of intermediate copies.
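
To see the trade-off, one can time a register/deregister cycle against a plain copy of the same buffer. The sketch below uses the standard Verbs calls and assumes an already-allocated protection domain pd; it is an illustration, not a rigorous benchmark.

    #include <infiniband/verbs.h>
    #include <string.h>
    #include <sys/time.h>

    static double usecs(struct timeval a, struct timeval b)
    {
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_usec - a.tv_usec);
    }

    /* Ratio of a register+deregister cycle to a memcpy of the same size;
       pd is an already-allocated protection domain. Greater than 1 means
       registration costs more than the copy it is supposed to avoid. */
    double reg_vs_copy(struct ibv_pd *pd, char *src, char *dst, size_t len)
    {
        struct timeval t0, t1, t2;

        gettimeofday(&t0, NULL);
        struct ibv_mr *mr = ibv_reg_mr(pd, src, len, IBV_ACCESS_LOCAL_WRITE);
        if (mr)
            ibv_dereg_mr(mr);             /* pin + unpin, no data moved */
        gettimeofday(&t1, NULL);

        memcpy(dst, src, len);            /* the copy zero-copy avoids  */
        gettimeofday(&t2, NULL);

        return usecs(t0, t1) / usecs(t1, t2);
    }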

So, is it possible to do zero-copy in MPI? Yes, but in very specific situations or with some unpleasant tricks:

  • Using lightweight kernels without virtual memory (Cray XT3, IBM Blue Gene).
  • Patching the various Linux kernels to implement safe long-term registration (Quadrics).
  • Hijacking the Malloc library to implement a non-portable and potentially unsafe user-level registration cache (MPICH-GM, MVAPICH, Open MPI); see the sketch after this list.
  • Optimizing the registration code with specific NIC support so that the overhead is cheaper than the memory copy for large messages (MX).

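As referenced in the third item above, a registration cache remembers past registrations keyed by address and length, so a hit avoids the system call entirely. A minimal sketch with hypothetical structures; a real cache must also intercept free() and munmap() to evict stale entries, which is exactly what makes it fragile:

    #include <stddef.h>
    #include <infiniband/verbs.h>

    #define CACHE_SIZE 64                     /* illustrative */

    struct reg_entry { void *addr; size_t len; struct ibv_mr *mr; };
    static struct reg_entry cache[CACHE_SIZE];

    /* Return a memory region for (addr, len), reusing a previous
       registration when possible. Eviction and the interception of
       free()/munmap() needed for safety are omitted. */
    struct ibv_mr *cached_reg(struct ibv_pd *pd, void *addr, size_t len)
    {
        int free_slot = -1;
        for (int i = 0; i < CACHE_SIZE; i++) {
            if (cache[i].mr && cache[i].addr == addr && cache[i].len >= len)
                return cache[i].mr;           /* hit: no system call    */
            if (!cache[i].mr)
                free_slot = i;
        }
        struct ibv_mr *mr = ibv_reg_mr(pd, addr, len,
                                       IBV_ACCESS_LOCAL_WRITE);
        if (mr && free_slot >= 0) {           /* miss: remember the pin */
            cache[free_slot].addr = addr;
            cache[free_slot].len  = len;
            cache[free_slot].mr   = mr;
        }
        return mr;
    }
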
In short, zero-copy, and thus RDMA, is practical and efficient in the common case only when the memory registration can occur outside of the critical path, which is incompatible with MPI semantics as well as Sockets semantics.

Synchronization

To do zero-copy, you also need to know where you are going to read/write the data on the remote side. Inasmuch as MPI is a two-sided interface, you need to do the matching prior to the RDMA operation, thus requiring an explicit synchronization of both the send and receive sides. The most dramatic side effect is that the MPI sender has to wait for the MPI receiver to be ready before completing the communication and reusing the send buffer. In some situations, the receive side can be very late and the matching synchronization propagates the delay to the send side. For this reason, it is not worth avoiding an intermediate copy on the send side for small/medium messages. Once the message is buffered, the MPI_Send can complete and the application can effectively move on, including reusing the send buffer.
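
The usual compromise is a size threshold on the send side: buffer and complete immediately below it, synchronize and do zero-copy above it. A sketch with hypothetical helper functions standing in for an MPI library's internals:

    #include <stddef.h>

    #define EAGER_MAX (16 * 1024)   /* illustrative threshold */

    /* Hypothetical helpers. */
    void copy_to_prereg_buffer(const void *buf, size_t len);
    void post_eager_send(size_t len);
    void post_rendezvous_request(size_t len);

    void send_path(const void *buf, size_t len)
    {
        if (len <= EAGER_MAX) {
            /* Eager: one copy, but the send completes immediately and
               the application can reuse buf right away. */
            copy_to_prereg_buffer(buf, len);
            post_eager_send(len);
        } else {
            /* Rendezvous: wait for the receiver to match, then register
               buf and transfer with zero-copy; completion is delayed. */
            post_rendezvous_request(len);
        }
    }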

So, it may be surprising to outsiders that most MPI implementations that claim zero-copy actually do a memory copy on both the send and receive sides for small/medium messages, typically up to 16K or 32K bytes, to avoid synchronization and memory-registration overhead. The RDMA model does not help in any way. The following graph, generated by my Myricom colleague Dr. Loïc Prylli, illustrates this trade-off in MX between using registration and deregistration, which provides the lowest overhead for large messages, and using memory copies, which provide the lowest overhead for short/medium messages.

Low Latency

Inasmuch as the RDMA operations are faster in native RDMA interconnects than the implementation of Send/Recv over RDMA, it is logical to try to use RDMA for small messages, even with intermediate copies. The protocol is straightforward: the send buffer is copied into a pre-registered internal buffer, RDMA-Written into a pre-registered internal buffer on the receive side, and then copied out to the corresponding MPI receive buffer. The problem lies in the one-sided characteristic of the RDMA Write: only one process can safely write into a remote buffer at a time. So the solution is to allocate a pre-registered internal buffer for each possible sender and look for messages in all of these separate buffers. Sounds nice? Not really. This strategy does not scale in time or in space. With only two processes, the receive side has only one buffer to poll. With N processes in the MPI job, the receive side has (N-1) locations to poll. Inasmuch as the polling time increases linearly with the number of processes, so does the latency. That's why most MPI implementations on RDMA-based fabrics use an RDMA method for small jobs, and fall back to a slower but more scalable Send/Recv method for larger MPI jobs. For example, here are the short-message latencies reported in “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006, for the two modes of MVAPICH over InfiniBand.
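
The receive-side sweep looks like the sketch below, with a hypothetical slot array holding one pre-registered buffer per peer; the cost of one pass is proportional to the job size.

    #define MAX_PEERS 1024                    /* illustrative */

    struct rdma_slot {
        volatile int flag;                    /* written last by the
                                                 sender's RDMA Write   */
        char payload[16 * 1024];
    };
    static struct rdma_slot slot[MAX_PEERS];

    /* Return the rank of a peer with a newly arrived message, or -1.
       One pass touches (N-1) locations, so latency grows with N. */
    int poll_all_peers(int nprocs, int myrank)
    {
        for (int r = 0; r < nprocs; r++) {
            if (r == myrank)
                continue;
            if (slot[r].flag) {
                slot[r].flag = 0;             /* consume (processing
                                                 omitted)              */
                return r;
            }
        }
        return -1;
    }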

What does it mean practically? Point-to-point micro-benchmarks (i.e., marketing benchmarks) are artificially better on small configurations, but performance is different in networks of useful scale. By contrast, all Send/Recv interconnects have consistent latencies, independent of the number of processes in the MPI job. Also, even the 4.19 us latency above compares unfavorably with the short-message latencies of QsNet, Portals/SeaStar, InfiniPath, or MX/Myri-10G.

Overlap of Communication and Computation (Progression)

Inasmuch as MPI is a two-sided interface requiring matching, it is important that this matching can occur independently of the MPI application, particularly for large messages. Without such asynchronous progress, it is not possible to overlap communication with computation, even when doing zero-copy.
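
The pattern at stake is the classic nonblocking overlap: post the communication, compute, then wait. Whether the compute phase actually runs concurrently with the transfer depends entirely on asynchronous progress. A minimal sketch, with compute() a hypothetical application routine:

    #include <mpi.h>

    /* Post a large nonblocking send, compute while it is (hopefully) in
       flight, then wait. Without asynchronous progress, the matching and
       the transfer may only advance inside MPI_Wait, and the overlap is
       lost. */
    void overlapped_send(const double *buf, int count, int dst,
                         void (*compute)(void))
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dst, 0, MPI_COMM_WORLD, &req);
        compute();                        /* useful work during transfer */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }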

The two following graphs are reproduced by permission from “Measuring MPI Send and Receive Overhead and Application Availability in High Performance Network Interfaces,” Doerfler, et al, EuroPVM/MPI, September 2006, a paper I highly recommend. Rather than showing overhead, the graphs show the send and receive overlap capability — the fraction of the processor time available for computation — with several recent HPC systems, listed here with their interconnects:

  • Red Storm: Cray SeaStar/Portals
  • Tbird: Cisco InfiniBand/MVAPICH
  • CBC-B: Qlogic InfiniPath
  • Odin: Myricom Myri-10G/MPICH-MX
  • Squall: Quadrics QsNetII

Asynchronous progress can be implemented by matching completely in the NIC (Quadrics QsNet and Cray SeaStar) or in the host with specific NIC support (Myricom MX). Some sort of progression could be done on top of an RDMA programming model, but it would have a significant impact on short-message latency. Because matching is an integral part of the Send/Recv model, asynchronous implementation is easier in this context. Note that InfiniPath, even though it uses the Send/Recv model, always does a copy on the receive side for large messages, so it cannot overlap.

The graphs above were produced using the Sandia MPI Micro-Benchmark Suite (SMB).

Scalability

The biggest can of worms that the RDMA programming model brings to the table relates to scalability. RDMA is a connected model, i.e., one software entity called a Queue Pair (QP) in one process space is associated with one and only one such entity in a different process space. So, if you want to be able to communicate with all N processes in your MPI job, you need to use a different QP for each process except yourself. That's (N-1) QPs per process. Even with multiple processes per node, the number of QPs supported by an RDMA-based NIC is large enough to accommodate all of them. However, this connected model directly affects the host memory footprint: each QP consumes host memory, so the total memory footprint grows linearly with the number of QPs. For example, it was not uncommon to see several gigabytes of memory required for large InfiniBand clusters. The following graph, which well illustrates this point, is from “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006.
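
The arithmetic is easy to sketch: with (N-1) QPs per process, per-process memory grows linearly in N and the job-wide total grows quadratically. The per-QP cost below is an assumed, illustrative figure, not a measured one.

    #include <stdio.h>

    int main(void)
    {
        const double bytes_per_qp = 16 * 1024;   /* assumed per-QP cost */

        for (int n = 256; n <= 8192; n *= 2)
            printf("%5d processes: %8.1f MB/process, %9.1f GB total\n",
                   n,
                   (n - 1) * bytes_per_qp / 1e6,
                   (double)n * (n - 1) * bytes_per_qp / 1e9);
        return 0;
    }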

By contrast, the memory footprint of all Send/Recv-based interconnects is constant and independent of the number of processes in the MPI job.

Recently, the addition of the Shared Receive Queue (SRQ) concept, noted as “Open MPI — SRQ” in the graph above, has alleviated some of this scalability nightmare: a receive queue can be shared among several connections, thus consolidating the posted buffers used for the limited Send/Recv functionality. It is interesting to note that a shared receive queue is inherently a Send/Recv programming-model feature, perhaps a sign that the native RDMA interconnects are slowly migrating to the more efficient Send/Recv programming model. However, SRQ does not address the overhead of initially establishing the connections, so a connection-on-demand mechanism is needed to hide the problem. This approach may work for MPI microbenchmarks, in which processes typically do not communicate with every peer in the MPI job, but it will not work well for many real-world MPI applications.

Conclusions

The RDMA programming model does not fit well with the two-sided semantic constraints of MPI or the Sockets interface. This is nothing new; it was demonstrated before with previous incarnations of RDMA-based low-level communication layers such as VIA. However, the hype is on and the marketing message is wrongly promoting RDMA as the ultimate unifying programming model. Is the RDMA model completely useless? Of course not. It can be leveraged successfully for one-sided programming paradigms such as ARMCI (Aggregate Remote Memory Copy Interface) or UPC (Unified Parallel C), but these paradigms have yet to show better performance and ease of use than MPI.

History always repeats itself, so we can expect the current RDMA programming model to continue to struggle alongside the efficient Send/Recv model, and eventually to fade away.

The author would like to thank Charles Seitz and Loïc Prylli for their help in writing this article.

—–

Patrick Geoffray earned his Ph.D. at the University of Lyon, France, in 2001. His interests lie in high-performance computing. He is currently a Senior Software Architect at Myricom, in the Software Development Lab in Oak Ridge, TN. He is responsible for the design of the Myrinet Express (MX) message-passing system and its associated firmware implementations, and was earlier in charge of various middleware running on GM/Myrinet, including MPICH-GM and VIA-GM.
