A Critique of RDMA

By Patrick Geoffray, Ph.D.

August 18, 2006

Do you remember VIA, the Virtual Interface Architecture? I do. In 1998, according to its promoters — Intel, Compaq, and Microsoft — VIA was supposed to change the face of high-performance networking. VIA was a buzzword at the time; Venture Capital was flowing, and startups multiplying. Many HPC pundits were rallying behind this low-level programming interface, which promised scalable, low-overhead, high-throughput communication, initially for HPC and eventually for the data center. The hype was on and doom was spelled for the non-believers.

It turned out that VIA, based on RDMA (Remote Direct Memory Access, or Remote DMA), was not an improvement on existing APIs to support widely used application-software interfaces such as MPI and Sockets. After a while, VIA faded away, overtaken by other developments.

VIA was eventually reborn into the RDMA programming model that is the basis of various InfiniBand Verbs implementations, as well as DAPL (Direct Access Provider Library) and iWARP (Internet Wide Area RDMA Protocol). The pundits have returned, VCs are spending their money, and RDMA is touted as an ideal solution for the efficiency of high-performance networks.

However, the evidence I'll present here shows that the revamped RDMA model is more a problem than a solution. What's more, the objective that RDMA claims to address, efficient user-level communication between computing nodes, is already solved by the two-sided Send/Recv model in products such as Quadrics QsNet, Cray SeaStar (implementing Sandia Portals), QLogic InfiniPath, and Myricom's Myrinet Express (MX).

Send/Recv versus RDMA

The difference between these two paradigms, Send/Receive (Send/Recv) and RDMA, lies essentially in how the destination buffer of a communication is determined; the code sketch following the two bullets below makes the contrast concrete.

  • In the Send/Recv model, the source issues a Send that describes the location of the data to be sent, the destination posts a Receive that similarly indicates where the data is to be written, and a matching capability is used to associate a posted Recv with an incoming Send. Each side has part of the information required for the completion of the communication; this is a “two-sided” interface.
  • In the RDMA model, both origin and destination buffers must be registered prior to any operations. This memory registration returns a handle that can be used in RDMA operations, Read or Write (a.k.a. Get or Put), to describe the origin or destination buffer. Only one side needs to have all of the information required for the completion of the communication; this is a “one-sided” interface.
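
For readers who prefer code to prose, here is a minimal sketch of the two models side by side. It uses MPI's own one-sided interface purely as a stand-in for the RDMA style (MPI_Win_create plays the role of memory registration); it illustrates the two paradigms, not any particular Verbs API.

    #include <mpi.h>

    #define COUNT 4
    #define TAG   99

    int main(int argc, char **argv)
    {
        int rank;
        double src[COUNT] = {1, 2, 3, 4}, dst[COUNT] = {0};
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Two-sided (Send/Recv): each side holds part of the information;
         * the library matches the posted Recv to the incoming Send. */
        if (rank == 0)
            MPI_Send(src, COUNT, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(dst, COUNT, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        /* One-sided (RDMA style): the target exposes -- "registers" -- a
         * window of memory up front; afterwards the origin alone names
         * both the local and the remote buffer in the Put. */
        MPI_Win_create(dst, COUNT * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Put(src, COUNT, MPI_DOUBLE, 1, 0, COUNT, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);
        MPI_Win_free(&win);

        MPI_Finalize();
        return 0;
    }

Run it with at least two processes (e.g., mpirun -np 2 ./a.out).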

In fact, most high-speed interconnects support both Send/Recv and RDMA (Put/Get) programming models, but the Send/Recv implementations of RDMA-based fabrics often lack a rich matching capability, which is a key element.

Matching

MPI is a two-sided interface with a large matching space: an MPI Recv is associated with an MPI Send according to several criteria such as Sender, Tag, and Context, with the first two possibly ignored (wildcards). Matching is not necessarily in order and, worse, an MPI Send can be posted before the matching MPI Recv is ready, creating the need to handle unexpected messages.
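
As a concrete fragment (the usual MPI setup and buffer declarations are omitted), both the Sender and the Tag can be wildcarded on the receive side, and the values that actually matched are recovered from the status; the Context is carried by the communicator itself:

    /* buf and len are assumed to be defined by the caller. */
    MPI_Status status;

    /* Accept a message from any sender, with any tag, within this
     * communicator; the library performs the matching. */
    MPI_Recv(buf, len, MPI_BYTE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    printf("matched a Send from rank %d with tag %d\n",
           status.MPI_SOURCE, status.MPI_TAG);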

It is trivial to layer a Send/Recv interface on top of another Send/Recv interface, as long as the underlying matching space is wide enough. MPI requires 64 bits of matching information, and MX, Portals, and QsNet all provide such a matching capability.
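
How those 64 bits might be used is easy to sketch. The packing below is an illustration only, not the actual layout used by MX, Portals, or QsNet; the point is simply that communicator context, source rank, and tag fit comfortably in 64 match bits, with a mask expressing wildcards:

    #include <stdint.h>

    /* Illustrative packing of MPI matching information into 64 match
     * bits: 16-bit context, 16-bit source rank, 32-bit tag. */
    static uint64_t pack_match(uint16_t context, uint16_t src_rank,
                               uint32_t tag)
    {
        return ((uint64_t)context << 48) |
               ((uint64_t)src_rank << 32) |
               (uint64_t)tag;
    }

    /* A wildcard Recv supplies a mask selecting the bits that must be
     * equal; clearing the source-rank bits makes the sender a wildcard. */
    static const uint64_t IGNORE_SOURCE_MASK = ~((uint64_t)0xFFFF << 32);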

InfiniBand Verbs and other RDMA-based APIs do not support matching at all: they associate incoming Sends with outstanding Recvs in the order in which the Recvs were posted. If only one process can send to a specific receive queue, then you need to send messages in the same order you want to receive them. With multiple senders (a Shared Receive Queue), there is no ordering guarantee between senders at all. Worse, not having a Recv posted when an incoming message arrives is considered a fatal error in reliable mode. This lack of support for unexpected messages requires a specific flow-control mechanism to avoid the situation.

When implementing MPI on top of an RDMA programming model, you need to do the matching explicitly by sending a message containing the matching information to the receive side.
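
In practice that means every message, or at least every message header, carries the matching information so that the real matching can be done in software at the target. A hedged sketch of such a header follows; the field names are invented for illustration and do not correspond to any particular MPI implementation:

    #include <stdint.h>

    /* Header prepended to each eager message when MPI is layered over an
     * RDMA-only API: the matching the NIC does not perform must travel
     * with the data and be redone in software on the receive side. */
    struct eager_hdr {
        uint32_t context;    /* communicator context id                 */
        uint32_t src_rank;   /* matched against the Recv's source       */
        uint32_t tag;        /* matched against the Recv's tag          */
        uint32_t length;     /* payload length that follows the header  */
    };

    /* On arrival, the receiver scans its posted-Recv queue for the first
     * entry whose (context, source, tag) matches the header, honoring
     * wildcards; if none matches, the message is unexpected and must be
     * buffered until the corresponding MPI Recv is posted. */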

Overhead (zero-copy)

One of the key alleged benefits of RDMA is to avoid intermediate memory copies. How does that work with MPI? Not well, for two reasons: memory registration and synchronization.

Memory registration

Zero-copy requires both the origin and destination buffers to be registered to allow safe access by the network device. Memory registration and deregistration are expensive operations: they involve a system call and an iteration over each virtual page in an OS-specific way. This registration overhead is similar to, and sometimes exceeds, the cost of a simple memory copy, defeating the original objective of avoiding the overhead of intermediate copies.
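
The comparison is easy to measure with the Verbs API itself. The harness below is a rough sketch (no error handling, a single buffer size, first HCA found); actual numbers depend on the HCA, the driver, and the page size:

    /* Build with: gcc -O2 reg_vs_copy.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    static double now_us(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    int main(void)
    {
        const size_t len = 1 << 20;                  /* 1 MB buffer     */
        char *src = malloc(len), *dst = malloc(len);

        struct ibv_device **devs = ibv_get_device_list(NULL);
        struct ibv_context *ctx  = ibv_open_device(devs[0]);
        struct ibv_pd      *pd   = ibv_alloc_pd(ctx);

        /* Cost of pinning and unpinning the pages for zero-copy ...    */
        double t0 = now_us();
        struct ibv_mr *mr = ibv_reg_mr(pd, src, len, IBV_ACCESS_LOCAL_WRITE);
        ibv_dereg_mr(mr);
        double t1 = now_us();

        /* ... versus the intermediate copy it is supposed to avoid.    */
        memcpy(dst, src, len);
        double t2 = now_us();

        printf("reg+dereg: %.1f us, memcpy: %.1f us\n", t1 - t0, t2 - t1);

        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(src);
        free(dst);
        return 0;
    }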

So, is it possible to do zero-copy in MPI? Yes, but in very specific situations or with some unpleasant tricks:

  • Using lightweight kernels without virtual memory (Cray XT3, IBM Blue Gene).
  • Patching the various Linux kernels to implement safe long-term registration (Quadrics).
  • Hijacking the Malloc library to implement a non-portable and potentially unsafe user-level registration cache (MPICH-GM, MVAPICH, Open MPI); a minimal sketch of such a cache follows this list.
  • Optimizing the registration code with specific NIC support so that its overhead is lower than that of a memory copy for large messages (MX).
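
Here is a minimal sketch of the registration-cache trick mentioned above, greatly simplified from what MPICH-GM, MVAPICH, or Open MPI actually do: reuse an existing registration when the buffer has been seen before, register on a miss. The hard part, not shown, is invalidation: if the application frees or remaps the memory behind the cache's back, the cached entry becomes stale, which is precisely why the Malloc library gets hijacked in practice.

    #include <stddef.h>
    #include <infiniband/verbs.h>

    #define CACHE_SIZE 64

    struct reg_entry { void *addr; size_t len; struct ibv_mr *mr; };

    static struct reg_entry cache[CACHE_SIZE];
    static int cache_used;

    /* Return a memory region covering [addr, addr+len), registering it
     * only if no cached registration already covers the range. */
    static struct ibv_mr *lookup_or_register(struct ibv_pd *pd,
                                             void *addr, size_t len)
    {
        for (int i = 0; i < cache_used; i++)
            if (cache[i].addr <= addr &&
                (char *)addr + len <= (char *)cache[i].addr + cache[i].len)
                return cache[i].mr;              /* hit: no system call */

        struct ibv_mr *mr = ibv_reg_mr(pd, addr, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (mr && cache_used < CACHE_SIZE)
            cache[cache_used++] = (struct reg_entry){ addr, len, mr };
        return mr;
    }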

In short, zero-copy, and thus RDMA, is practical and efficient in the common case only when the memory registration can occur outside the critical path, which is incompatible with MPI semantics as well as Sockets semantics.

Synchronization

To do zero-copy, you also need to know where you are going to read/write the data on the remote side. Inasmuch as MPI is a two-sided interface, you need to do the matching prior to the RDMA operation, thus requiring an explicit synchronization of both the send and receive sides. The most dramatic side effect is that the MPI sender has to wait for the MPI receiver to be ready before completing the communication and reusing the send buffer. In some situations, the receive side can be very late and the matching synchronization propagates the delay to the send side. For this reason, it is not worth avoiding an intermediate copy on the send side for small/medium messages. Once the message is buffered, the MPI_Send can complete and the application can effectively move on, including reusing the send buffer.
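
This trade-off is usually expressed as an eager/rendezvous switch in the send path. The sketch below is hypothetical: the threshold value and the two helpers are invented for illustration, but the structure matches what MPI implementations over RDMA typically end up doing.

    #include <stddef.h>

    #define EAGER_THRESHOLD (32 * 1024)   /* illustrative, not universal */

    /* Hypothetical helpers: the first copies into a pre-registered
     * internal buffer, the second performs the matching handshake and
     * then the zero-copy transfer. */
    void copy_to_bounce_buffer(const void *buf, size_t len, int dst, int tag);
    void rendezvous_zero_copy(const void *buf, size_t len, int dst, int tag);

    void mpi_send_path(const void *buf, size_t len, int dst, int tag)
    {
        if (len <= EAGER_THRESHOLD) {
            /* Eager: buffer the message and complete immediately; the
             * sender never waits for the receiver to post its Recv. */
            copy_to_bounce_buffer(buf, len, dst, tag);
        } else {
            /* Rendezvous: wait for the matching Recv so the data can go
             * straight into the destination buffer; the sender now
             * inherits any lateness of the receiver. */
            rendezvous_zero_copy(buf, len, dst, tag);
        }
    }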

So, it may be surprising to outsiders that most MPI implementations claiming zero-copy actually do a memory copy on both the send and receive sides for small/medium messages, typically up to 16 KB or 32 KB, to avoid the synchronization and memory-registration overheads. The RDMA model does not help in any way. The following graph, generated by my Myricom colleague Dr. Loïc Prylli, illustrates this trade-off in MX between registration and deregistration, which yields the lowest overhead for large messages, and memory copies, which yield the lowest overhead for short/medium messages.

Low Latency

Inasmuch as RDMA operations on native RDMA interconnects are faster than their Send/Recv implementation layered over RDMA, it is logical to try to use RDMA for small messages, even with intermediate copies. The protocol is straightforward: the send buffer is copied into a pre-registered internal buffer, RDMA-Written into a pre-registered internal buffer on the receive side, and then copied out to the corresponding MPI receive buffer. The problem lies in the one-sided character of the RDMA Write: only one process can safely write into a given remote buffer at a time. So the solution is to allocate a pre-registered internal buffer for each possible sender and look for messages in all of these separate buffers. Sounds nice? Not really. This strategy does not scale in time or in space. With only two processes, the receive side has only one buffer to poll. With N processes in the MPI job, the receive side has (N-1) locations to poll. Inasmuch as the polling time increases linearly with the number of processes, so does the latency (a sketch of this per-peer polling appears at the end of this section). That's why most MPI implementations on RDMA-based fabrics use an RDMA method for small jobs and fall back to a slower but more scalable Send/Recv method for larger MPI jobs. For example, here are the short-message latencies reported in “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006, for the two modes of MVAPICH over InfiniBand.

What does it mean practically? Point-to-point micro-benchmarks (i.e., marketing benchmarks) are artificially better on small configurations, but performance is different in networks of useful scale. By contrast, all Send/Recv interconnects have consistent latencies, independent of the number of processes in the MPI job. Also, even the 4.19 us latency above compares unfavorably with the short-message latencies of QsNet, Portals/SeaStar, InfiniPath, or MX/Myri-10G.
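
For concreteness, here is a hedged sketch of the per-peer polling described above; the slot layout is simplified and the ordering of the remote data and flag writes is glossed over:

    #include <stdint.h>

    /* One pre-registered eager slot per potential sender: the sender
     * RDMA-Writes its payload into its slot and then sets `full`. */
    struct eager_slot {
        volatile uint32_t full;        /* set by the remote RDMA Write   */
        uint32_t          length;
        char              payload[16 * 1024];
    };

    /* With N processes the receiver owns N-1 such slots and must scan
     * all of them: polling cost, and therefore latency, grows linearly
     * with the size of the MPI job. */
    static int poll_all_peers(struct eager_slot *slots, int nprocs, int me)
    {
        for (int peer = 0; peer < nprocs; peer++) {
            if (peer == me)
                continue;
            if (slots[peer].full) {
                /* ... consume slots[peer].payload here ... */
                slots[peer].full = 0;
                return peer;
            }
        }
        return -1;                     /* nothing has arrived yet        */
    }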

Overlap of Communication and Computation (Progression)

Inasmuch as MPI is a two-sided interface requiring matching, it is important that this matching can occur independently of the MPI application, particularly for large messages. Without such asynchronous progress, it is not possible to overlap communication with computation, even when doing zero-copy.

The two following graphs are reproduced by permission from “Measuring MPI Send and Receive Overhead and Application Availability in High Performance Network Interfaces,” Doerfler et al., EuroPVM/MPI, September 2006, a paper I highly recommend. Rather than showing overhead, the graphs show the send and receive overlap capability, that is, the fraction of processor time available for computation, for several recent HPC systems, listed here with their interconnects:

  • Red Storm: Cray SeaStar/Portals
  • Tbird: Cisco InfiniBand/MVAPICH
  • CBC-B: QLogic InfiniPath
  • Odin: Myricom Myri-10G/MPICH-MX
  • Squall: Quadrics QsNetII

Asynchronous progress can be implemented by matching completely in the NIC (Quadrics QsNet and Cray SeaStar) or in the host with specific NIC support (Myricom MX). Some sort of progression could be layered on top of an RDMA programming model, but it would have a significant impact on short-message latency. Because matching is an integral part of the Send/Recv model, an asynchronous implementation is easier in this context. Note that InfiniPath, even though it uses the Send/Recv model, always does a copy on the receive side for large messages, so it cannot overlap.

The graphs above were produced using the Sandia MPI Micro-Benchmark Suite (SMB).

Scalability

The biggest can of worms that the RDMA programming model brings to the table relates to scalability. RDMA is a connected model: one software entity, called a Queue Pair (QP), in one process space is associated with one and only one such entity in a different process space. So, if you want to be able to communicate with all N processes in your MPI job, you need a different QP for each process except yourself, that is, (N-1) QPs per process. Even with multiple processes per node, the number of QPs supported by an RDMA-based NIC is large enough to accommodate all of them. However, this connected model directly affects the host memory footprint: each QP consumes host memory, so the total memory footprint grows linearly with the number of QPs. For example, it was not uncommon to see several gigabytes of memory required on large InfiniBand clusters. The following graph, which illustrates this point well, is from “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006.

By contrast, the memory footprint of all Send/Recv-based interconnects is constant and independent of the number of processes in the MPI job.
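
Back-of-the-envelope arithmetic shows how quickly the connected model adds up. The per-QP figure below is an assumption for illustration only; the real cost depends on queue depths, the HCA, and the MPI implementation:

    #include <stdio.h>

    int main(void)
    {
        long nprocs   = 4096;            /* processes in the MPI job     */
        long per_node = 4;               /* processes per node           */
        long qp_bytes = 64 * 1024;       /* assumed host memory per QP   */

        long per_process = (nprocs - 1) * qp_bytes;   /* ~256 MB         */
        long per_node_fp = per_node * per_process;    /* ~1 GB per node  */

        printf("per process: %ld MB, per node: %ld MB\n",
               per_process >> 20, per_node_fp >> 20);
        return 0;
    }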

The recent addition of the Shared Receive Queue (SRQ) concept, noted as “Open MPI — SRQ” in the graph above, alleviates some of this scalability nightmare: a receive queue can be shared among several connections, thus consolidating the posted buffers used for the limited Send/Recv functionality. It is interesting to note that a shared receive queue is inherently a Send/Recv programming-model feature, perhaps a sign that the native RDMA interconnects are slowly migrating to the more efficient Send/Recv programming model. However, SRQ does not address the overhead of initially establishing the connections, so a connection-on-demand mechanism is needed to hide that cost. This approach may work for MPI microbenchmarks, in which processes typically do not have to communicate with every peer, but it will not work well for many real-world MPI applications.
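
For completeness, here is a hedged sketch of how an SRQ is created and shared among connections with the Verbs API; error handling is omitted and the queue depths are arbitrary:

    #include <infiniband/verbs.h>

    /* Create one receive queue to be shared by all connections. */
    struct ibv_srq *make_shared_rq(struct ibv_pd *pd)
    {
        struct ibv_srq_init_attr srq_attr = {
            .attr = { .max_wr = 1024, .max_sge = 1, .srq_limit = 0 }
        };
        return ibv_create_srq(pd, &srq_attr);
    }

    /* Every QP then points at the same receive queue at creation time,
     * so posted receive buffers are consolidated rather than replicated
     * once per peer. */
    struct ibv_qp *make_qp(struct ibv_pd *pd, struct ibv_cq *cq,
                           struct ibv_srq *srq)
    {
        struct ibv_qp_init_attr qp_attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .srq     = srq,
            .qp_type = IBV_QPT_RC,
            .cap     = { .max_send_wr = 64, .max_send_sge = 1 }
        };
        return ibv_create_qp(pd, &qp_attr);
    }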

Conclusions

The RDMA programming model does not fit well with the two-sided semantic constraints of MPI or the Sockets interface. This is nothing new; it was demonstrated before with previous incarnations of RDMA-based low-level communication layers such as VIA. However, the hype is on and the marketing message is wrongly promoting RDMA as the ultimate unifying programming model. Is the RDMA model completely useless? Of course not. It can be leveraged successfully for one-sided programming paradigms such as ARMCI (Aggregate Remote Memory Copy Interface) or UPC (Unified Parallel C), but these paradigms have yet to show better performance and ease of use than MPI.

History always repeats itself, so we can expect the current RDMA programming model to continue to struggle alongside the efficient Send/Recv model, and eventually to fade away.

The author would like to thank Charles Seitz and Loïc Prylli for their help in writing this article.

—–

Patrick Geoffray earned his Ph.D. at the University of Lyon, France, in 2001. His interests lie in high-performance computing. He is currently a Senior Software Architect at Myricom, in the Software Development Lab in Oak Ridge, TN. He is responsible for the design of the Myrinet Express (MX) message-passing system and its associated firmware implementations, and was earlier in charge of various middleware running on GM/Myrinet, including MPICH-GM and VIA-GM.
