A Critique of RDMA

By Patrick Geoffray, Ph.D.

August 18, 2006

Do you remember VIA, the Virtual Interface Architecture? I do. In 1998, according to its promoters (Intel, Compaq, and Microsoft), VIA was supposed to change the face of high-performance networking. VIA was a buzzword at the time; venture capital was flowing, and startups were multiplying. Many HPC pundits rallied behind this low-level programming interface, which promised scalable, low-overhead, high-throughput communication, initially for HPC and eventually for the data center. The hype was on, and doom was spelled for the non-believers.

It turned out that VIA, based on RDMA (Remote Direct Memory Access, or Remote DMA), was not an improvement over existing APIs for supporting widely used application-software interfaces such as MPI and Sockets. After a while, VIA faded away, overtaken by other developments.

VIA was eventually reborn into the RDMA programming model that is the basis of various InfiniBand Verbs implementations, as well as DAPL (Direct Access Provider Library) and iWARP (Internet Wide Area RDMA Protocol). The pundits have returned, VCs are spending their money, and RDMA is touted as an ideal solution for the efficiency of high-performance networks.

However, the evidence I'll present here shows that the revamped RDMA model is more a problem than a solution. What's more, the objective that RDMA claims to address, efficient user-level communication between computing nodes, is already solved by the two-sided Send/Recv model in products such as Quadrics QsNet, Cray SeaStar (implementing Sandia Portals), QLogic InfiniPath, and Myricom's Myrinet Express (MX).

Send/Recv versus RDMA

The difference between these two paradigms, Send/Receive (Send/Recv) and RDMA, lies essentially in how the destination buffer of a communication is determined.

  • In the Send/Recv model, the source issues a Send that describes the location of the data to be sent, the destination posts a Receive that similarly indicates where the data is to be written, and a matching capability is used to associate a posted Recv with an incoming Send. Each side has part of the information required for the completion of the communication; this is a “two-sided” interface.
  • In the RDMA model, both origin and destination buffers must be registered prior to any operation. This memory registration returns a handle that can be used in RDMA operations, Read or Write (a.k.a. Get or Put), to describe the origin or destination buffer. Only one side needs to have all of the information required for the completion of the communication; this is a “one-sided” interface. (A sketch contrasting the two models follows this list.)
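
To make the contrast concrete, here is a minimal sketch using MPI itself: the two-sided path uses MPI_Send/MPI_Recv, and the one-sided path uses MPI-2's MPI_Put, which mirrors an RDMA Write. Buffer sizes and tags are arbitrary, and the program assumes at least two ranks:

    #include <mpi.h>
    #include <stdio.h>

    #define COUNT 1024

    int main(int argc, char **argv)
    {
        int rank;
        double buf[COUNT] = {0};
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Two-sided: the sender describes the source buffer, the
           receiver describes the destination buffer, and the library
           matches the Send to the Recv (here on communicator + tag). */
        if (rank == 0)
            MPI_Send(buf, COUNT, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, COUNT, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        /* One-sided: every process first exposes (registers) its buffer
           in a window; afterwards the origin alone names both ends of
           the transfer, and the target posts nothing per message. */
        MPI_Win_create(buf, COUNT * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Put(buf, COUNT, MPI_DOUBLE, 1, 0, COUNT, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);
        MPI_Win_free(&win);

        MPI_Finalize();
        return 0;
    }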

In fact, most high-speed interconnects support both Send/Recv and RDMA (Put/Get) programming models, but the Send/Recv implementations of RDMA-based fabrics often lack a rich matching capability, which is a key element.

Matching

MPI is a two-sided interface with a large matching space: an MPI Recv is associated with an MPI Send according to several criteria such as sender, tag, and context, with the first two possibly wildcarded (ignored). Matching is not necessarily in order and, worse, an MPI Send can be posted before the matching MPI Recv is ready, creating the need to handle unexpected messages.

It is trivial to layer a Send/Recv interface on top of another Send/Recv interface, as long as the underlying matching space is wide enough. MPI requires 64 bits of matching information, and MX, Portals, and QsNet all provide such a matching capability.
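
For illustration, a single wildcard MPI Recv can match a message from any sender with any tag, recovering the actual source and tag from the status after the match; this is exactly the rich matching that an RDMA-based fabric does not provide natively:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, value, payload;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank != 0) {
            /* Every rank but 0 sends one message, tagged by rank. */
            payload = rank * rank;
            MPI_Send(&payload, 1, MPI_INT, 0, 100 + rank, MPI_COMM_WORLD);
        } else {
            /* One wildcard Recv per expected message: it matches any
               sender and any tag, in whatever order messages arrive. */
            for (int i = 1; i < nprocs; i++) {
                MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);
                printf("matched source=%d tag=%d\n",
                       status.MPI_SOURCE, status.MPI_TAG);
            }
        }

        MPI_Finalize();
        return 0;
    }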

InfiniBand Verbs and other RDMA-based APIs do not support matching at all: they associate incoming Sends with outstanding Recvs in the order in which the Recvs were posted. If only one process can send to a specific receive queue, then you need to send messages in the same order you want to receive them. With multiple senders (as with a Shared Receive Queue), you have no guarantee of ordering between senders. Worse, not having a Recv posted at the time a message arrives is considered a fatal error in reliable mode. This lack of support for unexpected messages requires a specific flow-control mechanism to avoid the situation.

When implementing MPI on top of an RDMA programming model, you need to do the matching explicitly by sending a message containing the matching information to the receive side.
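
As a sketch of what this looks like, an MPI-over-RDMA implementation might prepend something like the following header to every eager message and match it in software against the posted-receive queue. The layout and names here are hypothetical, not any particular library's wire format:

    #include <stdint.h>

    /* Hypothetical header an MPI library might prepend to each eager
       message when layering two-sided semantics over an RDMA fabric:
       the matching information must travel inside the message because
       the fabric itself cannot match. */
    struct eager_header {
        uint32_t src_rank;    /* MPI source rank                   */
        uint32_t tag;         /* MPI tag                           */
        uint32_t context_id;  /* communicator (context) identifier */
        uint32_t length;      /* payload bytes that follow         */
    };

    /* Software matching against one posted Recv, honoring wildcards
       (-1 stands in for MPI_ANY_SOURCE / MPI_ANY_TAG).  A real library
       walks its posted-receive queue with this test and must buffer
       the payload as "unexpected" when nothing matches. */
    static int matches(const struct eager_header *h,
                       int want_src, int want_tag, uint32_t want_ctx)
    {
        return want_ctx == h->context_id
            && (want_src == -1 || (uint32_t)want_src == h->src_rank)
            && (want_tag == -1 || (uint32_t)want_tag == h->tag);
    }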

Overhead (zero-copy)

One of the key alleged benefits of RDMA is to avoid intermediate memory copies. How does that work with MPI? Not well, for two reasons: memory registration and synchronization.

Memory registration

Zero-copy requires both origin and destination buffers to be registered to allow safe access by the network device. Memory registration and deregistration are expensive operations: they involve a system call and an iteration over each virtual page in an OS-specific way. This registration overhead is comparable to, and sometimes exceeds, the cost of a simple memory copy, defeating the original objective of avoiding the overhead of intermediate copies.

So, is it possible to do zero-copy in MPI? Yes, but in very specific situations or with some unpleasant tricks:

  • Using lightweight kernels without virtual memory (Cray XT3, IBM Blue Gene).
  • Patching the various Linux kernels to implement safe long-term registration (Quadrics).
  • Hijacking the malloc library to implement a non-portable and potentially unsafe user-level registration cache (MPICH-GM, MVAPICH, Open MPI), as sketched after this list.
  • Optimizing the registration code with specific NIC support so that its overhead is cheaper than a memory copy for large messages (MX).
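
To give a flavor of the third trick, here is a minimal sketch of a user-level registration cache. The nic_register/nic_deregister hooks are hypothetical stand-ins for a real interconnect API, and a production cache would also have to intercept free() and munmap() (hence the malloc hijacking) to stay safe:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical registration hooks; real ones would come from the
       interconnect's driver API and involve a system call. */
    typedef uint64_t reg_handle_t;
    extern reg_handle_t nic_register(void *addr, size_t len);
    extern void         nic_deregister(reg_handle_t h);

    #define CACHE_SLOTS 256

    struct reg_entry { void *addr; size_t len; reg_handle_t h; int valid; };
    static struct reg_entry cache[CACHE_SLOTS];

    /* Return a registration covering [addr, addr+len), reusing a cached
       one when the same buffer is sent again, so the expensive
       registration is paid only on first use. */
    reg_handle_t reg_cache_lookup(void *addr, size_t len)
    {
        unsigned i, victim = 0;
        for (i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].valid && cache[i].addr == addr && cache[i].len >= len)
                return cache[i].h;          /* hit: no system call */
            if (!cache[i].valid)
                victim = i;                 /* remember a free slot */
        }
        /* Miss: register and cache.  The unsafe part criticized above is
           that free() and munmap() must be intercepted to evict entries
           whose virtual pages the OS may recycle under the NIC. */
        if (cache[victim].valid)
            nic_deregister(cache[victim].h);
        cache[victim].addr  = addr;
        cache[victim].len   = len;
        cache[victim].h     = nic_register(addr, len);
        cache[victim].valid = 1;
        return cache[victim].h;
    }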

In short, zero-copy, and thus RDMA, is practical and efficient in the common case only when memory registration can occur outside the critical path, which is incompatible with MPI semantics as well as Sockets semantics.

Synchronization

To do zero-copy, you also need to know where you are going to read/write the data on the remote side. Inasmuch as MPI is a two-sided interface, you need to do the matching prior to the RDMA operation, thus requiring an explicit synchronization of the send and receive sides. The most dramatic side effect is that the MPI sender has to wait for the MPI receiver to be ready before completing the communication and reusing the send buffer. In some situations the receive side can be very late, and the matching synchronization propagates the delay to the send side. For this reason, it is not worth avoiding an intermediate copy on the send side for small and medium messages: once the message is buffered, the MPI_Send can complete and the application can effectively move on, including reusing the send buffer.
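
The resulting send-side logic is the classic eager/rendezvous split. The sketch below is hypothetical: the net_* helpers stand in for a real transport, and the 32 KB threshold is just a typical crossover value:

    #include <stddef.h>

    /* Typical eager/rendezvous crossover; actual values vary. */
    #define EAGER_MAX (32 * 1024)

    /* Hypothetical transport helpers standing in for a real NIC API. */
    extern void net_eager_send(int dest, const void *buf, size_t len);
    extern void net_rendezvous_send(int dest, const void *buf, size_t len);

    void mpi_send_sketch(int dest, const void *user_buf, size_t len)
    {
        if (len <= EAGER_MAX) {
            /* Small/medium: copy into a pre-registered internal buffer
               and complete immediately; the application can reuse
               user_buf at once, and no synchronization with the
               receiver is needed. */
            net_eager_send(dest, user_buf, len);
        } else {
            /* Large: pay the registration cost, synchronize with the
               receiver (matching handshake), then move the data
               zero-copy; the send cannot complete before the receiver
               has matched. */
            net_rendezvous_send(dest, user_buf, len);
        }
    }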

So, it may surprise outsiders that most MPI implementations that claim zero-copy actually do a memory copy on both the send and receive sides for small and medium messages, typically up to 16 KB or 32 KB, to avoid synchronization and memory-registration overhead. The RDMA model does not help in any way. The following graph, generated by my Myricom colleague Dr. Loïc Prylli, illustrates this trade-off in MX between registration and deregistration, which provide the lowest overhead for large messages, and memory copies, which provide the lowest overhead for short and medium messages.

Low Latency

Inasmuch as RDMA operations on native RDMA interconnects are faster than Send/Recv implemented over RDMA, it is logical to try to use RDMA for small messages, even with intermediate copies. The protocol is straightforward: the send buffer is copied into a pre-registered internal buffer, RDMA-Written into a pre-registered internal buffer on the receive side, and then copied out to the corresponding MPI receive buffer. The problem lies in the one-sided nature of the RDMA Write: only one process at a time can safely write into a given remote buffer. So the solution is to allocate a pre-registered internal buffer for each possible sender and look for messages in all of these separate buffers.

Sounds nice? Not really. This strategy does not scale in time or in space. With only two processes, the receive side has only one buffer to poll; with N processes in the MPI job, it has (N-1) locations to poll. Inasmuch as the polling time increases linearly with the number of processes, so does the latency. That's why most MPI implementations on RDMA-based fabrics use an RDMA method for small jobs and fall back to a slower but more scalable Send/Recv method for larger MPI jobs. For example, here are the short-message latencies reported in “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006, for the two modes of MVAPICH over InfiniBand.
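
The receive-side polling that this per-sender buffer scheme forces is easy to sketch. The slot layout below is hypothetical, but the linear sweep is the point:

    #include <stdint.h>

    #define NPROCS_MAX 4096
    #define SLOT_BYTES 4096

    /* One pre-registered landing buffer per potential sender; each
       sender RDMA-Writes its payload, then sets `full` as the flag the
       receiver spins on (the layout is hypothetical). */
    struct rdma_slot {
        volatile uint32_t full;
        uint8_t payload[SLOT_BYTES];
    };
    static struct rdma_slot slots[NPROCS_MAX];

    /* Poll every peer's slot.  With N processes the sweep inspects N-1
       locations, so the cost of merely noticing a message grows
       linearly with the job size. */
    int poll_all_peers(int my_rank, int nprocs)
    {
        for (int peer = 0; peer < nprocs; peer++) {
            if (peer == my_rank)
                continue;
            if (slots[peer].full) {
                slots[peer].full = 0;   /* consume the message */
                return peer;            /* arrival from this peer */
            }
        }
        return -1;                      /* nothing arrived this sweep */
    }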

What does this mean in practice? Point-to-point micro-benchmarks (i.e., marketing benchmarks) look artificially good on small configurations, but performance is different on networks of useful scale. By contrast, all Send/Recv interconnects have consistent latencies, independent of the number of processes in the MPI job. Also, even the 4.19 us latency above compares unfavorably with the short-message latencies of QsNet, Portals/SeaStar, InfiniPath, or MX/Myri-10G.

Overlap of Communication and Computation (Progression)

Inasmuch as MPI is a two-sided interface requiring matching, it is important that this matching can occur independently of the MPI application, particularly for large messages. Without such asynchronous progress, it is not possible to overlap communication with computation, even when doing zero-copy.
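
The pattern being measured is the standard MPI_Isend/compute/MPI_Wait idiom, sketched below with a placeholder compute(); whether any overlap actually happens depends entirely on the library's asynchronous progress:

    #include <mpi.h>

    extern void compute(void);   /* placeholder application work */

    void overlapped_send(void *buf, int count, int dest)
    {
        MPI_Request req;

        /* Start the (large) send and return immediately... */
        MPI_Isend(buf, count, MPI_BYTE, dest, 0, MPI_COMM_WORLD, &req);

        /* ...then compute while the message is, ideally, in flight.
           The overlap is real only if matching and the transfer make
           progress without the host re-entering the MPI library. */
        compute();

        /* Complete the send.  With poor asynchronous progress, most of
           the transfer happens here instead of during compute(). */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }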

The two following graphs are reproduced by permission from “Measuring MPI Send and Receive Overhead and Application Availability in High Performance Network Interfaces,” Doerfler et al., EuroPVM/MPI, September 2006, a paper I highly recommend. Rather than showing overhead, the graphs show the send and receive overlap capability (the fraction of processor time available for computation) for several recent HPC systems, listed here with their interconnects:

  • Red Storm: Cray SeaStar/Portals
  • Tbird: Cisco InfiniBand/MVAPICH
  • CBC-B: Qlogic InfiniPath
  • Odin: Myricom Myri-10G/MPICH-MX
  • Squall: Quadrics QsNetII

Asynchronous progress can be implemented with matching done completely in the NIC (Quadrics QsNet and Cray SeaStar) or in the host with specific NIC support (Myricom MX). Some sort of progression could be built on top of an RDMA programming model, but it would have a significant impact on short-message latency. Because matching is an integral part of the Send/Recv model, asynchronous implementation is easier in this context. Note that InfiniPath, even though it uses the Send/Recv model, always does a copy on the receive side for large messages, so it cannot overlap.

The graphs above were produced using the Sandia MPI Micro-Benchmark Suite (SMB).

Scalability

The biggest can of worms that the RDMA programming model brings to the table relates to scalability. RDMA is a connected model: one software entity, called a Queue Pair (QP), in one process space is associated with one and only one such entity in a different process space. So, if you want to be able to communicate with all N processes in your MPI job, you need a different QP for each process except yourself; that's (N-1) QPs per process. Even with multiple processes per node, the number of QPs supported by an RDMA-based NIC is large enough to accommodate all of them. However, this connected model directly affects the host memory footprint: each QP consumes host memory, so the total memory footprint grows linearly with the number of QPs. For example, it was not uncommon to see several gigabytes of memory required for large InfiniBand clusters. The following graph, which illustrates this point well, is from “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006.
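
To see the arithmetic, suppose, purely for illustration, that each QP and its associated buffers consume 64 KB of host memory: a fully connected 4,096-process job then needs 4,095 x 64 KB, roughly 256 MB, per process, or about 2 GB per node with eight processes per node, before the application has allocated a single byte.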

By contrast, the memory footprint of all Send/Recv-based interconnects is constant and independent of the number of processes in the MPI job.

Recently, the addition of the Shared Receive Queue (SRQ) concept, shown as “Open MPI - SRQ” in the graph above, has alleviated some of this scalability nightmare: a receive queue can be shared among several connections, thus consolidating the posted buffers used for the limited Send/Recv functionality. It is interesting to note that a shared receive queue is inherently a Send/Recv programming-model feature, perhaps a sign that the native RDMA interconnects are slowly migrating toward the more efficient Send/Recv programming model. However, SRQ does not address the overhead of initially establishing the connections, so a connection-on-demand mechanism is needed to hide the problem. This approach may work for MPI micro-benchmarks, in which each process typically communicates with only a few peers, but it will not work well for many real-world MPI applications, whose processes communicate with most of their peers.

Conclusions

The RDMA programming model does not fit well with the two-sided semantic constraints of MPI or the Sockets interface. This is nothing new; it was demonstrated before with previous incarnations of RDMA-based low-level communication layers such as VIA. However, the hype is on, and the marketing message is wrongly promoting RDMA as the ultimate unifying programming model. Is the RDMA model completely useless? Of course not. It can be leveraged successfully for one-sided programming paradigms such as ARMCI (Aggregate Remote Memory Copy Interface) or UPC (Unified Parallel C), but these paradigms have yet to show better performance and ease of use than MPI.

History always repeats itself, so we can expect the current RDMA programming model to continue to struggle alongside the efficient Send/Recv model, and eventually to fade away.

The author would like to thank Charles Seitz and Loïc Prylli for their help in writing this article.

-----

Patrick Geoffray earned his Ph.D. at the University of Lyon, France, in 2001. His interests lie in high-performance computing. He is currently a Senior Software Architect at Myricom, in the Software Development Lab in Oak Ridge, TN. He is responsible for the design of the Myrinet Express (MX) message-passing system and its associated firmware implementations, and was earlier in charge of various middleware running on GM/Myrinet, including MPICH-GM and VIA-GM.
