A Critique of RDMA

By Patrick Geoffray, Ph.D.

August 18, 2006

Do you remember VIA, the Virtual Interface Architecture? I do. In 1998, according to its promoters — Intel, Compaq, and Microsoft — VIA was supposed to change the face of high-performance networking. VIA was a buzzword at the time; Venture Capital was flowing, and startups multiplying. Many HPC pundits were rallying behind this low-level programming interface, which promised scalable, low-overhead, high-throughput communication, initially for HPC and eventually for the data center. The hype was on and doom was spelled for the non-believers.

It turned out that VIA, based on RDMA (Remote Direct Memory Access, or Remote DMA), was not an improvement over existing APIs for supporting widely used application-software interfaces such as MPI and Sockets. After a while, VIA faded away, overtaken by other developments.

VIA was eventually reborn into the RDMA programming model that is the basis of various InfiniBand Verbs implementations, as well as DAPL (Direct Access Provider Library) and iWARP (Internet Wide Area RDMA Protocol). The pundits have returned, VCs are spending their money, and RDMA is touted as an ideal solution for the efficiency of high-performance networks.

However, the evidence I'll present here shows that the revamped RDMA model is more a problem than a solution. What's more, the objective that RDMA claims to address, efficient user-level communication between computing nodes, is already solved by the two-sided Send/Recv model in products such as Quadrics QsNet, Cray SeaStar (implementing Sandia Portals), Qlogic InfiniPath, and Myricom's Myrinet Express (MX).

Send/Recv versus RDMA

The difference between these two paradigms, Send/Receive (Send/Recv) and RDMA, lies essentially in how the destination buffer of a communication is determined.

  • In the Send/Recv model, the source issues a Send that describes the location of the data to be sent, the destination posts a Receive that similarly indicates where the data is going to be written, and a matching capability is used to associate a posted Recv with an incoming Send. Each side has part of the information required for the completion of the communication; this is a “two-sided” interface.
  • In the RDMA model, both origin and destination buffers must be registered prior to any operation. This memory registration returns a handle that can be used in RDMA operations, Read or Write (a.k.a. Get or Put), to describe the origin or destination buffer. Only one side needs to have all of the information required for the completion of the communication; this is a “one-sided” interface.

In fact, most high-speed interconnects support both Send/Recv and RDMA (Put/Get) programming models, but the Send/Recv implementations of RDMA-based fabrics often lack a rich matching capability, which is a key element.
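
To make the contrast concrete, here is a minimal sketch of the two models in C. The MPI and InfiniBand Verbs calls are real, but this is an illustration of the programming models rather than a complete program: QP creation, connection establishment, memory registration, and the out-of-band exchange of the remote address and rkey are assumed to have happened already.

    #include <stdint.h>
    #include <mpi.h>
    #include <infiniband/verbs.h>

    /* Two-sided (Send/Recv): each side describes only its own buffer; the
     * library matches the incoming Send to a posted Recv at the destination. */
    void sender_side(double *sbuf, int count)        /* e.g., rank 0 */
    {
        MPI_Send(sbuf, count, MPI_DOUBLE, 1 /*dest*/, 42 /*tag*/, MPI_COMM_WORLD);
    }

    void receiver_side(double *rbuf, int count)      /* e.g., rank 1 */
    {
        MPI_Recv(rbuf, count, MPI_DOUBLE, 0 /*src*/, 42 /*tag*/, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    /* One-sided (RDMA Write): the initiator must already hold the remote
     * buffer's virtual address and rkey, obtained out of band after both
     * buffers were registered with ibv_reg_mr(); the target CPU is not
     * involved in the transfer at all. */
    void one_sided(struct ibv_qp *qp, struct ibv_mr *local_mr, void *sbuf,
                   size_t len, uint64_t remote_addr, uint32_t remote_rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)sbuf,
            .length = (uint32_t)len,
            .lkey   = local_mr->lkey,
        };
        struct ibv_send_wr wr = {
            .opcode              = IBV_WR_RDMA_WRITE,
            .sg_list             = &sge,
            .num_sge             = 1,
            .send_flags          = IBV_SEND_SIGNALED,
            .wr.rdma.remote_addr = remote_addr,
            .wr.rdma.rkey        = remote_rkey,
        };
        struct ibv_send_wr *bad_wr;
        ibv_post_send(qp, &wr, &bad_wr);   /* completion reaped from the CQ later */
    }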

Matching

MPI is a two-sided interface with a large matching space: an MPI Recv is associated with an MPI Send according to several criteria such as Sender, Tag, and Context, with the first two possibly ignored (wildcard). Matching is not necessarily in order and, worse, an MPI Send can be posted before the matching MPI Recv is ready, creating the need to handle unexpected messages.

It is trivial to layer a Send/Recv interface on top of another Send/Recv interface as long as the underlying matching space is wide enough. MPI requires 64 bits of matching information, and MX, Portals, and QsNet all provide such a matching capability.
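
For example, the following fragment (legal MPI, shown only to illustrate the size of the matching space) accepts a message from any sender with any tag and recovers the actual sender and tag afterwards:

    /* MPI matching selects on (source, tag, communicator), and source and
     * tag may both be wildcards. */
    int buf[64];
    MPI_Status status;
    MPI_Recv(buf, 64, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
             &status);
    int actual_src = status.MPI_SOURCE;   /* who actually sent it */
    int actual_tag = status.MPI_TAG;      /* and with which tag   */
    /* The matching MPI_Send may well have been issued before this Recv was
     * posted, in which case the library has to queue it as an "unexpected"
     * message until the match can be made. */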

InfiniBand Verbs and other RDMA-based APIs do not support matching at all: they associate incoming Sends with outstanding Recvs in the order in which the Recvs were posted. If only one process can send to a specific receive queue, you need to send messages in the same order you want to receive them. With multiple senders (a Shared Receive Queue), there is no guarantee of ordering between senders. Worse, not having a Recv posted when an incoming message arrives is considered a fatal error in reliable mode. This lack of support for unexpected messages requires a specific flow-control mechanism to avoid the situation.

When implementing MPI on top of an RDMA programming model, you need to do the matching explicitly by sending a message containing the matching information to the receive side.
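
Concretely, the library has to build and parse something like the header below by itself. The layout is purely illustrative and not that of any particular MPI implementation:

    #include <stdint.h>

    /* Hypothetical header prepended to every eager message when the fabric
     * offers no matching: the receive side parses it and performs the
     * (source, tag, context) match in software. */
    struct eager_hdr {
        uint32_t context_id;    /* communicator context */
        int32_t  src_rank;      /* sender rank          */
        int32_t  tag;           /* MPI tag              */
        uint32_t payload_len;   /* bytes that follow    */
    };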

Overhead (zero-copy)

One of the key alleged benefits of RDMA is to avoid intermediate memory copies. How does that work with MPI? Not well, for two reasons: memory registration and synchronization.

Memory registration

Zero-copy requires both origin and destination buffers to be registered to allow safe access by the network device. Memory registration and deregistration are expensive operations: they involve a system call and an iteration over each virtual page in an OS-specific way. This registration overhead is similar to, and sometimes exceeds, the cost of a simple memory copy, defeating the original objective of avoiding intermediate copies.
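
A rough way to see this for yourself is sketched below. It is not a rigorous benchmark: pd is a protection domain obtained during device setup with ibv_alloc_pd(), and now() is a placeholder for any microsecond-resolution wall clock.

    #include <string.h>
    #include <infiniband/verbs.h>

    /* Cost of pinning and unpinning a buffer versus simply copying it. */
    double registration_cost(struct ibv_pd *pd, void *buf, size_t len)
    {
        double t0 = now();
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        ibv_dereg_mr(mr);
        return now() - t0;      /* two system calls plus per-page work */
    }

    double copy_cost(void *dst, const void *src, size_t len)
    {
        double t0 = now();
        memcpy(dst, src, len);
        return now() - t0;      /* pure memory bandwidth */
    }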

So, is it possible to do zero-copy in MPI? Yes, but in very specific situations or with some unpleasant tricks:

  • Using lightweight kernels without virtual memory (Cray XT3, IBM Blue Gene).
  • Patching the various Linux kernels to implement safe long-term registration (Quadrics).
  • Hijacking the malloc library to implement a non-portable and potentially unsafe user-level registration cache (MPICH-GM, MVAPICH, Open MPI).
  • Optimizing the registration code with specific NIC support so that the overhead is cheaper than the memory copy for large messages (MX).

In short, zero-copy, and thus RDMA, is practical and efficient in the common case only when memory registration can occur outside the critical path, which is incompatible with MPI semantics as well as Sockets semantics.
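
The registration-cache trick from the third bullet above looks roughly like the sketch below. A real cache must also intercept free() and munmap() to evict stale entries, which is exactly the non-portable and potentially unsafe part.

    #include <stdlib.h>
    #include <infiniband/verbs.h>

    struct reg_entry {
        void *addr;
        size_t len;
        struct ibv_mr *mr;
        struct reg_entry *next;
    };
    static struct reg_entry *reg_cache;

    /* Register through a cache: a hit avoids the system call entirely. */
    struct ibv_mr *cached_reg(struct ibv_pd *pd, void *addr, size_t len)
    {
        for (struct reg_entry *e = reg_cache; e != NULL; e = e->next)
            if (e->addr == addr && e->len >= len)
                return e->mr;                       /* hit: reuse registration */

        struct reg_entry *e = malloc(sizeof(*e));
        e->addr = addr;
        e->len  = len;
        e->mr   = ibv_reg_mr(pd, addr, len, IBV_ACCESS_LOCAL_WRITE);
        e->next = reg_cache;
        reg_cache = e;
        return e->mr;                               /* miss: pay the cost once */
    }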

Synchronization

To do zero-copy, you also need to know where you are going to read/write the data on the remote side. Inasmuch as MPI is a two-sided interface, you need to do the matching prior to the RDMA operation, thus requiring an explicit synchronization of both the send and receive sides. The most dramatic side effect is that the MPI sender has to wait for the MPI receiver to be ready before completing the communication and reusing the send buffer. In some situations, the receive side can be very late and the matching synchronization propagates the delay to the send side. For this reason, it is not worth avoiding an intermediate copy on the send side for small/medium messages. Once the message is buffered, the MPI_Send can complete and the application can effectively move on, including reusing the send buffer.
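
Schematically, the send side of such a zero-copy (rendezvous) exchange looks like the sketch below. The helpers are placeholders, not a real API: send_ctrl() and wait_ctrl() stand for whatever small-message control channel the library uses, rdma_write() for the fabric's RDMA Write primitive, and cached_reg()/pd come from the registration sketch earlier.

    /* Send side of a rendezvous protocol, schematically. */
    void rendezvous_send(void *buf, size_t len, int dst, int tag)
    {
        struct ibv_mr *mr = cached_reg(pd, buf, len);  /* pin the send buffer  */
        send_ctrl(dst, RTS, tag, len);                 /* request to send      */
        struct cts c = wait_ctrl(dst, CTS, tag);       /* blocks until the     */
                                                       /* receiver has matched */
                                                       /* the tag and pinned   */
                                                       /* its own buffer       */
        rdma_write(dst, buf, len, mr->lkey, c.remote_addr, c.rkey);
        send_ctrl(dst, FIN, tag, 0);                   /* receiver completes   */
        /* Only now is it safe for the application to reuse buf: a late
         * receiver stalls the sender, which is the synchronization cost
         * described above. */
    }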

So, it may be surprising to outsiders that most MPI implementations that claim zero-copy actually do a memory copy on both the send and receive sides for small/medium messages, typically up to 16 KB or 32 KB, to avoid the synchronization and memory-registration overhead. The RDMA model does not help in any way. The following graph, generated by my Myricom colleague Dr. Loïc Prylli, illustrates this trade-off in MX between registration and deregistration, which provide the lowest overhead for large messages, and memory copies, which provide the lowest overhead for short/medium messages.
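
Inside the library, that trade-off typically shows up as a simple size-based dispatch. The 32 KB crossover below is just the kind of value mentioned above, and eager_send()/rendezvous_send() are the hypothetical paths sketched earlier:

    #define EAGER_LIMIT (32 * 1024)   /* illustrative crossover, tuned per fabric */

    void mpi_send_internal(void *buf, size_t len, int dst, int tag)
    {
        if (len <= EAGER_LIMIT)
            /* Copy into a pre-registered internal buffer: MPI_Send completes
             * immediately and the application can reuse buf right away. */
            eager_send(buf, len, dst, tag);
        else
            /* Pay registration plus rendezvous synchronization, save the copy. */
            rendezvous_send(buf, len, dst, tag);
    }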

Low Latency

Inasmuch as RDMA operations on native RDMA interconnects are faster than their Send/Recv implementation layered over RDMA, it is logical to try to use RDMA for small messages, even with intermediate copies. The protocol is straightforward: the send buffer is copied into a pre-registered internal buffer, RDMA-Written into a pre-registered internal buffer on the receive side, and then copied out to the corresponding MPI receive buffer. The problem lies in the one-sided nature of the RDMA Write: only one process can safely write into a remote buffer at a time. So the solution is to allocate a pre-registered internal buffer for each possible sender and look for messages in all of these separate buffers. Sounds nice? Not really. This strategy does not scale in time or in space. With only two processes, the receive side has only one buffer to poll. With N processes in the MPI job, the receive side has (N-1) locations to poll. Inasmuch as the polling time increases linearly with the number of processes, so does the latency.

That's why most MPI implementations on RDMA-based fabrics use an RDMA method for small jobs and fall back to a slower but more scalable Send/Recv method for larger MPI jobs. For example, here are the short-message latencies reported in “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006, for the two modes of MVAPICH over InfiniBand.
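
To see why the per-peer-buffer approach does not scale in time, here is roughly what the receive-side polling loop amounts to; struct slot and handle_message() are illustrative placeholders:

    /* One pre-registered buffer ("slot") per potential sender: the receiver
     * must scan them all, so polling cost grows linearly with the job size. */
    struct slot {
        volatile int flag;          /* set by the peer's RDMA Write */
        char payload[16 * 1024];    /* pre-registered eager buffer  */
    };

    int poll_all_peers(struct slot *slots, int nprocs, int my_rank)
    {
        for (int peer = 0; peer < nprocs; peer++) {
            if (peer == my_rank)
                continue;
            if (slots[peer].flag) {
                handle_message(&slots[peer]);   /* copy out, match, hand to MPI */
                slots[peer].flag = 0;
                return peer;
            }
        }
        return -1;                              /* nothing arrived on this pass */
    }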

What does it mean practically? Point-to-point micro-benchmarks (i.e., marketing benchmarks) are artificially better on small configurations, but performance is different in networks of useful scale. By contrast, all Send/Recv interconnects have consistent latencies, independent of the number of processes in the MPI job. Also, even the 4.19 us latency above compares unfavorably with the short-message latencies of QsNet, Portals/SeaStar, InfiniPath, or MX/Myri-10G.

Overlap of Communication and Computation (Progression)

Inasmuch as MPI is a two-sided interface requiring matching, it is important that this matching can occur independently of the MPI application, particularly for large messages. Without such asynchronous progress, it is not possible to overlap communication with computation, even when doing zero-copy.
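
Conceptually, measuring this overlap is simple. The fragment below is a sketch in the spirit of, but not identical to, the Sandia benchmark mentioned later in this section; big_buf, big_count, dest, tag, and do_compute() (pure computation, no MPI calls) are placeholders:

    /* If the library can progress the rendezvous without help from the host,
     * the total time approaches max(transfer time, compute time); without
     * asynchronous progress it degenerates toward their sum. */
    MPI_Request req;
    double t0 = MPI_Wtime();
    MPI_Isend(big_buf, big_count, MPI_BYTE, dest, tag, MPI_COMM_WORLD, &req);
    do_compute(compute_usec);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    double total = MPI_Wtime() - t0;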

The two following graphs are reproduced by permission from “Measuring MPI Send and Receive Overhead and Application Availability in High Performance Network Interfaces,” Doerfler et al., EuroPVM/MPI, September 2006, a paper I highly recommend. Rather than showing overhead, the graphs show the send and receive overlap capability — the fraction of the processor time available for computation — with several recent HPC systems, listed here with their interconnects:

  • Red Storm: Cray SeaStar/Portals
  • Tbird: Cisco InfiniBand/MVAPICH
  • CBC-B: Qlogic InfiniPath
  • Odin: Myricom Myri-10G/MPICH-MX
  • Squall: Quadrics QsNetII

Asynchronous progress can be implemented by matching completely in the NIC (Quadrics QsNet and Cray SeaStar) or in the host with specific NIC support (Myricom MX). Some sort of progression could be done on top of an RDMA programming model, but it would have a significant impact on short-message latency. Because matching is an integral part of the Send/Recv model, asynchronous implementation is easier in this context. Note that InfiniPath, even though it uses the Send/Recv model, always does a copy on the receive side for large messages, so it cannot overlap.

The graphs above were produced using the Sandia MPI Micro-Benchmark Suite (SMB).

Scalability

The biggest can of worms that the RDMA programming model brings to the table relates to scalability. RDMA is a connected model, i.e., one software entity called a Queue Pair (QP) in one process space is associated with one and only one such entity in a different process space. So, if you want to be able to communicate with all N processes in your MPI job, you need a different QP for each process except yourself, that is, (N-1) QPs per process. Even with multiple processes per node, the number of QPs supported by an RDMA-based NIC is large enough to accommodate all of them. However, this connected model directly affects the host memory footprint: each QP consumes host memory, so the total memory footprint grows linearly with the number of QPs. For example, it was not uncommon to see several gigabytes of memory required for QPs on large InfiniBand clusters. The following graph, which illustrates this point well, is from “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006.
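
A back-of-the-envelope calculation shows the same trend; all numbers below are illustrative assumptions, not measurements:

    /* Connected model: one QP per peer, so per-node memory grows linearly
     * with the job size. */
    size_t qp_footprint_per_node(size_t nprocs, size_t procs_per_node,
                                 size_t bytes_per_qp)
    {
        size_t qps_per_process = nprocs - 1;        /* one QP per peer */
        return procs_per_node * qps_per_process * bytes_per_qp;
    }
    /* Example: 4096 processes, 8 per node, ~64 KB per QP (queues plus
     * pre-posted buffers) -> 8 * 4095 * 64 KB, roughly 2 GB of host memory
     * per node before the application has allocated anything. */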

By contrast, the memory footprint of all Send/Recv-based interconnects is constant and independent of the number of processes in the MPI job.

Recently, the addition of the Shared Receive Queue (SRQ) concept, noted as “Open MPI — SRQ” in the graph above, has alleviated some of this scalability nightmare: a receive queue can be shared among several connections, consolidating the posted buffers used for the limited Send/Recv functionality. It is interesting to note that a shared receive queue is inherently a Send/Recv programming-model feature, perhaps a sign that the native RDMA interconnects are slowly migrating toward the more efficient Send/Recv programming model. However, SRQ does not address the overhead of initially establishing the connections, so a connection-on-demand mechanism is needed to hide the problem. This approach may work with MPI microbenchmarks, in which processes typically do not have to communicate with every peer in the MPI application, but it will not work well for many real-world MPI applications.
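
At the Verbs level, sharing a receive queue looks like the fragment below; the attribute values are illustrative, and pd is a protection domain from device setup:

    #include <infiniband/verbs.h>

    /* One Shared Receive Queue serves every connection, so pre-posted receive
     * buffers are consolidated instead of being replicated per QP. */
    struct ibv_srq *make_shared_rq(struct ibv_pd *pd)
    {
        struct ibv_srq_init_attr srq_attr = {
            .attr = { .max_wr = 1024, .max_sge = 1 },   /* illustrative depths */
        };
        return ibv_create_srq(pd, &srq_attr);
    }
    /* Each QP is then created with its ibv_qp_init_attr.srq pointing at this
     * SRQ, and buffers are replenished with ibv_post_srq_recv() rather than
     * per-QP ibv_post_recv(). */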

Conclusions

The RDMA programming model does not fit well with the two-sided semantic constraints of MPI or the Sockets interface. It's nothing new; it has been demonstrated before with previous incarnations of RDMA-based low-level communication layers such as VIA. However, the hype is on and the marketing message is wrongly promoting RDMA as the ultimate unifying programming model. Is the RDMA model completely useless? Of course not. It can be leveraged successfully for one-sided programming paradigms such as ARMCI (Aggregate Remote Memory Copy Interface) or UPC (Unified Parallel C), but these paradigms have yet to show better performance and ease of use than MPI.

History always repeats itself, so we can expect the current RDMA programming model to continue to struggle alongside the efficient Send/Recv model, and eventually to fade away.

The author would like to thank Charles Seitz and Loïc Prylli for their help in writing this article.

—–

Patrick Geoffray earned his Ph.D. at the University of Lyon, France, in 2001. His interests lie in high-performance computing. He is currently a Senior Software Architect at Myricom, in the Software Development Lab in Oak Ridge, TN. He is responsible for the design of the Myrinet Express (MX) message-passing system and its associated firmware implementations, and was earlier in charge of various middleware running on GM/Myrinet, including MPICH-GM and VIA-GM.
