A Critique of RDMA

By Patrick Geoffray, Ph.D.

August 18, 2006

Do you remember VIA, the Virtual Interface Architecture? I do. In 1998, according to its promoters — Intel, Compaq, and Microsoft — VIA was supposed to change the face of high-performance networking. VIA was a buzzword at the time; Venture Capital was flowing, and startups multiplying. Many HPC pundits were rallying behind this low-level programming interface, which promised scalable, low-overhead, high-throughput communication, initially for HPC and eventually for the data center. The hype was on and doom was spelled for the non-believers.

It turned out that VIA, based on RDMA (Remote Direct Memory Access, or Remote DMA), was not an improvement over existing APIs for supporting widely used application-software interfaces such as MPI and Sockets. After a while, VIA faded away, overtaken by other developments.

VIA was eventually reborn into the RDMA programming model that is the basis of various InfiniBand Verbs implementations, as well as DAPL (Direct Access Provider Library) and iWARP (Internet Wide Area RDMA Protocol). The pundits have returned, VCs are spending their money, and RDMA is touted as an ideal solution for the efficiency of high-performance networks.

However, the evidence I'll present here shows that the revamped RDMA model is more a problem than a solution. What's more, the objective that RDMA claims to address, efficient user-level communication between computing nodes, is already solved by the two-sided Send/Recv model in products such as Quadrics QsNet, Cray SeaStar (implementing Sandia Portals), Qlogic InfiniPath, and Myricom's Myrinet Express (MX).

Send/Recv versus RDMA

The difference between these two paradigms, Send/Receive (Send/Recv) and RDMA, lies essentially in how the destination buffer of a communication is determined.

  • In the Send/Recv model, the source issues a Send that describes the location of the data to be sent, the destination posts a Receive that similarly indicates where the data is going to be written, and a matching capability is used to associate a posted Recv to an incoming Send. Each side has part of the information required for the completion of the communication; this is a “two-sided” interface.
  • In the RDMA model, both origin and destination buffers must be registered prior to any operations. This memory registration returns a handle that can be used in RDMA operations, Read or Write (a.k.a. Get or Put), to describe the origin or destination buffer. Only one side needs to have all of the information required for the completion of the communication; this is a “one-sided” interface.
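To make the contrast concrete, the following sketch uses MPI's own two-sided (Send/Recv) and one-sided (MPI-2 RMA) operations as an analogy for the two models. It illustrates the programming paradigms only, not how any particular interconnect implements them, and it assumes a job of exactly two ranks.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, peer, buf = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        peer = 1 - rank;                          /* assumes exactly two ranks */

        /* Two-sided: the sender names its data, the receiver names the
         * destination, and the library matches the two on source and tag. */
        if (rank == 0) {
            int data = 42;
            MPI_Send(&data, 1, MPI_INT, peer, 7, MPI_COMM_WORLD);
        } else {
            MPI_Recv(&buf, 1, MPI_INT, peer, 7, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        /* One-sided: the target merely exposes memory (a "window"); the origin
         * supplies all the information, including where the data lands. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);
        if (rank == 0) {
            int data = 43;
            MPI_Put(&data, 1, MPI_INT, peer, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);
        MPI_Win_free(&win);

        MPI_Finalize();
        return 0;
    }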

In fact, most high-speed interconnects support both Send/Recv and RDMA (Put/Get) programming models, but the Send/Recv implementations of RDMA-based fabrics often lack a rich matching capability, which is a key element.

Matching

MPI is a two-sided interface with a large matching space: an MPI Recv is associated with an MPI Send according to several criteria such as Sender, Tag, and Context, with the first two possibly ignored (wildcard). Matching is not necessarily in order and, worse, an MPI Send can be posted before the matching MPI Recv is ready, creating the need to handle unexpected messages.
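As a minimal illustration of this matching space, the receive below uses wildcards for both sender and tag, and the send may well arrive before the receive is posted. This is plain, standard MPI, nothing implementation-specific.

    #include <mpi.h>

    void matching_example(int rank)
    {
        int data = 0, payload = 99;
        MPI_Status status;

        if (rank == 0) {
            /* may be posted before the matching receive exists:
             * an "unexpected" message on the receive side */
            MPI_Send(&payload, 1, MPI_INT, 1, 1234, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* wildcards on source and tag; the context is the communicator */
            MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            /* status.MPI_SOURCE and status.MPI_TAG report what actually matched */
        }
    }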

It is trivial to layer a Send/Recv interface on top of another Send/Recv interface as long as the underlying matching space is wide enough. MPI requires 64 bits of matching information, and MX, Portals, and QsNet provide such a matching capability.

InfiniBand Verbs and other RDMA-based APIs do not support matching at all: they associate incoming Sends with outstanding Recvs in the order in which the Recvs were posted. If only one process can send to a specific receive queue, then you need to send messages in the same order you want to receive them. With multiple senders (as with a Shared Receive Queue), there is no ordering guarantee between senders. Worse, not having a Recv posted at the time an incoming message arrives is considered a fatal error in reliable mode. This lack of support for unexpected messages requires a specific flow-control mechanism to avoid the situation.

When implementing MPI on top of an RDMA programming model, you need to do the matching explicitly by sending a message containing the matching information to the receive side.
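A sketch of what that explicit matching looks like follows; the header layout and the wildcard sentinels are illustrative, not any particular library's wire format.

    #include <stdint.h>

    /* every eager message carries the matching information in front of the payload */
    struct mpi_match_header {
        int32_t  context_id;    /* communicator context */
        int32_t  source_rank;   /* sender within that communicator */
        int32_t  tag;           /* user tag */
        uint32_t length;        /* payload length in bytes */
    };

    #define ANY_SOURCE (-1)     /* illustrative wildcard sentinels */
    #define ANY_TAG    (-1)

    /* run in software on the receive side against the posted-receive queue;
     * unmatched headers go to an unexpected-message queue instead */
    static int header_matches(const struct mpi_match_header *hdr,
                              int want_context, int want_source, int want_tag)
    {
        if (hdr->context_id != want_context)
            return 0;
        if (want_source != ANY_SOURCE && hdr->source_rank != want_source)
            return 0;
        if (want_tag != ANY_TAG && hdr->tag != want_tag)
            return 0;
        return 1;
    }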

Overhead (zero-copy)

One of the key alleged benefits of RDMA is to avoid intermediate memory copies. How does that work with MPI? Not well, for two reasons: memory registration and synchronization.

Memory registration

Zero-copy requires both origin and destination buffers to be registered to allow safe access by the network device. Memory registration and deregistration are expensive operations. They involve a system call and an iteration over each virtual page in an OS-specific way. This registration overhead is actually similar to and sometimes exceeds the cost of a simple memory copy, going against the original objective of avoiding the overhead of intermediate copies.
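For illustration, here is roughly what per-message zero-copy costs over InfiniBand Verbs when no cache is used; queue-pair setup, work-request posting, and error handling are omitted, and the protection domain is assumed to have been created already.

    #include <stddef.h>
    #include <infiniband/verbs.h>

    void send_zero_copy(struct ibv_pd *pd, void *user_buf, size_t len)
    {
        /* a system call plus per-page pinning: cost grows with the buffer size */
        struct ibv_mr *mr = ibv_reg_mr(pd, user_buf, len, IBV_ACCESS_LOCAL_WRITE);
        if (mr == NULL)
            return;

        /* ... post the Send or RDMA Write using mr->lkey, wait for completion ... */

        /* another system call to unpin the pages */
        ibv_dereg_mr(mr);
    }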

So, is it possible to do zero-copy in MPI? Yes, but in very specific situations or with some unpleasant tricks:

  • Using lightweight kernels without virtual memory (Cray XT3, IBM Blue Gene).
  • Patching the various Linux kernels to implement safe long-term registration (Quadrics).
  • Hijacking the Malloc library to implement a non-portable and potentially unsafe user-level registration cache (MPICH-GM, MVAPICH, Open MPI); a minimal cache sketch follows this list.
  • Optimizing the registration code with specific NIC support so that the overhead is cheaper than the memory copy for large messages (MX).
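Here is a minimal, hypothetical version of the registration cache mentioned in the third bullet above. The nic_register() call stands in for whatever the NIC API provides, and the hard part, intercepting free()/munmap() to invalidate stale entries, is precisely what is omitted.

    #include <stddef.h>

    #define CACHE_SLOTS 64

    struct reg_entry {
        void  *addr;
        size_t len;
        void  *handle;    /* whatever the NIC API returns for a registration */
        int    valid;
    };

    static struct reg_entry cache[CACHE_SLOTS];

    extern void *nic_register(void *addr, size_t len);   /* assumed NIC call */

    void *lookup_or_register(void *addr, size_t len)
    {
        int i;

        for (i = 0; i < CACHE_SLOTS; i++)
            if (cache[i].valid && cache[i].addr == addr && cache[i].len >= len)
                return cache[i].handle;        /* hit: registration cost avoided */

        /* miss: register and remember (eviction and invalidation omitted) */
        void *h = nic_register(addr, len);
        for (i = 0; i < CACHE_SLOTS; i++) {
            if (!cache[i].valid) {
                cache[i].addr = addr;
                cache[i].len = len;
                cache[i].handle = h;
                cache[i].valid = 1;
                break;
            }
        }
        return h;
    }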

In short, zero-copy, and thus RDMA, is practical and efficient in the common case only when the memory registration can occur out of the critical path, which is incompatible with MPI semantics as well as with Sockets semantics.

Synchronization

To do zero-copy, you also need to know where you are going to read/write the data on the remote side. Inasmuch as MPI is a two-sided interface, you need to do the matching prior to the RDMA operation, thus requiring an explicit synchronization of both the send and receive sides. The most dramatic side effect is that the MPI sender has to wait for the MPI receiver to be ready before completing the communication and reusing the send buffer. In some situations, the receive side can be very late and the matching synchronization propagates the delay to the send side. For this reason, it is not worth avoiding an intermediate copy on the send side for small/medium messages. Once the message is buffered, the MPI_Send can complete and the application can effectively move on, including reusing the send buffer.

So, it may be surprising to outsiders that most MPI implementations that claim zero-copy actually do a memory copy on both the send and receive sides for small/medium messages, typically up to 16K or 32K, to avoid synchronization and memory-registration overhead. The RDMA model does not help in any way. The following graph, generated by my Myricom colleague Dr. Loïc Prylli, illustrates this trade-off in MX between using registration and deregistration, which provides the lowest overhead for large messages, and using memory copies, which provide the lowest overhead for short/medium messages.
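In code, the decision most MPI libraries make looks like the sketch below; the threshold value and the helper names are illustrative, not taken from any particular implementation.

    #include <stddef.h>
    #include <string.h>

    #define EAGER_THRESHOLD (32 * 1024)   /* typically 16K-32K, as noted above */

    extern char *eager_slot(int peer);                      /* pre-registered buffer */
    extern void  post_eager_send(int peer, size_t len);
    extern void  rendezvous_send(int peer, const void *buf, size_t len);

    void mpi_internal_send(int peer, const void *buf, size_t len)
    {
        if (len <= EAGER_THRESHOLD) {
            /* copy, so the application can reuse its buffer immediately */
            memcpy(eager_slot(peer), buf, len);
            post_eager_send(peer, len);
        } else {
            /* pay the registration and synchronization cost to avoid the copy */
            rendezvous_send(peer, buf, len);
        }
    }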

Low Latency

Inasmuch as the RDMA operations are faster in native RDMA interconnects than the implementation of Send/Recv over RDMA, it is logical to try to use RDMA for small messages, even with intermediate copies. The protocol is straightforward: the send buffer is copied into a pre-registered internal buffer, RDMA-Written into a pre-registered internal buffer on the receive side, and then copied out to the corresponding MPI receive buffer. The problem lies in the one-sided characteristic of the RDMA Write: only one process can safely write into a remote buffer at a time. So the solution is to allocate a pre-registered internal buffer for each possible sender and look for messages in all of these separate buffers. Sounds nice? Not really. This strategy does not scale in time or in space. With only two processes, the receive side has only one buffer to poll. With N processes in the MPI job, the receive side has (N-1) locations to poll. Inasmuch as the polling time increases linearly with the number of processes, so does the latency. That's why most MPI implementations on RDMA-based fabrics use an RDMA method for small jobs, and fall back to a slower but more scalable Send/Recv method for larger MPI jobs. For example, here are the short-message latencies reported in “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006, for the two modes of MVAPICH over InfiniBand.
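The receive-side polling that makes this scheme non-scalable looks roughly like this; the slot layout and the names are illustrative.

    struct peer_slot {
        volatile unsigned char ready;   /* set last by the sender's RDMA Write */
        char payload[2048];
    };

    /* one pre-registered slot per possible sender */
    extern struct peer_slot *slots;

    int poll_for_message(int nprocs, int myrank)
    {
        int peer;

        /* polling time grows linearly with the number of processes */
        for (peer = 0; peer < nprocs; peer++) {
            if (peer == myrank)
                continue;
            if (slots[peer].ready) {
                slots[peer].ready = 0;
                return peer;            /* a message from this peer has arrived */
            }
        }
        return -1;                      /* nothing yet */
    }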

What does it mean practically? Point-to-point micro-benchmarks (i.e., marketing benchmarks) are artificially better on small configurations, but performance is different in networks of useful scale. By contrast, all Send/Recv interconnects have consistent latencies, independent of the number of processes in the MPI job. Also, even the 4.19 us latency above compares unfavorably with the short-message latencies of QsNet, Portals/SeaStar, InfiniPath, or MX/Myri-10G.

Overlap of Communication and Computation (Progression)

Inasmuch as MPI is a two-sided interface requiring matching, it is important that this matching can occur independently of the MPI application, particularly for large messages. Without such asynchronous progress, it is not possible to overlap communication with computation, even when doing zero-copy.

The two following graphs are reproduced by permission from “Measuring MPI Send and Receive Overhead and Application Availability in High Performance Network Interfaces,” Doerfler, et al, EuroPVM/MPI, September 2006, a paper I highly recommend. Rather than showing overhead, the graphs show the send and receive overlap capability — the fraction of the processor time available for computation — with several recent HPC systems, listed here with their interconnects:

  • Red Storm: Cray SeaStar/Portals
  • Tbird: Cisco InfiniBand/MVAPICH
  • CBC-B: Qlogic InfiniPath
  • Odin: Myricom Myri-10G/MPICH-MX
  • Squall: Quadrics QsNetII

Asynchronous progress can be implemented by matching completely in the NIC (Quadrics QsNet and Cray SeaStar) or in the host with specific NIC support (Myricom MX). Some sort of progression could be done on top of an RDMA programming model, but it would have a significant impact on short-message latency. Because matching is an integral part of the Send/Recv model, asynchronous implementation is easier in this context. Note that InfiniPath, even though it uses the Send/Recv model, always does a copy on the receive side for large messages, so it cannot overlap.

The graphs above were produced using the Sandia MPI Micro-Benchmark Suite (SMB).
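For completeness, one software-only way to get some progression over an RDMA-style interface is a dedicated progress thread, sketched below with an assumed library-internal polling call. This is not how QsNet, SeaStar, or MX obtain asynchronous progress, and, as noted above, it costs short-message latency.

    #include <pthread.h>

    extern void poll_network_and_match(void);   /* assumed library-internal call */

    static void *progress_thread(void *arg)
    {
        (void)arg;
        for (;;)
            poll_network_and_match();   /* advances matching while the app computes */
        return NULL;
    }

    int start_progress_thread(pthread_t *tid)
    {
        return pthread_create(tid, NULL, progress_thread, NULL);
    }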

Scalability

The biggest can of worms that the RDMA programming model brings to the table relates to scalability. RDMA is a connected model, i.e., one software entity called a Queue Pair (QP) in one process space is associated with one and only one such entity in a different process space. So, if you want to be able to communicate with all N processes in your MPI job, you need to use a different QP for each process except yourself. That's (N-1) QPs per process. Even with multiple processes per node, the number of QPs per RDMA-based NIC is large enough to accommodate all of them. However, this connected model directly affects the host memory footprint: each QP consumes host memory, so the total memory footprint grows linearly with the number of QPs. For example, it was not uncommon to see several gigabytes of memory required for large InfiniBand clusters. The following graph, which well illustrates this point, is from “InfiniBand Scalability in Open MPI,” Shipman et al., IPDPS, May 2006.
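A back-of-the-envelope calculation shows how quickly this adds up. The per-QP figure below is an assumed, illustrative number; the real cost depends on queue depths and the Verbs implementation.

    #include <stdio.h>

    int main(void)
    {
        const double per_qp_kib = 64.0;   /* assumed per-QP host memory, for illustration */
        const int procs_per_node = 4;     /* assumed */
        int nprocs;

        for (nprocs = 256; nprocs <= 4096; nprocs *= 4) {
            /* each process holds one QP per peer: (N - 1) QPs */
            double per_proc_mib = (nprocs - 1) * per_qp_kib / 1024.0;
            double per_node_mib = per_proc_mib * procs_per_node;
            printf("%5d processes: %7.1f MiB per process, %8.1f MiB per node\n",
                   nprocs, per_proc_mib, per_node_mib);
        }
        return 0;
    }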

By contrast, the memory footprint of all Send/Recv-based interconnects is constant and independent of the number of processes in the MPI job.

Recently, the addition of the Shared Receive Queue (SRQ) concept, noted as “Open MPI — SRQ” in the graph above, has alleviated some of this scalability nightmare: a receive queue can be shared among several connections, thus consolidating the posted buffers used for the limited Send/Recv functionality. It is interesting to note that a shared receive queue is inherently a Send/Recv programming-model feature, perhaps a sign that the native RDMA interconnects are slowly migrating to the more efficient Send/Recv programming model. However, SRQ does not address the overhead of initially establishing the connections, so a connection-on-demand mechanism is needed to hide the problem. This approach may work with MPI microbenchmarks, in which processes typically do not have to communicate with every peer in the MPI application, but it will not work well for many real-world MPI applications.

Conclusions

The RDMA programming model does not fit well with the two-sided semantic constraints of MPI or the Sockets interface. This is nothing new; it has been demonstrated before with previous incarnations of RDMA-based low-level communication layers such as VIA. However, the hype is on and the marketing message is wrongly promoting RDMA as the ultimate unifying programming model. Is the RDMA model completely useless? Of course not. It can be leveraged successfully for one-sided programming paradigms such as ARMCI (Aggregate Remote Memory Copy Interface) or UPC (Unified Parallel C), but these paradigms have yet to show better performance and ease of use than MPI.

History always repeats itself, so we can expect the current RDMA programming model to continue to struggle alongside the efficient Send/Recv model, and eventually to fade away.

The author would like to thank Charles Seitz and Loïc Prylli for their help in writing this article.

—–

Patrick Geoffray earned his Ph.D. at the University of Lyon, France, in 2001. His interests lie in high-performance computing. He is currently a Senior Software Architect at Myricom, in the Software Development Lab in Oak Ridge, TN. He is responsible for the design of the Myrinet Express (MX) message-passing system and its associated firmware implementations, and was earlier in charge of various middleware running on GM/Myrinet, including MPICH-GM and VIA-GM.
