A recent paper published by a public cloud vendor and others reveals the trials and tribulations of their RoCE experience, and the scale of their deployment woes [1].
The authors, who include the RoCE vendor and some of the staunchest RoCE proponents, reveal the severity of the problems faced: “poor application performance,” “head-of-line blocking,” “unfairness,” “congestion that spreads” in the network, and “performance that degrades” under load.
The authors then attempt to patch together pieces from Ethernet and Data Center TCP into a sideband congestion control scheme (DCQCN). However, DCQCN fails to resolve the underlying issues, as it still relies on the same blunt pause mechanism that exposes the network to congestion collapse.
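For context, DCQCN's core mechanism pairs ECN marking at the switches with sender-side rate adjustment: the sender cuts its rate when congestion feedback arrives and gradually recovers toward a target rate during quiet periods. The sketch below is a deliberate simplification with illustrative names and parameters, not the NIC-resident implementation described in the paper; its point is that DCQCN governs sending rate, while the underlying pause mechanism remains in place.

```python
# Illustrative sketch of DCQCN-style sender rate control.
# Hypothetical class and parameter names; real DCQCN runs in the NIC
# and adds timers, byte counters, and multiple increase stages.

class DcqcnSender:
    def __init__(self, line_rate_gbps=40.0, g=1.0 / 256):
        self.rate = line_rate_gbps    # current sending rate (RC)
        self.target = line_rate_gbps  # target rate for recovery (RT)
        self.alpha = 1.0              # congestion estimate
        self.g = g                    # EWMA gain for alpha updates

    def on_cnp(self):
        """Congestion feedback received: multiplicative decrease."""
        self.target = self.rate                        # remember pre-cut rate
        self.alpha = (1 - self.g) * self.alpha + self.g
        self.rate *= 1 - self.alpha / 2                # cut proportionally

    def on_quiet_epoch(self):
        """No congestion feedback in an epoch: decay alpha, recover."""
        self.alpha *= 1 - self.g
        self.rate = (self.rate + self.target) / 2      # climb back toward RT

sender = DcqcnSender()
sender.on_cnp()                 # ECN mark fed back: rate drops sharply
for _ in range(10):
    sender.on_quiet_epoch()     # quiet period: rate climbs back
```

Note that none of this removes the need for PFC: DCQCN only reduces how often pauses fire, which is the crux of the critique above.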
Thus, the paper serves as proof that the premise behind RoCE, airlifting the “incompatible InfiniBand” into the Ethernet space, is doomed to fail for lack of critical stability mechanisms.
Heeding the warnings from this paper and other known experiences, users are avoiding scalability limitations and dangerous network meltdowns by steering clear of RoCE. Instead, many are selecting the iWARP RDMA over Ethernet standard. iWARP is a scalable, easy-to-use, plug-and-play protocol that leverages a proven and mature TCP/IP foundation and originates from the fully open IETF standards process. There is no reason to slide down the RoCE path when a stable, robust, cloud-ready alternative is available that provides competitive performance and benefits.
In light of mounting evidence that RoCE users face major deployment problems, the truth is finally emerging, exposing the misleading claims of the aggressive FUD campaign that has accompanied the push for InfiniBand over Ethernet.
The paper discusses the widespread congestion failures observed when RoCE is deployed at scale, as reported by Microsoft Azure, widely advertised as the main datacenter proof point for RoCE. It also describes the DCQCN scheme put together to plug some of the gaping holes in InfiniBand’s Ethernet incursion [1].
The perils of using PFC in a large-scale deployment are well known. The need for PFC alone should prevent one’s slide down the RoCE path. Unfortunately, the repetition of baseless marketing fluff and technically meaningless statements, such as “due to end-to-end loss recovery, iWarp [sic] cannot offer ultra-low latency like RoCEv2,” together with the authors’ shallow understanding of iWARP, calls into question the objectivity of their analysis and conclusions.
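To make the PFC hazard concrete, consider head-of-line blocking: a PAUSE frame triggered by one congested egress port stops all traffic of that priority on the shared upstream link, including flows headed to uncongested ports. The toy model below is entirely hypothetical (real PFC operates per priority at the link layer, with XON/XOFF buffer thresholds in the switch), but it captures why a single hot spot can stall innocent traffic:

```python
# Toy illustration of PFC head-of-line blocking (hypothetical model,
# not a switch implementation): once a PAUSE is sent upstream, every
# flow on that link stops, even ones avoiding the congested port.

XOFF_THRESHOLD = 8  # frames queued at the hot port before PAUSE fires

def simulate(frames):
    """frames: list of (flow, egress_port); egress 'hot' is congested."""
    hot_queue, delivered, paused = 0, [], False
    for flow, egress in frames:
        if paused:            # upstream link paused: ALL flows blocked,
            continue          # including victims bound for idle ports
        if egress == "hot":
            hot_queue += 1    # congested port: frame sits in the queue
            if hot_queue >= XOFF_THRESHOLD:
                paused = True  # switch emits a PFC PAUSE upstream
        else:
            delivered.append(flow)
    return delivered

# Interleave a congesting flow with an innocent one on the same link.
traffic = [("bully", "hot"), ("victim", "idle")] * 10
print(simulate(traffic))  # victim deliveries stop once PAUSE fires
```

In a real fabric the effect compounds: the paused upstream switch fills its own buffers and pauses its neighbors in turn, which is the congestion spreading behavior the paper documents.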
Nevertheless, there is no hiding the fact that the RoCE problems are real, and the successive attempts at solutions haphazard and incomplete, with RoCE users repeatedly having to undergo emergency protocol surgery. These hard-learned lessons are helping others avoid the same mistake by selecting iWARP, and across the board a sea change is underway as the industry pulls back from the edge of the RoCE abyss.
Note: This Center Stage article is an excerpt of a Chelsio white paper, RoCE Fails to Scale: Repetitive Protocol Surgery or the Dangers of Sliding Down the RoCE Path. You can download a copy at http://www.chelsio.com/wp-content/uploads/resources/RoCE-Deployment-Challenges-for-Clouds.pdf
[1] Yibo Zhu et al., “Congestion Control for Large-Scale RDMA Deployments,” SIGCOMM 2015, http://www.cs.ucsb.edu/~yibo/