Interconnect battles have taken different paths over the years. There have been two main battles: the battle of "offload"-based architectures vs. "onload"-based architectures, and the battle of standards-based networks vs. proprietary networks.
It appears that the first battle has ended in favor of offload-based architectures, with InfiniBand clearly in the lead. Onload-based architectures, including PathScale InfiniPath and QLogic TrueScale, are no longer in the market, and Intel Omni-Path development has stopped. The key advantages of offload-based architectures, namely the reduction of CPU utilization and the enablement of asynchronous progress, have proven to deliver higher application performance.
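To make the asynchronous-progress argument concrete, here is a minimal MPI sketch of communication/computation overlap. The buffer sizes, the ring exchange pattern and the compute kernel are placeholders chosen for illustration; the point is only that, with an offload-capable NIC, the transfers posted by the non-blocking calls can progress in hardware while the CPU keeps computing.

```c
/* Minimal sketch of communication/computation overlap with non-blocking MPI.
 * With an offload-capable NIC, the posted transfers can progress in hardware
 * while the CPU runs compute_step(). Buffer sizes and the compute kernel are
 * placeholders, not taken from the article. */
#include <mpi.h>

#define N (1 << 20)

static double send_buf[N], recv_buf[N], work[N];

static void compute_step(void)
{
    /* placeholder application work that overlaps the in-flight transfers */
    for (int i = 0; i < N; i++)
        work[i] = work[i] * 1.000001 + 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* simple ring exchange */
    int left  = (rank - 1 + size) % size;
    MPI_Request reqs[2];

    MPI_Irecv(recv_buf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(send_buf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    compute_step();                         /* CPU computes while the NIC moves data */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    MPI_Finalize();
    return 0;
}
```

On an onload-based network, much of the transfer is driven by the host CPU, so the overlap above shrinks; on an offload-based network the adapter carries the transfer and the compute loop runs largely undisturbed.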
The second battle is still ongoing. On one side are InfiniBand and Ethernet as the main standards-based networks; on the other side is a list of proprietary-protocol networks including Myricom Myrinet, Quadrics QsNet, Intel Omni-Path, Cray SeaStar, Cray Gemini, Fujitsu Tofu, Cray Aries, and the latest addition, Cray Slingshot. Several other proprietary networks have existed, but their usage has been minimal. From this list, Fujitsu Tofu and Cray Slingshot are the only ones with ongoing development efforts.
Standards-based networks have multiple advantages over proprietary ones, including:
- Backward and forward compatibility – the ability to connect old network generations to future network generations;
- Robust software support and the ability to use the same software and applications on different network generations;
- Established software ecosystem – the software drivers are typically part of the operating system distributions, and there is a large ecosystem of ISV support;
- Established hardware ecosystem – including server, storage, management and other platforms;
- Stronger and more aggressive roadmap – with the large ecosystem support, there is no need to rebuild the ecosystem over and over again, as is the case with proprietary networks. Standards-based network development can therefore focus on delivering better and faster generations that meet the needs of future applications;
- Advanced capabilities – for the same reasons, standards-based networks introduce better and more advanced capabilities than proprietary networks. For example, while congestion control has been native to InfiniBand for many years, it is only now being introduced with Slingshot, to be deployed in 2020;
- Investment protection – data center IT managers can reuse existing platforms alongside future platforms, protecting their financial investments for the long term.
The InfiniBand standard, developed by the InfiniBand Trade Association (IBTA), provides all of the above benefits and more. Therefore, it is the leading 200 gigabit-per-second end-to-end interconnect technology for high performance computing, artificial intelligence, cloud, storage and other applications. It is highly scalable, from hundreds of nodes to tens and hundreds of thousands of nodes, supports smart In-Network Computing engines that allow data algorithms to be executed by the network, and provides extremely low latency, full transport offloads, remote direct memory access (RDMA), GPUDirect and other features.
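As an illustration of what full transport offloads and RDMA mean at the programming level, here is a minimal libibverbs sketch of posting a one-sided RDMA write. The function name is illustrative, and it assumes the queue pair, the registered memory region and the peer's address/rkey have already been set up and exchanged out of band.

```c
/* Minimal libibverbs sketch: post a one-sided RDMA write.
 * Assumes a connected queue pair (qp), a registered memory region (mr),
 * and the peer's remote address and rkey exchanged out of band. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id      = 1;
    wr.opcode     = IBV_WR_RDMA_WRITE;    /* one-sided: no CPU involvement at the target */
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.send_flags = IBV_SEND_SIGNALED;    /* generate a completion on the local CQ */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* The adapter's transport engine moves the data; the caller polls the CQ later. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

Once the work request is posted, the host CPU is out of the data path: the adapter's transport engine moves the bytes, and the remote CPU is not involved at all.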
Slingshot is probably based on a combination similar to that of the old Quadrics QsNet and Gnodal products. The Gnodal technology followed the Quadrics approach, with added support for internal gateways that bridge the proprietary protocol to standard Ethernet, in order to offer Ethernet switch products to the market. Slingshot takes a similar approach to Gnodal's, namely supporting two different network protocols: a proprietary network, plus the ability to bridge over to standard Ethernet. Most, if not all, of the new features Cray introduced beyond its previous proprietary network, Aries, are obviously available only on the proprietary Slingshot network, and not through the bridge to standard Ethernet connectivity.
Many years ago, Mellanox decided to bring the two standard protocols, InfiniBand and Ethernet, together in the same network adapter silicon (the ConnectX® family) and in the same switch silicon (named SwitchX®). The motivation, of course, was ease of use: users could deploy one network and decide later whether to use it as InfiniBand (a high performance network), as Ethernet, or as both at the same time. While combining InfiniBand and Ethernet on the network adapter has been a great success, combining the two protocols on the switch created performance limitations, mainly increased switch latency. InfiniBand, designed as the ultimate software-defined network (SDN) and built to deliver extremely low latency, suffered from the addition of the Ethernet components. Therefore, Mellanox decided to separate the protocols and create two switch device lines: one for InfiniBand (the Mellanox Quantum™ family) and one for Ethernet (the Mellanox Spectrum® family). With this change, InfiniBand switch devices demonstrate extremely low latency of ~100ns.
Ensuring the lowest latency for high performance applications is one of the key elements of performance and scalability. If there is a need to connect to external Ethernet networks, it is better to use external InfiniBand-to-Ethernet gateways, while preserving the lowest latency within the data center.
The Slingshot design is similar to the old Mellanox SwitchX concept: it supports both a high-performance network (in this case the proprietary Slingshot protocol) and the option to connect to standard Ethernet. With this approach one can save the external gateway boxes and connect the external Ethernet network directly to the Slingshot network, but the cost is an increase in latency. The Slingshot switch has a latency of 300ns, nearly three times higher than the InfiniBand switch devices. As such, a two-tier InfiniBand network at full 200 gigabit per second connecting 800 nodes, in which a worst-case path crosses three switches, will have nearly the same latency as a single Slingshot switch device connecting 64 nodes. Obviously, it is better to use external gateway boxes than a switch silicon that embeds the gateway functionality and reduces performance for the data center applications.
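A back-of-the-envelope check of that comparison, assuming ~100ns per InfiniBand switch hop, a worst-case leaf-spine-leaf path of three hops in a two-tier topology, and 40-port InfiniBand switch ASICs (the hop count and port count are assumptions made here for illustration; the 300ns Slingshot figure is from above):

```latex
\begin{align*}
\text{InfiniBand, two tiers, 800 nodes}   &: 3 \text{ hops} \times \sim\!100\,\text{ns} \approx 300\,\text{ns} \\
\text{Slingshot, single switch, 64 nodes} &: 1 \text{ hop}  \times 300\,\text{ns} = 300\,\text{ns} \\
\text{Nodes in a two-tier fat tree of 40-port switches} &: 40 \times 40 / 2 = 800
\end{align*}
```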
The proprietary Slingshot network is "the first of its kind," the same way all the previous proprietary networks were. Its major highlights are adaptive routing and congestion control, capabilities that have existed in InfiniBand for many years now. Moreover, InfiniBand also offers SHIELD technology, which brought the first self-healing capabilities to market for resilient Exascale infrastructures, among many other advantages.
Due to the disadvantages of proprietary networks compared to standards-based networks, proprietary network companies may try to market their products as "semi-standards," claiming, for example, that they have designed a "high performance" version of a standard network in which they have modified the network protocol headers or packet sizes and added new mechanisms to the network exchange protocols. Once one changes the network protocol, it is no longer the standard protocol; it is a proprietary protocol. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
InfiniBand is the best choice for high performance computing infrastructure. It is a standard network protocol delivering the lowest latency, end-to-end 200 gigabit-per-second throughput today, In-Network Computing engines, Self-Healing engines, congestion control, adaptive routing, RDMA and more. InfiniBand is used to connect the top supercomputers around the world, and it is designed to scale out and to support any network topology that can be created.
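As a concrete view of what In-Network Computing means for an application, here is a minimal MPI sketch of a collective that SHARP-capable switches can execute inside the network. The program is a plain MPI_Allreduce; whether the reduction is actually offloaded to the switches is a property of the MPI stack and fabric configuration (assumed here, not shown), and the application code does not change either way.

```c
/* Minimal MPI sketch of a collective that In-Network Computing (SHARP) can
 * execute inside the switches. The application simply calls MPI_Allreduce;
 * an offload-capable MPI stack with SHARP enabled (a runtime setting assumed
 * here, not shown) may build the reduction tree in the switch ASICs. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local  = (double)rank;   /* each rank contributes one value */
    double global = 0.0;

    /* Standard collective call; the reduction may run in the network. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %.0f\n", global);

    MPI_Finalize();
    return 0;
}
```

The design point is that the aggregation tree lives in the switch ASICs, so the reduced result crosses the network once instead of bouncing between host CPUs.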
InfiniBand-connected data centers can be directly connected to InfiniBand-based storage platforms. If there is a need to connect to external Ethernet networks, one can use the 100 gigabit and 200 gigabit Mellanox Skyway™ InfiniBand-to-Ethernet gateway systems. InfiniBand also offers long-reach connectivity of 10 and 40 kilometers, making it possible to connect remote data centers, remote storage or remote research offices directly to an InfiniBand supercomputer, with low latency, native RDMA, adaptive routing and support for the Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ all the way. There are also third-party products that enable connecting InfiniBand data centers over thousands of miles.
Based on the IBTA roadmap, it appears that InfiniBand will demonstrate 400 gigabit-per-second NDR speeds while proprietary products might finally support 200 gigabit per second for end-to-end connectivity. InfiniBand will therefore continue to demonstrate leading performance and capabilities, protecting data center hardware and software investments, and delivering its advantages one generation ahead.
Nothing against ducks. But when it comes to connecting high performance supercomputing infrastructures, ducks will not be your best choice…