Supercomputers are the essential tools we need to conduct research, enable scientific discoveries, design new products, and develop self-learning software algorithms. Supercomputing leadership means scientific leadership, which explains the investments made by many governments and research institutes to build faster and more powerful supercomputing platforms.
The heart of a supercomputer is the network that connects the compute elements together, enabling parallel and synchronized computing cycles. Over the past decades, multiple network technologies have been created, and many have since disappeared. InfiniBand, an industry standard developed in 1999, continues to show a strong presence in the high-performance computing market. It connected one of the top three supercomputers in 2013 and maintains a strong roadmap into the future.
Many proprietary networks that existed 10 to 15 years ago are no longer in use today; QsNet, Myrinet, and SeaStar are but a few examples. QsNet technology was later used by Gnodal, which added Ethernet gateways to form an Ethernet switch network, but its development was halted several years ago. Part of that technology and concept is being used in the first generation of Slingshot. Slingshot is planned to replace the proprietary Aries technology, which replaced the proprietary Gemini technology, which in turn replaced SeaStar. One of the main disadvantages of a proprietary network is that it requires recreating old concepts again and again, concepts such as congestion control, routing schemes, and more.
As a standards-based interconnect, InfiniBand enjoys the continuous development of new capabilities, better performance, and greater scalability. It is used in many of the leading supercomputers around the world, demonstrating 96% network utilization with probably the most advanced adaptive routing capabilities (“The Design, Deployment, and Evaluation of the CORAL Pre-Exascale Systems”), and delivering leading performance for the most demanding compute-intensive applications.
InfiniBand technology can be separated into three main pillars: connectivity, network, and communication. The connectivity pillar covers the elements of the interconnect infrastructure, such as topologies. The network pillar covers network functions such as transport and routing. The communication pillar covers co-design elements related to communication frameworks such as MPI, SHMEM/PGAS, and more.
The Connectivity Pillar
InfiniBand was specified and designed as the ultimate software-defined network. One can define and manage the complete routing scheme of the network from a centralized place, and everything is programmable. This enables support for any interconnect topology and allows topologies to be optimized to best fit the needs of applications and workloads. Many of today’s supercomputers use the Fat Tree topology, as it provides low latency and effectively supports a variety of applications. There are some Torus topologies in use, which best serve stencil applications. Other topologies, including Hypercube, Enhanced Hypercube, and Dragonfly+, are also supported.
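To make the scale trade-off concrete, the minimal sketch below computes the maximum number of end nodes a fully non-blocking Fat Tree can connect when built from radix-r switches, using the standard r²/2 (two-level) and r³/4 (three-level) formulas; the radix of 40 is only an example, matching a 40-port HDR switch.

```c
/*
 * Minimal sketch: maximum end nodes of a non-blocking Fat Tree built
 * from radix-r switches. Two levels support r^2/2 nodes; three levels
 * support r^3/4 nodes. The radix below is an illustrative example.
 */
#include <stdio.h>

static long fat_tree_nodes(long radix, int levels)
{
    if (levels == 2)
        return radix * radix / 2;
    if (levels == 3)
        return radix * radix * radix / 4;
    return -1; /* only 2- and 3-level trees handled in this sketch */
}

int main(void)
{
    long radix = 40; /* e.g., a 40-port HDR InfiniBand switch */
    printf("2-level Fat Tree, radix %ld: %ld end nodes\n", radix, fat_tree_nodes(radix, 2));
    printf("3-level Fat Tree, radix %ld: %ld end nodes\n", radix, fat_tree_nodes(radix, 3));
    return 0;
}
```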
Dragonfly+ is a hybrid topology based on the conventional Dragonfly and extended with the properties of Fat Tree, providing the benefits of both. It includes a Fully Progressive Adaptive Routing technique, is more scalable than Dragonfly at the same cost, and is able to provide the same or better throughput than equivalent Dragonfly and Fat Tree topologies under various traffic patterns (“Dragonfly+: Low Cost Topology for Scaling Datacenters,” Alexander Shpiner, Zachy Haramaty, Saar Eliad, Vladimir Zdornov, Barak Gafni, and Eitan Zahavi).
Furthermore, the traditional Dragonfly presents performance limitations for adversarial traffic, because within a group there is only one route from the ingress switch to the egress switch. Network bandwidth therefore decreases with higher switch radix: the more ports on the switch, the lower the data throughput. InfiniBand Dragonfly+ provides multiple routes from the ingress switch to the egress switch, thereby delivering higher data throughput. Moreover, due to its hybrid design, Dragonfly+ can simply be extended over time with no need to reroute any of the long cables, an advantage over Fat Tree and traditional Dragonfly networks.
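A simple way to see the routing difference: in a conventional Dragonfly group the switches form a full mesh, so any ingress/egress switch pair has exactly one minimal route between them, while in a Dragonfly+ group the leaf and spine switches form a bipartite graph, so a leaf-to-leaf route can pass through any of the spines. The sketch below counts minimal routes under that simplified model; the group size is an arbitrary example.

```c
/*
 * Illustrative sketch: minimal intra-group routes between an ingress and
 * an egress switch, comparing a conventional Dragonfly group (full mesh
 * of switches) with a Dragonfly+ group (leaf/spine bipartite structure).
 */
#include <stdio.h>

int main(void)
{
    int dragonfly_routes = 1;          /* full mesh: one direct link per switch pair */
    int spines_per_group = 8;          /* arbitrary example size for Dragonfly+ */
    int dfp_routes = spines_per_group; /* leaf -> any spine -> leaf */

    printf("Conventional Dragonfly, minimal intra-group routes: %d\n", dragonfly_routes);
    printf("Dragonfly+ (%d spines), minimal intra-group routes:  %d\n",
           spines_per_group, dfp_routes);
    return 0;
}
```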
Multi-Host technology enables multiple hosts to connect to a single interconnect adapter by separating the PCIe interface into multiple independent interfaces, with no performance degradation. This results in lower total cost of ownership (TCO) in the data center: CAPEX is reduced from multiple cables, network adapters, and switch ports to only one of each, and OPEX is reduced by cutting down on switch port management and overall power usage.
The Network Pillar
InfiniBand is a pure offload interconnect, managing all network functions and transport at the network level without imposing overhead on the CPU, unlike other networks such as Ethernet or Omni-Path. This results in more CPU cycles being dedicated to the applications, and in higher overall performance and scalability.
In many networks, a management software utility is responsible for receiving notifications of network errors and for modifying network routes or changing job scheduling to avoid the errors. But this can be time consuming, around 5 seconds for 1,000 nodes and 30 seconds for clusters with 10,000 or more endpoints, and is certainly not fast enough to ensure the seamless integrity of a running computation. In fact, no software mechanism can be responsive enough at very large scale to detect and fix a fabric that suffers from a link failure. To address this problem, InfiniBand includes a new and innovative solution called SHIELD (Self-Healing Interconnect Enhancement for Intelligent Datacenters), which takes advantage of the intelligence already built into InfiniBand switches. By giving the fabric self-healing autonomy, SHIELD speeds up the recovery of communications in the face of a link failure by 5000x, fast enough to save communications from expensive retransmissions or absolute failure.
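A quick back-of-the-envelope calculation, using only the figures quoted above, shows what a 5000x improvement means in practice: software-based recovery on the order of seconds becomes hardware-based recovery on the order of milliseconds.

```c
/*
 * Back-of-the-envelope sketch using the figures quoted above:
 * software-based recovery times divided by the 5000x speed-up.
 */
#include <stdio.h>

int main(void)
{
    double sw_recovery_s[] = {5.0, 30.0};   /* ~1,000 nodes, ~10,000+ endpoints */
    const char *scale[]    = {"1,000 nodes", "10,000+ endpoints"};
    double speedup = 5000.0;

    for (int i = 0; i < 2; i++)
        printf("%-18s software ~%5.1f s  ->  self-healing ~%.1f ms\n",
               scale[i], sw_recovery_s[i], sw_recovery_s[i] / speedup * 1000.0);
    return 0;
}
```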
The Communication Pillar
Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology is included in EDR and HDR InfiniBand switches. SHARP improves the performance of MPI operations by offloading collective operations from the CPU to the switch network, eliminating the need to send data multiple times between endpoints. This innovative approach decreases the amount of data traversing the network as aggregation nodes are reached, and dramatically reduces MPI operation time. Implementing collective communication algorithms in the network also has additional benefits, such as freeing up valuable CPU resources for computation rather than using them to process communication.
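SHARP is transparent to applications: code continues to issue standard MPI collectives, and a SHARP-enabled MPI stack (for example, Mellanox HPC-X with HCOLL) can offload the reduction to the switches. The minimal sketch below shows the kind of all-reduce that benefits; nothing in it is SHARP-specific.

```c
/*
 * Minimal MPI all-reduce sketch. The call is standard MPI; when the MPI
 * library and fabric support SHARP, the reduction can be aggregated in
 * the switch network instead of on the hosts.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* each rank contributes its own value */
    double global = 0.0;

    /* Sum the contributions of all ranks; every rank receives the result. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %.0f\n", size, global);

    MPI_Finalize();
    return 0;
}
```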
SHARP provides lower, flat latencies for data aggregation and reduction operations (e.g., MPI Reduce, Allreduce, Barrier, and Broadcast) compared to host-based implementations, so adding more nodes to compute clusters does not adversely affect collective performance. SHARP is also a key enabling technology for the Exascale supercomputing generation.
Furthermore, SHARP provides key performance enhancements for deep learning and artificial intelligence applications. For example, the combination of SHARP with leading GPUs and the NVIDIA Collective Communications Library (NCCL) delivers leading efficiency and scalability.
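The same transparency applies on the GPU side: the application issues an ordinary NCCL all-reduce, and in-network aggregation, where available, is handled by the library and fabric rather than by application code. Below is a minimal single-process, multi-GPU sketch using only standard NCCL and CUDA calls; it does not configure SHARP itself.

```c
/*
 * Minimal single-process NCCL all-reduce across all visible GPUs.
 * The collective call is standard NCCL; any in-network aggregation is
 * handled by the library and fabric, not by application code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <nccl.h>

#define N (1 << 20)   /* elements per GPU */

int main(void)
{
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev == 0) {
        printf("no GPUs found\n");
        return 1;
    }

    ncclComm_t *comms = malloc(ndev * sizeof(*comms));
    int *devs = malloc(ndev * sizeof(*devs));
    float **send = malloc(ndev * sizeof(*send));
    float **recv = malloc(ndev * sizeof(*recv));
    cudaStream_t *streams = malloc(ndev * sizeof(*streams));

    for (int i = 0; i < ndev; i++) {
        devs[i] = i;
        cudaSetDevice(i);
        cudaMalloc((void **)&send[i], N * sizeof(float));
        cudaMalloc((void **)&recv[i], N * sizeof(float));
        cudaMemset(send[i], 0, N * sizeof(float));   /* dummy input data */
        cudaStreamCreate(&streams[i]);
    }
    ncclCommInitAll(comms, ndev, devs);              /* one communicator per GPU */

    ncclGroupStart();                                /* launch the all-reduce on every GPU */
    for (int i = 0; i < ndev; i++)
        ncclAllReduce(send[i], recv[i], N, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);           /* wait for completion */
        cudaFree(send[i]);
        cudaFree(recv[i]);
        ncclCommDestroy(comms[i]);
    }
    printf("all-reduce of %d elements completed on %d GPUs\n", N, ndev);
    free(comms); free(devs); free(send); free(recv); free(streams);
    return 0;
}
```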
Another new technology is SNAP (Software-defined Network Accelerated Processing), which enables hardware virtualization of PCIe devices, such as NVMe storage. The NVMe SNAP framework allows users to easily integrate networked storage solutions into their high-performance compute and storage infrastructures. It enables the efficient disaggregation of compute and storage to facilitate fully optimized resource utilization.
NVMe SNAP logically presents networked storage, such as NVMe over Fabrics (NVMe-oF), as a local NVMe drive. This allows the host operating system to use a standard NVMe driver instead of a remote networked storage protocol. The host benefits from the performance and simplicity of local NVMe storage, unaware that the storage is actually remote, InfiniBand-connected, and virtualized by NVMe SNAP.
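From the host's perspective, this means an application simply opens what looks like a local NVMe block device; no storage networking code appears in the application. The minimal sketch below illustrates that view, with the device path chosen as an example.

```c
/*
 * Illustrative sketch: storage virtualized by NVMe SNAP appears to the
 * host as an ordinary local NVMe block device, accessed through the
 * standard NVMe driver. The device path below is an example.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/dev/nvme0n1", O_RDONLY);   /* looks local; may be NVMe-oF behind SNAP */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf));    /* ordinary block read, no storage protocol in the app */
    printf("read %zd bytes from the first block\n", n);
    close(fd);
    return 0;
}
```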
Furthermore, SNAP can apply sophisticated logic and data protection mechanisms (mirroring, compression, data deduplication, thin provisioning, encryption, etc.) to the networked storage that it virtualizes as local NVMe.
Super-Connecting the #1 Supercomputers
By providing leading data throughput, extremely low latency, and, most importantly, In-Network Computing engines and full programmability, InfiniBand is the leading interconnect technology for compute-intensive applications, high performance computing, deep learning, and more. InfiniBand has overtaken proprietary networks and accelerates many of the top supercomputers around the world.