Intelligence and integration are the watchwords of an era in which the insatiable demand for faster, more powerful computers can no longer ride the coattails of a strong Moore’s law. These are also the hallmarks of co-design, an approach that is championed by interconnect fabric vendor Mellanox Technologies and others in the community as essential for supercomputing to progress to exascale and beyond.
As Mellanox evolves its 100 Gb/s Enhanced Data Rate (EDR) InfiniBand product line, it is leveraging synergies between software and hardware and adding intelligence to the interconnect in the process. Put another way, Mellanox is moving compute closer to the network to free up server CPUs for higher-level tasks, a strategy that is crystallizing with the company’s latest product additions: Switch-IB 2, its next-generation 100 Gb/s InfiniBand switch targeted at high-performance computing and hyperscale workloads; and the ConnectX-4 Lx Programmable adapter, designed to provide FPGA-based acceleration for a range of network applications.
Like the original Switch-IB, the new 36-port Switch-IB 2 (announced Nov. 12) integrates 144 SerDes, which can operate at speeds from 1 Gb/s to 25 Gb/s per lane for a total of 7.2 Tb/s throughput. However, thanks to the addition of SHArP technology (SHArP stands for Scalable Hierarchical Aggregation Protocol), Switch-IB 2 can do something its predecessor cannot: offload collective MPI operations from the CPU to the network, for a claimed 10X performance boost.
As Mellanox explains, SHArP is a co-design architecture that enables all active datacenter devices to be used to accelerate communication frameworks, in this case taking the MPI operations that run on the CPU and executing them on the switch.
“Today, MPI collective operations run on the server, which means that each endpoint needs to communicate with every other endpoint (server),” Mellanox’s Gilad Shainer said in an interview. “We were able to move some of those operations to the NIC side, but still it’s running on the server. When a server needs to run those synchronization operations, it needs to communicate with every other server in the cluster. This requires multiple communications over the network that go from the server to every other endpoint on the cluster and back. This is the wall, and you cannot reduce the latencies. When we take this load and move it to be executed and managed by the switch silicon, the switch can execute an MPI communication in one transaction because it is connected to everything. It can go to all of the endpoints at once and get the data back and that’s it. So instead of multiple transactions over the network, you combine everything into a single transaction. That means you go from tens of microseconds to a low single digit of microseconds.”
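To ground that explanation in application code, here is a minimal sketch of the kind of collective operation SHArP targets. It uses only the standard MPI C API and nothing Mellanox-specific; the assumption, per the description above, is that on a SHArP-capable fabric an MPI library can aggregate this same MPI_Allreduce in the switch rather than through host-to-host exchanges, with no change to the application source.

```c
/* Minimal MPI example of a collective reduction (MPI_Allreduce).
 * On a conventional fabric the reduction is carried out by the host
 * CPUs through multiple point-to-point exchanges; with a SHArP-capable
 * switch the aggregation can be offloaded to the network, transparently
 * to this code. Illustrative sketch only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local  = (double)rank;  /* each rank contributes one value    */
    double global = 0.0;           /* will hold the sum across all ranks */

    /* The collective operation of the kind SHArP offloads to the switch. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum across all ranks: %f\n", global);

    MPI_Finalize();
    return 0;
}
```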
By becoming an active element rather than a simple packet forwarder, Switch-IB 2 puts the network itself to work on application data. Shainer attributes the company’s inclusion in the CORAL project to this offload capability: the DOE labs funded some of the SHArP technology development, and being able to gain this 10X performance improvement on their codes was key, he said.
The new switch touts sub-90 nanosecond latency, 7.2 Tb/s throughput, and 7.02 billion messages per second, as well as adaptive routing, congestion control, and support for multiple topologies. Pricing isn’t available yet, but Shainer reports it is fairly close to first-generation Switch-IB pricing.
Mellanox is also using the SC15 launch pad to announce the ConnectX-4 Lx Programmable adapter, which puts a Mellanox NIC and a Xilinx FPGA on a single board to accelerate network applications, including security, deep packet inspection, compression/decompression, high-frequency trading and others. Today, users that require this acceleration must combine discrete components; the new adapter provides a tighter coupling and is more cost-effective and space-efficient because it is a single card, said Shainer.
Another technology that Mellanox is showing at SC is Multi-Host Direct Socket, designed to enable low-latency socket communication while remaining transparent to the application. Shainer explained that multi-host gives each CPU direct network access by taking the PCIe interface from a NIC and dividing it into separate PCIe interfaces, each connected to a different socket. This makes more cycles available to the application by avoiding the QPI route, allowing for 50 percent lower CPU utilization and 20 percent lower latency, according to Shainer. Mellanox Multi-Host technology is available today in the company’s line of ConnectX-4 10/25/50/100 Gigabit Ethernet adapter ICs, and in OCP-based boards as part of Facebook’s Yosemite platform.
This slide provides an overview of Mellanox’s end-to-end portfolio:
InfiniBand is currently the de facto interconnect for performance-demanding applications, with Mellanox InfiniBand connecting a solid half of the petascale systems on the June TOP500 list (Cray interconnects account for 19, BlueGene for 8, and other proprietary networks for 6). Shainer expects this growth to continue on the current list (announced today). “We are seeing faster adoption of EDR versus the previous generation, FDR,” he observed.
Mellanox CEO Eyal Waldman echoed this sentiment in a recent financial report. “We are seeing revenues from our 10, 25, 40, 50 and 100 Gigabit Ethernet solutions and traction with large data center customers for these products,” he stated. “We are happy to see our EDR 100 Gigabit InfiniBand revenues growing at a faster pace than FDR did, to approximately 12 percent of InfiniBand revenues.”
Following the trend of other long-time HPC vendors, Mellanox says it remains dedicated to traditional HPC, but it is seeing growth outside the traditional lab and government datacenters. “PayPal is a known case,” Shainer shared, ticking off several more examples of new-school InfiniBand users, including “financial institutes for latencies, Baidu, and other Tier 1 companies outside of HPC in the Web 2.0 sphere.”
Mellanox has also decided the time is right to start addressing the fire outside its doors, specifically Intel’s next-generation 100 Gb/s networking fabric, Omni-Path, which Shainer characterized as “an opposite architecture to what Mellanox is doing.” Mellanox’s main focus is offloading compute and moving intelligence to the network to overcome performance walls, while Omni-Path “is built on a non-offload network,” Shainer stated.
“We don’t think that Omni-Path can compete on application performance. Yes, they will show the basic numbers of 100 Gb/s and perhaps an equivalent latency [to our solution], but when it goes to the datacenter performance, the application performance, the lack of an offloading network does not allow you to scale or provide efficiencies,” noted Shainer. “It puts a burden on the CPU, and it doesn’t provide the same performance.” He takes this argument one step further to suggest that keeping this burden on the CPU boosts CPU sales volumes, which would benefit Intel’s bottom line as a chip company.
While Mellanox is advancing its strategy of pushing intelligence into the network, Intel’s been working to drive the fabric closer to the CPU. Intel has done this through both acquired IP and its own technology advances with a strong focus on integration. And make no mistake, Intel has been busy positioning its Omni-Path fabric as a superior alternative to InfiniBand. Intel has said that the Omni-Path precursor, True Scale, was designed to optimize the performance and scalability of MPI based applications.
Intel calls Omni-Path, which hit general availability today, the successor to Intel True Scale Fabric, but Intel has assembled this end-to-end networking fabric largely through acquisitions: True Scale InfiniBand IP from QLogic in 2012, Aries IP from Cray a few months later, and, going back a few years, the Fulcrum Microsystems Inc. purchase. Intel recorded strong True Scale sales last year and has been sampling Omni-Path “with most major HPC and OEM vendors” in the months leading up to today’s GA announcement. The first Omni-Path products (the initial Intel OPA 100 series) will use discrete adapters that fit into PCIe slots, but the company plans to integrate Omni-Path connectivity into Intel Xeon Phi and then Xeon processors, promising lower latency and reduced power consumption.
We’ll leave a deeper Omni-Path dive for later this week, but here are a few specs to help you make your own comparisons:
Another distinction Shainer put forth was the potential drawback of being a proprietary network. Mellanox, an OpenPOWER partner, says it is focused on enabling performance and scalability for all infrastructure platforms: x86, Power, GPU, ARM and FPGA-based platforms at 10, 20, 25, 40, 50, 56 and 100 Gb/s speeds. “We introduced the first 100 Gb/s interconnect in 2014; we’re going to have a complete end-to-end solution in 2017 for 200 Gb/s,” Shainer said.