Mellanox Cracks 100 Gbps with New InfiniBand Adapters
Interconnect maker Mellanox has developed a new architecture for high performance InfiniBand. Known as Connect-IB, this is the company’s fourth major InfiniBand adapter redesign, following in the footsteps of its InfiniHost, InfiniHost III and ConnectX lines. The new adapters double the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps.
Over the past 10 years, CPU compute power has increased roughly 100-fold, but interconnect bandwidth has been lagging, creating communications bottlenecks in servers. At the same time clusters are getting larger, further compounding the problem. This is certainly happening in HPC, but also in the commercial realm of cloud computing, and now, big data.
In all cases, the trend is toward larger and larger clusters with CPUs whose core counts are increasing at a Moore’s Law pace. With Connect-IB, Mellanox is attempting to re-sync the interconnect with the performance curve, with the goal of providing a balanced ratio of computational power and network bandwidth.
Connect-IB was designed as a foundational technology for future exascale systems and ultra-scale datacenters. Gilad Shainer, vice president of marketing development at Mellanox, claims the redesign offers unlimited interconnect scalability via its new Dynamic Connected Transport technology. “If you build something, you need it to handle tens of thousands and even hundreds of thousands [of nodes] if you want that architecture to last for the next couple of years,” he told HPCwire.
Connect-IB increases performance for both MPI- and PGAS-based applications. The architecture also features the latest GPUDirect RDMA technology, known as GPUDirect v3, which allows direct GPU-to-GPU communication, bypassing the OS and CPU. Overall, the new adapters can process 130 million messages per second. The current-generation ConnectX/VPI adapters, which handle both InfiniBand and Ethernet, deliver just 33 million messages per second, or roughly a quarter of Connect-IB’s capability.
Latency on the new adapters is 0.7 microseconds, equal to that of the latest ConnectX hardware for FDR InfiniBand. That’s pretty much tops in the commodity interconnect space today. Ethernet RDMA (RoCE), for example, comes in slightly behind at 1.3 microseconds.
When asked about the latency numbers, Shainer said the technology is approaching its physical limits and that further improvements would be minimal. “We’re getting very close to what you can cut,” he noted. “Right now the bigger portion of the latency is on the server side. It will be reduced moving to the future, but it’s not going to be a huge reduction.”
Connect-IB’s throughput marks the architecture’s greatest advantage. The highest-end part, which requires a PCI Express 3.0 interface, can break 100 Gbps. The increased bandwidth is welcome in a variety of applications, and Shainer offered one hypothetical case involving SSD storage.
He noted that a server loaded with 24 SATA III SSDs could support a theoretical data throughput of 12 GB/second. To achieve that level of I/O without bottlenecks, the server’s interconnect would have to deliver 96 Gbps. This would require the equivalent of fifteen 8 Gbps Fibre Channel (FC) cards, ten 10GbE cards, or a single Connect-IB card with dual FDR InfiniBand (56 Gbps) ports. Of course, there are no standard servers with more than a handful of I/O ports, so an FC or Ethernet solution for a heavily loaded SSD configuration is essentially out of the question.
“If you want to go the Fibre Channel way, you would have to put 15 cards in that box,” explained Shainer. “There is no way you’re going to do it. You create storage density, but from the other side you can’t take it out, so you lose the ability to do storage density.”
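As a back-of-the-envelope check, the example above can be reproduced in a few lines of Python. The per-SSD sustained rate of 500 MB/s and the ~6.8 Gbps effective payload rate of an 8 Gbps FC link (after 8b/10b encoding) are assumptions chosen to match the article’s figures, not numbers from Mellanox:

```python
import math

# Shainer's hypothetical server: 24 SATA III SSDs.
# Assumption: ~500 MB/s sustained per drive (SATA III line rate is 6 Gbps).
ssd_count = 24
mb_per_ssd = 500
total_mb_s = ssd_count * mb_per_ssd           # 12,000 MB/s = 12 GB/s
total_gbps = total_mb_s * 8 / 1000            # 96 Gbps of interconnect needed

# Cards needed per technology.
# Assumption: 8GFC carries roughly 6.8 Gbps of payload after 8b/10b encoding;
# 10GbE and FDR InfiniBand (56 Gbps/port) are taken at face value.
fc_cards = math.ceil(total_gbps / 6.8)        # fifteen FC cards
gbe_cards = math.ceil(total_gbps / 10)        # ten 10GbE cards
ib_cards = math.ceil(total_gbps / (2 * 56))   # one dual-port FDR Connect-IB card

print(total_gbps, fc_cards, gbe_cards, ib_cards)  # 96.0 15 10 1
```

The dual-port FDR card wins simply because a single PCIe 3.0 x16 slot can feed both 56 Gbps ports, while the FC and Ethernet alternatives exhaust a server’s slot count long before reaching 96 Gbps.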
Mellanox will initially be releasing five InfiniBand adapters using the Connect-IB technology. The first unit will support PCIe 2.0 x16 with one port of 56 Gbps connectivity, which for the first time delivers FDR speeds to AMD-based servers. Two adapters have also been developed with a PCIe 3.0 x8 interface; with a maximum throughput of 56 Gbps, these can be ordered in one- or two-port configurations.
The last pair of adapters use a full PCIe 3.0 x16 interface. The maximum Connect-IB bandwidth of 112 Gbps is achieved with the dual-FDR-port adapter. In this case, multiple cables would be required between the adapter and the next hop. Mellanox is also offering a single-port PCIe 3.0 x16 adapter, providing 56 Gbps. Since maximum throughput from each port is the same as that of FDR InfiniBand, the new adapters are compatible with current switches.
Supported operating systems include Windows Server 2008 and a variety of Linux distributions including Red Hat Enterprise and Novell SLES. Connect-IB will also work with VMWare ESX 5.1, OpenFabrics Enterprise Distribution (OFED) and OpenFabrics Windows Distribution (WinOF).
The current ConnectX/VPI adapter line is not going away as a result of the Connect-IB introduction. In fact, the company plans to incorporate the higher-performance architecture into the fourth generation of ConnectX adapters, which support both InfiniBand and Ethernet.
A number of organizations across HPC, Web 2.0, cloud and storage have been lining up for the new Connect-IB products, according to Shainer. “We might see deployments this year, but definitely early next year,” he said. “Right now it’s too early to expose the names, but yes, we have customers.”
Prototypes are currently working in Mellanox labs, and samples will be sent to customers in Q3, with general availability expected in early Q4. Mellanox will be running a lab demonstration of Connect-IB at ISC’12 this week in Hamburg, Germany.