Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

March 26, 2009

Still on the InfiniBandwagon

Michael Feldman

In the realm of datacenter interconnects, much of the IT industry continues to be focused on the rollout of 10 Gigabit Ethernet offerings, with a raft of switches, adapters and other 10GigE paraphernalia having made its way into the marketplace over the past 18 months. Cisco’s recent foray into the datacenter, for example, is built around 10GigE-connected blades. This next generation of Ethernet products will bring not only higher bandwidth and lower latencies, but also lossless fabrics suitable for both compute and storage interconnects.

But despite all the hoopla over 10GigE, InfiniBand continues to be the interconnect that excites the HPC crowd. The majority of notable new HPC systems seem to be InfiniBand-based. The most prominent example of an Ethernet-based system is the ATLAS cluster at the Max Planck Institute for Gravitational Physics in Germany, which we reported on last year. From a performance standpoint, the choice between Ethernet and InfiniBand is not so much a bandwidth issue — multiple 10GigE links can always be aggregated to achieve InfiniBand-like bandwidth — as a latency one. Today, even the most capable 10GigE implementations have higher latencies than InfiniBand, and it is low latency that many HPC workloads find indispensable.
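The bandwidth-versus-latency distinction can be made concrete with a first-order transfer-time model: time = startup latency + message size / bandwidth. The sketch below is illustrative only; the latency and bandwidth figures are rough assumed values for circa-2009 fabrics, not vendor specifications.

```python
# Illustrative sketch: a first-order message-transfer-time model showing why
# small-message performance is dominated by latency rather than bandwidth.
# The latency/bandwidth numbers below are assumptions for the sake of the
# example, not measured or vendor-quoted figures.

def transfer_time_us(msg_bytes: int, latency_us: float, bandwidth_gbps: float) -> float:
    """Microseconds to move one message: startup latency plus wire time."""
    # (bytes * 8 bits) / (Gbps * 1e3 bits-per-microsecond)
    return latency_us + (msg_bytes * 8) / (bandwidth_gbps * 1e3)

# Assumed figures: ~2 us / 16 Gbps for an InfiniBand fabric of the era,
# ~10 us / 10 Gbps for a capable 10GigE NIC.
for size in (1024, 1024 * 1024):
    ib = transfer_time_us(size, 2.0, 16.0)
    eth = transfer_time_us(size, 10.0, 10.0)
    print(f"{size:>8} bytes: IB {ib:8.1f} us, 10GigE {eth:8.1f} us "
          f"(ratio {eth / ib:.1f}x)")
```

With these assumed numbers, a 1 KB message moves roughly four times faster over InfiniBand, while a 1 MB transfer differs by well under a factor of two. Note that aggregating extra 10GigE links shrinks only the bandwidth term; the latency term, which dominates for the small messages typical of tightly-coupled MPI codes, is untouched.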

A recent market study by Tabor Research points to InfiniBand’s continued popularity in the HPC space. Citing an August 2008 site survey, the Taborites found that 60 percent of HPC systems installed since the start of 2007 were employing InfiniBand as a system interconnect. That’s a much bigger percentage than you see on the latest TOP500 list, where only 28 percent are InfiniBand-based versus 56 percent for Ethernet — the remainder being a smattering of proprietary interconnects. In fact, it’s probable that the majority of these really big Ethernet-connected clusters are running loosely-coupled parallel applications, rather than latency-sensitive HPC workloads. It’s notable that as of November 2008, no TOP500 systems were using 10GigE.

More importantly, InfiniBand usage in HPC is growing. According to the same Tabor Research survey, in 2006 the proportions of HPC systems employing InfiniBand and Ethernet were about equal. It was in 2007 that InfiniBand jumped into the lead. With QDR IB (40 Gbps) expected to hit its stride in 2009, InfiniBand should consolidate its lead in the HPC interconnect market. InfiniBand has also made some inroads into more traditional enterprise applications, most notably in the HP-Oracle database machine. Time will tell whether this is just an outlier or the beginning of a wider trend.

Mellanox continues to be the dominant vendor in the InfiniBand marketplace, having recently added switches and gateways to its adapter and silicon business. But with QLogic now offering home-grown InfiniBand ASICs alongside its own switches and HCAs, HPC system vendors will have a wider choice of interconnect options. Although this introduces an element of competition, Tabor Research believes that the InfiniBand market is now big enough for two vendors to succeed. Considering that Mellanox enjoyed record revenues through the front end of the recession — $107.7 million in FY2008 — this seems like a fair assessment.

InfiniBand’s success in HPC doesn’t seem to quiet the naysayers, though. The Ethernet drumbeat that pervades the industry invariably leads to press coverage that casts InfiniBand as an endangered technology. Chris Mellor’s recent piece in The Register, titled “InfiniBand: Caught in the Ethernet meatgrinder,” sounds ominous, but the main thrust of that article is actually about fabric convergence and how Ethernet and InfiniBand are learning to co-exist.

In fact, converged fabrics are likely to be the real story of datacenter interconnects over the next several years, as vendors look to accommodate multiple networking, clustering and storage communication protocols on top of lossless communication technologies like InfiniBand and RDMA Ethernet. It’s not surprising that the major InfiniBand vendors — Mellanox, QLogic and Voltaire — have developed converged fabric offerings in various flavors, and Ethernet vendors are layering protocols like Fibre Channel on top of lossless Ethernet.

The whole process resembles the convergence of RISC and CISC technologies in the microprocessor arena. There, instead of one architecture killing off the other, Intel was able to maintain the dominance of its legacy x86 CISC ISA by incorporating a RISC-like core under the covers. Meanwhile, true RISC processors found other markets to play in. Ethernet and InfiniBand look like they’re on a similar trajectory.
