Buoyed by the recent good news of IDC’s upgraded forecast for InfiniBand (IB) adoption, the InfiniBand Trade Association (IBTA) has been trumpeting the fabric’s growing prospects in both HPC and more traditional enterprise settings. With the OpenFabrics Alliance driving standardized software solutions for Linux and Windows and InfiniBand vendors coming to market with 40 Gbps switches and adapters, there’s little doubt that the fabric has the potential to go mainstream. All the interested parties have been working hard to make that happen.
Most of the impediments to InfiniBand lie outside the technology itself. The latest 40 Gbps (QDR) InfiniBand technology would not be practical without the new PCI Express 2.0 chipsets that are just now going mainstream. Optical or active copper cables will also be required in applications that need larger clusters or better signal integrity. Those cable products are just coming onto the market, but the relatively low volumes of InfiniBand deployments will tend to keep their prices high.
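A quick back-of-the-envelope calculation shows why the host bus matters. The short Python sketch below assumes 8b/10b line encoding and the x8 host interface typical of current HCAs; the figures are illustrative rather than taken from any vendor's datasheet.

# Rough check (assumes 8b/10b encoding and an x8 host interface, typical
# for HCAs of this generation) of why QDR InfiniBand wants PCIe 2.0.

def data_rate_gbps(lanes, signal_rate_gbps, encoding_efficiency=0.8):
    """Usable data rate after line-encoding overhead (8b/10b leaves 80%)."""
    return lanes * signal_rate_gbps * encoding_efficiency

qdr_ib   = data_rate_gbps(4, 10.0)   # 4x QDR: 40 Gbps signaling -> ~32 Gbps data
pcie1_x8 = data_rate_gbps(8, 2.5)    # PCIe 1.x x8: 20 GT/s      -> ~16 Gbps data
pcie2_x8 = data_rate_gbps(8, 5.0)    # PCIe 2.0 x8: 40 GT/s      -> ~32 Gbps data

print(f"QDR InfiniBand 4x : {qdr_ib:.0f} Gbps")
print(f"PCIe 1.x x8 slot  : {pcie1_x8:.0f} Gbps  (bottleneck)")
print(f"PCIe 2.0 x8 slot  : {pcie2_x8:.0f} Gbps  (keeps pace)")

On those assumptions, a first-generation PCIe x8 slot tops out at roughly half of what a 4x QDR link can deliver, which is why the new chipsets are the gating factor.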
Despite that, InfiniBand’s prospects look bright. Based on better than expected 2007 shipments of DDR switches and adapters, IDC has cranked up its InfiniBand forecast for 2008, placing it on a steeper growth curve than the previous May 2007 forecast. Last year the expectation was that DDR would not overtake SDR until 2008. But that crossover has already occurred. Based on the new data, the compound annual growth rate (CAGR) forecast for host channel adapter (HCA) revenue went from 29 percent to 35 percent ($279.7 million by 2011), and the CAGR forecast for switch revenue increased from 45 percent to 47.1 percent ($656.3 million by 2011).
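For readers who want to sanity-check the projections, the compound growth arithmetic is simple. The sketch below uses the article's 35 percent rate but a purely hypothetical $100 million base figure, since IDC's current-year revenue numbers are not quoted here.

# Compound annual growth rate (CAGR) projection.
# The 35 percent rate comes from the forecast above; the base-year revenue
# figure is hypothetical, purely for illustration.

def project(base_revenue_musd, cagr, years):
    """Project revenue forward at a constant compound annual growth rate."""
    return base_revenue_musd * (1.0 + cagr) ** years

# Hypothetical example: a $100M base growing at 35 percent per year.
for year in range(5):
    print(f"year {year}: ${project(100.0, 0.35, year):7.1f}M")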
Though the numbers were bumped only modestly, the trend is telling. Behind the vendors, analysts are often the second line of cheerleaders for the market. In this case, they were uncharacteristically conservative.
Perhaps the most interesting development for InfiniBand is the interest the technology has drawn from outside the HPC base. The desire for real-time transactions with terabyte-sized databases is putting a strain on existing networks. Mainstream enterprise applications in areas such as eCommerce, financial services, health services, retail, supply chain management and Web services represent a greenfield market for InfiniBand.
Two of the biggest technology drivers are multicore processors and virtualization, both of which increase demands on I/O performance. As platforms become denser, the advantages of consolidating I/O onto a single, high-bandwidth wire become more attractive. IDC is predicting InfiniBand will represent more than 10 percent of switch revenue by 2011.
In the past six to nine months, InfiniBand has started to penetrate the enterprise datacenter in earnest. Some Oracle RAC and IBM DB2 setups are now using IB connectivity, and Mellanox says there are more than 30 proof-of-concept deployments for VMware on top of InfiniBand.
Alexa Internet, an Amazon subsidiary, deployed InfiniBand to power its search engine. Using the NFS over RDMA (NFSoRDMA) protocol to crawl its 240 terabytes of Web data, Alexa was able to run its data mining application at 1/5 the cost of an equivalent 10GbE-connected storage setup. NFSoRDMA is a high-level protocol that bypasses the overhead of the TCP/IP stack and takes advantage of InfiniBand’s inherently high bandwidth and low latency. With NFSoRDMA being accepted in mainline Linux kernels later this year, InfiniBand-connected storage should become increasingly attractive.
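For the curious, the sketch below shows roughly what bringing up such a mount looks like on a Linux client once the kernel support is in place. The server name, export path and mount point are hypothetical, and the exact option names depend on the kernel and nfs-utils versions in use; 20049 is the port commonly associated with NFS/RDMA.

# Minimal sketch of mounting an NFS export over RDMA on a Linux client.
# Assumes the NFS/RDMA kernel modules are loaded; the server, export path
# and mount point below are hypothetical.
import subprocess

SERVER_EXPORT = "ib-filer:/export/crawl"   # hypothetical server and export
MOUNT_POINT   = "/mnt/crawl"               # hypothetical mount point

subprocess.check_call([
    "mount", "-t", "nfs",
    "-o", "rdma,port=20049",               # use the RDMA transport, not TCP
    SERVER_EXPORT, MOUNT_POINT,
])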
Enterprise storage, in general, is one area where InfiniBand seems poised for big growth. The Taneja Group, a consistent proponent of the technology, recently released its forecast for native InfiniBand storage in the datacenter. Based on its surveys of vendors like DataDirect Networks, Engenio, Xiranet, Terrascale and IBRIX, the firm has seen native InfiniBand storage grow at 60-70 percent over the last three years. It expects that level of growth to continue as demand for compute- and I/O-intensive applications increases and the datacenter becomes more consolidated through virtualization. The better scalability of IB fabrics will attract large-scale storage deployments, but even more modest installations can take advantage of the better performance.
According to the Taneja report, “Today, storage and server administrators in even run of the mill enterprises are more often than not facing HPC-like computing needs, or will be facing one in the near future. In our view, once a system approaches 16 nodes and requires substantial I/O, connections to communication and storage fabrics make traditional connectivity impractical.”
Taneja sees InfiniBand as having a sustained advantage in bandwidth and latency, one that cannot be matched by either 10 Gigabit Ethernet or Fibre Channel over Ethernet (FCoE). QDR InfiniBand products arriving in the second half of 2008 will extend that advantage even further. With the advent of solid state drives, the microsecond-scale latency of InfiniBand will make it the fabric of choice for these devices.
Especially outside of the HPC realm, increased adoption of InfiniBand storage will likely start with connecting the fabric to a Fibre Channel gateway. As Fibre Channel gets squeezed out of the datacenter by both InfiniBand and Ethernet, the gateway will get tossed in favor of a direct connection. Despite the recent flurry of FCoE product introductions (Cisco Systems, Emulex and QLogic) at last week’s Storage Networking World in Orlando, standard products are probably a year away. In the meantime, InfiniBand consolidation is viable today and will retain a performance advantage even after FCoE goes mainstream.
That’s not to say Ethernet will be shut out of the storage market. Even IB-loving Mellanox is pushing out 10GbE products in parallel with its InfiniBand offerings. Its latest ConnectX FCoE adapter, also announced at Storage Networking World last week, is part of the company’s fabric-neutral strategy of unifying data traffic on a single I/O pipe. But like most IB vendors, Mellanox sees InfiniBand as the clear choice where bandwidth and latency drive the application requirements. Ethernet will continue to be the favorite where I/O performance is not the critical factor, which is still the case in the majority of applications. Because of the diverse nature of enterprise applications, InfiniBand and Ethernet are destined to share the datacenter for the foreseeable future.