Last week, IDC released a report that projects a rather healthy future for InfiniBand (IB) adoption over the next five years. The study predicts that InfiniBand host channel adapter factory revenues will grow from $62.3 million in 2006 to $224.7 million in 2011, while InfiniBand switch port sales are expected to grow from $94.9 million to $612.2 million over the same period. By the end of the decade, deployments of double data rate and quad data rate InfiniBand adapters are expected to overtake their single data rate forebears.
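For a sense of how aggressive those IDC projections are, here is a quick back-of-the-envelope calculation of the compound annual growth rates implied by the 2006-to-2011 figures quoted above (the function name is mine, not IDC's):

```python
def cagr(start, end, years):
    """Compound annual growth rate, as a fraction, implied by start -> end."""
    return (end / start) ** (1 / years) - 1

# Figures from the IDC report, in millions of dollars, 2006 -> 2011.
hca_growth = cagr(62.3, 224.7, 5)      # host channel adapter revenue
switch_growth = cagr(94.9, 612.2, 5)   # switch port revenue

print(f"HCA revenue CAGR:         {hca_growth:.1%}")
print(f"Switch port revenue CAGR: {switch_growth:.1%}")
```

That works out to annual growth of roughly 29 percent on the adapter side and 45 percent on the switch side.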
Compared to Ethernet or Fibre Channel, InfiniBand has the advantage of providing a high bandwidth, low latency interconnect in a less expensive package. Current implementations are being delivered at 10, 20 and 40 Gbps, with products in the works for 120 Gbps. Its superior performance has made it the interconnect of choice in many HPC deployments. According to the latest Top500 list, 12 percent of the top systems use InfiniBand, up from 5 percent just a year ago. By itself, this statistic is largely irrelevant. The fact that 60 or so of the fastest computers use a particular interconnect may give the IB vendors some bragging rights, but it doesn't say much about overall industry adoption. What is important is that InfiniBand's use is expanding in commercial cluster computing, where interconnect performance and price-performance are the driving factors. This includes such applications as automotive crash simulations, oil and gas reservoir simulations, and financial analytics. These kinds of computing workloads have become mainstream as HPC has become a cost-effective tool for a wide array of businesses.
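To put those link rates in concrete terms, here is an illustrative calculation of the best-case wire time to move one gigabyte of data at the SDR, DDR and QDR signaling rates mentioned above. Note this is an idealized sketch: real throughput is lower, since these rates use 8b/10b encoding and carry protocol overhead, both of which are ignored here.

```python
def wire_time_seconds(data_bytes, link_gbps):
    """Idealized time to move data_bytes over a link_gbps signaling rate.

    Ignores 8b/10b encoding and protocol overhead, so these are
    best-case numbers, not measured throughput.
    """
    return (data_bytes * 8) / (link_gbps * 1e9)

one_gb = 1e9  # bytes
for rate in (10, 20, 40):  # SDR, DDR, QDR signaling rates in Gbps
    print(f"{rate:>2} Gbps: {wire_time_seconds(one_gb, rate) * 1000:.0f} ms")
```

Even at the idealized SDR rate, a gigabyte takes the better part of a second on the wire, which is why the denser DDR and QDR parts matter as node counts and core counts climb.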
But much of the projected growth of InfiniBand is expected outside of its traditional role in connecting high performance computing servers and storage — in mainstream datacenters. The IDC report makes the case that the growth of multicore processors, server virtualization, and I/O consolidation is going to drive InfiniBand adoption in the enterprise. All three of these trends are helping to make systems more computationally dense, which requires proportionately more communication bandwidth per server and per storage device. Besides raw performance, InfiniBand includes quality-of-service (QoS) features that enable multiple types of traffic to be safely managed over a single pipe.
Over the past few years, more mainstream enterprise users have started using InfiniBand. This is especially apparent in the capital markets, where the push for ever-faster automated trading is on a collision course with increasing trade volumes. In this environment, even sub-second delays in transactions can cost millions of dollars. The types of computing systems that manage these trades have come under increased scrutiny as financial engineers ponder how best to minimize transaction latencies and enhance trade predictability. Wombat Financial Software and Reuters are two companies that have qualified their market feed applications on InfiniBand technology to address these stringent performance requirements. InfiniBand's market penetration in this sector is largely unknown, since financial institutions tend to be rather tight-lipped about what goes on inside their datacenters. But with so much money hanging in the balance, one can assume that all market trading institutions are taking a hard look at InfiniBand.
SOA platforms can benefit from high performance interconnects too. In May, TIBCO Software announced it had qualified its message-passing middleware on top of Cisco's InfiniBand Server Fabric Switches to enhance performance and predictability for event-driven SOA. The types of applications targeted include data distribution, web services and order management systems. Order management, in particular, is becoming a time-critical component of inventory control for many companies. TIBCO claims its IB-enabled platform increased throughput by a factor of four, while reducing latency.
Database clustering, as used in Oracle's 10g RAC and IBM's DB2, is another prime target. Following the general model of concentrating computational power into smaller boxes, database clusters are becoming more dependent on high performance interconnects to talk between the nodes. For database applications, there is the additional incentive to unify the interconnect fabric with the storage components. JDA, a provider of supply chain management software, is using InfiniBand to improve system throughput on its platform. Using an Oracle 10g RAC system and QLogic InfiniBand gear, the company was able to decrease the time to plan one million SKUs (Stock Keeping Units) from 66 minutes, with Gigabit Ethernet, to 25 minutes, with InfiniBand. They were also able to realize a 35 percent cost advantage by switching from Ethernet to InfiniBand.
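A quick arithmetic check of the JDA figures quoted above puts the claim in perspective: going from 66 minutes on Gigabit Ethernet to 25 minutes on InfiniBand is roughly a 2.6x speedup.

```python
# Sanity-checking the reported JDA result: planning one million SKUs
# dropped from 66 minutes (Gigabit Ethernet) to 25 minutes (InfiniBand).
gige_minutes = 66
ib_minutes = 25

speedup = gige_minutes / ib_minutes          # about 2.6x
time_saved = 1 - ib_minutes / gige_minutes   # about 62% less wall-clock time

print(f"Speedup:    {speedup:.2f}x")
print(f"Time saved: {time_saved:.0%}")
```

In other words, more than half the wall-clock time of the planning run was interconnect-bound, which is exactly the profile where a faster fabric pays off.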
It's no surprise that as the application and database tiers in the datacenter start to act like HPC systems, they will start to look like them. In doing so, we're bound to see InfiniBand adoption increase in the larger enterprise market. Since IB technology is projected to maintain its performance and price-performance advantage for at least the next five years, users will look to InfiniBand for their most challenging interconnect demands.
That's not to say Ethernet or Fibre Channel are going away. Both are established standards and have momentum that will carry them for a long time. The massive set of applications that run on standard Ethernet will ensure its longevity. For the foreseeable future, Ethernet is expected to remain the most common networking technology in the world. The web tier of the enterprise is one area where Ethernet has no serious challengers. And if you've been reading this publication for any length of time, you know that creative engineers have been devising new ways to enhance the performance and predictability of Ethernet.
The culture of the InfiniBand vendors has matured as well. There's little talk of InfiniBand conquering the world these days. Even IB evangelists like Mellanox and Voltaire are shipping Ethernet products. Over the past couple of years, acquisitions of companies like Topspin (by Cisco) and PathScale and SilverStorm (by QLogic) have brought InfiniBand into more enterprise-focused vendors. The realization that all the standard network interconnects will live peaceably in the datacenter may be sinking in.
“From the Mellanox perspective we have technology that will address both markets,” says Thad Omura, Mellanox VP of Product Marketing. “We're layering one growth market on top of another. It's no secret that 10 Gigabit Ethernet will emerge at some point. There's no reason why we shouldn't address the 10Gig solution for LANs and continue to drive InfiniBand as the best price-performance interconnect for servers and storage.”
—–
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].