Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

September 9, 2008

InfiniBand and the Enterprise Datacenter

Tiffany Trader (HPC)

InfiniBand was once billed as the foundational, system-wide interconnect to unify all of enterprise networking. While that didn’t happen, the protocol is playing an increasingly important role in the datacenter. With the steady adoption of more powerful business continuity, disaster recovery and grid computing applications, many enterprises are turning to InfiniBand as the enabler of their most latency-intolerant, bandwidth-intensive applications across Wavelength Division Multiplexing (WDM) optical networks.

Dr. Casimer DeCusatis, distinguished engineer in the IBM Systems and Technology Group, and Todd Bundy, director with ADVA Optical Networking, are longtime shapers and observers of enterprise datacenter networking. In this conversation, they offer their thoughts on InfiniBand’s place in the enterprise datacenter moving forward. Can InfiniBand co-exist with emerging Fibre Channel over Ethernet (FCoE)? What strategic factors must enterprise datacenter managers weigh in ensuring that today’s and tomorrow’s needs are cost-effectively met?

HPCwire: What are the most important business drivers and trends that enterprise datacenter managers are negotiating today?

Dr. Casimer DeCusatis: First, the pace of innovation is accelerating. When you consider that it took close to a century for absolutely world-changing technologies like the automobile and telephone to reach 50-percent market adoption, it’s just astounding to see what has happened and what is happening with the Internet, mobile, wireless, storage, etc. These advancements in technology are enabling business transformation — look, as an example, at how advancements in storage technologies have fueled revolutionary capabilities in medical and financial networking. And the business innovations push back to drive continued technology advancement. It’s a cycle.

So, that accelerating pace of innovation obviously has tremendous impact on the enterprise datacenter. In addition, there’s the ongoing emphasis on network convergence, for the sake of simplicity and cost efficiency. Plus, there are interesting new datacenter architectures coming out that demand evaluation.

Those are the converging forces at a broad level, and they have come together to drive the most prevalent contemporary vision for the new enterprise datacenter — an evolutionary model that provides for efficient IT service delivery today and seamlessly accommodates change for tomorrow.

HPCwire: What are the technology underpinnings of that vision?

Todd Bundy: There are some basic requirements that enterprise datacenters share, though in varying degrees of importance depending on the business objectives that a particular datacenter is striving to meet. These requirements include unified fabric infrastructure, high bandwidth, low latency, unified cloud management, connectivity over extended distances, security, resiliency, energy efficiency, open standards for multi-vendor interoperability, etc. We can see that the world wants to eventually get to an end state of global networking with zero downtime. But in the evolution from here to there, there will be a lot of different needs among enterprises — and even a lot of different needs among applications and services run by a given enterprise.

HPCwire: Where does InfiniBand fit into this story?

Bundy: InfiniBand developed out of precisely this type of conversation, and it was envisioned as the powerful, unifying interconnect fabric for business networking.

DeCusatis: So was Fibre Channel. So was ATM [Asynchronous Transfer Mode].

Bundy: So now is FCoE.

DeCusatis: Network convergence is a long-standing goal of the industry. Datacenters have wanted to consolidate traffic onto one network with one protocol for a very long time.

HPCwire: What has been lacking in the prior convergence efforts?

DeCusatis: In some cases, there have been failures to meet the unique requirements of all the competing protocols. Or there has been too much emphasis on trying to incorporate proprietary features. Or critical production volumes and, in turn, cost points just haven’t been met. It’s obviously a very challenging goal.

HPCwire: So InfiniBand failed?

DeCusatis: Not at all. It’s playing a very important role and increasingly so.

Bundy: We’re seeing more requests to extend InfiniBand over our FSP WDM systems in research and education, government and enterprise.

DeCusatis: InfiniBand provides the ideal combination of high performance and low latency for our GDPS STP [Geographically Dispersed Parallel Sysplex Server Time Protocol] environment, for example. These are must-have benefits when it comes to synchronous applications for high-end clustering, business continuity, disaster recovery and grid computing — all of which are increasingly important services across markets.
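The distance sensitivity of the synchronous applications DeCusatis mentions follows directly from fiber propagation delay, which no protocol can engineer away. As a rough back-of-the-envelope sketch (the 5 µs/km figure is the usual approximation for light in fiber; the function name is illustrative, not from the interview):

```python
# Rough illustration of why synchronous clustering and replication are so
# distance-sensitive over WDM links. Light in fiber travels at roughly 2/3
# of its vacuum speed, giving about 5 microseconds of one-way propagation
# delay per kilometer, before any switch or protocol overhead is added.

US_PER_KM = 5.0  # approximate one-way fiber propagation delay, in µs/km

def round_trip_latency_us(distance_km: float) -> float:
    """Round-trip propagation delay alone, ignoring equipment overhead."""
    return 2 * distance_km * US_PER_KM

for km in (10, 40, 100):
    print(f"{km:>3} km -> {round_trip_latency_us(km):6.0f} µs round trip")
```

Every synchronous operation pays at least this round-trip cost per exchange, which is why a low-latency fabric like InfiniBand matters most when the application cannot tolerate waiting.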

HPCwire: Doesn’t FCoE stand to ultimately take over this whole space?

DeCusatis: FCoE may have a chance to succeed as the single, unifying fabric for every business application and service, bringing together SAN and LAN. What’s different about this convergence attempt is that FCoE developers think they can forge an industry-standard protocol; plus, the obstacles met in the prior convergence efforts can be anticipated with FCoE. It’s based on enhancements to conventional Ethernet that improve flow control and quality of service and prevent packet loss, so those are promising inroads.

But the standard is just being finalized this year, and mass adoption is not likely for at least several years. FCoE will take time to mature.

HPCwire: In what ways is FCoE still immature?

Bundy: FCoE is a promising emerging technology, but enterprise datacenter managers can’t get caught up in the hype. At this point, you can’t just take your existing SAN, put it on the existing low-cost LAN infrastructure, deploy FCoE in the middle and have everything operate as you need it to. It isn’t going to work. Migration to FCoE will require more than just a ratified standard. It will require new low-latency switches, and this means the existing Ethernet infrastructure has to change. And no one is going to undertake a massive, forklift upgrade of the core of their network based on FCoE’s hype. It’s too disruptive and too expensive.

DeCusatis: The best opportunity for convergence lies with a new generation of fabric switches that not only provides these new features at very competitive cost points, but also enables current datacenters to reach their goals without expensive, large-scale disruptions or performance impacts due to increased latency. Convergence technologies must also demonstrate their ability to scale into the largest Internet datacenter applications.

Also, simply calling it Ethernet doesn’t mean we fully know how it’s going to work — and, really, this won’t be clear until we see a good number of customer installations running FCoE. Even at this point, we know that some proposed implementations of FCoE don’t talk about latency, synchronous recovery, continuous availability or longstanding problems such as creating true non-blocking, non-congested fabrics without packet loss.

I know that at IBM we’ve looked at the alternatives, and we will continue to use InfiniBand to meet the application requirements that many of our enterprise customers have in the areas of clustering, business continuity, disaster recovery and grid computing. We will have customers who need FCoE in the future, and we will meet those needs. But the idea that the next generation of IBM enterprise servers is going to have FCoE and nothing else is premature. By extension, the wide area networks interconnecting multiple datacenters will need to continue supporting multiple protocols.

Bundy: What we’re really talking about is an issue of behavior and organization. Fundamentally, the network group is telling the server and storage groups to move all of their GDPS STP channels, all of their ESCON [Enterprise System Connection] channels, all of their FICON [Fiber Connection] and Fibre Channel over to FCoE overnight. It’s reminiscent of SONET [Synchronous Optical Network] versus Ethernet in the voice world. You can go back ten years and hear people who said that SONET was dead — that everything would go Ethernet over optical and applications like VoIP would be adopted overnight. And, yes, the volumes have gone down, but SONET’s still around.

HPCwire: So then how should the manager of an enterprise datacenter go about evaluating the interconnect options?

DeCusatis: You start with the problems you need to solve, and you look at what solutions are available to fix those. Then you start costing out the options that meet those technical requirements. No CTO worth his or her salt is going to rip apart a datacenter without understanding those basic fundamentals.
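The evaluation process DeCusatis outlines — filter by hard technical requirements first, then rank the survivors by cost — can be sketched in a few lines. All the option names, figures and field names below are hypothetical, chosen only to illustrate the shape of the decision:

```python
# Illustrative sketch of requirements-first interconnect evaluation:
# discard any option that misses a hard technical requirement, then
# rank what remains by cost. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    max_latency_us: float   # worst-case fabric latency
    bandwidth_gbps: float   # usable link bandwidth
    cost_per_port: float    # relative cost units

def shortlist(options, need_latency_us, need_bandwidth_gbps):
    """Keep only options meeting the requirements, cheapest first."""
    fits = [o for o in options
            if o.max_latency_us <= need_latency_us
            and o.bandwidth_gbps >= need_bandwidth_gbps]
    return sorted(fits, key=lambda o: o.cost_per_port)

candidates = [
    Option("InfiniBand DDR", max_latency_us=2, bandwidth_gbps=16, cost_per_port=300),
    Option("10G Ethernet",   max_latency_us=10, bandwidth_gbps=10, cost_per_port=200),
]

# A latency-intolerant clustering application:
for o in shortlist(candidates, need_latency_us=5, need_bandwidth_gbps=10):
    print(o.name)
```

The point of the ordering is the one DeCusatis makes: cost only becomes a tiebreaker after the technical requirements are satisfied, never a reason to pick an option that cannot do the job.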

HPCwire: What requirements would point a datacenter manager to InfiniBand?

DeCusatis: InfiniBand is an especially good fit for areas like real-time stock trading, medical-image analysis, server clustering and other computation-intensive applications that require very high bandwidth and low latency. For these areas, InfiniBand is a cost-effective solution, available today, with proven technology. Because these applications have needs that aren’t met by FCoE — at its current level of maturity, anyway — these InfiniBand applications aren’t going away anytime soon.

HPCwire: In what situations might FCoE be a better fit than InfiniBand?

DeCusatis: If you are fortunate enough to have a true greenfield opportunity, then you can play around with new technologies a little. But those technologies still have to fit the datacenter’s technical requirements. Or if you have a large, Internet-scale datacenter that could benefit immediately from a reduction in the number of servers, adapters and cables, then consolidation of the SAN and LAN using FCoE could make sense.

Bundy: There are environments such as social-networking sites and search engines where the goal is low-cost connectivity, not reliability. This isn’t true in the enterprise datacenter where reliability and 100-percent uptime are critical to running the business. And 100-percent uptime is also the target for the cloud computing arena, where there is a need to move to this type of fault-tolerant environment.

The opportunity to converge Fibre Channel and Ethernet might lead a datacenter manager to experiment with FCoE in a greenfield environment. But in a financial or medical network, factors such as reliability, performance and low latency are all critically, critically important. InfiniBand provides key, uncommon benefits there that are available today.

Most datacenters will need to end up strategically mixing and matching services with protocols based on a host of factors. Cost issues always matter. Access options at each enterprise location have to be factored in to the decision. Then there are the particular application’s technical requirements and the distances to be covered among facilities. IBM’s campus in Poughkeepsie, N.Y., is a terrific example. To consolidate all of the buildings, hardware and software focus areas and expertise into a seamless metropolitan area network meant bringing together a wide variety of protocols — including InfiniBand, Fibre Channel, Ethernet, ESCON, FICON and iSCSI [Internet Small Computer Systems Interface] — across WDM.

Just converging Fibre Channel and Ethernet isn’t the whole story here, and it’s not going to be with the vast majority of enterprises. Yes, those are the two highest-volume applications, by far. Beyond Fibre Channel and Ethernet, however, there are always going to be other protocols that serve very important purposes, and that stuff is not going to disappear.

DeCusatis: FCoE, InfiniBand or any other interconnect would have to subsume all of the requirements of all of these competing protocols in order for everything else to go away. This is why WDM is so important in the middle of the network. WDM allows an enterprise to cost-effectively and simply converge InfiniBand-based services with FCoE and the rest of its network traffic.
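The convergence role WDM plays here can be pictured as a wavelength plan: each protocol rides its own channel on the same fiber pair, so the services stay independent while sharing the physical plant. A minimal sketch, with illustrative channel numbers and service names not taken from the interview:

```python
# Hypothetical sketch of a WDM wavelength plan: each service gets its own
# wavelength (channel) on a shared fiber pair, so InfiniBand, Fibre Channel
# and Ethernet traffic converge physically while remaining independent
# protocols. Channel counts and service names are illustrative.

wavelength_plan = {}  # channel number -> (protocol, service)

def assign(plan, protocol, service, channels):
    """Place a service on the first free WDM channel; return its number."""
    for ch in channels:
        if ch not in plan:
            plan[ch] = (protocol, service)
            return ch
    raise RuntimeError("no free channels")

channels = range(1, 41)  # e.g., a 40-channel DWDM system
assign(wavelength_plan, "InfiniBand", "GDPS STP cluster link", channels)
assign(wavelength_plan, "Fibre Channel", "SAN replication", channels)
assign(wavelength_plan, "10G Ethernet", "LAN extension", channels)

for ch, (proto, svc) in sorted(wavelength_plan.items()):
    print(f"ch{ch:02d}: {proto} - {svc}")
```

Because the channels are independent, adding or retiring a protocol is a matter of reassigning wavelengths rather than re-architecting the network, which is the cost-effectiveness DeCusatis is pointing to.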

HPCwire: Will there be one true protocol winner eventually?

Bundy: It’s hard to say. Only two interconnect protocols on the landscape today stand to keep up with Moore’s Law, and those two are InfiniBand and Ethernet. FCoE promises to let you consolidate the SAN on the same low-cost infrastructure as your LAN and be as fast, reliable and low-latency as InfiniBand — but the protocol is definitely not there yet, and any migration requires new low-latency Ethernet switches. I think this means that InfiniBand is going to be around a lot longer than most people think.

DeCusatis: It’s important that we evaluate these protocols in context of the larger trend toward convergence. Enterprises are determined to ultimately converge all of their traffic on the same, commonly managed, reliable, high-performance infrastructure, and they need to be able to do so as cost-effectively as possible.

Out of these business objectives has grown the FCoE movement, but InfiniBand isn’t going to just go away overnight. I don’t think this is going to be an either/or scenario for the foreseeable future; it’s going to continue to be a case of matching the right technologies with the right applications in a given environment.