InfiniBand and the Enterprise Datacenter

By Tiffany Trader (HPC)

September 9, 2008

InfiniBand was once billed as the foundational, system-wide interconnect to unify all of enterprise networking. While that didn’t happen, the protocol is playing an increasingly important role in the datacenter. With the steady adoption of more powerful business continuity, disaster recovery and grid computing applications, many enterprises are turning to InfiniBand as the enabler of their most latency-intolerant, bandwidth-intensive applications across Wavelength Division Multiplexing (WDM) optical networks.

Dr. Casimer DeCusatis, distinguished engineer in the IBM Systems and Technology Group, and Todd Bundy, director with ADVA Optical Networking, are longtime shapers and observers of enterprise datacenter networking. In this conversation, they offer their thoughts on InfiniBand’s place in the enterprise datacenter moving forward. Can InfiniBand co-exist with emerging Fibre Channel over Ethernet (FCoE)? What strategic factors must enterprise datacenter managers weigh in ensuring that today’s and tomorrow’s needs are cost-effectively met?

HPCwire: What are the most important business drivers and trends that enterprise datacenter managers are negotiating today?

Dr. Casimer DeCusatis: First, the pace of innovation is accelerating. When you consider that it took close to a century for absolutely world-changing technologies like the automobile and telephone to reach 50-percent market adoption, it’s just astounding to see what has happened and what is happening with the Internet, mobile, wireless, storage, etc. These advancements in technology are enabling business transformation — look, as an example, at how advancements in storage technologies have fueled revolutionary capabilities in medical and financial networking. And the business innovations push back to drive continued technology advancement. It’s a cycle.

So, that accelerating pace of innovation obviously has tremendous impact on the enterprise datacenter. In addition, there’s the ongoing emphasis on network convergence, for the sake of simplicity and cost efficiency. Plus, there are interesting new datacenter architectures coming out that demand evaluation.

Those are the converging forces at a broad level, and they have come together to drive the most prevalent contemporary vision for the new enterprise datacenter — an evolutionary model that provides for efficient IT service delivery today and seamlessly accommodates change for tomorrow.

HPCwire: What are the technology underpinnings of that vision?

Todd Bundy: There are some basic requirements that enterprise datacenters share, though in varying degrees of importance depending on the business objectives that a particular datacenter is striving to meet. These requirements include unified fabric infrastructure, high bandwidth, low latency, unified cloud management, connectivity over extended distances, security, resiliency, energy efficiency, open standards for multi-vendor interoperability, etc. We can see that the world wants to eventually get to an end state of global networking with zero downtime. But in the evolution from here to there, there will be a lot of different needs among enterprises — and even a lot of different needs among applications and services run by a given enterprise.

HPCwire: Where does InfiniBand fit into this story?

Bundy: InfiniBand developed out of precisely this type of conversation, and it was envisioned as the powerful, unifying interconnect fabric for business networking.

DeCusatis: So was Fibre Channel. So was ATM [Asynchronous Transfer Mode].

Bundy: So now is FCoE.

DeCusatis: Network convergence is a long-standing goal of the industry. Datacenters have wanted to consolidate traffic onto one network with one protocol for a very long time.

HPCwire: What has been lacking in the prior convergence efforts?

DeCusatis: In some cases, there have been failures to meet the unique requirements of all the competing protocols. Or there has been too much emphasis on trying to incorporate proprietary features. Or critical production volumes and, in turn, cost points just haven’t been met. It’s obviously a very challenging goal.

HPCwire: So InfiniBand failed?

DeCusatis: Not at all. It’s playing a very important role and increasingly so.

Bundy: We’re seeing more requests to extend InfiniBand over our FSP WDM systems in research and education, government and enterprise.

DeCusatis: InfiniBand provides the ideal combination of high performance and low latency for our GDPS STP [Geographically Dispersed Parallel Sysplex Server Time Protocol] environment, for example. These are must-have benefits when it comes to synchronous applications for high-end clustering, business continuity, disaster recovery and grid computing — all of which are increasingly important services across markets.
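The distance sensitivity of these synchronous applications comes down to propagation delay in the fiber itself. A back-of-the-envelope sketch makes the point (the refractive index and distances here are typical values, not IBM or ADVA specifications):

```python
# Back-of-the-envelope propagation delay for synchronous traffic over fiber.
# Values are typical assumptions, not vendor specifications.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47        # typical refractive index of silica fiber

def round_trip_us(distance_km: float) -> float:
    """Round-trip propagation delay over a fiber span, in microseconds."""
    v_fiber = C_VACUUM_KM_S / FIBER_INDEX   # ~204,000 km/s in glass
    return 2 * (distance_km / v_fiber) * 1e6

# Every write in a synchronous configuration pays at least this round trip,
# regardless of the protocol riding on the wavelength.
for km in (10, 50, 100, 300):
    print(f"{km:4d} km -> {round_trip_us(km):7.1f} us round trip")
```

At roughly 10 microseconds of round-trip delay per kilometer, a 100 km span already adds about a millisecond to every synchronous operation, which is why low protocol overhead on top of that unavoidable physics matters so much for these applications.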

HPCwire: Doesn’t FCoE stand to ultimately take over this whole space?

DeCusatis: FCoE may have a chance to succeed as the single, unifying fabric for every business application and service, bringing together SAN and LAN. What’s different about this convergence attempt is that FCoE developers think they can forge an industry-standard protocol; plus, the obstacles met in the prior convergence efforts can be anticipated with FCoE. It’s based on enhancements to conventional Ethernet that improve flow control and quality of service and prevent packet loss, so those are some excellent and promising inroads.

But the standard is just being finalized this year, and mass adoption is not likely for at least several years. FCoE will take time to mature.

HPCwire: In what ways is FCoE still immature?

Bundy: FCoE is a promising emerging technology, but enterprise datacenter managers can’t get caught up in the hype. At this point, you can’t just take your existing SAN, put it on the existing low-cost LAN infrastructure, deploy FCoE in the middle and have everything operate as you need it to. It isn’t going to work. Migration to FCoE will require more than just a ratified standard. It will require new low-latency switches, and this means the existing Ethernet infrastructure has to change. And no one is going to undertake a massive, forklift upgrade of the core of their network based on FCoE’s hype. It’s too disruptive and too expensive.

DeCusatis: The best opportunity for convergence lies with a new generation of fabric switches that not only provides these new features at very competitive cost points, but also enables current datacenters to reach their goals without expensive, large-scale disruptions or performance impacts due to increased latency. Convergence technologies must also demonstrate their ability to scale into the largest Internet datacenter applications.

Also, simply calling it Ethernet doesn’t mean we fully know how it’s going to work — and, really, this won’t be clear until we see a good number of customer installations running FCoE. Even at this point, we know that some proposed implementations of FCoE don’t address latency, synchronous recovery, continuous availability or longstanding problems such as creating true non-blocking, non-congested fabrics without packet loss.

I know that at IBM we’ve looked at the alternatives, and we will continue to use InfiniBand to meet the application requirements that many of our enterprise customers have in the areas of clustering, business continuity, disaster recovery and grid computing. We will have customers who need FCoE in the future, and we will meet those needs. But the idea that the next generation of IBM enterprise servers is going to have FCoE and nothing else is premature. By extension, the wide area networks interconnecting multiple datacenters will need to continue supporting multiple protocols.

Bundy: What we’re really talking about is an issue of behavior and organization. Fundamentally, the network group is telling the server and storage groups to move all of their GDPS STP channels, all of their ESCON [Enterprise System Connection] channels, all of their FICON [Fibre Connection] and Fibre Channel over to FCoE overnight. It’s reminiscent of SONET [Synchronous Optical Network] versus Ethernet in the voice world. You can go back ten years and hear people who said that SONET was dead — that everything would go Ethernet over optical and applications like VoIP would be adopted overnight. And, yes, the volumes have gone down, but SONET’s still around.

HPCwire: So then how should the manager of an enterprise datacenter go about evaluating the interconnect options?

DeCusatis: You start with the problems you need to solve, and you look at what solutions are available to fix those. Then you start costing out the options that meet those technical requirements. No CTO worth his or her salt is going to rip apart a datacenter without understanding those basic fundamentals.

HPCwire: What requirements would point a datacenter manager to InfiniBand?

DeCusatis: InfiniBand is an especially good fit for areas like real-time stock trading, medical-image analysis, server clustering and other computation-intensive applications that require very high bandwidth and low latency. For these areas, InfiniBand is a cost-effective solution, available today, with proven technology. Because these applications have needs that aren’t met by FCoE — at its current level of maturity, anyway — these InfiniBand applications aren’t going away anytime soon.

HPCwire: In what situations might FCoE be a better fit than InfiniBand?

DeCusatis: If you are fortunate enough to have a true greenfield opportunity, then you can play around with new technologies a little. But those technologies still have to fit the datacenter’s technical requirements. Or if you have a large, Internet-scale datacenter that could benefit immediately from a reduction in the number of servers, adapters and cables, then consolidation of the SAN and LAN using FCoE could make sense.

Bundy: There are environments such as social-networking sites and search engines where the goal is low-cost connectivity, not reliability. This isn’t true in the enterprise datacenter where reliability and 100-percent uptime are critical to running the business. And 100-percent uptime is also the target for the cloud computing arena, where there is a need to move to this type of fault-tolerant environment.

The opportunity to converge Fibre Channel and Ethernet might lead a datacenter manager to experiment with FCoE in a greenfield environment. But in a financial or medical network, factors such as reliability, performance and low latency are all critically, critically important. InfiniBand provides key, uncommon benefits there that are available today.

Most datacenters will need to end up strategically mixing and matching services with protocols based on a host of factors. Cost issues always matter. Access options at each enterprise location have to be factored into the decision. Then there are the particular application’s technical requirements and the distances to be covered among facilities. IBM’s campus in Poughkeepsie, N.Y., is a terrific example. To consolidate all of the buildings, hardware and software focus areas and expertise into a seamless metropolitan area network meant bringing together a wide variety of protocols — including InfiniBand, Fibre Channel, Ethernet, ESCON, FICON and iSCSI [Internet Small Computer Systems Interface] — across WDM.

Just converging Fibre Channel and Ethernet isn’t the whole story here, and it’s not going to be with the vast majority of enterprises. Yes, those are the two highest-volume applications, by far. Beyond Fibre Channel and Ethernet, however, there are always going to be other protocols that serve very important purposes, and that stuff is not going to disappear.

DeCusatis: FCoE, InfiniBand or any other interconnect would have to subsume all of the requirements of all of these competing protocols in order for everything else to go away. This is why WDM is so important in the middle of the network. WDM allows an enterprise to cost-effectively and simply converge InfiniBand-based services with FCoE and the rest of its network traffic.

HPCwire: Will there be one true protocol winner eventually?

Bundy: It’s hard to say. Only two interconnect protocols on the landscape today stand to keep up with Moore’s Law, and those two are InfiniBand and Ethernet. FCoE promises to let you consolidate the SAN on the same low-cost infrastructure as your LAN while being as fast, reliable and low-latency as InfiniBand — but the protocol is definitely not there yet, and any migration requires new low-latency Ethernet switches. I think this means that InfiniBand is going to be around a lot longer than most people think.
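For context on that scaling trajectory, the usable InfiniBand link rates of this era can be worked out from the per-lane signaling speeds and the 8b/10b line coding the links use. This is a simple illustrative calculation, not a vendor datasheet:

```python
# Usable InfiniBand link rates circa 2008, derived from per-lane signaling
# speed, lane count and 8b/10b line-coding overhead. Illustrative only.

LANE_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}  # per-lane signaling rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b coding: 8 data bits per 10 line bits

def effective_gbps(generation: str, lanes: int = 4) -> float:
    """Usable data rate of an InfiniBand link after encoding overhead."""
    return LANE_RATE_GBPS[generation] * lanes * ENCODING_EFFICIENCY

for gen in ("SDR", "DDR", "QDR"):
    print(f"4x {gen}: {effective_gbps(gen):5.1f} Gb/s usable")
```

Each generation doubles the per-lane rate — 4x SDR delivers 8 Gb/s of usable bandwidth, 4x DDR 16 Gb/s, and 4x QDR 32 Gb/s — which is the kind of cadence Bundy is alluding to when he says only InfiniBand and Ethernet stand to keep up with Moore's Law.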

DeCusatis: It’s important that we evaluate these protocols in context of the larger trend toward convergence. Enterprises are determined to ultimately converge all of their traffic on the same, commonly managed, reliable, high-performance infrastructure, and they need to be able to do so as cost-effectively as possible.

Out of these business objectives has grown the FCoE movement, but InfiniBand isn’t going to just go away overnight. I don’t think this is going to be an either/or scenario for the foreseeable future; it’s going to continue to be a case of matching the right technologies with the right applications in a given environment.
