Battle of the Network Fabrics

By Michael Feldman

December 8, 2006

Over the last few years, InfiniBand has established a firm foothold in the HPC market. Its high-bandwidth, low-latency connectivity has encouraged its use wherever performance or price/performance is the driving factor, as in HPC clusters and supercomputers. In the commercial data center, where LAN/WAN connectivity over TCP/IP remains important, Ethernet is still king. And for storage connectivity, Fibre Channel has become an established fabric. Most tier one OEMs that sell systems across a variety of markets, including IBM, HP, Sun, and Dell, offer connectivity to all three fabrics.

With the OpenFabrics Alliance supporting both InfiniBand and iWARP (RDMA implemented over TCP on 10 Gigabit Ethernet), open software stacks now exist that span interconnect and protocol standards for HPC clusters, data centers, and storage systems. But convergence still seems a long way off. InfiniBand and iWARP are built on fundamentally different architectures: iWARP layers RDMA semantics on top of the existing TCP/IP protocol stack, while InfiniBand defines its own transport and runs it natively in the adapter hardware. They represent two distinct approaches to high performance connectivity.

That's not to say Ethernet and InfiniBand can't mix. With its recently announced ConnectX multi-protocol technology, Mellanox will support both InfiniBand and Ethernet fabrics with a single adapter. This will enable storage and server OEMs to develop systems that support both interconnects with a single piece of hardware. With this move, Mellanox appears to be conceding that 10 GbE will be the interconnect of choice for an important class of systems: the medium-scale commercial cluster.

With ConnectX, each port can be configured as either 4X InfiniBand or 1 or 10 Gigabit Ethernet, depending on OEM preference. The InfiniBand ports support single, double and quad data rates (SDR, DDR and QDR), delivering 10, 20 and 40 Gbps of full-duplex bandwidth. Supported application interfaces include IP, sockets, MPI, SCSI and iSCSI over either fabric, as well as Fibre Channel over InfiniBand only.
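
As a point of reference, the 10, 20 and 40 Gbps figures are signaling rates; SDR, DDR and QDR links use 8b/10b encoding, so usable data throughput is roughly 80 percent of the quoted number. The short C sketch below works through that arithmetic using standard InfiniBand link parameters (the per-lane rates and encoding detail are background assumptions, not figures from the article).

/* Illustrative arithmetic only: the 10/20/40 Gbps figures above are 4X
 * signaling rates; SDR/DDR/QDR links use 8b/10b encoding, so payload
 * throughput is roughly 80 percent of the signaling rate. */
#include <stdio.h>

int main(void) {
    const double lane_gbps[] = {2.5, 5.0, 10.0};  /* per-lane signaling: SDR, DDR, QDR */
    const char *names[] = {"SDR", "DDR", "QDR"};
    const int lanes = 4;                          /* 4X link width */
    const double encoding = 8.0 / 10.0;           /* 8b/10b overhead */

    for (int i = 0; i < 3; i++) {
        double signal = lane_gbps[i] * lanes;     /* the quoted bandwidth */
        double data = signal * encoding;          /* usable payload rate */
        printf("4X %s: %.0f Gbps signaling, ~%.0f Gbps data\n",
               names[i], signal, data);
    }
    return 0;
}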

The first Mellanox ConnectX adapter products are scheduled for availability in Q1 of 2007. These will include both a multi-protocol and an InfiniBand-only (10 and 20 Gbps) offering. A 40 Gbps InfiniBand ConnectX adapter is also scheduled to be delivered when the corresponding switches become available — probably sometime in 2008.

The multi-protocol architecture will allow compatibility with software based on either the legacy Ethernet networking and storage stacks or the OpenFabrics RDMA software stack. In addition, system software that has been ported to InfiniBand RDMA can now be extended to Ethernet environments, bringing some of the advantages of InfiniBand to Ethernet applications.

“All of the RDMA-capable software solutions that have been proven over InfiniBand can now run over an Ethernet fabric as well,” said Thad Omura, vice president of product marketing for Mellanox Technologies. “This is not iWARP, which is implemented over legacy TCP/IP stacks.  What we're doing is leveraging existing InfiniBand RDMA stacks over Ethernet.”
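
To make that concrete, software written to the OpenFabrics verbs interface treats the fabric generically, so the same device discovery and RDMA calls apply whether the adapter underneath is running InfiniBand or Ethernet. The minimal C sketch below is an illustration of that API style rather than code from Mellanox or the article: it simply lists whatever RDMA-capable devices the OpenFabrics libibverbs library exposes, built with something like gcc list_rdma.c -libverbs.

/* Minimal sketch: enumerate RDMA-capable devices via the OpenFabrics
 * libibverbs API. The calls are fabric-agnostic, so an InfiniBand HCA
 * or an RDMA-capable Ethernet adapter shows up the same way. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(devs[i]));
    ibv_free_device_list(devs);
    return 0;
}

An application that goes on to open one of these devices and post RDMA operations never touches a fabric-specific call, which is what allows middleware ported to InfiniBand RDMA to carry over to an Ethernet fabric.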

Mellanox's decision not to support iWARP was based on a couple of factors. Omura believes the multi-chip solution required for iWARP's TCP offload makes the design too complex and expensive to attract widespread support. In addition, the technology's scalability remains a question. Omura says iWARP silicon would need to be redesigned to reach 40 or 100 Gbps.

In contrast, InfiniBand is already architected to deliver 40 Gbps with 1 microsecond latency, and can do so on a single chip. Beyond that, there's a clear path to 120 Gbps InfiniBand within the next few years. But, according to Omura, in commercial data center environments, connectivity to Ethernet (from a LAN/WAN perspective) is often more important than performance or cost.

“Mellanox believes InfiniBand will always deliver the best price/performance for server and storage connectivity,” says Omura. “At the same time we see that in enterprise solutions, 10 Gigabit Ethernet will emerge from medium- to low-scale types of application in a clustering environment. Our expertise in high performance connectivity will serve both markets.”

While Mellanox passed on iWARP as an Ethernet solution, others are embracing it. So far NetEffect is the only vendor offering a full implementation of the standard in its iWARP adapter. But the appeal of a standardized, RDMA-enabled Ethernet solution will probably draw other companies in as well. As Rick Maule, CEO of NetEffect, likes to say, “iWARP has arrived.”

Maule believes the world pretty much accepts that Ethernet is the de facto networking fabric. And for storage devices, there's no longer a big performance mismatch between Fibre Channel and Ethernet. According to him, in the storage sector, determining which fabric is preferable is more a matter of economics now.

“The thing that no one has been able to prove is that Ethernet can really do clustering fabrics on par with Myrinet or InfiniBand or whatever — until now,” says Maule. According to him, “Ethernet can now be a true clustering fabric without any apology.”

InfiniBand had a head start on 10 Gbps, low-latency performance. But now that 10 GbE iWARP has arrived, Maule believes it makes for a compelling alternative. With RDMA technology, Ethernet has become competitive with InfiniBand and Myrinet on both bandwidth and latency.

Maule envisions that the adoption of iWARP as a cluster interconnect will drive broader adoption of 10 GbE in the data center. While clustering, networking and storage fabrics have evolved separately in the past, he believes a high-performance Ethernet solution will start to converge them in 2007.

Adoption of iWARP for storage will trail clustering, but the requirement for 10 Gbps bandwidth will start to pressure Fibre Channel-based storage. Maule thinks that at some point soon the storage market will have to choose between adopting 10 GbE and moving to 8 Gbps Fibre Channel. For the networking segment, increased aggregate bandwidth requirements and server consolidation will encourage more servers to use 10 GbE. Maule thinks adopting iWARP in any one of these three areas, clustering, storage or networking, opens the door to evaluating the technology for broader adoption in the other two.
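
For a rough sense of the raw-bandwidth gap behind that choice, the sketch below compares nominal payload rates using standard link parameters (8 Gbps Fibre Channel at 8.5 Gbaud with 8b/10b encoding versus 10 GbE at 10.3125 Gbaud with 64b/66b encoding). These figures are illustrative assumptions rather than numbers from the article, and they ignore protocol overhead above the physical layer.

/* Rough, illustrative comparison of nominal payload rates only; the
 * line rates and encodings are standard link parameters, not figures
 * from the article, and higher-level protocol overhead is ignored. */
#include <stdio.h>

int main(void) {
    /* 8 Gbps Fibre Channel: 8.5 Gbaud line rate, 8b/10b encoding. */
    double fc8_gbps  = 8.5 * 8.0 / 10.0;        /* ~6.8 Gbps payload */
    /* 10 Gigabit Ethernet: 10.3125 Gbaud line rate, 64b/66b encoding. */
    double tenge_gbps = 10.3125 * 64.0 / 66.0;  /* ~10.0 Gbps payload */

    printf("8GFC payload:   ~%.1f Gbps (~%.0f MB/s per direction)\n",
           fc8_gbps, fc8_gbps * 1000.0 / 8.0);
    printf("10 GbE payload: ~%.1f Gbps (~%.0f MB/s per direction)\n",
           tenge_gbps, tenge_gbps * 1000.0 / 8.0);
    return 0;
}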

The stakes are high. Maule estimates that around 20 million Gigabit Ethernet ports are shipped each year, and he sees each one as an opportunity to upgrade from GbE to 10 GbE. His prediction is that over the next three to five years those upgrades will go to iWARP ports, not InfiniBand ports.

However, Maule admits that InfiniBand is technologically sound. He should know. NetEffect actually started out as Banderacom Inc., a company founded in 1999 to develop InfiniBand silicon. But Banderacom soured on the technology when InfiniBand failed to take hold as a new fabric standard. The company was restructured (and renamed) to develop chips based on the emerging iWARP Ethernet standard.

Like many people, Maule thinks that if the industry could have easily adopted InfiniBand, it already would have done so to a much greater degree. He believes that since IT managers already have a large investment in Ethernet technology (in personnel training, software and hardware), they will seek the path of least resistance to improve network performance. Because of this, he's betting that InfiniBand will not be the volume play in the interconnect market.

“We did InfiniBand in a previous part of our life,” said Maule, referring to the Banderacom adventure. “The recognition that we got to is that it's not a technology problem; it's an ecosystem and economic problem. Basically the marketplace has been waiting on Ethernet to get its act together and go to the next level.”
