March 07, 2008
There seems to be a general consensus that the datacenter needs to settle on a unified network fabric. The question is, which one? Today, storage and cluster connectivity use some combination of Ethernet, InfiniBand and Fibre Channel, which creates a hodgepodge of cables, switches and host channel adapters. As readers of this publication are aware, both Ethernet and InfiniBand vendors have staked a claim to unifying the datacenter on their blessed fabric. With the introduction of 10 GbE products and the upcoming Fibre Channel over Ethernet (FCoE) standards, Ethernet proponents claim cluster and storage unification is at hand. Meanwhile, the InfiniBand crowd is pushing its technology as the technically superior solution and the one that is available today.
Sun Microsystems seems to be getting impatient waiting for high performance Ethernet gear and FCoE standards, and has openly talked about going the InfiniBand route. In a recent EETimes article by Rick Merritt, Sun's John Fowler (executive VP of the Systems Division) said the company would be introducing a set of products this year unified on InfiniBand. While Fowler said Sun is continuing its involvement in the FCoE effort, he noted that Fibre Channel over InfiniBand should be available in 2008.
The strength of Ethernet -- its ubiquity -- is also its weakness. Changing or adding standards involves dragging the whole vendor community along. Look at how long the IEEE study group took just to decide to move forward on the 40GbE/100GbE standards. That process began in 2006, and the standards aren't expected to be completed until mid-2010. And while vendors may come up with proprietary versions of FCoE relatively soon, the standard is still being hashed out by the vendor and user communities. Not only does the process suffer from a "too many cooks in the kitchen" problem, but the cooks themselves are competing with one another.
Meanwhile, the more nimble InfiniBand community has been able to establish its high bandwidth, low-latency fabric as the standard for high performance interconnects. Mellanox has used its position as the only InfiniBand switch silicon vendor to set the pace. With DDR (20 Gbps) products now mainstream and QDR (40 Gbps) products due out at the end of this year, InfiniBand has left 10GbE behind from the standpoint of raw performance.
Increasingly, the InfiniBand vendors that have traditionally focused on the HPC market are looking at the broader enterprise market. A couple of weeks ago at VMworld Europe, Mellanox announced that it is seeing demand for unified InfiniBand I/O in VMware environments. The company is reporting interest from sectors such as banking, managed hosting services, Web 2.0, insurance services and health care.
In the above-mentioned EETimes piece, Sun Chief Architect Andy Bechtolsheim also points to the performance advantages of InfiniBand and, like Fowler, thinks InfiniBand may be the fabric that first swallows up Fibre Channel for storage connectivity. It's a bit ironic that Bechtolsheim would be so enthusiastic about InfiniBand. A co-founder of Sun, he left the company in 1995 to start Granite Systems, a gigabit Ethernet switch vendor. A year later Cisco bought the company for $220 million. Now back at Sun, Bechtolsheim doesn't seem to retain any nostalgia for his Ethernet roots. His most recent InfiniBand project, the 3,456-port double data rate (DDR) Magnum switch that powers the Sun Constellation "Ranger" supercomputer at TACC, is a testament to that enthusiasm.
The fact that he's now bullish on InfiniBand should give unified fabric Ethernet enthusiasts something to think about. A shrewd investor, Bechtolsheim always seems to know which way the tech wind is blowing. As one of the original backers of Google, he turned a $100,000 investment into $1.5 billion. But like any accomplished investor, Bechtolsheim knows when to hedge his bets. He recently turned up as a backer of Arastra Inc., a vendor that has just announced a high density, low-latency 10GbE switch for the datacenter.
In fact, almost everyone is expecting Ethernet to become the standard datacenter fabric ... eventually. Even InfiniBand-loving Mellanox is hedging. The company's recently announced ConnectX EN 10GigE NIC adapters are a nod to the ubiquity of Ethernet-based applications. And the ConnectX architecture is explicitly designed to support both InfiniBand and Ethernet connectivity on the same adapter. Essentially, the Mellanox approach is about unifying data traffic on a single I/O pipe rather than on a single protocol.
So is the fix in for Ethernet? Maybe not. As virtualization takes hold in the datacenter, the demand for ever more bandwidth and ever less latency is growing, and some users can't wait for 10GbE to get its act together. As John Lennon once said: "Life is what happens to you while you're busy making other plans" (or writing standards). Even mainstream technologies can collapse under their own weight (e.g., mainframes) or split up into a confusing array of variations (e.g., Unix). And let's face it, if Ethernet gets pushed to the edge of the datacenter, civilization would manage to survive. But still, that scenario seems unlikely.
As I reported at the end of 2007, the Ethernet crowd thinks this year will be the one when 10GbE gets traction in the datacenter and even starts cutting away at InfiniBand's dominance in supercomputing. But they've made that claim before. Sometimes, when I observe how the InfiniBand vendors are courting the OEMs, I'm reminded of the 2005 movie "Wedding Crashers." Near the end of the story, lovestruck Owen Wilson pleads with Rachel McAdams: "I'm not standing here asking you to marry me. I'm just asking you not to marry him," referring to McAdams' obnoxious fiance. But while InfiniBand may be the HPC sentimental favorite, it might not be as fortunate as Wilson in getting that Hollywood ending.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - March 06, 2008 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.