March 07, 2008
There seems to be a general consensus that the datacenter needs to settle on a unified network fabric. The question is, which one? Today, storage and cluster connectivity use some combination of Ethernet, InfiniBand and Fibre Channel, which creates a hodgepodge of cables, switches and host channel adapters. As readers of this publication are aware, both Ethernet and InfiniBand vendors have staked claims to unifying the datacenter on their blessed fabric. With the introduction of 10 GbE products and the upcoming Fibre Channel over Ethernet (FCoE) standards, Ethernet proponents claim cluster and storage unification is at hand. Meanwhile, the InfiniBand crowd is pushing their technology as the technically superior solution and the one that is available today.
Sun Microsystems seems to be getting impatient waiting for high performance Ethernet gear and FCoE standards, and has openly talked about going the InfiniBand route. In a recent EETimes article by Rick Merritt, Sun's John Fowler (executive VP of the Systems Division) said the company would be introducing a set of products this year unified on InfiniBand. While Fowler admitted they're continuing their involvement in the FCoE effort, he notes that Fibre Channel over InfiniBand should be available in 2008.
The strength of Ethernet -- its ubiquity -- is also its weakness. Changing or adding standards involves dragging the whole vendor community along. Look at how long the IEEE study group took just to decide to move forward on the 40GbE/100GbE standards. That process began in 2006, and the standards aren't expected to be completed until mid-2010. And while vendors may come up with proprietary versions of FCoE relatively soon, the standard is still being hashed out by the vendor and user communities. Not only does the process suffer from a "too many cooks in the kitchen" problem, but the cooks themselves are competing with one another.
Meanwhile, the more nimble InfiniBand community has been able to establish its high bandwidth, low-latency fabric as the standard for high performance interconnects. Mellanox has used its position as the only InfiniBand switch silicon vendor to set the pace. With DDR (20 Gbps) products now mainstream and QDR (40 Gbps) products due out at the end of this year, InfiniBand has left 10GbE behind from the standpoint of raw performance.
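For perspective on those link rates, keep in mind that the 20 Gbps and 40 Gbps figures are signaling rates; InfiniBand's 8b/10b line encoding leaves 80 percent of that as usable data bandwidth. A minimal back-of-the-envelope sketch (the encoding factor is the standard 8b/10b overhead for SDR/DDR/QDR InfiniBand; the helper name is ours):

```python
# Effective InfiniBand data rate = signaling rate x 8b/10b efficiency (8/10).
def effective_gbps(signaling_gbps, encoding_efficiency=0.8):
    """Usable data rate after line-encoding overhead."""
    return signaling_gbps * encoding_efficiency

# 4x link widths: SDR signals at 10 Gbps, DDR at 20, QDR at 40.
for name, rate in [("4x SDR", 10), ("4x DDR", 20), ("4x QDR", 40)]:
    print(f"{name}: {rate} Gbps signaling -> {effective_gbps(rate):.0f} Gbps data")
```

So even after encoding overhead, a 4x DDR link delivers 16 Gbps of payload bandwidth -- comfortably ahead of 10GbE on raw throughput alone.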
Increasingly, the InfiniBand vendors that have traditionally focused on the HPC market are looking at the broader enterprise market. A couple of weeks ago at VMworld Europe, Mellanox announced that it is seeing demand for unified InfiniBand I/O in VMware environments. The company is reporting interest from sectors such as banking, managed hosting services, Web 2.0, insurance services and health care.
In the above-mentioned EETimes piece, Sun Chief Architect Andy Bechtolsheim also points to the performance advantages of InfiniBand and, like Fowler, thinks InfiniBand may be the fabric that first swallows up Fibre Channel for storage connectivity. It's a bit ironic that Bechtolsheim would be so enthusiastic about InfiniBand. A co-founder of Sun, he left the company in 1995 to start Granite Systems, a gigabit Ethernet switch vendor. A year later Cisco bought the company for $220 million. Now back at Sun, Bechtolsheim doesn't seem to retain any nostalgia for his Ethernet roots. His most recent InfiniBand project, the 3,456-port double data rate (DDR) Magnum switch, which powers the Sun Constellation "Ranger" supercomputer at TACC, is a testament to his enthusiasm for the technology.
The fact that he's now bullish on InfiniBand should give unified fabric Ethernet enthusiasts something to think about. A shrewd investor, Bechtolsheim always seems to know which way the tech wind is blowing. As one of the original backers of Google, he turned a $100,000 investment into $1.5 billion. But like any accomplished investor, Bechtolsheim knows when to hedge his bets. He recently turned up as a backer for Arastra Inc., a vendor that has announced a high density, low-latency 10GbE switch for the datacenter.
In fact, almost everyone is expecting Ethernet to become the standard datacenter fabric ... eventually. Even InfiniBand-loving Mellanox is hedging. The company's recently announced ConnectX EN 10GigE NIC adapters are a nod to the ubiquity of Ethernet-based applications. And the ConnectX architecture is explicitly designed to support both InfiniBand and Ethernet connectivity on the same adapter. Essentially, the Mellanox approach is about unifying data traffic on a single I/O pipe rather than on a single protocol.
So is the fix in for Ethernet? Maybe not. As virtualization takes hold in the datacenter, the demand for ever more bandwidth and ever less latency is growing, and some users can't wait for the 10GbE community to get its act together. As John Lennon once said: "Life is what happens to you while you're busy making other plans" (or writing standards). Even mainstream technologies can collapse under their own weight (e.g., mainframes) or split up into a confusing array of variations (e.g., Unix). And let's face it, if Ethernet gets pushed to the edge of the datacenter, civilization would manage to survive. But still, that scenario seems unlikely.
As I reported at the end of 2007, the Ethernet crowd thinks this year will be the one when 10GbE gets traction in the datacenter and even starts cutting away at InfiniBand's dominance in supercomputing. But they've made that claim before. Sometimes, when I observe how the InfiniBand vendors are courting the OEMs, I'm reminded of the 2005 movie "Wedding Crashers." Near the end of the story, lovestruck Owen Wilson pleads with Rachel McAdams: "I'm not standing here asking you to marry me. I'm just asking you not to marry him," referring to McAdams' obnoxious fiancé. But while InfiniBand may be the HPC sentimental favorite, it might not be as fortunate as Wilson in getting that Hollywood ending.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - March 06, 2008 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.