March 26, 2009
In the realm of datacenter interconnects, much of the IT industry continues to be focused on the rollout of 10 Gigabit Ethernet offerings, with a raft of switches, adapters and other 10GigE paraphernalia having made its way into the marketplace over the past 18 months. Cisco's recent foray into the datacenter, for example, will be premised on 10GigE-based blades. This next generation of Ethernet products will not only bring higher bandwidth and lower latencies, but also lossless fabrics suitable for both compute and storage interconnects.
But despite all the hoopla over 10GigE, InfiniBand continues to be the interconnect that excites the HPC crowd. Most new HPC systems of note seem to be InfiniBand-based. The most prominent example of an Ethernet-based system is the ATLAS cluster at the Max Planck Institute for Gravitational Physics in Germany, which we reported on last year. From a performance standpoint, the choice between Ethernet and InfiniBand is not so much a bandwidth issue -- multiple 10GigE links can always be aggregated to achieve InfiniBand-like bandwidth -- as a latency one. Today, even the most capable 10GigE implementations have higher latencies than InfiniBand, and it is low latency that many HPC workloads find indispensable.
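To see why latency rather than bandwidth is the deciding factor for tightly coupled codes, consider the usual first-order cost model for a message: transfer time equals latency plus message size divided by bandwidth. The sketch below plugs in illustrative numbers -- roughly 2 microseconds for a DDR InfiniBand link and 10 microseconds for a 2009-era 10GigE NIC; the exact figures vary by implementation and are assumptions here, not vendor benchmarks:

```c
#include <stdio.h>

/* First-order cost model: transfer_time = latency + size / bandwidth.
 * The latency and bandwidth figures below are illustrative assumptions. */
static double transfer_us(double latency_us, double gbps, double bytes)
{
    double bytes_per_us = gbps * 1e9 / 8.0 / 1e6;  /* Gbps -> bytes/us */
    return latency_us + bytes / bytes_per_us;
}

int main(void)
{
    double sizes[] = { 8, 1024, 1048576 };  /* 8 B, 1 KB, 1 MB messages */
    for (int i = 0; i < 3; i++) {
        double s = sizes[i];
        /* Assumed: ~2 us DDR InfiniBand at a 16 Gbps data rate;
         * ~10 us 10GigE, even with two links aggregated to 20 Gbps. */
        printf("%8.0f B:  IB %8.2f us   2x10GigE %8.2f us\n",
               s, transfer_us(2.0, 16.0, s), transfer_us(10.0, 20.0, s));
    }
    return 0;
}
```

Under these assumptions, an 8-byte message still takes five times longer over the aggregated Ethernet pair, while at megabyte sizes the extra bandwidth finally pays off -- which is exactly why loosely coupled, bandwidth-bound workloads can live happily on Ethernet while latency-sensitive MPI codes cannot.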
A recent market study by Tabor Research points to InfiniBand's continued popularity in the HPC space. Citing an August 2008 site survey, the Taborites found that 60 percent of HPC systems installed since the start of 2007 were employing InfiniBand as a system interconnect. That's a much bigger percentage than you see on the latest TOP500 list, where only 28 percent are InfiniBand-based versus 56 percent for Ethernet -- the remainder being a smattering of proprietary interconnects. In fact, it's probable that the majority of these really big Ethernet-connected clusters are running loosely-coupled parallel applications, rather than latency-sensitive HPC workloads. It's notable that as of November 2008, no TOP500 systems were using 10GigE.
More importantly, InfiniBand usage in HPC is growing. According to the same Tabor Research survey, in 2006 the proportions of HPC systems employing InfiniBand and Ethernet were about equal. It was in 2007 that InfiniBand jumped into the lead. With QDR InfiniBand (40 Gbps) expected to hit its stride in 2009, InfiniBand should consolidate its lead in the HPC interconnect market. InfiniBand has also made some inroads into more traditional enterprise applications, most notably in the HP-Oracle database machine. Time will tell whether this is just an outlier or the beginning of a wider trend.
Mellanox continues to be the dominant vendor in the InfiniBand marketplace, having recently added switches and gateways to its adapter and silicon business. But with QLogic now offering home-grown InfiniBand ASICs alongside its own switches and HCAs, HPC system vendors will have a wider choice of interconnect options. Although this introduces an element of competition, Tabor Research believes that the InfiniBand market is now big enough for two vendors to succeed. Considering that Mellanox enjoyed record revenues through the front end of the recession -- $107.7 million in FY2008 -- this seems like a fair assessment.
InfiniBand's success in HPC doesn't seem to quiet the naysayers, though. The Ethernet drumbeat that pervades the industry invariably leads to press coverage that casts InfiniBand as an endangered technology. Chris Mellor's recent piece in The Register, titled "InfiniBand: Caught in the Ethernet meatgrinder," sounds ominous, but the main thrust of that article is actually about fabric convergence and how Ethernet and InfiniBand are learning to co-exist.
In fact, converged fabrics are likely to be the real story of datacenter interconnects over the next several years, as vendors look to accommodate multiple networking, clustering and storage communication protocols on top of lossless communication technologies like InfiniBand and RDMA Ethernet. It's not surprising that the major InfiniBand vendors -- Mellanox, QLogic and Voltaire -- have developed converged fabric offerings in various flavors, and Ethernet vendors are layering protocols like Fibre Channel on top of lossless Ethernet (FCoE).
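One concrete face of that convergence is the OFED verbs API, which presents InfiniBand HCAs and RDMA-capable Ethernet NICs through the same programming interface, so application code never has to care which fabric it is running over. Here is a minimal sketch using libibverbs -- error handling trimmed for brevity, and it assumes a Linux system with at least one RDMA device installed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    /* Enumerate RDMA devices: these could be InfiniBand HCAs or
     * RDMA-capable Ethernet NICs -- the API is identical. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer for zero-copy RDMA transfers. */
    char *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("device %s: registered 4 KB, lkey 0x%x\n",
           ibv_get_device_name(devs[0]), mr->lkey);

    /* Completion queues, queue pairs, and connection setup would
     * follow from here; none of that is fabric-specific either. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

The same binary, linked against libibverbs, can drive either class of hardware -- a small but telling illustration of how the two technologies are converging at the software layer even as the marketing battle rages on.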
The whole process resembles the convergence of RISC and CISC technologies in the microprocessor arena. There, instead of one architecture killing off the other one, Intel was able to maintain the dominance of its legacy x86 CISC ISA by incorporating a RISC-like core underneath the covers. Meanwhile, true RISC processors found other markets to play in. Ethernet and InfiniBand look like they're on a similar trajectory.
Posted by Michael Feldman - March 26, 2009 @ 6:35 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.