June 01, 2007
Last week, IDC released a report that projects a rather healthy future for InfiniBand (IB) adoption over the next five years. The study predicts that InfiniBand host channel adapter factory revenues will grow from $62.3 million in 2006 to $224.7 million in 2011, while InfiniBand switch port sales are expected to grow from $94.9 million to $612.2 million over the same period. By the end of the decade, deployments of double data rate and quad data rate InfiniBand adapters are expected to overtake their single data rate forebears.
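Those projections imply steep compound annual growth rates. As a rough check (the dollar figures are the ones quoted above; the helper function is just illustrative), the adapter and switch numbers work out to roughly 29 and 45 percent per year:

```python
# Rough check on the compound annual growth rates (CAGR) implied by
# the IDC projections quoted above (2006 -> 2011, i.e. five years).

def cagr(start, end, years):
    """Compound annual growth rate between two revenue figures."""
    return (end / start) ** (1 / years) - 1

# Host channel adapter factory revenue: $62.3M -> $224.7M
adapter_growth = cagr(62.3, 224.7, 5)

# Switch port sales: $94.9M -> $612.2M
switch_growth = cagr(94.9, 612.2, 5)

print(f"Adapter CAGR: {adapter_growth:.1%}")   # 29.2%
print(f"Switch CAGR:  {switch_growth:.1%}")    # 45.2%
```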
Compared to Ethernet or Fibre Channel, InfiniBand has the advantage of providing a high bandwidth, low latency interconnect in a less expensive package. Current implementations are being delivered at 10, 20 and 40 Gbps, with products in the works for 120 Gbps. Its superior performance has made it the interconnect of choice in many HPC deployments. According to the latest Top500 list, 12 percent of the top systems use InfiniBand, up from 5 percent just a year ago. By itself, this statistic is largely irrelevant. The fact that 60 or so of the fastest computers use a particular interconnect may give the IB vendors some bragging rights, but it doesn't say much about overall industry adoption. What is important is that InfiniBand's use is expanding in commercial cluster computing, where interconnect performance and price-performance are the driving factors. This includes such applications as automotive crash simulations, oil and gas reservoir simulations, and financial analytics. These kinds of computing workloads have become mainstream as HPC has become a cost-effective tool for a wide array of businesses.
But much of the projected growth of InfiniBand is expected outside of its traditional role in connecting high performance computing servers and storage -- in mainstream datacenters. The IDC report makes the case that the growth of multicore processors, server virtualization, and I/O consolidation is going to drive InfiniBand adoption in the enterprise. All three of these trends are helping to make systems more computationally dense, which requires proportionately more communication bandwidth per server and per storage device. Besides raw performance, InfiniBand includes quality-of-service (QoS) features that enable multiple types of traffic to be safely managed over a single pipe.
Over the past few years, more mainstream enterprise users have started using InfiniBand. This is especially apparent in the capital markets, where the push for ever-faster automated trading is on a collision course with increasing trade volumes. In this environment, even sub-second delays in transactions can cost millions of dollars. The types of computing systems that manage these trades have come under increased scrutiny as financial engineers ponder how best to minimize transaction latencies and enhance trade predictability. Wombat Financial Software and Reuters are two companies that have qualified their market feed applications on InfiniBand technology to address these stringent performance requirements. InfiniBand's market penetration in this sector is largely unknown, since financial institutions tend to be rather tight-lipped about what goes on inside their datacenters. But with so much money hanging in the balance, one can assume that all market trading institutions are taking a hard look at InfiniBand.
SOA platforms can benefit from high performance interconnects too. In May, TIBCO Software announced it had qualified its message passing middleware on top of Cisco's InfiniBand Server Fabric Switches to enhance performance and predictability for event-driven SOA. The types of applications targeted include data distribution, web services and order management systems. Order management, in particular, is becoming a time-critical component of inventory control for many companies. TIBCO claims their IB-enabled platform increased throughput by a factor of four, while reducing latency.
Database clustering, as used in Oracle's 10g RAC and IBM's DB2, is another prime target. Following the general model of concentrating computational power into smaller boxes, database clusters are becoming more dependent on high performance interconnects to talk between the nodes. For database applications, there is the additional incentive to unify the interconnect fabric with the storage components. JDA, a provider of supply chain management software, is using InfiniBand to improve system throughput on its platform. Using an Oracle 10g RAC system and QLogic InfiniBand gear, the company was able to decrease the time to plan one million SKUs (Stock Keeping Units) from 66 minutes, with Gigabit Ethernet, to 25 minutes, with InfiniBand. They were also able to realize a 35 percent cost advantage by switching from Ethernet to InfiniBand.
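The JDA numbers translate into a straightforward speedup. A quick sketch, using only the figures quoted above:

```python
# Speedup implied by JDA's SKU-planning benchmark quoted above:
# one million SKUs in 66 minutes over Gigabit Ethernet
# vs. 25 minutes over InfiniBand.

gige_minutes = 66
ib_minutes = 25
skus = 1_000_000

speedup = gige_minutes / ib_minutes
gige_rate = skus / gige_minutes   # SKUs planned per minute, GigE
ib_rate = skus / ib_minutes       # SKUs planned per minute, InfiniBand

print(f"Speedup: {speedup:.2f}x")          # 2.64x
print(f"GigE rate: {gige_rate:,.0f} SKUs/minute")
print(f"IB rate:   {ib_rate:,.0f} SKUs/minute")
```

A 2.64x improvement in planning time, combined with the reported 35 percent cost advantage, is the kind of price-performance argument driving IB into the database tier.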
It's no surprise that as the application and database tiers in the datacenter start to act like HPC systems, they will start to look like them. In doing so, we're bound to see InfiniBand adoption increase in the larger enterprise market. Since IB technology is projected to maintain its performance and price-performance advantage for at least the next five years, users will look to InfiniBand for their most challenging interconnect demands.
That's not to say Ethernet or Fibre Channel are going away. Both are established standards and have momentum that will carry them for a long time. The massive set of applications that run on standard Ethernet will ensure its longevity. For the foreseeable future, Ethernet is expected to remain the most common networking technology in the world. The web tier of the enterprise is one area where Ethernet has no serious challengers. And if you've been reading this publication for any length of time, you know that creative engineers have been devising new ways to enhance the performance and predictability of Ethernet.
The culture of the InfiniBand vendors has matured as well. There's little talk of InfiniBand conquering the world these days. Even IB evangelists like Mellanox and Voltaire are shipping Ethernet products. Over the past couple of years, acquisitions of companies like Topspin (Cisco), PathScale and SilverStorm (QLogic) have brought InfiniBand into more enterprise-focused vendors. The realization that all the standard network interconnects will live peaceably in the datacenter may be sinking in.
"From the Mellanox perspective we have technology that will address both markets," says Thad Omura, Mellanox VP of Product Marketing. "We're layering one growth market on top of another. It's no secret that 10 Gigabit Ethernet will emerge at some point. There's no reason why we shouldn't address the 10Gig solution for LANs and continue to drive InfiniBand as the best price-performance interconnect for servers and storage."
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - May 31, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.