The battle for the lowest possible latency has raged in the financial services sector for years, but latency is not the most critical factor for every segment of the industry. While cloud latencies will improve over time, for now it appears that cloud vendors can reach only this smaller, non-latency-obsessed market.
The company claims its products provide 4X the bandwidth of 10GbE at the same price point.
Cluster computing systems have caused disruptive changes in the HPC market. One consequence of the range of requirements for cluster networking is that the leading interconnects in HPC are Gigabit Ethernet (GbE), based on the Ethernet networking standard, and InfiniBand, which delivers upwards of 10X the performance of GbE. Both show significant deployment in HPC.
We have developed something of a tradition at HPCwire in the weeks leading up to each year’s SC conference: we interview the chairman of the OpenFabrics Alliance (OFA). Jim Ryan of Intel has been the OFA’s chair all these years, and our annual interview with Jim was as interesting as ever.
Interconnect latencies have long been recognized as a limiting factor for high-performance computing applications in and among cloud data centers, but a variety of protocol innovations have appeared in the marketplace to clear this hurdle.
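Why latency, rather than raw bandwidth, dominates for many HPC workloads can be seen with a simple first-order cost model: the time to move a message is roughly its latency plus its size divided by bandwidth. The sketch below uses illustrative, assumed figures (not vendor specifications) for a GbE-class and a QDR InfiniBand-class link to show how latency overwhelms small-message transfers.

```python
def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """First-order cost model for one message: t = latency + size / bandwidth."""
    return latency_s + size_bytes * 8 / bandwidth_bps

# Hypothetical interconnect profiles: (latency in seconds, bandwidth in bits/s).
# The numbers are assumptions chosen for illustration only.
profiles = {
    "GbE":        (50e-6, 1e9),   # ~50 us latency, 1 Gb/s
    "InfiniBand": (2e-6, 40e9),   # ~2 us latency, 40 Gb/s (QDR-class)
}

for name, (lat, bw) in profiles.items():
    small = transfer_time(1_000, lat, bw)       # 1 KB message: latency-bound
    large = transfer_time(1_000_000, lat, bw)   # 1 MB message: bandwidth-bound
    print(f"{name}: 1 KB -> {small * 1e6:.1f} us, 1 MB -> {large * 1e3:.2f} ms")
```

Under these assumed numbers, the 1 KB message is dominated almost entirely by latency, which is why protocol work that shaves microseconds off the critical path matters more than extra bandwidth for fine-grained communication patterns.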
How big of a cluster can you build?
When 1,500 leading members of the world’s high performance computing community convene June 23-26 at the 2009 International Supercomputing Conference, the opening keynote address will be presented by Andreas “Andy” von Bechtolsheim, the legendary co-founder of Sun Microsystems and founder and Chief Development Officer of Arista Networks. Von Bechtolsheim will discuss “The Evolution of Interconnects for High Performance Computing.”
QLogic Corp. made its InfiniBand presence felt this week with the announcement of an OEM deal with IBM. Under the agreement, IBM will offer QLogic’s quad data rate director-class switches as part of IBM’s new Dynamic Infrastructure product set.
While 10 Gigabit Ethernet is getting all the press, InfiniBand keeps chugging along.
Did you know there are two projects that can give Ethernet a performance boost?