June 8, 2010

InfiniBand Hits the Accelerator

by Michael Feldman

Last week, the InfiniBand Trade Association (IBTA) used the International Supercomputing Conference (ISC’10) to unveil the new roadmap for InfiniBand. In a nutshell, the IBTA is upping the signal rate from the current 10 Gbps to 26 Gbps on a single lane. Using a new coding scheme, a 4-lane configuration will deliver 100 Gbps of useful data for node-to-node communication. For switch-to-switch connectivity, the same technology will deliver up to 300 Gbps. The first products supporting these speeds are expected to arrive in late 2011 and early 2012.

The 100 Gbps data rate for node-to-node connectivity is a bump up from the 80 Gbps the IBTA's previous roadmap specified for 4X EDR. In fact, EDR originally stood for Eight Data Rate, representing the doubling of the current Quad Data Rate (QDR) specification. With the acceleration to 100 Gbps, EDR now stands for Enhanced Data Rate. (Yes, the marketing folks must have been up all night coming up with that one.)

The IBTA has also invented a sort of half-EDR, called Fourteen Data Rate, or FDR, which yields 50 Gbps (56 Gbps signal rate) in a 4-lane configuration. FDR was added for “midrange enterprise datacenter solutions,” according to the IBTA. That places it 10 Gbps ahead of the emerging 40 GigE standard, expected to arrive soon, and well ahead of the 10 GigE solutions making their way into the datacenter today.

The new coding scheme uses a more efficient 64b/66b encoding that delivers a much better usable data rate than the current 8b/10b scheme. So instead of delivering 32 Gbps from a 40 Gbps QDR signal rate (80 percent efficient), as it does today, EDR will yield 100 Gbps of useful data from a raw signal rate of 104 Gbps (almost 97 percent efficient). That roughly three-fold bandwidth improvement represents the biggest jump in InfiniBand performance in its decade-long history.
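To make the encoding arithmetic concrete, here is a minimal sketch in Python; the signal rates are the approximate figures quoted above, not the exact lane rates in the published specs.

```python
# Rough sketch: line-coding overhead turns a raw signal rate into a usable data rate.
# Signal rates below are the approximate 4-lane figures quoted in the article.

def usable_rate(signal_gbps: float, payload_bits: int, coded_bits: int) -> float:
    """Usable data rate after subtracting line-coding overhead."""
    return signal_gbps * payload_bits / coded_bits

qdr = usable_rate(40, 8, 10)    # 4X QDR, 8b/10b coding  -> ~32 Gbps
edr = usable_rate(104, 64, 66)  # 4X EDR, 64b/66b coding -> ~100 Gbps

print(f"QDR usable: {qdr:.0f} Gbps, EDR usable: {edr:.0f} Gbps ({edr / qdr:.1f}x jump)")
```

Running the sketch reproduces the roughly three-fold jump described above.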

Although latency is not directly addressed by the new specification, it is likely to improve as well, thanks to the faster InfiniBand ASICs that will be required to drive the higher signaling rates. Port-to-port latencies for QDR implementations are already in the sub-microsecond range.

The new IBTA roadmap seems, in part, designed to blunt some of the latest Ethernet performance advancements. Although the original 80 Gbps EDR would have easily outrun the new 40 GigE standard for cluster connectivity, the 50 and 100 Gbps InfiniBand speeds put even more daylight between the two solutions, and will do so at lower power and cost than the corresponding Ethernet offerings. Although InfiniBand is mostly geared toward HPC infrastructure, vendors are looking to expand into cloud computing, telecom, Web 2.0, retail banking, and other network-bound application areas that have until now been almost entirely the domain of Ethernet.

For more traditional InfiniBand applications, the speedier data rates are designed to keep pace with the ever-increasing bandwidth requirements of HPC clusters, which are continually expanding outward (more server nodes) and upward (more cores per server). “Based upon the latest technologies coming out, in terms of PCI-Express 3.0, more cores per CPU, and now GPU computing, it seemed that 80 gigs just wasn’t enough for the time — 2012 and beyond,” explained Brian Sparks, senior director of marketing communications at Mellanox and co-chair of the IBTA’s Marketing Working Group.

Further down the road, the IBTA is planning HDR and then NDR versions of the technology, but the specific timeframes and data rates for those specifications are yet to be determined. Suffice it to say that the InfiniBand roadmap is well ahead of Ethernet's, performance-wise, and will likely remain so for the foreseeable future.

The greater bandwidth for EDR and FDR will be especially welcome news for the optical cable and active copper cable vendors. Conventional (passive) copper cabling can’t reach much beyond 10 meters at the current 40 Gbps speeds. At 50 and 100 Gbps, those cable distances will get much shorter, setting the stage for a broader deployment of optical and other active cabling solutions.

The other promising news for InfiniBand proponents last week was its strong showing on the TOP500 list. The latest rankings have 207 systems using InfiniBand as the interconnect, up from 151 just a year ago. GigE-based systems, on the other hand, are down to 242, from 282 in June 2009. Unless 10 GigE systems make a big surge in high-end HPC, it looks like InfiniBand will officially take over as the dominant interconnect for the top supercomputers by this time next year.
