Infiniband Snaps Up Strong Super Share
Not long ago we reported on some figures from the International Supercomputing Conference showing the breakdown of interconnect technologies across the Top 500 supercomputer list.
Not surprisingly, InfiniBand proved the dominant interconnect type, taking a 41% share of the list. It also made the grade as the most used interconnect for petascale systems, holding that distinction on 16 out of 33 systems.
Mellanox claimed a healthy share of that InfiniBand footprint this year; the results of this year's Top 500 represent more than 3x growth for its FDR InfiniBand over the same time last year. Among these systems are the Stampede machine at TACC as well as oil and gas giant Total's SGI-built Top 20 super.
While Blue Gene, Cray and other custom interconnects have mostly held a flat line since 2007, InfiniBand has been making a slow climb, entwining itself with Ethernet on both sides of the curve, as Mellanox highlights below in a slide from ISC.
Performance aside, the interesting story here is found in the efficiency benchmarks. According to Mellanox's Gilad Shainer, it is "more cost effective to build a large datacenter with InfiniBand over Ethernet" because of this efficiency, at least among Top 500 systems.
Shainer says that InfiniBand averages around 95% efficiency, whereas Ethernet lags significantly behind. Even though it may seem at the outset that it is far more expensive to wire a system with InfiniBand, these efficiencies are the hidden key to InfiniBand's growth position.
Shainer says Mellanox is also seeing an uptick on the commercial side and among cloud datacenter customers, including recent adoption by Microsoft for its Azure platform. Mellanox was able to snag Azure because of the 95-percent-or-better efficiency it could offer, Shainer says, as well as the performance Microsoft requires.
“The efficiency characteristics of InfiniBand on the Top500 list aren’t surprising,” said Addison Snell, CEO of Intersect360 Research, “because the Linpack benchmark is sensitive to interconnect bandwidth, though it doesn’t stress other aspects of memory architecture or parallelism.”
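For readers keeping score at home, Top500 "efficiency" is simply achieved Linpack performance (Rmax) divided by theoretical peak (Rpeak). A minimal sketch of the arithmetic, using hypothetical system numbers rather than actual Top 500 entries:

```python
# Top500 efficiency = Rmax / Rpeak, where Rmax is the achieved
# Linpack result and Rpeak the theoretical peak of the machine.
# The systems and numbers below are hypothetical, for illustration only.
systems = [
    {"name": "ib_cluster",  "rmax_tflops": 4750.0, "rpeak_tflops": 5000.0},
    {"name": "eth_cluster", "rmax_tflops": 3400.0, "rpeak_tflops": 5000.0},
]

for s in systems:
    efficiency = s["rmax_tflops"] / s["rpeak_tflops"]
    print(f"{s['name']}: {efficiency:.1%} efficient")
```

On these made-up figures, the InfiniBand-style machine lands at the roughly 95% mark Shainer cites, while the Ethernet-style machine trails well behind it.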
It's this efficiency angle that Mellanox is pushing harder than in recent years, in part because of the increased attention around performance per watt. The addition of GPUs in its chart above is interesting here as well: Mellanox is showing that any reductions in performance aren't because of the interconnect; the blame falls squarely on the GPU.
There are technologies in development now that are set to address such pitfalls, and more will follow as it becomes clearer that interconnect differentiation is a make-or-break matter. As for some of its competitors, including partner and rival Intel, Shainer says that while the two cooperate well, Intel has made some investments in "old" technology, including the Cray Aries interconnect and the assets from QLogic.
He stresses that Mellanox's position is one of innovation, and when the company buys, it looks for "new and interesting" assets, as evidenced by its recent purchase of the optical interconnects company IPtronics for around $48 million. Shainer says that this and other R&D efforts are pushing Mellanox closer to its looming 100Gb/s goal, which it hopes to reach by 2014.
In addition to addressing that barrier, Shainer says the company hopes to stay ahead of the curve through other technologies that address the needs of today's HPC systems, including the ability to cut down the lag on GPU-boosted machines. As we reported during ISC, Mellanox announced an FDR offering that supports NVIDIA's GPUDirect RDMA technology.
Despite this growth, some suggest that Mellanox's next opportunity could lie elsewhere, including the storage interconnect market.
“The best near-term growth opportunity for InfiniBand is to continue to gain adoption not only as a system interconnect, but also as a storage interconnect,” said Snell. “Our latest survey data showed 41% of HPC InfiniBand users deploying it at least partly for storage, up from 27% two years ago.”
Snell went on to stress that high-performance interconnects could see adoption in big data markets beyond traditional HPC, another growth opportunity for InfiniBand.