During SC10 in New Orleans, we had a chance to drop by a number of exhibits to check in on vendors who are improving the HPC ecosystem and, by extension, the prospects for cloud computing.
Interconnects are, as you can imagine, a rather big piece of the ecosystem that supports both HPC and cloud, yet we often don't spend enough time talking about them, and when we do, we tend to focus on the one vendor in this space with the vast majority of the market share, Mellanox.
On Wednesday I dropped by the QLogic booth to have a chat with Joe Yaworski about the interconnects market as a whole and what elements of differentiation there are with such a market share imbalance.
To be more direct, I flat-out asked Yaworski how QLogic was different and what case studies there were to demonstrate that there are variations in performance or other factors.
His response was that QLogic's point of differentiation is that it did not retrofit its products with MPI on top, as others did. In the beginning, InfiniBand was designed to be the datacenter backbone replacement for Ethernet and Fibre Channel; in other words, it had a rich set of features and capabilities that had nothing to do with HPC. Once InfiniBand found its niche in HPC, however, QLogic stepped up to design InfiniBand products that were MPI-targeted from the start, eliminating any hitches introduced by the retrofitting. His argument is that QLogic's messaging rate is thereby superior, and that this is why the company was chosen for a large-scale implementation at Lawrence Livermore.
Here we have Mr. Yaworski providing more details on the above points…
While on the surface this conversation might seem to have little to do with clouds directly, it is worth noting that there are real areas of possible differentiation in this market, and the more improvements in interconnects that emerge, the greater the possibility for finely tuned cloud computing capabilities. Mellanox, for instance, often draws this connection and produces news releases around it, while QLogic tends to steer clear of cloud tie-ins, at least relative to its much larger and more pervasive competitor.
More from Joe on the Livermore connection…
This is an interesting market to watch, especially since the problems it must solve to improve latency have a significant bearing not only on HPC in general, but also on cloud computing capabilities for high-performance computing applications.