IEEE’s Hot Interconnects symposium that kicks off later this month should be a real treat for the HPC crowd. The event focuses exclusively on cutting-edge developments in the interconnect arena, everything from the latest commodity networking technologies to the K supercomputer’s “Tofu” custom network. For the most part, the two-day program, which takes place at Intel’s Silicon Valley headquarters in Santa Clara on August 24-25, is geared toward developers and researchers in the field, with a day of tutorials on August 26.
The sessions reflect the dynamics between custom interconnects, designed for top-end supercomputers, and commodity technologies like Ethernet and InfiniBand. We asked the technical chairs of the event — IBM researcher Fabrizio Petrini, along with University of Illinois’ Torsten Hoefler, and Myricom’s Patrick Geoffray — how they perceive this dichotomy and how it will play out in the future as the industry moves toward cloud computing, and in the HPC world, toward exascale supercomputing. Here’s what they had to say.
HPCwire: Despite the commercial advantages of industry-standard interconnects like Ethernet and InfiniBand, there always seems to be a demand for proprietary ones like Cray’s Gemini and Fujitsu’s Tofu. What do you see as the principal drivers for these custom interconnects?
Proprietary interconnects will be needed to satisfy the special demands of the extreme-scale community. Currently, none of the available general-purpose interconnects is designed to serve such a large number (>10,000) of endpoints in a single LAN. Thus, proprietary networks will drive innovation at large scale and may guide standardization. However, such innovation often requires substantial investments, often at national scale, as with the Tofu or PERCS networks.
Commodity interconnects such as Ethernet and InfiniBand serve different markets: much larger ones, but at the much smaller scale of single LAN segments. The standardization bodies typically move slowly and are constrained by compatibility issues, so quick innovation in those areas is harder. Mellanox's approach to large scale is a hybrid one: it introduces custom features, such as CORE-Direct, into commodity interconnects to serve separate communities.
HPCwire: How do you see this trend developing? Will standard interconnects eventually push out proprietary ones or do you think there will always be a role for the proprietary technologies?
At the extreme scale, there will always be custom interconnects, for exactly the same reason that there will always be custom cars in Formula 1. But in the general high-performance computing arena, commodity interconnects already dominate the market, simply because most systems need neither extreme scalability nor the absolute highest performance. We can expect some features to trickle down from the custom interconnects to commodity ones; collective communication offload, for example, could soon be available to a wider audience.
HPCwire: With the advent of cloud computing, Sun Microsystems’ vision of “the network is the computer” seems to be coming true in a very fundamental way. Will cloud computing impact the interconnect landscape?
Yes and no, depending on which datacenter design strategy wins: the high-performance datacenter or the commodity Google-style server farm. Both approaches have advantages and disadvantages, but large-scale computing of any kind, for example MapReduce, needs a strong interconnect. However, due to price pressure and fierce competition in this area, I expect commodity interconnects to rule this field. This is why our program, especially the tutorial, also focuses on the latest developments for commodity interconnects.
HPCwire: What role will standard interconnects like Ethernet and InfiniBand play in exascale systems?
That is an interesting question. Standard interconnects will probably play only a side role. However, extensions to InfiniBand could be worth considering. Mellanox's Eitan Zahavi will present an InfiniBand-centric vision toward exascale.
HPCwire: In general, do you see interconnect technologies becoming more diverse or less diverse in the future? Is it possible that there will eventually just be various flavors of Ethernet with different sets of capabilities to meet a range of market niches?
That may very well become true. Right now we still have two lanes in the commodity market, InfiniBand and Ethernet, though convergence toward Ethernet is often advertised. InfiniBand remains slightly ahead of Ethernet in bandwidth and latency but clearly behind in market share. So it may become a three-tier market, with Ethernet at the low to mid end, InfiniBand at the mid to high end, and specialized interconnects at the high end.
HPCwire: The upcoming Hot Interconnects symposium will feature talks on a range of high-performance interconnects, both standard and proprietary. Could you give us a quick rundown on what you expect to be some of the more unique presentations there?
Our program covers the whole spectrum of HPC interconnects. The technical session on “High-Performance Interconnect Architectures” will feature talks on the networks running on the world’s most powerful machines, the K computer and Tianhe-1A. We will also have interesting presentations on how the commodity interconnects, InfiniBand (Mellanox) and Ethernet (Gnodal), plan to scale to very large configurations, and we will hear about features such as collective communication offload and support for the PGAS programming model.
The keynotes on “The IBM Blue Gene/Q Interconnection Network and Message Unit”, “The Computer that is the Network: A Future History of Big Systems”, and “Will interconnect help or limit the future of computing?” will provide a comprehensive vision of where HPC networking is going.
We will also feature two intriguing panels on datacenter and cloud networking, with representatives from the big players: Cisco, HP, Mellanox, Intel, Arista, Avaya, IBM, Microsoft, and Facebook. In addition, the conference will offer great tutorials on OpenFlow, InfiniBand, and large-scale datacenter and cloud networking.
We have never had such dynamic developments to hear about and debate!