August 03, 2011
IEEE's Hot Interconnects symposium, which kicks off later this month, should be a real treat for the HPC crowd. The event focuses exclusively on cutting-edge developments in the interconnect arena, covering everything from the latest commodity networking technologies to the K supercomputer's "Tofu" custom network. For the most part, the two-day program, which takes place at Intel's Silicon Valley headquarters in Santa Clara on August 24-25, is geared toward developers and researchers in the field, with a day of tutorials following on August 26.
The sessions reflect the dynamics between custom interconnects, designed for top-end supercomputers, and commodity technologies like Ethernet and InfiniBand. We asked the technical chairs of the event -- IBM researcher Fabrizio Petrini, along with University of Illinois' Torsten Hoefler, and Myricom's Patrick Geoffray -- how they perceive this dichotomy and how it will play out in the future as the industry moves toward cloud computing, and in the HPC world, toward exascale supercomputing. Here's what they had to say.
HPCwire: Despite the commercial advantages of industry-standard interconnects like Ethernet and InfiniBand, there always seems to be a demand for proprietary ones like Cray's Gemini and Fujitsu's Tofu. What do you see as the principal drivers for these custom interconnects?
Proprietary interconnects will be needed to satisfy the special demands of the extreme-scale community. Currently, none of the available general-purpose interconnects is designed to serve such a large number (>10,000) of endpoints in a single LAN. Thus, proprietary networks will drive innovation at large scale and may guide standardization. However, such innovation often requires substantial investment, often at the national scale -- as with the Tofu and PERCS networks.
Commodity interconnects, such as Ethernet and InfiniBand, serve different markets -- much larger ones, but at the much smaller scale of single LAN segments. The standardization bodies typically move slowly and are constrained by compatibility concerns, so quick innovation in those areas is harder. Mellanox's approach to large scale is a hybrid: they introduce custom features, such as CORE-Direct, into commodity interconnects to serve separate communities.
HPCwire: How do you see this trend developing? Will standard interconnects eventually push out proprietary ones or do you think there will always be a role for the proprietary technologies?
At the extreme scale, there will always be custom interconnects, for exactly the same reason that there will always be custom cars in Formula 1. But in the general high-performance computing arena, commodity interconnects already dominate the market. The argument here is simply that there is no need for extreme scalability or the absolute highest performance. We can expect some features to trickle down from the custom interconnect area to commodity interconnects. Collective communication offload, for example, may soon be available to a wider audience.
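To make the idea of a collective operation concrete, here is a minimal sketch of the recursive-doubling allreduce algorithm that collectives such as MPI_Allreduce typically use. Offload hardware like CORE-Direct runs this same exchange pattern in the NIC rather than on the host CPU. All names are hypothetical, and the "ranks" are simulated in a single process for illustration:

```python
def allreduce_sum(values):
    """Simulate a recursive-doubling allreduce over len(values) ranks.

    Each rank starts with one value; after log2(n) exchange steps,
    every rank holds the global sum. This exchange schedule is what
    collective-offload hardware executes without CPU involvement.
    """
    n = len(values)
    assert n and n & (n - 1) == 0, "rank count must be a power of two"
    vals = list(values)
    step = 1
    while step < n:
        # In step k, rank r exchanges its partial sum with rank r XOR 2^k
        nxt = [vals[r] + vals[r ^ step] for r in range(n)]
        vals = nxt
        step <<= 1
    return vals

print(allreduce_sum([1, 2, 3, 4]))  # every rank ends with 10
```

With n ranks, each rank sends only log2(n) messages, which is why the same pattern scales to the very large endpoint counts discussed above.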
HPCwire: With the advent of cloud computing, Sun Microsystems' vision of "the network is the computer" seems to be coming true in a very fundamental way. Will cloud computing impact the interconnect landscape?
Yes and no, depending on which strategy wins for datacenter design: the high-performance datacenter or the commodity Google-style server farm. Both approaches have advantages and disadvantages, but large-scale computing of any kind -- MapReduce, for example -- needs a strong interconnect. However, due to pricing and fierce competition in this area, I expect commodity interconnects to rule this field. This is why our program, especially the tutorial, also focuses on the latest developments in commodity interconnects.
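Why does MapReduce stress the interconnect? The shuffle phase between map and reduce is an all-to-all exchange of intermediate data across the cluster. A toy single-process word-count sketch (hypothetical names, no real cluster) makes the three phases visible:

```python
from collections import defaultdict

def mapreduce_wordcount(docs):
    """Toy MapReduce word count; the shuffle step is the phase that,
    on a real cluster, becomes a network-intensive all-to-all exchange."""
    # Map: emit a (word, 1) pair for every word in every document
    pairs = [(word, 1) for doc in docs for word in doc.split()]
    # Shuffle: group pairs by key -- on a cluster, every mapper sends
    # each key's pairs to the reducer that owns that key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce: combine the values for each key
    return {key: sum(values) for key, values in groups.items()}

print(mapreduce_wordcount(["a b a", "b a"]))  # {'a': 3, 'b': 2}
```

In the toy version the shuffle is a dictionary insert; at datacenter scale it is the step whose traffic pattern determines whether a commodity network suffices.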
HPCwire: What role will standard interconnects like Ethernet and InfiniBand play in exascale systems?
That is an interesting question. Standard interconnects will probably play only a supporting role. However, extensions to InfiniBand could be worth considering. We will have a presentation by Mellanox's Eitan Zahavi, who will present an InfiniBand-centric vision of the path toward exascale.
HPCwire: In general, do you see interconnect technologies becoming more diverse or less diverse in the future? Is it possible that there will eventually just be various flavors of Ethernet with different sets of capabilities to meet a range of market niches?
That may very well become true. Right now, the commodity market is still a two-lane road, with InfiniBand and Ethernet. But convergence toward Ethernet is often advertised. InfiniBand is still slightly ahead of Ethernet in bandwidth and latency but clearly behind in market share. So it may become a three-tier market, with Ethernet at the low-to-mid end, InfiniBand at the mid-to-high end, and specialized interconnects at the high end.
HPCwire: The upcoming Hot Interconnects symposium will feature talks on a range of high-performance interconnects, both standard and proprietary. Could you give us a quick rundown on what you expect to be some of the more unique presentations there?
Our program covers the whole spectrum of HPC interconnects. The technical session on "High-Performance Interconnect Architectures" will feature talks on the networks running on the world's most powerful machines, the K computer and Tianhe-1A. We will also have interesting presentations on how commodity interconnects -- InfiniBand (Mellanox) and Ethernet (Gnodal) -- plan to scale to very large configurations. And we will hear about interesting features such as collective communication offload and support for the PGAS programming model.
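The PGAS (partitioned global address space) model mentioned above gives every process one-sided get/put access to a global array whose blocks live on different ranks -- exactly the access pattern that RDMA-capable interconnects accelerate. A toy single-process sketch of the addressing scheme (all names hypothetical):

```python
class PGASArray:
    """Toy partitioned global address space: a global array split into
    equal blocks, one block per simulated rank. get/put stand in for the
    one-sided remote reads and writes that RDMA hardware performs
    without interrupting the remote CPU."""

    def __init__(self, size, nranks):
        assert size % nranks == 0, "size must divide evenly across ranks"
        self.block = size // nranks
        self.parts = [[0] * self.block for _ in range(nranks)]

    def locate(self, i):
        # Map a global index to (owning rank, local offset)
        return i // self.block, i % self.block

    def put(self, i, value):
        rank, off = self.locate(i)
        self.parts[rank][off] = value   # models a remote write to `rank`

    def get(self, i):
        rank, off = self.locate(i)
        return self.parts[rank][off]    # models a remote read from `rank`

a = PGASArray(size=8, nranks=4)
a.put(5, 42)            # writes into rank 2's block
print(a.get(5))         # 42
```

Because any rank may touch any index at any time, PGAS codes generate fine-grained remote traffic, which is why hardware support for the model is a network feature worth a conference talk.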
The keynotes on "The IBM Blue Gene/Q Interconnection Network and Message Unit", "The Computer that is the Network: A Future History of Big Systems", and "Will interconnect help or limit the future of computing?" will provide a comprehensive vision of where HPC networking is going.
We will also feature two intriguing panels on data-center and cloud networking, with representatives from the big players: Cisco, HP, Mellanox, Intel, Arista, Avaya, IBM, Microsoft, and Facebook. In addition, the conference will offer great tutorials on OpenFlow, InfiniBand, and large-scale data-center and cloud networking.
We have never had such dynamic developments to hear about and debate!