Improving data communication performance in HPC has turned out to be one of the most difficult challenges for system designers. As a result, the topic is getting a lot of attention from academic researchers around the world. Some of that work will be presented at this year’s ISC High Performance conference in Frankfurt, Germany, where University of Heidelberg Professor Holger Fröning will be chairing a session on future high performance networks and interconnects. We caught up with Fröning recently and asked him about the kinds of innovations that we can look forward to and how these fit into today’s technologies like PCIe and Ethernet.
ISC: Providing faster data movement has emerged as the central challenge for many HPC applications. How will computer architectures have to adapt?
Holger Fröning: Data movement is crucial with respect to both energy and time, an observation that has been made repeatedly for quite some time now. Few technological solutions seem to exist to overcome this problem, with optical transmission perhaps the only feasible one. Computer architectures try to diminish the problem through tighter integration, since that reduces average communication distances and thus the impact on transmission time and energy. At the same time, applications have to optimize for locality to reduce the amount of communication as much as possible. Programming models like CUDA and OpenCL foster this, as they require the programmer to specify locality explicitly. In the end, this challenge requires optimization at all levels, not just in computer architecture.
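To make the locality point concrete, here is a minimal CUDA sketch (our illustration, not code from Fröning's group): the programmer must explicitly stage data into device memory before computing on it and copy results back afterwards, so every transfer is a visible, deliberate step.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Trivial kernel: operates only on data already resident in device memory.
__global__ void scale(float *v, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    // Locality is explicit: d lives in device memory, and every
    // host<->device transfer is spelled out by the programmer.
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice); // explicit movement in

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);     // compute near the data

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost); // explicit movement out
    printf("h[0] = %f\n", h[0]);                     // prints 2.000000

    cudaFree(d);
    free(h);
    return 0;
}
```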
ISC: How much progress is being made on the on-chip and off-chip data bottlenecks? What do you think are the most promising approaches?
Fröning: I am still surprised by how much progress electrical transmission has made in recent years, in particular regarding equalization technology; I didn't expect such developments over the last decade. At the same time, the effective maximum length for electrical transmission will continue to decrease, making the case for optical transmission stronger. However, even for such a technology electrical serialization remains mandatory, and we will have to see whether the energy saved by optical transmission or the energy added by electro-optical conversion dominates.
ISC: How are heterogeneous architectures influencing interconnect designs?
Fröning: This is a highly interesting question, and it directly targets the main research focus of my group at Heidelberg University. There is a significant influence, though of course not on switch or link designs, as these components only see packets. But our research over the last three years has shown that specialized processors require specialized communication models and methods for utmost performance and energy efficiency. The network interface has to be aware of the processor type that is sourcing and sinking traffic, and provide the right interface to handle interactions with that processor in the best way. And if you extend this exploration toward collective communication, the switch design of course comes back into play. We will have to explore how this relates to heterogeneity.
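As one concrete instance of a processor-aware interface, the sketch below uses CUDA-aware MPI (an assumption about the MPI build, e.g., Open MPI or MVAPICH2 with GPUDirect support; this is our illustration, not the group's own interface): the application hands a device pointer directly to the communication layer, which then treats the GPU as the source and sink of traffic instead of forcing the programmer to stage data through host memory.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Run with two ranks, e.g.: mpirun -np 2 ./gpu_sendrecv
// Requires a CUDA-aware MPI library (an assumption about the build).
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *d_buf;                              // buffer in GPU memory
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));   // give the payload a defined value

    if (rank == 0) {
        // Device pointer passed straight to MPI: the interface, not the
        // programmer, handles the GPU as the traffic source.
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```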
ISC: Better interconnect performance is often tied to higher power consumption. What types of technologies are on the horizon to improve both speed and energy efficiency?
Fröning: I don't see good candidates other than optical transmission. The current trend of decreasing lengths for efficient electrical transmission will continue, and soon the case for commodity optical solutions will be made even within racks, in terms of performance, energy, and perhaps even cost. This is particularly true once you have to replace standard board material like FR-4 with high-frequency materials, which are cumbersome to handle and expensive to manufacture.
ISC: How will industry-standard interconnects like PCIe and Ethernet fare in the future? Will they be part of the solution or will they hinder innovation?
Fröning: We are facing an era of huge technological diversity. I can't remember ever having so many possibilities to mix and match clusters and other computing systems as I want, or better said, as my applications want. Given that, it is mandatory to rely on industry standards to interconnect these different components. I clearly see PCIe as part of the solution, even though it may be complemented by specialized solutions. Ethernet is part of almost any computing system, even if only for control and monitoring purposes. And while it is the prime choice for low-cost solutions, in my opinion it is optimized for the mass market, resulting in too many compromises when it comes to HPC and its special demands.