Formed in 2006, the Ethernet Alliance is a non-profit industry group dedicated to advancing Ethernet technology through initiatives that improve interoperability and network performance. The group's original focus was on bringing Ethernet into the mainstream, but it has since shifted to encouraging the development of new Ethernet technologies in the face of skyrocketing demand for bandwidth.
John D'Ambrosia, chair of the Ethernet Alliance, weighed in on the group's focus at SC11, expanding on its interoperability goals and describing the overall role of Ethernet technologies in HPC.
HPCwire: What is the Ethernet Alliance demo showcasing at SC11?
John D'Ambrosia: The Ethernet Alliance is hosting an integrated, multi-vendor demo at SC11 showcasing Ethernet as the single protocol capable of meeting all datacenter needs. Ethernet, with its broad family of solutions and its roadmap to ever-higher speeds, is that protocol.
The demo highlights Ethernet's capacity for seamless interoperability, featuring dependable, high-performance, low-cost solutions such as 10GBASE-T as well as advancements like 40 Gigabit Ethernet (40 GbE). Data center architects can continue to rely on Ethernet, looking to enhanced and emerging Ethernet transport technologies to achieve their ultimate goals.
The display further demonstrates 40 GbE as the next throughput and bandwidth stepping stone for data center applications, which in turn establishes the upgrade path to 100 Gigabit Ethernet (100 GbE).
HPCwire: What Ethernet technologies are gaining in importance in HPC?
D’Ambrosia: There are several important technologies beginning to take hold in the HPC space. For example, RDMA over Converged Ethernet (RoCE) is a relatively new but promising transport that continues to gain traction in today’s datacenters.
Internet Wide Area RDMA Protocol (iWARP) is a proven remote direct memory access (RDMA) transport over Ethernet that has been ratified by the Internet Engineering Task Force (IETF). Providing cloud-ready transport, with several large clusters scaled to thousands of nodes already in use, it eliminates the need for esoteric, risky networking and storage technologies that require a complex amalgamation of routers, gateways, switches, software, and expertise to make HPC clusters excel.
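As an illustrative aside (not part of the interview): much of the appeal of Ethernet RDMA transports like iWARP and RoCE is that applications program against a single vendor-neutral API, the OpenFabrics verbs interface, regardless of which transport sits underneath. A minimal sketch of opening an RDMA device and registering a buffer with libibverbs, assuming an RDMA-capable NIC is present, might look like this:

```c
/* Minimal sketch: open the first RDMA-capable device (iWARP or RoCE)
 * and register a buffer for remote access via the libibverbs API.
 * Illustrative only; real applications also create completion queues,
 * queue pairs, and connections. Compile with: gcc rdma_open.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* The same calls work whether the device speaks iWARP or RoCE. */
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register memory so the NIC can read and write it directly,
     * bypassing the kernel on the data path. */
    char *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered 4 KiB buffer, rkey=0x%x\n", mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    free(buf);
    return 0;
}
```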
Before the ratification of Data Center Bridging (DCB) in 2010, most datacenters relied on Fibre Channel (FC) for lossless storage environments that could be used with confidence. With the advent of DCB, Fibre Channel over Ethernet (FCoE) has become a reality: enterprise datacenter architects can leverage current Fibre Channel investments while capitalizing on greater freedom of choice. It is now possible to migrate to increasingly popular Ethernet SAN and NAS file systems, yet maintain the lossless environment required for storage. Furthermore, with today's ratified Ethernet-based iSCSI and FCoE storage transports, datacenter architects can choose interoperable, standards-based products from a diverse array of vendors.
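To make the mechanism concrete (a hypothetical sketch, not a complete DCB configuration): DCB's Priority-based Flow Control operates per 802.1p priority class, and on Linux an application can mark a socket's traffic with such a priority via the standard SO_PRIORITY socket option. Priority 3, used below, is the class commonly reserved for FCoE/storage traffic; the NIC and switch must be configured for DCB separately.

```c
/* Hypothetical sketch: tag a socket's traffic with a packet priority
 * so that, with an egress priority-to-PCP mapping, it carries the VLAN
 * 802.1p class that a DCB-enabled switch keeps lossless via PFC. */
#include <sys/socket.h>
#include <stdio.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Priority 3 is the class commonly assigned to storage traffic. */
    int prio = 3;
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0) {
        perror("setsockopt(SO_PRIORITY)");
        return 1;
    }
    printf("traffic on this socket marked priority %d\n", prio);
    return 0;
}
```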
10GBASE-T illustrates one of Ethernet's solutions for deploying higher speeds even in conventional IT LAN environments. Furthermore, Ethernet, with its 40 GbE and 100 GbE families, is keeping pace with the continuing evolution of the PCIe bus on the motherboard, enabling 40 GbE- and 100 GbE-based servers in the future.
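A rough back-of-the-envelope check on that claim (per direction, counting only line-encoding overhead): a PCIe 2.0 lane runs at 5 GT/s with 8b/10b encoding, yielding 4 Gb/s usable, while a PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, about 7.88 Gb/s. The short program below tabulates what common slot widths can feed:

```c
/* Back-of-the-envelope: usable PCIe bandwidth per direction versus
 * Ethernet line rates, counting only line-encoding overhead. */
#include <stdio.h>

int main(void)
{
    /* generation, GT/s per lane, encoding efficiency */
    struct { const char *gen; double gts; double eff; } gens[] = {
        { "PCIe 2.0", 5.0, 8.0 / 10.0 },   /* 8b/10b    */
        { "PCIe 3.0", 8.0, 128.0 / 130.0 } /* 128b/130b */
    };
    int widths[] = { 8, 16 };

    for (int g = 0; g < 2; g++)
        for (int w = 0; w < 2; w++) {
            double gbps = gens[g].gts * gens[g].eff * widths[w];
            printf("%s x%-2d : %6.1f Gb/s usable (40 GbE %s, 100 GbE %s)\n",
                   gens[g].gen, widths[w], gbps,
                   gbps >= 40.0 ? "OK" : "short",
                   gbps >= 100.0 ? "OK" : "short");
        }
    return 0;
}
```

The arithmetic bears out the roadmap pairing: a PCIe 2.0 x8 slot (32 Gb/s) cannot keep a 40 GbE port busy, PCIe 3.0 x8 (about 63 Gb/s) can, and PCIe 3.0 x16 (about 126 Gb/s) covers 100 GbE, which is why NIC and motherboard roadmaps track each other.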
HPCwire: Why is interoperability so important?
D'Ambrosia: Interoperability is critical not only because it offers consumers the ability to find solutions that best fit their needs, but also because it minimizes the threat of being locked into a single vendor or proprietary technology, an undesirable situation for a myriad of reasons.
Proprietary, non-standards-based technologies can trap users in a one-dimensional world with few choices beyond the chosen proprietary technology and little ability to move to a new one that better fits evolving datacenter needs. Choosing an Ethernet solution, by contrast, lets architects select product offerings from multiple vendors.
HPCwire: Can you describe the migration path for HPC applications?
D'Ambrosia: For HPC computational clusters in particular, Ethernet has numerous advantages and unparalleled flexibility that suit supercomputing well, both today and far into the future.
As previously mentioned, iWARP is well-established, cloud-ready, supported by multiple chip vendors, and already deployed in several large clusters. The newly defined RoCE protocol also allows InfiniBand users to migrate easily to Ethernet, casting off the special switches and gateways required when running multiple protocols.
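One reason that migration tends to be painless in practice (again an illustrative aside, not part of the interview): the librdmacm connection manager resolves ordinary IP addresses to RDMA routes, so the same application code runs unchanged over InfiniBand, RoCE, or iWARP. A minimal sketch of the address-resolution step, where the address 192.0.2.10 and port 7471 are documentation placeholders:

```c
/* Sketch: librdmacm resolves an ordinary IP address to an RDMA route,
 * so one code path serves InfiniBand, RoCE, and iWARP fabrics.
 * Compile with: gcc cm_resolve.c -lrdmacm */
#include <rdma/rdma_cma.h>
#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    rdma_create_id(ch, &id, NULL, RDMA_PS_TCP);

    /* Placeholder destination; any RDMA-reachable IP works the same way. */
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(7471) };
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);

    /* Asynchronous: a successful return means resolution has started;
     * completion is reported on the event channel, after which the
     * application resolves the route and connects. */
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000) == 0)
        printf("resolution initiated; transport chosen by the fabric\n");

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}
```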
HPCwire: What’s the most important take-away today about Ethernet for anyone in HPC? Where do you see it going in the future?
D'Ambrosia: The most important take-away by far is that Ethernet, though developed nearly 40 years ago at Xerox PARC in Palo Alto, CA, continues to evolve and adapt as the mainstay for everyone's networking needs.
The current Ethernet roadmap leads from 1G LAN on Motherboard (LOM) to 10, 40, and 100 GbE. It is a real-world-tested, proven, ubiquitous protocol capable of meeting both current and future networking needs, ranging from supercomputing down to consumer LANs. Additionally, Ethernet's ability to adapt to new and future datacenter needs spares users costly investments in new technologies, such as new equipment, software, and the expertise to run them. And with its unique range of application, from supercomputers to home networks, Ethernet's technological superiority remains unmatched.