November 15, 2011
Formed in 2006, the Ethernet Alliance is the non-profit industry group dedicated to advancing Ethernet technology via initiatives aimed at improving interoperability and network performance. The original focus of the group was on bringing Ethernet into the mainstream, but the Ethernet Alliance has since moved forward to encourage the development of new Ethernet technologies in the face of skyrocketing demand for bandwidth.
John D’Ambrosia, chair of the Ethernet Alliance, weighed in on the group’s focus at SC11, expanding on its interoperability goals and describing the overall role of Ethernet technologies in HPC.
HPCwire: What is the Ethernet Alliance demo showcasing at SC11?
John D’Ambrosia: The Ethernet Alliance is hosting an integrated, multi-vendor demo at SC11 showcasing Ethernet as the optimal protocol for all datacenter needs, thanks to its broad family of solutions and its roadmap to ever-higher speeds.
The demo highlights Ethernet’s capacity for seamless interoperability, featuring dependable, high-performance, low-cost solutions like 10GBASE-T as well as advancements like 40 Gigabit Ethernet (40 GbE). Data center architects can continue to rely on Ethernet, looking to enhanced and emerging Ethernet transport technologies to achieve their ultimate goals.
The display further demonstrates 40 GbE as the next bandwidth stepping stone for data center applications, one that establishes the upgrade path to 100 Gigabit Ethernet (100 GbE).
HPCwire: What Ethernet technologies are gaining in importance in HPC?
D’Ambrosia: There are several important technologies beginning to take hold in the HPC space. For example, RDMA over Converged Ethernet (RoCE) is a relatively new but promising transport that continues to gain traction in today’s datacenters.
Internet Wide Area RDMA Protocol (iWARP) is a proven remote direct memory access (RDMA) transport over Ethernet that has been ratified by the Internet Engineering Task Force (IETF). Providing cloud-ready transport, with several large clusters scaled to thousands of nodes already in use, it eliminates the need for esoteric, risky networking and storage technologies that require a complex amalgamation of routers, gateways, switches, software, and expertise to make HPC clusters excel.
Before the ratification of Data Center Bridging (DCB) in 2010, most datacenters had relied on Fibre Channel (FC) for lossless storage environments that could be used with confidence. With the advent of DCB, Fibre Channel over Ethernet (FCoE) has become a reality – enterprise datacenter architects can leverage current Fibre Channel investments while capitalizing on greater freedom of choice. It is now possible to migrate to increasingly popular Ethernet SAN and NAS file systems, yet maintain the lossless environment required for storage. Furthermore, with today’s ratified Ethernet-based iSCSI and FCoE storage transports, datacenter architects can choose from a diverse array of interoperable, standards-based vendors.
10GBASE-T illustrates one of Ethernet’s solutions for deploying higher speeds even in conventional IT LAN environments. Furthermore, Ethernet, with its 40 GbE and 100 GbE families, is keeping pace with the continuing evolution of the PCIe bus on the motherboard, thus enabling 40 GbE- and 100 GbE-based servers in the future.
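To see roughly how PCIe generations line up with these Ethernet rates, consider the back-of-the-envelope calculation below. The per-lane figures (PCIe 2.0 at 5 GT/s with 8b/10b encoding, PCIe 3.0 at 8 GT/s with 128b/130b) are commonly cited numbers assumed here for illustration; they do not come from the interview itself.

```python
# Back-of-the-envelope: effective PCIe slot bandwidth vs. Ethernet line rates.
# The per-lane raw rates and encoding overheads below are commonly cited
# figures, used as illustrative assumptions rather than quoted from the text.

PCIE_LANE_GBPS = {
    "PCIe 2.0": 5.0 * (8 / 10),     # 5 GT/s, 8b/10b encoding -> 4.0 Gb/s/lane
    "PCIe 3.0": 8.0 * (128 / 130),  # 8 GT/s, 128b/130b -> ~7.88 Gb/s/lane
}

def slot_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Effective one-direction bandwidth of a PCIe slot in Gb/s."""
    return PCIE_LANE_GBPS[gen] * lanes

# A Gen3 x8 slot comfortably feeds a 40 GbE NIC; x16 covers 100 GbE.
for gen, lanes, target in [("PCIe 3.0", 8, 40), ("PCIe 3.0", 16, 100)]:
    bw = slot_bandwidth_gbps(gen, lanes)
    print(f"{gen} x{lanes}: {bw:.1f} Gb/s (target {target} GbE)")
```

Under these assumptions, a Gen3 x8 slot delivers about 63 Gb/s, enough headroom for a 40 GbE port, while 100 GbE calls for a wider x16 slot.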
HPCwire: Why is interoperability so important?
D’Ambrosia: Interoperability is critical not only because it offers consumers the ability to find solutions that best fit their needs, but also minimizes the threat of being locked into a single vendor or proprietary technology – undesirable situations for a myriad of reasons.
Proprietary, non-standards-based technologies can trap users in a one-dimensional world with few choices beyond the chosen proprietary technology and no ability to move to a new one that better fits evolving datacenter needs. Choosing an Ethernet solution enables selecting product offerings from multiple vendors.
HPCwire: Can you describe the migration path in HPC applications?
D’Ambrosia: For HPC computational clusters in particular, Ethernet has numerous advantages and unparalleled flexibility that suit supercomputing well, both today and far into the future.
As previously mentioned, iWARP is well established, cloud-ready, supported by multiple chip vendors, and deployed in several large clusters. The newly defined RoCE protocol also allows InfiniBand users to migrate easily to Ethernet, casting off the special switches and gateways required when running multiple protocols.
HPCwire: What's the most important take-away today about Ethernet for anyone in HPC? Where do you see it going in the future?
D’Ambrosia: The most important takeaway by far is that Ethernet, though developed by Xerox PARC in Palo Alto, CA, nearly 40 years ago, is continually evolving and adapting as the mainstay for everyone’s networking needs.
The current Ethernet roadmap leads from 1G LAN on Motherboard (LOM) to 10, 40, and 100 GbE. It is a real-world-tested, proven, ubiquitous protocol capable of meeting both current and future networking needs, ranging from supercomputing down to consumer LANs. Additionally, Ethernet’s ability to adapt to new and future datacenter needs avoids costly investments in new technologies – new equipment, new software, and the expertise to run them. And with its unique range of application, from supercomputers to home networks, Ethernet’s technological superiority remains unmatched.
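To put that 1G-to-100G roadmap in concrete terms, the quick calculation below (an idealized illustration, not from the interview) shows how long moving a fixed dataset would take at each rung, assuming the link itself is the only bottleneck.

```python
# Idealized wire time to move a 1 TB dataset at each rung of the
# Ethernet roadmap (1, 10, 40, 100 GbE), ignoring protocol overhead.
# All figures are illustrative assumptions, not from the interview.

DATASET_BITS = 1e12 * 8  # 1 TB (decimal terabyte) expressed in bits

def transfer_seconds(rate_gbps: float) -> float:
    """Idealized transfer time in seconds at a given line rate in Gb/s."""
    return DATASET_BITS / (rate_gbps * 1e9)

for rate in [1, 10, 40, 100]:
    print(f"{rate:>3} GbE: {transfer_seconds(rate):7.1f} s")
```

The jump from 1 GbE to 100 GbE cuts the idealized wire time from over two hours to under a minute and a half, which is the practical meaning of the upgrade path described above.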