10GbE Networking for HPC — Applications and Technology Trends

By Saqib Jang

June 15, 2009

The first in a two-part series, this article examines the drivers for 10GbE deployment in high-performance cluster computing (HPCC) environments and related technology trends.

Ten Gigabit-per-second Ethernet (10GbE) represents the next level of Ethernet network bandwidth, with networking vendors promoting it as the next great capability. But high-performance computing (HPC) infrastructure and operations professionals must strike a balance between constant operational improvement and sound financial decision-making. So far, 10GbE has been a high-end luxury for environments that want maximum performance regardless of cost, but that’s changing fast. The per-port pricing gap between 10GbE and alternative network options is narrowing rapidly as more vendors increase the competitive pressure on pricing for related components.

So where will this technology truly matter for HPC environments? This article examines the impact of 10GbE on HPC infrastructure and provides guidance for the most effective transformation of your network. The initial focus is on the top drivers and applications for 10GbE deployment in HPC environments, followed by a review of the leading technology trends shaping 10GbE NIC designs. The next article in the series will examine the major offerings in the 10GbE NIC area.

Network Convergence for HPC Datacenters

Clusters of commodity servers have rapidly evolved into a highly cost-effective form of supercomputer. As the technology has matured and costs have declined, enterprises across a wide range of industries have begun leveraging HPC for product design and simulation, data analysis, and other highly compute-intensive applications that were previously beyond the reach of IT budgets. Off-the-shelf clusters frequently use Gigabit Ethernet as the cluster interconnect technology, but a number of cluster vendors are exploiting more specialized cluster interconnect fabrics that feature very low message-passing latency.

Although Ethernet has been the de facto technology for the general-purpose LAN, Gigabit Ethernet has been considered a sub-optimal switching fabric for very high performance cluster interconnect and storage networking. This is due primarily to performance: GbE offers lower bandwidth than InfiniBand and Fibre Channel, and typically exhibits significantly higher end-to-end latency and CPU utilization.

However, this situation has changed dramatically due to recent developments in low-latency 10 GbE switching and intelligent Ethernet NICs that offload cluster and storage protocol processing from the host processor. These enhancements allow server end systems to fully exploit 10 GbE line rates, while reducing one-hop end-to-end latency to less than 10 microseconds and CPU utilization for line-rate transfers to less than 10 percent.

As a result, 10 GbE end-to-end performance now compares very favorably with that of more specialized datacenter interconnects, eliminating performance as a drawback to the adoption of an Ethernet unified datacenter fabric. Off-loading cluster and storage protocol processing from the central CPU to intelligent 10GbE NICs can also improve the power efficiency of end stations, because off-load ASIC processors are generally considerably more power efficient in executing protocol workloads.

10GbE R-NICs for Low-Latency IPC

Traditionally, TCP/IP protocol processing has been performed in software by the end system’s CPU. The load on the CPU increases linearly as a function of packets processed, with the usual rule of thumb being that each bit per second of bandwidth consumes about a Hz of CPU clock (e.g., 1 Gbps of network traffic consumes about 1 GHz of CPU). As more of the host CPU is consumed by the network load, both CPU utilization and host send/receive latency become significant issues.
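
To make that rule of thumb concrete, the short C sketch below estimates the CPU clock consumed by software TCP/IP processing at GbE and 10GbE line rates. The 3 GHz core clock is an assumed figure chosen for illustration, not a measurement from the article.

/* Back-of-envelope illustration of the "1 Hz per bit/s" rule of thumb.
 * The link rates and the 3 GHz core clock are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double link_gbps[] = { 1.0, 10.0 };     /* GbE and 10GbE line rates   */
    double core_clock_ghz = 3.0;            /* assumed host core clock    */

    for (int i = 0; i < 2; i++) {
        double ghz_consumed = link_gbps[i]; /* ~1 GHz per Gbps of traffic */
        double cores_needed = ghz_consumed / core_clock_ghz;
        printf("%.0f Gbps of software TCP/IP ~ %.1f GHz ~ %.1f cores at %.1f GHz\n",
               link_gbps[i], ghz_consumed, cores_needed, core_clock_ghz);
    }
    return 0;
}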

Over the last few years, vendors of intelligent Ethernet NICs, together with the RDMA Consortium and the IETF, have been working on specifications for hardware-accelerated RDMA over TCP/IP (or iWARP) protocol stacks that can support the ever-increasing performance demands of cluster inter-process communications (IPC) over 10 GbE.

An RDMA over TCP/IP NIC (or R-NIC) provides hardware support for a remote direct memory access (RDMA) mechanism. An R-NIC allows a server to read/write data directly between its user memory space and the user memory space of another R-NIC-enabled host on the network, without any involvement of the host operating systems.

R-NICs provide an OS kernel bypass mechanism that allows applications running in user space to post read/write commands that are transferred directly to the NIC. This eliminates the delay and overhead associated with copy operations among multiple buffer locations, kernel transitions and application context switches. R-NICs can reduce CPU utilization for 10 Gbps transfers to less than 10 percent and can reduce the host component of end-to-end latency to as little as 5–10 microseconds.
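
As an illustration of the programming model, the following is a minimal sketch of posting an RDMA write through the OpenFabrics verbs API (libibverbs), which iWARP R-NICs expose. Queue-pair setup, memory registration, and the exchange of the peer's buffer address and remote key are assumed to have been completed elsewhere; the function and parameter names are illustrative, not taken from any particular vendor's code.

/* Sketch: posting an RDMA write from user space via libibverbs.
 * Connection setup and ibv_reg_mr() are assumed done elsewhere. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp,      /* connected queue pair      */
                    void *local_buf,        /* registered local buffer   */
                    uint32_t len,
                    uint32_t lkey,          /* from ibv_reg_mr() on buf  */
                    uint64_t remote_addr,   /* peer's buffer address     */
                    uint32_t rkey)          /* peer's remote key         */
{
    struct ibv_sge sge;
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&sge, 0, sizeof(sge));
    sge.addr   = (uintptr_t)local_buf;
    sge.length = len;
    sge.lkey   = lkey;

    memset(&wr, 0, sizeof(wr));
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE; /* write directly into peer memory */
    wr.send_flags          = IBV_SEND_SIGNALED; /* request a completion entry      */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* The work request is handed to the NIC from user space; no system
     * call or kernel data copy is involved on this fast path. */
    return ibv_post_send(qp, &wr, &bad_wr);
}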

To reduce latency and maximize performance, cluster applications use the industry-standard Message Passing Interface (MPI) middleware, which is implemented atop iWARP and other RDMA transports. Use of MPI removes the need for developers to understand the details of the particular cluster interconnect.
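
To show what that interconnect independence looks like in practice, here is a minimal MPI ping-pong sketch; the message size and iteration count are arbitrary illustration values. The same source compiles and runs unchanged whether the underlying MPI transport is iWARP, InfiniBand, or plain TCP.

/* Minimal MPI ping-pong: rank 0 reports the measured half round-trip
 * latency. No interconnect-specific calls appear in the code. */
#include <mpi.h>
#include <stdio.h>

#define ITERS     1000
#define MSG_BYTES 8

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_BYTES] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("half round-trip latency: %.2f usec\n",
               (t1 - t0) * 1e6 / (2.0 * ITERS));

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with mpirun -np 2, the benchmark measures only what the MPI library and the underlying transport deliver; switching from TCP to an iWARP R-NIC requires no source change.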

There are many MPI variants, some of which are vendor-specific, while others are open source. The former include Intel MPI, HP MPI, Platform (Scali) MPI, and MPI/Pro. Popular open source MPI variants include MPICH and LAM/MPI.

The OpenFabrics Alliance is developing open-source middleware APIs for iWARP and other RDMA transports. The OpenFabrics stack includes user-level (uDAPL) and kernel-level (kDAPL) intermediate APIs that run atop RDMA transports, including iWARP. Most of the popular MPI packages now support the OpenFabrics APIs, removing the need for 10GbE iWARP NIC hardware vendors to directly support MPI middleware. The OpenFabrics Alliance has also taken the step of offering a fully validated OpenFabrics Enterprise Distribution (OFED) stack for Linux.

Enter 10GbE iSCSI and FCoE: Enablers for Storage I/O Consolidation

The concept behind I/O consolidation is simple: sharing storage and networking traffic on the same physical Ethernet cable or, where network isolation is desired, having the flexibility to configure and use the same hardware for either type of network load, with traffic delivery prioritized through quality of service (QoS) mechanisms. The benefits end users will realize from this simple idea are significant.

Companies that leverage I/O consolidation can realize significant gains in server slot efficiency by using multi-function network/storage adapters, simplifying the cabling scheme within a rack and reducing the amount of heat each server generates.

The dominant approach to storage I/O consolidation has been iSCSI (Internet SCSI), a flexible and powerful storage area networking (SAN) protocol that provides improved data availability and performance compared with other Ethernet-based storage approaches such as network-attached storage (NAS). iSCSI replaces the Fibre Channel stack with the standard networking TCP/IP stack in order to transport storage traffic over standard, lower-cost Ethernet.

Customers in entry-level, mid-range, and high-end segments are building flexible storage infrastructures using iSCSI to allocate and shift resources dynamically to cost-effectively meet the storage demands of their compute cluster environments.

A number of 10 GbE NICs are available that provide hardware-based iSCSI offload, including the comprehensive bare-metal provisioning and management capabilities that come from hardware-based boot-from-SAN support in the NIC. With hardware-based iSCSI offload, SCSI commands issued by the OS are offloaded to the 10GbE NIC, converted into TCP/IP packets, and transmitted to the iSCSI storage target that hosts the disks. To the OS, the remote storage device appears as a locally attached SCSI device. Hardware-based iSCSI offload also enables OS-agnostic boot-from-SAN, which effectively removes the need for any direct-attached storage in the server and moves the software image into a centralized iSCSI SAN.

Although successful in supporting block storage for a broad range of HPC applications, iSCSI has not been adopted for the most demanding workloads, such as data mining and decision support, due to performance, resilience, and manageability concerns specific to those applications.

The value proposition of the emerging FCoE standard is based primarily on the elimination of the expensive FC infrastructure components in datacenters, which are currently used to connect servers running high-end applications to their networked storage systems. Since FCoE requires 10GbE (with Enhanced Ethernet extensions in both the NICs and the switches), its deployment is not expected until 2010, and it is likely to remain an expensive niche interconnect for the foreseeable future.

While FCoE aims to eliminate the FC infrastructure by unifying the storage and networking interconnect into a single 10GbE fabric, it will also provide investment protection for many years — particularly at the storage end of the Fibre Channel SAN via Enhanced Ethernet-to-Fibre Channel switches/gateways.

10GbE Server Connectivity Standards

Most new 10GbE controllers and adapters support dual ports on the network side. Some 10GbE controllers use the second port only for failover, whereas the trend is for dual-port 10GbE NICs to support active/active configurations (i.e., concurrent operation for both ports).

For host-side connectivity, 10GbE NICs support PCI Express, the industry-standard CPU-to-I/O serial interconnect in volume servers. The available 10GbE NICs support a mix of PCIe v1.1 and the second generation of PCIe, PCIe v2.0 or PCIe Gen2, which was finalized by the PCI-SIG in January 2007 and is rapidly gaining market traction.

Intel started shipping the X38, the first system-logic chipset supporting PCIe Gen2, in September 2007, with AMD and NVIDIA following suit shortly thereafter. In March 2009, Intel's dual-socket (2P) and quad-socket (4P) server platforms completed the transition to PCIe Gen2.

PCIe Gen2 doubles the signaling rate of each lane of the scalable serial interface to 5 Gb/s in each direction, compared with 2.5 Gb/s per lane for PCIe Gen1 devices. Because both generations use 8b/10b encoding (every 10 bits sent carry 8 bits of data, so the useful data rate is four-fifths of the raw rate), a PCIe Gen2 x8 slot can support an effective throughput of 32 Gb/s in each direction, compared with 16 Gb/s for a PCIe v1.1 x8 slot.
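
As a quick cross-check of those figures, the sketch below reproduces the 8b/10b arithmetic for a hypothetical x8 slot; the lane count is chosen purely for illustration.

/* Worked example of the PCIe throughput arithmetic above: 8b/10b line
 * coding delivers 8 data bits for every 10 wire bits, so effective
 * throughput is 0.8 x the raw signaling rate. */
#include <stdio.h>

int main(void)
{
    const double gen1_raw = 2.5, gen2_raw = 5.0; /* Gb/s per lane, per direction */
    const int lanes = 8;                         /* a typical x8 NIC slot        */
    const double efficiency = 8.0 / 10.0;        /* 8b/10b coding overhead       */

    printf("x8 Gen1: %.0f Gb/s effective per direction\n",
           gen1_raw * lanes * efficiency);       /* 2.5 * 8 * 0.8 = 16 Gb/s */
    printf("x8 Gen2: %.0f Gb/s effective per direction\n",
           gen2_raw * lanes * efficiency);       /* 5.0 * 8 * 0.8 = 32 Gb/s */
    return 0;
}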

Traditionally, blade servers have used GbE as the backplane fabric to connect server blades within the chassis. For these systems, blade system manufacturers open up their system designs to enable third parties to offer Fibre Channel and InfiniBand adapters for storage and clustering applications, respectively.

An emerging alternative, gaining ground in blade servers, is to provide a 10GbE backplane rather than adding parallel fabrics for networking, storage, and clustering. A 10GbE backplane enables the use of the iSCSI and iWARP storage and clustering protocols, thereby eliminating the power and space requirements of multiple fabrics and controllers.

During 2007, HP and IBM introduced blade server designs with 1/10GbE backplanes and mezzanine card offerings for 10GbE connectivity. In September 2008, HP introduced a blade server model that integrates a dual-port 10GbE controller as a default capability. First-generation 10GbE-based blade server designs use the 10GBASE-KX4 backplane standard, which uses four 3.125 Gbps SerDes links and in which the network-side XAUI port of the 10GbE NIC connects to a 10GBASE-KX4-to-XAUI backplane PHY.

The blade server market is rapidly moving toward next-generation designs that use 10 Gbps serial links based on the 10GBASE-KR standard, which reduces the number of backplane traces. In this case, the XAUI ports of 10GbE NICs connect to 10GBASE-KR-to-XAUI backplane PHYs.

Contrasting 10GbE Physical Layer Options

The 10 Gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards, including optical and copper cabling options. As of 2009, 10 Gigabit Ethernet is still an emerging technology, with only two million ports shipped in 2008, the overwhelming majority of which had optical PHYs. Additionally, the majority of these 10GbE ports shipped into switch applications, with only about 5 percent shipping into server interconnects (NICs).

Each successive generation of 10GbE optical modules has resulted in lower power dissipation, smaller footprint, and therefore higher port density. Most importantly for widespread deployment, the cost of these modules has declined by two orders of magnitude since 2002. As a result, 10GbE is nearing the price points required for mass adoption in HPC networks. Typically, adoption of a new generation of Ethernet technology occurs when IT managers can buy a 10X increase in bandwidth for a 3X-4X increase in price.

SFP+ is the most recent, state-of-the-art optical module form factor and offers a significant improvement in footprint over the earlier XFP standard, allowing dense 24-port and 48-port top-of-rack switches and half-height PCIe NICs for datacenter server interfaces.

The SFP+ form factor has been defined to support both optical interfaces and copper (twin-ax) 10GbE serial connections for distances up to 10 meters, for example, for connections to a top-of-rack switch and between racks within a datacenter. This "direct attach copper" (DAC) configuration further reduces the cost of the module by removing the transmit and receive optical subassemblies. SFP+ optical modules typically dissipate less than 1 W of power, while DAC versions have no active power-consuming components.

The key features enabling DAC connections of over 10m are i) electronic dispersion compensation (EDC), ii) 10G transmit pre-emphasis, and iii) forward error correction (FEC). NetLogic Microsystems has demonstrated up to 20m transmission using these three features. Other companies with similar devices include Broadcom and AMCC, although neither has shown equivalent DAC distances for similar power dissipation numbers.

While SFP+ is rapidly gaining traction for 10GbE HPC datacenter applications due to its lower cost and power characteristics, the alternative 10GBASE-T technology for 10GbE transmission over Category 6/7 twisted-pair copper cabling has been lagging in gaining market acceptance. 10GBASE-T has proven to be a challenging technology to implement in a cost- and power-effective way, because of the complexity of the signal processing required to overcome the bandwidth limitations and noise characteristics of the twisted-pair medium.

Apart from its longer link distances, 10GBASE-T suffers from several inherent disadvantages relative to SFP+ DAC. The most significant is power. Even using the latest IC process technology, 10GBASE-T PHYs dissipate around 6 W of power. This has effectively ruled out current 10GBASE-T offerings as a viable technology for the emerging generation of dual-port NIC cards and high-density switches, relegating them to single-port adapters and uplinks.

Another significant disadvantage of 10GBASE-T is its latency of approximately 2 µsec, which severely limits its applicability for the IPC and storage workloads found in HPC datacenters. The latency of SFP+ is less than 0.1 µsec across the 10GbE standards, and lower still for the short-reach variants used in datacenter applications. On the power side, the combination of an SFP+ module and PHY for datacenter applications dissipates 1-1.5 W, depending on reach.

Next-generation 10GBASE-T PHY development is underway today, with several companies working on solutions to break the 4 W per-port power dissipation barrier. It remains to be seen how HPC environments will adopt these solutions when they become available; the trade-offs include cost, power dissipation, latency, and distance. In the meantime, short (15m and less), low-cost copper connections within the server rack and from server to end-of-row switch can be implemented with DAC and short-range optical interconnects.

About the Author

Saqib Jang is founder and principal at Margalla Communications, a Woodside, Calif.-based strategic and technical marketing consulting firm focused on storage and server networking. He can be contacted at saqibj@margallacomm.com.
