10GbE Networking for HPC — Applications and Technology Trends

By Saqib Jang

June 15, 2009

The first in a two-part series, this article examines the drivers for 10GbE deployment for high-performance cluster computing (HPCC) environments and related technology trends.

Ten Gigabit-per-second Ethernet (10GbE) represents the next level of Ethernet network bandwidth, with networking vendors promoting it as the next great capability. But high-performance computing (HPC) infrastructure and operations professionals must strike a balance between constant operational improvement and sound financial decision-making. So far, 10GbE has been a high-end luxury for environments that want maximum performance regardless of cost, but that’s changing fast. The per-port pricing gap between 10GbE and alternate network options is narrowing rapidly as more vendors increase the competitive pressure on pricing for related components.

So where will this technology truly matter for HPC environments? This article examines the impact of 10GbE on HPC infrastructure and provides guidance for the most effective transformation of your network. The initial focus is on the top drivers and applications for 10GbE deployment in HPC environments, followed by a review of the leading technology trends shaping 10GbE NIC designs. The next article in the series will examine the major offerings in the 10GbE NIC area.

Network Convergence for HPC Datacenters

Clusters of commodity servers have rapidly evolved into a highly cost-effective form of supercomputer. As the technology has matured and costs have declined, enterprises across a wide range of industries have begun leveraging HPC for product design and simulation, data analysis, and other highly compute-intensive applications that were previously beyond the reach of IT budgets. Off-the-shelf clusters frequently use Gigabit Ethernet as the cluster interconnect technology, but a number of cluster vendors are exploiting more specialized cluster interconnect fabrics that feature very low message-passing latency.

Although Ethernet has been the de facto technology for the general-purpose LAN, Gigabit Ethernet has been considered a sub-optimal switching fabric for very high performance cluster interconnect and storage networking. This is due primarily to performance issues: GbE has lower bandwidth than InfiniBand and Fibre Channel, and typically exhibits significantly higher end-to-end latency and CPU utilization.

However, this situation has changed dramatically due to recent developments in low-latency 10 GbE switching and intelligent Ethernet NICs that offload cluster and storage protocol processing from the host processor. These enhancements allow server end systems to fully exploit 10 GbE line rates, while reducing one-hop end-to-end latency to less than 10 microseconds and CPU utilization for line-rate transfers to less than 10 percent.

As a result, 10 GbE end-to-end performance now compares very favorably with that of more specialized datacenter interconnects, eliminating performance as a drawback to the adoption of an Ethernet unified datacenter fabric. Off-loading cluster and storage protocol processing from the central CPU to intelligent 10GbE NICs can also improve the power efficiency of end stations, because off-load ASIC processors are generally considerably more power efficient in executing protocol workloads.

10GbE R-NICs for Low-Latency IPC

Traditionally, TCP/IP protocol processing has been performed in software by the end system’s CPU. The load on the CPU increases linearly as a function of packets processed, with the usual rule of thumb being that each bit per second of bandwidth consumes about one hertz of CPU clock (e.g., 1 Gbps of network traffic consumes about 1 GHz of CPU). As more of the host CPU is consumed by the network load, both CPU utilization and host send/receive latency become significant issues.
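
As a rough illustration of that rule of thumb, the following back-of-the-envelope sketch (an estimate only, with an assumed core clock, not a measurement) translates a target line rate into the approximate CPU resources consumed by software protocol processing:

    #include <stdio.h>

    /* Back-of-the-envelope estimate: ~1 Hz of CPU clock per bit/s of
     * TCP/IP traffic processed in software (the "1 GHz per Gbps" rule). */
    int main(void)
    {
        double line_rate_gbps = 10.0;   /* target 10GbE line rate            */
        double core_clock_ghz = 3.0;    /* assumed clock of one server core  */

        double cpu_ghz_needed = line_rate_gbps * 1.0;  /* ~1 GHz per Gbps    */
        double cores_consumed = cpu_ghz_needed / core_clock_ghz;

        printf("Driving %.0f Gbps in software consumes roughly %.1f GHz,\n",
               line_rate_gbps, cpu_ghz_needed);
        printf("or about %.1f cores of a %.1f GHz CPU.\n",
               cores_consumed, core_clock_ghz);
        return 0;
    }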

Over the last few years, vendors of intelligent Ethernet NICs, together with the RDMA Consortium and the IETF, have been working on specifications for hardware-accelerated RDMA over TCP/IP (or iWARP) protocol stacks that can support the ever-increasing performance demands of cluster inter-process communications (IPC) over 10 GbE.

An RDMA over TCP/IP NIC (or R-NIC) provides hardware support for a remote direct memory access (RDMA) mechanism. R-NICs allow a server to read/write data directly between its user memory space and the user memory space of another R-NIC-enabled host on the network, without any involvement of the host operating systems.

R-NICs provide an OS kernel-bypass mechanism, allowing applications running in user space to post read/write commands that are transferred directly to the R-NIC. This eliminates the delay and overhead associated with copy operations among multiple buffer locations, kernel transitions and application context switches. R-NICs can reduce CPU utilization for 10 Gbps transfers to less than 10 percent and can reduce the host component of end-to-end latency to as little as 5–10 microseconds.
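
As an illustration of how an application hands work directly to the R-NIC, the following sketch uses the OpenFabrics verbs API (discussed below) to register a user-space buffer and post a one-sided RDMA write. It is a simplified fragment, not a complete program: queue-pair creation and connection setup (typically done through librdmacm) are omitted, and pd, qp, remote_addr and rkey stand in for state obtained during that setup.

    /* Minimal sketch of posting an RDMA write through the OpenFabrics
     * verbs API (used by iWARP R-NICs).  Queue-pair creation, connection
     * exchange, error handling and ibv_dereg_mr are omitted for brevity;
     * pd, qp, remote_addr and rkey are assumed to come from the setup
     * phase.  Link with -libverbs. */
    #include <infiniband/verbs.h>
    #include <stddef.h>
    #include <stdint.h>

    int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                        void *local_buf, size_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        /* Register the buffer so the NIC can DMA directly from user
         * memory, with no kernel involvement on the data path. */
        struct ibv_mr *mr = ibv_reg_mr(pd, local_buf, len,
                                       IBV_ACCESS_LOCAL_WRITE);
        if (!mr)
            return -1;

        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr = {0}, *bad_wr = NULL;
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write      */
        wr.send_flags          = IBV_SEND_SIGNALED;  /* ask for a completion */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.wr.rdma.remote_addr = remote_addr;        /* peer buffer address  */
        wr.wr.rdma.rkey        = rkey;               /* peer memory key      */

        /* The work request goes straight from user space to the R-NIC;
         * no system call or data copy is needed to move the payload. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }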

To reduce latency and maximize performance, cluster applications use the industry-standard Message Passing Interface (MPI) middleware, which is implemented atop iWARP and other RDMA transports. Use of MPI removes the need for developers to understand the details of the particular cluster interconnect.

There are many MPI variants, some of which are vendor-specific, while others are open source. The former include Intel MPI, HP MPI, Platform (Scali) MPI, and MPI/Pro. Popular open source MPI variants include MPICH and LAM/MPI.
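
The sketch below, a minimal and deliberately generic MPI ping-pong (not tied to any particular implementation above), illustrates this portability: the same source code runs unchanged over iWARP, InfiniBand, or plain TCP, with the interconnect chosen by the MPI library and its run-time configuration rather than by the application.

    /* Minimal MPI ping-pong between ranks 0 and 1.  The same code runs
     * over iWARP, InfiniBand or TCP; the interconnect is selected by the
     * MPI implementation, not the application.  Build: mpicc pingpong.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, msg = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received the reply: %d\n", msg);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }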

The OpenFabrics Alliance is developing open-source middleware APIs for iWARP and other RDMA transports. The OpenFabrics stack includes user-level (uDAPL) and kernel-level (kDAPL) intermediate APIs, which run atop RDMA transports, including iWARP. Most of the popular MPI packages now support the OpenFabrics APIs, removing the need for 10GbE iWARP NIC hardware vendors to directly support MPI middleware. The OpenFabrics Alliance has also taken the step of offering a fully validated OpenFabrics Enterprise Distribution (OFED) stack for Linux.

Enter 10GbE iSCSI and FCoE: Enablers for Storage I/O Consolidation

The concept behind I/O consolidation is simple: the sharing of storage and networking traffic on the same physical Ethernet cable (or, in cases where network isolation is desired, the flexibility to configure and use the same hardware for either type of network load), along with the prioritization of traffic delivery through quality of service (QoS) mechanisms. The benefits end users will realize from this simple idea are significant.

Companies that leverage I/O consolidation will be able to realize significant gains in server slot efficiency by using multi-function network/storage adapters, simplify their cabling scheme within a rack, and reduce the amount of heat each server generates.

The dominant approach to storage I/O consolidation has been iSCSI (Internet SCSI), a flexible and powerful storage area networking (SAN) protocol that provides strong data availability and performance compared to other Ethernet-based storage approaches such as network-attached storage (NAS). iSCSI replaces the Fibre Channel stack with the standard TCP/IP networking stack in order to transport storage traffic over standard, lower-cost Ethernet.

Customers in entry-level, mid-range, and high-end segments are building flexible storage infrastructures using iSCSI to allocate and shift resources dynamically to cost-effectively meet the storage demands of their compute cluster environments.

There are a number of 10 GbE NICs available that provide hardware-based iSCSI offload, including comprehensive bare-metal provisioning and management capabilities that come from hardware-based boot-from-SAN technology in the 10 GbE NIC. With hardware-based iSCSI offload, SCSI commands issued by the OS are offloaded to the 10GbE NIC, converted into TCP/IP packets and transmitted to the iSCSI storage target that hosts the disks. To the OS, the remote storage device appears as a locally attached SCSI device. Hardware-based iSCSI offload also enables OS-agnostic boot-from-SAN, which effectively removes the need for any direct-attached storage in the server and moves the software image into a centralized iSCSI SAN.
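
To illustrate that last point, the short sketch below reads the first sector of an iSCSI LUN using nothing but standard POSIX I/O. The device name /dev/sdb is purely a placeholder for whatever name the OS assigns after the iSCSI session is established; the application code is identical to what it would be for a local disk.

    /* Once an iSCSI session is established (in the NIC hardware or in
     * software), the remote LUN appears as an ordinary block device.
     * /dev/sdb below is a hypothetical name; the actual device node
     * depends on the system configuration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char sector[512];
        int fd = open("/dev/sdb", O_RDONLY);   /* placeholder iSCSI LUN */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* A plain read(); the application cannot tell that these sectors
         * travel over 10GbE to an iSCSI target rather than a local disk. */
        ssize_t n = read(fd, sector, sizeof(sector));
        printf("read %zd bytes from the first sector\n", n);

        close(fd);
        return 0;
    }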

Although successful in supporting block storage for a broad range of HPC applications, iSCSI has not been adopted for the most demanding workloads, such as data mining and decision support, due to performance, resilience, and manageability issues specific to these applications.

The value proposition of the emerging FCoE standard is based primarily on the elimination of the expensive FC infrastructure components in datacenters, which are currently used to connect servers running high-end applications to their networked storage systems. Since FCoE requires 10GbE (with Enhanced Ethernet extensions in both the NICs and the switches), its deployment is not expected until 2010, and it is likely to remain an expensive niche interconnect for the foreseeable future.

While FCoE aims to eliminate the FC infrastructure by unifying the storage and networking interconnect into a single 10GbE fabric, it will also provide investment protection for many years — particularly at the storage end of the Fibre Channel SAN via Enhanced Ethernet-to-Fibre Channel switches/gateways.

10GbE Server Connectivity Standards

Most new 10GbE controllers and adapters support dual ports on the network side. Some 10GbE controllers use the second port only for failover, whereas the trend is for dual-port 10GbE NICs to support active/active configurations (i.e., concurrent operation for both ports).

For host-side connectivity, 10GbE NICs support PCI Express, the industry-standard CPU-to-I/O serial interconnect in volume servers. The available 10GbE NICs support a mix of PCIe v1.1 and the second generation of PCIe, PCIe v2.0 or PCIe Gen2, which was finalized by the PCI-SIG in January 2007 and is rapidly gaining market traction.

Intel started shipping X38, the first system-logic chipset supporting PCIe Gen2, in September 2007, with AMD and NVIDIA following suit shortly thereafter. In March 2009, Intel’s dual-socket (2P) and quad-socket (4P) server platforms completed the transition to PCIe Gen2.

PCIe Gen2 doubles the data rate of each lane of the scalable serial interface to 5 Gb/s in each direction, compared to PCIe Gen1 devices, which support 2.5 Gb/s per lane. Thus, a PCIe Gen2 x8 slot can support an effective throughput of 32 Gb/s in each direction (assuming the 8b/10b encoding used by PCIe Gen1 and Gen2, in which every 10 bits sent carry 8 bits of data, so the useful data rate is four-fifths of the raw rate). In comparison, a PCIe v1.1 x8 slot supports an effective throughput of 16 Gb/s in each direction.
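
A quick back-of-the-envelope check of those figures (a sketch only, restating the arithmetic above rather than quoting the PCIe specification) simply applies the 8b/10b overhead to the raw lane rate:

    #include <stdio.h>

    /* Effective per-direction throughput of a PCIe link with 8b/10b
     * encoding: every 10 bits on the wire carry 8 bits of data. */
    static double pcie_effective_gbps(double lane_rate_gbps, int lanes)
    {
        return lane_rate_gbps * lanes * 8.0 / 10.0;
    }

    int main(void)
    {
        printf("PCIe Gen1 x8: %.0f Gb/s per direction\n",
               pcie_effective_gbps(2.5, 8));   /* prints 16 */
        printf("PCIe Gen2 x8: %.0f Gb/s per direction\n",
               pcie_effective_gbps(5.0, 8));   /* prints 32 */
        return 0;
    }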

Traditionally, blade servers have used GbE as the backplane fabric to connect server blades within the chassis. For these systems, blade system manufacturers open up their system designs to enable third parties to offer Fibre Channel and InfiniBand adapters for storage and clustering applications, respectively.

An emerging alternative to adding parallel fabrics for networking, storage and clustering is gaining ground in blade servers: providing 10GbE backplanes for these systems, which enables the use of iSCSI and iWARP storage and clustering protocols and thereby eliminates the power and space requirements of multiple fabrics and controllers.

During 2007, HP and IBM introduced blade server designs with 1/10GbE backplanes and mezzanine card offerings for 10GbE connectivity. In September 2008, HP introduced a blade server model that integrates a dual-port 10GbE controller as a default capability. First-generation 10GbE-based blade server designs use the 10GBASE-KX4 backplane standard, which uses four 3.125 Gbps SerDes links and in which the network-side XAUI port of the 10GbE NIC connects to a 10GBASE-KX4-to-XAUI PHY.

The blade server market is rapidly moving toward next-generation designs that use 10 Gbps serial links based on the 10GBASE-KR standard, which reduce the number of backplane traces. In this case, the XAUI ports of 10GbE NICs connect to 10GBASE-KR-to-XAUI PHYs.

Contrasting 10GbE Physical Layer Options

The 10 Gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards, including optical and copper cabling standards. As of 2009, 10 Gigabit Ethernet is still an emerging technology, with only 2 million ports shipped in 2008, the overwhelming majority of which had optical PHYs. Additionally, the majority of these 10GbE ports shipped into switch applications, with only about 5 percent shipping into server interconnects (NICs).

Each successive generation of 10GbE optical modules has resulted in lower power dissipation, smaller footprint, and therefore higher port density. Most importantly for widespread deployment, the cost of these modules has declined by two orders of magnitude since 2002. As a result, 10GbE is nearing the price points required for mass adoption in HPC networks. Typically, adoption of a new generation of Ethernet technology occurs when IT managers can buy a 10X increase in bandwidth for a 3X-4X increase in price.

SFP+ is the most recent optical module form factor and offers a significant improvement in footprint over the earlier XFP standard, allowing dense 24-port and 48-port top-of-rack switches and half-height PCIe NICs for datacenter server interfaces.

The SFP+ form factor has been defined to support both optical interfaces and copper (twin-ax) 10GbE serial connections for distances up to 10 meters, for example, for connecting servers to a top-of-rack switch or for connecting adjacent racks within a datacenter. This “direct attach copper (DAC)” configuration further reduces the cost of the module by removing the transmit and receive optical subassemblies. SFP+ optical modules typically dissipate less than 1 W of power, while DAC versions have no active power-consuming components.

The key features enabling DAC connections of over 10m are i) electronic dispersion compensation (EDC), ii) 10G TX pre-emphasis, and iii) forward error correction. NetLogic Microsystems has demonstrated transmission of up to 20m using these three features. Other companies with similar devices include Broadcom and AMCC, although neither has shown equivalent DAC distances at similar power dissipation numbers.

While SFP+ is rapidly gaining traction for 10GbE HPC datacenter applications due to its lower cost and power characteristics, the alternative 10GBASE-T technology for 10GbE transmission over Category 6/7 twisted-pair copper cabling has been lagging in gaining market acceptance. 10GBASE-T has proven to be a challenging technology to implement in a cost- and power-effective way, because of the complexity of the signal processing required to overcome the bandwidth limitations and noise characteristics of the twisted-pair medium.

Aside from its advantage in link distance, 10GBASE-T suffers from several inherent disadvantages relative to SFP+ DAC. The most significant is power. Even using the latest IC process technology, 10GBASE-T PHYs dissipate around 6 W of power. This has effectively ruled out current 10GBASE-T offerings as a viable technology for the emerging generation of dual-port NIC cards and high-density switch designs, relegating them to single-port adapters and uplinks.

Another significant disadvantage of 10GBASE-T is its latency of approximately 2 µsec, which severely limits its applicability for the IPC and storage workloads found in HPC datacenters. The latency of SFP+ is less than 0.1 µsec for all 10GbE standards, and much lower still for datacenter applications. In addition, the combination of an SFP+ module and PHY for datacenter applications dissipates only 1-1.5 W of power, depending on reach.

Next-generation 10GBASE-T PHY development is underway today, with several companies working on solutions to break the 4 W per-port power dissipation barrier. It remains to be seen how HPC environments will adopt these solutions when they become available. Trade-offs include cost, power dissipation, latency and distance. In the meantime, short (15m and less), low-cost copper connections within the server rack and from server to end-of-row switch can be implemented with DAC and short-range optical interconnects.

About the Author

Saqib Jang is founder and principal at Margalla Communications, a Woodside, Calif.-based strategic and technical marketing consulting firm focused on storage and server networking. He can be contacted at saqibj@margallacomm.com.
