10GbE Networking for HPC — Applications and Technology Trends

By Saqib Jang

June 15, 2009

The first in a two-part series, this article examines the drivers for 10GbE deployment for high-performance cluster computing (HPCC) environments and related technology trends.

Ten Gigabit-per-second Ethernet (10GbE) represents the next level of Ethernet network bandwidth, with networking vendors promoting it as the next great capability. But high-performance computing (HPC) infrastructure and operations professionals must strike a balance between constant operational improvement and sound financial decision-making. So far, 10GbE has been a high-end luxury for environments that want maximum performance regardless of cost, but that’s changing fast. The per-port pricing gap between 10GbE and alternate network options is narrowing rapidly as more vendors increase the competitive pressure on pricing for related components.

So where will this technology truly matter for HPC environments? This article examines the impact of 10GbE on HPC infrastructure and provides guidance for the most effective transformation of your network. The initial focus is on the top drivers and applications for 10GbE deployment in HPC environments, followed by a review of the leading technology trends shaping 10GbE NIC designs. The next article in the series will examine the major offerings in the 10GbE NIC area.

Network Convergence for HPC Datacenters

Clusters of commodity servers have rapidly evolved into a highly cost-effective form of supercomputer. As the technology has matured and costs have declined, enterprises across a wide range of industries have begun leveraging HPC for product design and simulation, data analysis, and other highly compute-intensive applications that were previously beyond the reach of IT budgets. Off-the-shelf clusters frequently use Gigabit Ethernet as the cluster interconnect technology, but a number of cluster vendors are exploiting more specialized cluster interconnect fabrics that feature very low message-passing latency.

Although Ethernet has been the de facto technology for the general-purpose LAN, Gigabit Ethernet has been considered a sub-optimal switching fabric for very high-performance cluster interconnect and storage networking. This is due primarily to performance issues: GbE has lower bandwidth than InfiniBand and Fibre Channel, and typically exhibits significantly higher end-to-end latency and CPU utilization.

However, this situation has changed dramatically due to recent developments in low-latency 10 GbE switching and intelligent Ethernet NICs that offload cluster and storage protocol processing from the host processor. These enhancements allow server end systems to fully exploit 10 GbE line rates, while reducing one-hop end-to-end latency to less than 10 microseconds and CPU utilization for line-rate transfers to less than 10 percent.

As a result, 10 GbE end-to-end performance now compares very favorably with that of more specialized datacenter interconnects, eliminating performance as a drawback to the adoption of an Ethernet unified datacenter fabric. Offloading cluster and storage protocol processing from the central CPU to an intelligent 10GbE NIC can also improve the power efficiency of end stations, because offload ASICs are considerably more power-efficient at executing protocol workloads.

10GbE R-NICs for Low-Latency IPC

Traditionally, TCP/IP protocol processing has been performed in software by the end system’s CPU. The load on the CPU increases linearly as a function of packets processed, with the usual rule of thumb being that each bit per second of bandwidth consumes about a Hz of CPU clock (e.g., 1 Gbps of network traffic consumes about 1 GHz of CPU). As more of the host CPU is consumed by the network load, both CPU utilization and host send/receive latency become significant issues.
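As a back-of-the-envelope illustration of that rule of thumb (a sketch only; the 3.0 GHz core clock below is an assumed figure, chosen to make the arithmetic concrete):

```python
# Illustration of the ~1 Hz of CPU per bit/s of bandwidth rule of thumb.
# The 3.0 GHz core clock is an assumption for the sake of the example.

line_rate_gbps = 10.0    # 10GbE line rate
core_clock_ghz = 3.0     # assumed clock of one server core

cpu_ghz_consumed = line_rate_gbps * 1.0            # ~1 GHz per Gbps
cores_consumed = cpu_ghz_consumed / core_clock_ghz

print(f"Software TCP/IP at {line_rate_gbps:.0f} Gbps consumes roughly "
      f"{cpu_ghz_consumed:.0f} GHz of CPU, i.e. about "
      f"{cores_consumed:.1f} cores at {core_clock_ghz} GHz")
```

In other words, driving a single 10GbE port in software can, by this rule of thumb, tie up several cores of a typical server before the application does any useful work.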

Over the last few years, vendors of intelligent Ethernet NICs, together with the RDMA Consortium and the IETF, have been working on specifications for hardware-accelerated RDMA over TCP/IP (or iWARP) protocol stacks that can support the ever-increasing performance demands of cluster inter-process communications (IPC) over 10 GbE.

An RDMA over TCP/IP NIC (or R-NIC) provides hardware support for a remote direct memory access (RDMA) mechanism. R-NICs allow a server to read and write data directly between its user memory space and the user memory space of another R-NIC-enabled host on the network, without any involvement of the host operating systems.

R-NICs provide an OS kernel bypass mechanism that allows applications running in user space to post read/write commands directly to the R-NIC. This eliminates the delay and overhead associated with copy operations among multiple buffer locations, kernel transitions and application context switches. R-NICs can reduce CPU utilization for 10 Gbps transfers to less than 10 percent and can reduce the host component of end-to-end latency to as little as 5–10 microseconds.

To reduce latency and maximize performance, cluster applications use the industry-standard Message Passing Interface (MPI) middleware, which is implemented atop iWARP and other RDMA transports. Using MPI removes the need for developers to understand the details of the particular cluster interconnect, as the minimal sketch below illustrates.
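The following is a minimal sketch of that point, shown here with the mpi4py Python binding (one of many MPI bindings; a C or Fortran MPI program would make the same case). Nothing in the code refers to the interconnect: the MPI runtime selects the underlying transport, for example an OpenFabrics/iWARP provider, through its own configuration rather than through application code.

```python
# Minimal MPI point-to-point exchange using the mpi4py binding.
# The application code is interconnect-agnostic: whether messages travel
# over GbE, a 10GbE iWARP R-NIC, or InfiniBand is decided by the MPI
# runtime's configuration, not by anything written here.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 sends a small message to rank 1.
    comm.send({"step": 1, "payload": "halo-exchange data"}, dest=1, tag=42)
elif rank == 1:
    # Rank 1 receives it; MPI handles buffering and transport underneath.
    data = comm.recv(source=0, tag=42)
    print(f"rank 1 received: {data}")
```

Launched with, for example, `mpirun -np 2 python exchange.py` (the script name is illustrative), the same code runs unchanged whether the fabric beneath it is plain GbE or a 10GbE R-NIC exposed through the OpenFabrics stack.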

There are many MPI variants, some of which are vendor-specific, while others are open source. The former include Intel MPI, HP MPI, Platform (Scali) MPI, and MPI/Pro. Popular open source variants include MPICH and LAM/MPI.

The OpenFabrics Alliance is developing open-source middleware APIs for iWARP and other RDMA transports. The OpenFabrics stack includes user-level (uDAPL) and kernel-level (kDAPL) intermediate APIs that run atop RDMA transports, including iWARP. Most of the popular MPI packages now support the OpenFabrics APIs, removing the need for 10GbE iWARP NIC hardware vendors to support MPI middleware directly. The OpenFabrics Alliance has also taken the step of offering a fully validated OpenFabrics Enterprise Distribution (OFED) stack for Linux.

Enter 10GbE iSCSI and FCoE: Enablers for Storage I/O Consolidation

The concept behind I/O consolidation is simple: share storage and networking traffic on the same physical Ethernet cable (or, where network isolation is desired, configure and use the same hardware for either type of network load) and prioritize traffic delivery through quality of service (QoS) mechanisms. The benefits end users will realize from this simple idea are significant.

Companies that leverage I/O consolidation will be able to realize significant gains in server slot efficiency by using multi-function network/storage adapters, simplifying the cabling scheme within a rack and reducing the amount of heat each server generates.

The dominant approach to storage I/O consolidation has been iSCSI (Internet SCSI), a flexible and powerful storage area networking (SAN) protocol that offers strong data availability and performance compared to other Ethernet-based storage approaches such as network-attached storage (NAS). iSCSI replaces the Fibre Channel stack with the standard TCP/IP networking stack in order to transport storage traffic over standard, lower-cost Ethernet.

Customers in entry-level, mid-range, and high-end segments are building flexible storage infrastructures using iSCSI to allocate and shift resources dynamically to cost-effectively meet the storage demands of their compute cluster environments.

There are a number of 10 GbE NICs available that provide hardware-based iSCSI offload, including comprehensive bare-metal provisioning and management capabilities enabled by hardware-based boot-from-SAN technology in the 10 GbE NIC. With hardware-based iSCSI offload, SCSI commands issued by the OS are offloaded to the 10GbE NIC, converted into TCP/IP packets and transmitted to the iSCSI storage target that hosts the disks. To the OS, the remote storage device appears as a locally attached SCSI device. Hardware-based iSCSI offload also enables OS-agnostic boot-from-SAN, which effectively removes the need for any direct-attached storage in the server and moves the software image into a centralized iSCSI SAN.

Although successful in supporting block storage for a broad range of HPC applications, iSCSI has not been adopted for the most demanding workloads, such as data mining and decision support, due to performance, resilience, and manageability concerns specific to those workloads.

The value proposition of the emerging FCoE standard is based primarily on the elimination of the expensive FC infrastructure components in datacenters, which are currently used to connect servers running high-end applications to their networked storage systems. Since FCoE requires 10GbE (with Enhanced Ethernet extensions in both the NICs and the switches), its deployment is not expected until 2010, and it is likely to remain an expensive niche interconnect for the foreseeable future.

While FCoE aims to eliminate the FC infrastructure by unifying the storage and networking interconnect into a single 10GbE fabric, it will also provide investment protection for many years, particularly at the storage end of the Fibre Channel SAN, via Enhanced Ethernet-to-Fibre Channel switches and gateways.

10GbE Server Connectivity Standards

Most new 10GbE controllers and adapters support dual ports on the network side. Some 10GbE controllers use the second port only for failover, whereas the trend is for dual-port 10GbE NICs to support active/active configurations (i.e., concurrent operation for both ports).

For host-side connectivity, 10GbE NICs support PCI Express (PCIe), the industry-standard CPU-to-I/O serial interconnect in volume servers. Available 10GbE NICs support a mix of PCIe v1.1 and the second-generation PCIe v2.0 (PCIe Gen2), which was finalized by the PCI-SIG in January 2007 and is rapidly gaining market traction.

Intel started shipping X38, the first system-logic chipset supporting PCIe Gen2, in September 2007, with AMD and NVIDIA following suit shortly thereafter. In March 2009, Intel’s dual-socket (2P) and quad-socket (4P) server platforms completed the transition to PCIe Gen2.

PCIe Gen2 doubles the data rate of each lane of the scalable serial interface to 5Gb/s in each direction, compared with 2.5Gb/s per lane for PCIe Gen1 devices. Thus, a PCIe Gen2 x8 slot can support an effective throughput of 32Gb/s in each direction (PCIe Gen1 and Gen2 use 8b/10b line encoding, in which every 10 bits sent carry 8 bits of data, so the useful data rate is four-fifths of the raw rate). By comparison, a PCIe v1.1 x8 slot can support 16 Gb/s of effective throughput in each direction.
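A quick sanity check of these figures (a sketch; the only assumption beyond the lane rates quoted above is the 8b/10b coding efficiency of four-fifths):

```python
# Effective PCIe slot throughput per direction under 8b/10b line encoding,
# where every 10 bits on the wire carry 8 bits of data.

def effective_gbps(lanes: int, raw_gbps_per_lane: float,
                   coding_efficiency: float = 8 / 10) -> float:
    """Lane count x raw lane rate x encoding efficiency."""
    return lanes * raw_gbps_per_lane * coding_efficiency

gen1_x8 = effective_gbps(lanes=8, raw_gbps_per_lane=2.5)   # 16 Gb/s
gen2_x8 = effective_gbps(lanes=8, raw_gbps_per_lane=5.0)   # 32 Gb/s

print(f"PCIe Gen1 x8: {gen1_x8:.0f} Gb/s effective per direction")
print(f"PCIe Gen2 x8: {gen2_x8:.0f} Gb/s effective per direction")

# A dual-port 10GbE NIC running both ports at line rate needs ~20 Gb/s in
# each direction, which fits comfortably in a Gen2 x8 slot but exceeds the
# 16 Gb/s available from a Gen1 x8 slot.
```

This is one reason active/active dual-port 10GbE NICs pair naturally with PCIe Gen2 host interfaces.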

Traditionally, blade servers have used GbE as the backplane fabric to connect server blades within the chassis. For these systems, blade manufacturers open up their designs to enable third parties to offer Fibre Channel and InfiniBand adapters for storage and clustering applications, respectively.

An emerging alternative gaining ground in blade servers is to provide a 10GbE backplane rather than adding parallel fabrics for networking, storage and clustering, enabling the use of the iSCSI and iWARP storage and clustering protocols and thereby eliminating the power and space requirements of multiple fabrics and controllers.

During 2007, HP and IBM introduced blade server designs with 1/10GbE backplanes and mezzanine card offerings for 10GbE connectivity. In September 2008, HP introduced a blade server model that integrates a dual-port 10GbE controller as a default capability. First-generation 10GbE blade server designs use the 10GBASE-KX4 backplane standard, which employs four 3.125 Gbps SerDes links; in these designs, the network-side XAUI port of the 10GbE NIC connects to a 10GBASE-KX4-to-XAUI PHY.

The blade server market is rapidly moving toward next-generation designs that use 10 Gbps serial links based on the 10GBASE-KR standard, which reduces the number of backplane traces. In this case, the XAUI ports of 10GbE NICs connect to 10GBASE-KR-to-XAUI PHYs.

Contrasting 10GbE Physical Layer Options

The 10 Gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards, including optical and copper cabling standards. As of 2009, 10 Gigabit Ethernet is still an emerging technology, with only 2 million ports shipped in 2008, the overwhelming majority of which had optical PHYs. Additionally, the majority of these 10GbE ports shipped into switch applications, with only about 5 percent going into server interconnects (NICs).

Each successive generation of 10GbE optical modules has resulted in lower power dissipation, smaller footprint, and therefore higher port density. Most importantly for widespread deployment, the cost of these modules has declined by two orders of magnitude since 2002. As a result, 10GbE is nearing the price points required for mass adoption in HPC networks. Typically, adoption of a new generation of Ethernet technology occurs when IT managers can buy a 10X increase in bandwidth for a 3X-4X increase in price.

SFP+ is the most recent optical module form factor and offers a significantly smaller footprint than the earlier XFP standard, enabling dense 24-port and 48-port top-of-rack switches and half-height PCIe NICs for datacenter server interfaces.

The SFP+ form factor has been defined to support both optical interfaces and copper (twin-ax) 10GbE serial connections for distances up to 10 meters, for example, between servers and a top-of-rack switch or between racks within a datacenter. This “direct attach copper” (DAC) configuration further reduces module cost by removing the transmit and receive optical subassemblies. SFP+ optical modules typically dissipate less than 1 W of power, while DAC versions have no active power-consuming components.

The key features enabling DAC connections of over 10m are i) electronic dispersion compensation (EDC), ii) 10G transmit pre-emphasis, and iii) forward error correction. NetLogic Microsystems has demonstrated transmission up to 20m using these three features. Other companies with similar devices include Broadcom and AMCC, although neither has shown equivalent DAC distances at similar power dissipation.

While SFP+ is rapidly gaining traction for 10GbE HPC datacenter applications due to its lower cost and power characteristics, the alternative 10GBASE-T technology for 10GbE transmission over Category 6/7 twisted-pair copper cabling has been lagging in gaining market acceptance. 10GBASE-T has proven to be a challenging technology to implement in a cost- and power-effective way, because of the complexity of the signal processing required to overcome the bandwidth limitations and noise characteristics of the twisted-pair medium.

Other than link distance, 10GBASE-T suffers from several inherent disadvantages relative to SFP+ DAC. The most significant is power. Even using the latest IC process technology, 10GBASE-T PHYs dissipate around 6 W of power. This effectively has ruled out current 10GBASE-T offerings as a viable technology for the emerging generation of dual-port NIC cards and high-density switch technology, relegating it to single-port adapters and uplinks.

Another significant disadvantage of 10GBASE-T is its latency of approximately 2 µsec, which severely limits its applicability for the IPC and storage workloads found in HPC datacenters. By comparison, SFP+ latency is less than 0.1 µsec for all 10GbE standards, and lower still for datacenter applications, while the combination of an SFP+ module and PHY for datacenter applications dissipates 1-1.5 W of power, depending on reach.

Next-generation 10GBASE-T PHY development is underway today, with several companies working on solutions to break the 4W per port power dissipation barrier. It remains to be seen how HPC environments adopt these solutions when they are available. Trade-offs include cost, power dissipation, latency and distance. In the meantime, short (15m or less), low-cost copper connections within the server rack and from server to end-of-row switch can be implemented with DAC and short-range optical interconnects.

About the Author

Saqib Jang is founder and principal at Margalla Communications, a Woodside, Calif.-based strategic and technical marketing consulting firm focused on storage and server networking. He can be contacted at [email protected].
