10GbE Networking for HPC — Applications and Technology Trends

By Saqib Jang

June 15, 2009

The first in a two-part series, this article examines the drivers for 10GbE deployment for high-performance cluster computing (HPCC) environments and related technology trends.

Ten Gigabit-per-second Ethernet (10GbE) represents the next level of Ethernet network bandwidth, with networking vendors promoting it as the next great capability. But high-performance computing (HPC) infrastructure and operations professionals must strike a balance between constant operational improvement and sound financial decision-making. So far, 10GbE has been a high-end luxury for environments that want maximum performance regardless of cost, but that’s changing fast. The per-port pricing gap between 10GbE and alternate network options is narrowing rapidly as more vendors increase the competitive pressure on pricing for related components.

So where will this technology truly matter for HPC environments? This article examines the impact of 10GbE on HPC infrastructure and provides guidance for the most effective transformation of your network. The initial focus is on the top drivers and applications for 10GbE deployment in HPC environments, followed by a review of the leading technology trends shaping 10GbE NIC designs. The next article in the series will examine the major offerings in the 10GbE NIC area.

Network Convergence for HPC Datacenters

Clusters of commodity servers have rapidly evolved into a highly cost-effective form of supercomputer. As the technology has matured and costs have declined, enterprises across a wide range of industries have begun leveraging HPC for product design and simulation, data analysis, and other highly compute-intensive applications that were previously beyond the reach of IT budgets. Off-the-shelf clusters frequently use Gigabit Ethernet as the cluster interconnect technology, but a number of cluster vendors are exploiting more specialized cluster interconnect fabrics that feature very low message-passing latency.

Although Ethernet has been the de facto technology for the general-purpose LAN, Gigabit Ethernet has been considered a sub-optimal switching fabric for very high performance cluster interconnect and storage networking. This is due primarily to performance issues: GbE has lower bandwidth than InfiniBand and Fibre Channel, and typically exhibits significantly higher end-to-end latency and CPU utilization.

However, this situation has changed dramatically due to recent developments in low-latency 10 GbE switching and intelligent Ethernet NICs that offload cluster and storage protocol processing from the host processor. These enhancements allow server end systems to fully exploit 10 GbE line rates, while reducing one-hop end-to-end latency to less than 10 microseconds and CPU utilization for line-rate transfers to less than 10 percent.

As a result, 10 GbE end-to-end performance now compares very favorably with that of more specialized datacenter interconnects, eliminating performance as a drawback to the adoption of an Ethernet unified datacenter fabric. Off-loading cluster and storage protocol processing from the central CPU to intelligent 10GbE NICs can also improve the power efficiency of end stations, because off-load ASIC processors are generally considerably more power efficient in executing protocol workloads.

10GbE R-NICs for Low-Latency IPC

Traditionally, TCP/IP protocol processing has been performed in software by the end system’s CPU. The load on the CPU increases linearly as a function of packets processed, with the usual rule of thumb being that each bit per second of bandwidth consumes about a Hz of CPU clock (e.g., 1 Gbps of network traffic consumes about 1 GHz of CPU). As more of the host CPU is consumed by the network load, both CPU utilization and host send/receive latency become significant issues.
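
As a rough sanity check of that rule of thumb at 10GbE line rate, the short sketch below estimates the CPU consumed by software-only TCP/IP processing; the 3.0 GHz per-core clock is an assumed figure for illustration, not taken from any particular system.

    /* Back-of-the-envelope check of the "1 Hz of CPU per bit/s" rule of thumb:
     * how much CPU does software TCP/IP processing consume at a given line rate?
     * The 3.0 GHz core clock below is an assumption for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        const double core_ghz = 3.0;               /* assumed per-core clock */
        const double rates_gbps[] = { 1.0, 10.0 }; /* GbE and 10GbE line rates */

        for (int i = 0; i < 2; i++) {
            double cpu_ghz = rates_gbps[i];        /* ~1 GHz of CPU per Gbps */
            printf("%4.0f Gbps -> ~%.1f GHz of CPU (~%.1f cores at %.1f GHz)\n",
                   rates_gbps[i], cpu_ghz, cpu_ghz / core_ghz, core_ghz);
        }
        return 0;
    }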

Over the last few years, vendors of intelligent Ethernet NICs, together with the RDMA Consortium and the IETF, have been working on specifications for hardware-accelerated RDMA over TCP/IP (or iWARP) protocol stacks that can support the ever-increasing performance demands of cluster inter-process communications (IPC) over 10 GbE.

An RDMA over TCP/IP NIC (or R-NIC) provides hardware support for a remote direct memory access (RDMA) mechanism. R-NICs allow a server to read/write data directly between its user memory space and the user memory space of another R-NIC-enabled host on the network, without any involvement of the host operating systems.

R-NICs provide an OS kernel bypass mechanism that allows applications running in user space to post read/write commands that are transferred directly to the R-NIC. This eliminates the delay and overhead associated with copy operations among multiple buffer locations, kernel transitions and application context switches. R-NICs can reduce CPU utilization for 10 Gbps transfers to less than 10 percent and can reduce the host component of end-to-end latency to as little as 5–10 microseconds.

To reduce latency and maximize performance, cluster applications use the industry-standard Message Passing Interface (MPI) middleware, which is implemented atop iWARP and other RDMA transports. Use of MPI removes the need for developers to understand the details of the particular cluster interconnect.

There are many MPI variants, some of which are vendor-specific, while others are open source-based standards. The former includes Intel MPI, HP MPI, Platform (Scali) MPI, and MPI/Pro. Popular open source MPI variants include MPICH and LAM/MPI.

The OpenFabrics Alliance is developing open-source middleware APIs for iWARP and other RDMA transports. The OpenFabrics stack includes user-level (uDAPL) and kernel-level (kDAPL) intermediate APIs that run atop RDMA transports, including iWARP. Most of the popular MPI packages now support the OpenFabrics APIs, removing the need for 10GbE iWARP NIC hardware vendors to directly support MPI middleware. The OpenFabrics Alliance has also taken the step of offering a fully validated OpenFabrics Enterprise Distribution (OFED) stack for Linux.
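
As a minimal illustration of this interconnect independence, the MPI ping-pong sketch below measures half-round-trip latency between two ranks. It is an illustrative micro-benchmark rather than a tuned one; the same source runs unchanged over an iWARP R-NIC, InfiniBand, or plain GbE, with only the measured latency changing.

    /* Minimal MPI ping-pong latency sketch (illustrative, not a tuned benchmark).
     * Build with mpicc and run with two ranks, e.g.: mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS     1000
    #define MSG_BYTES 8

    int main(int argc, char **argv)
    {
        int rank, size;
        char buf[MSG_BYTES] = { 0 };

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {            /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {     /* rank 1 echoes each message back */
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("half round-trip latency: %.2f usec\n",
                   (t1 - t0) * 1e6 / (2.0 * ITERS));

        MPI_Finalize();
        return 0;
    }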

Enter 10GbE iSCSI and FCoE: Enablers for Storage I/O Consolidation

The concept behind I/O consolidation is simple: share storage and networking traffic on the same physical Ethernet cable (or, where network isolation is desired, retain the flexibility to configure and use the same hardware for either type of network load) and prioritize traffic delivery through quality of service (QoS) metrics. The benefits end users will realize from this simple idea are significant.

Companies that leverage I/O consolidation will be able to realize significant gains in server slot efficiency by using multi-function network/storage adapters, simplify their cabling scheme within a rack, and reduce the amount of heat each server generates.

The dominant approach to storage I/O consolidation has been iSCSI (Internet SCSI), a flexible and powerful storage area networking (SAN) protocol that provides improved data availability and performance compared to other Ethernet-based storage approaches such as network-attached storage (NAS). iSCSI replaces the FC stack with the standard networking TCP/IP stack in order to transport storage traffic over standard, lower-cost Ethernet.

Customers in entry-level, mid-range, and high-end segments are building flexible storage infrastructures using iSCSI to allocate and shift resources dynamically to cost-effectively meet the storage demands of their compute cluster environments.

There are a number of 10 GbE NICs available that provide hardware-based iSCSI offload, including comprehensive bare-metal provisioning and management capabilities enabled by hardware-based boot-from-SAN technology in the 10 GbE NIC. With hardware-based iSCSI offload, SCSI commands issued by the OS are offloaded to the 10GbE NIC, converted into TCP/IP packets and transmitted to the iSCSI storage target that hosts the disks. To the OS, the remote storage device appears as a locally attached SCSI device. The hardware-based iSCSI offload also enables OS-agnostic boot-from-SAN, which effectively removes the need for any direct-attached storage in the server and moves the software image into a centralized iSCSI SAN.
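
To make the "locally attached SCSI device" point concrete, the sketch below queries a block device's capacity through the standard Linux block-layer ioctl, exactly as it would for a local disk; the /dev/sdb path is a hypothetical placeholder for an iSCSI-backed LUN.

    /* Minimal sketch: from the host OS's point of view, a hardware-offloaded
     * iSCSI LUN is just another SCSI block device. The device path /dev/sdb
     * is a hypothetical placeholder for illustration only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>               /* BLKGETSIZE64 */

    int main(void)
    {
        const char *dev = "/dev/sdb";   /* assumed iSCSI-backed LUN */
        int fd = open(dev, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        uint64_t bytes = 0;
        if (ioctl(fd, BLKGETSIZE64, &bytes) < 0) {  /* same call as for a local disk */
            perror("ioctl");
            close(fd);
            return 1;
        }

        printf("%s: %llu bytes (%.1f GiB)\n", dev, (unsigned long long)bytes,
               bytes / (1024.0 * 1024.0 * 1024.0));
        close(fd);
        return 0;
    }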

Although successful in supporting block storage for the broad range of HPC applications, iSCSI has not been adopted for the most demanding of applications, such as data mining and decision support applications, due to performance, resilience, and manageability issues specific to these applications.

The value proposition of the emerging FCoE standard is based primarily on the elimination of the expensive FC infrastructure components in datacenters, which are currently used to connect servers running high-end applications to their networked storage systems. Since FCoE requires 10GbE (with Enhanced Ethernet extensions in both the NICs and the switches), its deployment is not expected until 2010, and it is likely to remain an expensive niche interconnect for the foreseeable future.

While FCoE aims to eliminate the FC infrastructure by unifying the storage and networking interconnect into a single 10GbE fabric, it will also provide investment protection for many years — particularly at the storage end of the Fibre Channel SAN via Enhanced Ethernet-to-Fibre Channel switches/gateways.

10GbE Server Connectivity Standards

Most new 10GbE controllers and adapters support dual ports on the network side. Some 10GbE controllers use the second port only for failover, whereas the trend is for dual-port 10GbE NICs to support active/active configurations (i.e., concurrent operation for both ports).

For host-side connectivity, 10GbE NICs support PCI Express, the industry-standard CPU-to-I/O serial interconnect in volume servers. The available 10GbE NICs support a mix of PCIe v1.1 and the second generation of PCIe, PCIe v2.0 or PCIe Gen2, which was finalized by the PCI-SIG in January 2007 and is rapidly gaining market traction.

Intel started shipping X38, the first system-logic chipset supporting PCIe Gen2, in September 2007, with AMD and NVIDIA following suit shortly thereafter. In March 2009, Intel’s dual-socket (2P) and quad-socket (4P) server platforms completed the transition to PCIe Gen2.

PCIe Gen2 doubles the data rate possible in each lane of the scalable serial interface to 5Gb/s per lane in each direction, compared to PCIe Gen1 devices, which support 2.5Gb/s per lane. Thus, a PCIe Gen2 x8 slot can support an effective throughput of 32Gb/s per direction (assuming the standard 8b/10b encoding, in which every 10 bits sent carry 8 bits of data, so the useful data transmission rate is four-fifths the raw rate). In comparison, a PCIe v1.1 x8 slot supports an effective throughput of 16 Gb/s per direction.
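
The sketch below reproduces those effective-throughput figures from the raw lane rates and the 8b/10b coding efficiency.

    /* Back-of-the-envelope PCIe throughput check for the figures quoted above:
     * raw lane rate x lane count x 8b/10b efficiency = effective throughput
     * per direction. */
    #include <stdio.h>

    static double effective_gbps(double lane_gbps, int lanes)
    {
        const double encoding_efficiency = 8.0 / 10.0;  /* 8b/10b coding */
        return lane_gbps * lanes * encoding_efficiency;
    }

    int main(void)
    {
        printf("PCIe Gen1 x8: %.0f Gb/s per direction\n", effective_gbps(2.5, 8));
        printf("PCIe Gen2 x8: %.0f Gb/s per direction\n", effective_gbps(5.0, 8));
        return 0;
    }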

Traditionally, blade servers have used GbE as the backplane fabric to connect server blades within the chassis. For these systems, blade system manufacturers open up their system designs to enable third parties to offer Fibre Channel and InfiniBand adapters for storage and clustering applications, respectively.

An emerging alternative to adding parallel fabrics for networking, storage and clustering is gaining ground in blade servers: providing 10GbE backplanes for these systems, enabling the use of the iSCSI and iWARP storage and clustering protocols and thereby eliminating the power and space requirements of multiple fabrics and controllers.

During 2007, HP and IBM introduced blade server designs with 1/10GbE backplanes and mezzanine card offerings for 10GbE connectivity. In September 2008, HP introduced a blade server model that integrates a dual-port 10GbE controller as a default capability. First-generation 10GbE-based blade server designs use the 10GBASE-KX4 backplane standard, which uses four 3.125 Gbps SerDes links and in which the network-side XAUI port of the 10GbE NIC connects to a 10GBASE-KX4-to-XAUI PHY.

The blade server market is rapidly moving toward next-generation designs that use 10 Gbps serial links based on the 10GBASE-KR standard, which reduce the number of backplane traces. In this case, the XAUI ports of 10GbE NICs connect to 10GBASE-KR-to-XAUI PHYs.

Contrasting 10GbE Physical Layer Options

The 10 Gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards, including optical and copper cabling standards. As of 2009, 10 Gigabit Ethernet is still an emerging technology, with only 2 million ports shipped in 2008, the overwhelming majority of which had optical PHYs. Additionally, the majority of these 10GbE ports shipped into switch applications, with only about 5 percent shipping into server interconnects (NICs).

Each successive generation of 10GbE optical modules has resulted in lower power dissipation, smaller footprint, and therefore higher port density. Most importantly for widespread deployment, the cost of these modules has declined by two orders of magnitude since 2002. As a result, 10GbE is nearing the price points required for mass adoption in HPC networks. Typically, adoption of a new generation of Ethernet technology occurs when IT managers can buy a 10X increase in bandwidth for a 3X-4X increase in price.

SFP+ is the most recent and state-of-the-art optical module form factor, and offers a significantly smaller footprint than the earlier XFP standard, allowing dense 24-port and 48-port top-of-rack switches and half-height PCIe NICs for datacenter server interfaces.

The SFP+ form-factor has been defined to support both optical interfaces and copper (twin-ax) 10GbE serial connections for distances up to 10 meters, for example, connections to a top-of-rack switch or between racks within a datacenter. This “direct attach copper” (DAC) configuration further reduces the cost of the module by removing the transmit and receive optical subassemblies. SFP+ optical modules typically dissipate less than 1 W of power, while DAC versions have no active power-consuming components.

The key features enabling DAC connections of over 10m are i) electronic dispersion compensation (EDC), ii) 10G transmit pre-emphasis, and iii) forward error correction. NetLogic Microsystems has demonstrated up to 20m transmission using these three features. Other companies with similar devices include Broadcom and AMCC, although neither has shown equivalent DAC distances at similar power dissipation numbers.

While SFP+ is rapidly gaining traction for 10GbE HPC datacenter applications due to its lower cost and power characteristics, the alternative 10GBASE-T technology for 10GbE transmission over Category 6/7 twisted-pair copper cabling has been lagging in gaining market acceptance. 10GBASE-T has proven to be a challenging technology to implement in a cost- and power-effective way, because of the complexity of the signal processing required to overcome the bandwidth limitations and noise characteristics of the twisted-pair medium.

Aside from its longer link distance, 10GBASE-T suffers from several inherent disadvantages relative to SFP+ DAC. The most significant is power. Even using the latest IC process technology, 10GBASE-T PHYs dissipate around 6 W of power. This has effectively ruled out current 10GBASE-T offerings as a viable technology for the emerging generation of dual-port NIC cards and high-density switches, relegating them to single-port adapters and uplinks.

Another significant disadvantage of 10GBASE-T is its latency of approximately 2 µsec, which severely limits its applicability for the IPC and storage workloads found in HPC datacenters. The latency of SFP+ is less than 0.1 µsec for all 10GbE standards, and much less than that for datacenter applications. On the power side, the combination of an SFP+ module and PHY for datacenter applications dissipates 1-1.5 W, depending on reach.

Next generation 10GBASE-T PHY development is underway today with several companies working on solutions to break the 4W per port power dissipation barrier. It remains to be seen how HPC environments adopt these solutions when they are available. Trade-offs include cost, power dissipation, latency and distance. In the meantime, short (15m and less), low-cost copper connections within the server rack and from server to end-of-row switch can be implemented with DAC and short-range optical interconnects.

About the Author

Saqib Jang is founder and principal at Margalla Communications, a Woodside, Calif.-based strategic and technical marketing consulting firm focused on storage and server networking. He can be contacted at [email protected].
