Next-Generation 10GbE Switching

By Saqib Jang

June 22, 2007

High-end computing environments for high-performance cluster computing (HPCC) and internet data center (IDC) applications have embraced the scale-out server model. As a result, they have witnessed their data centers expand to include hundreds to thousands of servers running diverse operating systems and applications. As the number of servers has grown, so has the cost of operations, which includes people, space, power, and cooling.

In response, IT organizations are increasingly turning to technologies like utility computing, virtualization, and data center grids to transform data center resources from monolithic systems into agile, shared computing resource pools, which consist of uniform components that can be dynamically aggregated, tiered, provisioned, and accessed.

Harnessed together, these technologies have the potential to dramatically increase performance levels, maximize return on investment, and allow IT organizations to rapidly deploy and scale resources on-demand.

Scalable 10GbE Networking For High-Performance Data Centers

Central to the vision of virtualization and automation of HPCC and IDC data center resources is the deployment of grid computing environments with hundreds to thousands of servers running tightly coupled applications that are highly sensitive to client/server and server-to-storage communications bandwidth, as well as to inter-process message latency.

While a number of specialized interconnect technologies, such as InfiniBand and Fibre Channel, are available for building high-performance data center networks, Gigabit Ethernet remains the dominant networking technology in HPCC and IDC environments. The continued evolution toward 10 Gigabit Ethernet (10GbE) server networking, with the promise of rapidly declining prices and a stream of innovations such as power- and space-efficient network interface controllers (NICs) for dense server form factors, along with support for RDMA, TCP offload engine (TOE), iSCSI, and I/O virtualization capabilities, makes it the obvious upgrade path for high-performance data centers.

Chelsio Communications, a leading provider of 10GbE adapter solutions, offers a family of ‘unified wire’ adapters targeted at the massive installed base of Gigabit Ethernet networking infrastructure. Chelsio’s adapters enable the replacement of disparate fabric technologies such as Fibre Channel and InfiniBand in a wide range of applications, including NAS filers, SAN arrays, high performance cluster computing, and blade servers.

“Chelsio’s focus is on delivering the promise of the unified wire, enabling the convergence of server networking, storage networking and cluster computing interconnect onto a single 10GbE fabric,” said Kianoosh Naghshineh, president and CEO of Chelsio. “Ethernet is the fabric of convergence and we have successfully developed a broad family of adapter solutions that deliver all the required critical features, such as low latency, high transaction rate, and reduced cost.”

While vendors such as Chelsio are driving major improvements in 10GbE adapter cost and performance, 10GbE deployment continues to be limited to aggregation of gigabit ports and inter-switch connectivity in HPCC and IDC environments. A number of obstacles have to be overcome in order to fulfill the promise of broad-based 10GbE data center networking within high-end data centers. The most important among these is the availability of affordable switching infrastructure delivering the scalability, price, performance, and resiliency required for end-to-end 10GbE deployment within high-end IDC and HPCC data centers.

Scaling Ethernet Networks

While the 10GbE switching segment has seen intensifying competition, with new switch chips and system products arriving every quarter, switch port density remains a challenge, especially compared to switching options based on Gigabit Ethernet, Fibre Channel, and InfiniBand. The biggest non-blocking 10GbE LAN switch has 128 ports and is priced at thousands of dollars per port. Even with Gigabit Ethernet, the largest non-blocking switch has only about 600 ports.

While the bandwidth of a Layer 2 Ethernet network is limited to that of the largest Ethernet switch in the core, the Fibre Channel and InfiniBand protocols and products enable fabrics (multi-stage meshes of switches) with non-blocking throughput across thousands of ports. The problem with Fibre Channel and InfiniBand, however, is that they don’t do IP very well and they don’t do Ethernet at all. Fibre Channel poses a further challenge: with low-cost scale-out servers, the cost of a Fibre Channel host bus adapter (HBA) is commonly more than 25 percent of the cost of the server.

So what is it about Ethernet that prevents multi-path mesh fabrics when both InfiniBand and Fibre Channel support them? It is a consequence of the plug-and-play nature of Ethernet. Ethernet frames, per the standards, keep no history: a frame does not track elapsed time or count hops. This means that if Ethernet switches are connected in a loop (which multi-path meshes typically contain), a packet forwarded from the output port of an Ethernet switch may ultimately arrive at an input port of the same switch and, without any other mechanism to resolve the situation, the packet will circulate indefinitely, consuming bandwidth and ultimately creating a fully congested network unable to forward any other traffic.
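
The frame-never-expires behavior can be sketched with a toy forwarding simulation. This is a hypothetical three-switch ring, not any real switch's forwarding logic; the hop cap exists only because the simulation, unlike the network, has to stop:

```python
# Toy model: three switches wired in a one-way ring, flooding a frame.
# Ethernet frames carry no TTL or hop count, so nothing in the frame
# itself ever stops the loop; we impose an artificial 10-hop cap.
links = {"A": ["B"], "B": ["C"], "C": ["A"]}

def flood(start, max_hops=10):
    """Forward the frame until the artificial cap, counting how often
    it passes its origin switch. Ethernet itself would never stop it."""
    revisits, switch = 0, start
    for hop in range(1, max_hops + 1):
        switch = links[switch][0]   # flood out the (only) other port
        if switch == start:
            revisits += 1           # the frame is back where it began
    return revisits

print(f"frame passed its origin switch {flood('A')} times in 10 hops")
```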

To avoid this, Ethernet switches run the spanning tree protocol (STP) during initialization, and periodically thereafter to accommodate changes in the physical network. Spanning tree automatically detects multiple paths and disables any that could create a loop.
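
The essence of the computation can be sketched as a breadth-first tree rooted at an elected bridge. The four-switch mesh and the numeric bridge IDs below are hypothetical, and real STP exchanges BPDUs and weighs port costs rather than running a centralized search, but the outcome is the same: a loop-free tree, with every remaining link blocked:

```python
from collections import deque

# Hypothetical four-switch mesh, keyed by bridge ID.
mesh = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2, 4},
    4: {2, 3},
}

def spanning_tree(topology):
    """Keep one tree of paths to the root bridge; block everything else."""
    root = min(topology)                  # lowest bridge ID wins the election
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        sw = queue.popleft()
        for nbr in sorted(topology[sw]):
            if nbr not in seen:           # first path found stays active
                seen.add(nbr)
                active.add(frozenset((sw, nbr)))
                queue.append(nbr)
    all_links = {frozenset((a, b)) for a in topology for b in topology[a]}
    return active, all_links - active     # blocked links would form loops

active, blocked = spanning_tree(mesh)
print(f"{len(active)} active links, {len(blocked)} blocked")
```

Note that a four-switch tree can keep only three links active, so the two blocked links represent bandwidth the mesh physically has but cannot use.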

Theoretically, you can build a mesh of spanning tree-enabled Ethernet switches, but because of the single-path constraint, the network is subject to numerous congestion points, leading to dropped packets and throttled injection rates at transmitting nodes, which in turn means less-than-wire-speed, non-deterministic performance. In other words, the spanning tree algorithm does not allow Layer 2 Ethernet switches to be used to build multi-path mesh fabrics, which is why the overall bandwidth of a Layer 2 Ethernet network is limited to the capacity of the largest switch in the core.

To overcome this limitation of Layer 2 Ethernet switches, it is necessary to use Layer 3 (and higher) IP switches. But using Layer 3 10GbE switches, even switches of 128 ports, severely impacts the price and performance metrics (both bandwidth and latency) critical in the data center. According to IDC, Layer 3 switch ports cost, on average, five times as much as Layer 2 ports.

Layer 3 routing functionality requires complex processing based on store-and-forward methods, which precludes the use of “cut-through” switches. The requirement for store-and-forward and advanced processing introduces significant switch latency, makes performance non-deterministic, and greatly increases the cost of these switches.

Converged I/O Using InfiniBand: Congestion Collapse?

In contrast, InfiniBand fabrics do not have this problem because they include a Subnet Manager that is used to discover the physical topology and set up all the paths by configuring the forwarding tables in each InfiniBand switch.

InfiniBand’s fast point-to-point communication has made it a favorite in high-end HPCC environments. HPC applications typically perform short, discrete calculations on clusters of servers. Because servers complete their calculations quickly and must be refreshed with new work just as quickly, constantly feeding them additional data is a key part of keeping an HPC farm running optimally. InfiniBand’s high bandwidth and low latency make it ideal for passing arguments between HPC servers.

But InfiniBand has a major problem that is now becoming apparent as it is considered for broad-based deployment within HPCC and IDC data centers: InfiniBand switches cannot drop packets to deal with congestion. As a result, switch buffers can fill up, blocking upstream switches and even stalling flows that are not contending for the congested link.

For small-scale application environments with predictable load, this is not a problem. However, for large-scale deployments spanning hundreds to thousands of servers and supporting a range of applications, the lack of congestion control is potentially disastrous. Data presented at the OpenFabrics conference held in April 2007 in Sonoma shows that congestion in InfiniBand networks can occur with as few as 24 servers, dramatically increasing latency and decreasing throughput.

A converged fabric requires significant congestion control to prevent congestion collapse as I/O increases. This is especially important for enterprise-class components such as databases and application servers that may be deployed in mission-critical, customer-facing IDC environments with variable demand. A financial institution that runs a key trading application on a fabric without sufficient congestion control may suffer severe performance degradation or a complete fabric collapse during a heavy trading day.

Need for Low Latency

While latency has been a much-discussed aspect of high-end data center networking, it would be helpful to review the different ways latency is measured as well as its impact on real-world HPCC and Enterprise applications.

First, latency means different things to different people. Application vendors measure how long it takes from requesting data to receiving it; if the request is for a big block of data, the pipe size obviously has an impact. Adapter vendors measure the round-trip time for a small packet between two back-to-back servers and divide it by two to estimate the one-way latency. Low latency is better than high latency, but this measure won’t always predict application performance.
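
The adapter vendors' method can be sketched in a few lines: bounce a small packet off an echo server and halve the round trip. The loopback connection here stands in for two back-to-back servers, so the absolute numbers are illustrative only:

```python
import socket
import threading
import time

def echo_server(srv):
    """Accept one connection and echo the probe straight back."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(64))

# Stand-in for the second back-to-back server.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
t0 = time.perf_counter()
cli.sendall(b"x" * 64)           # small probe packet
cli.recv(64)
rtt = time.perf_counter() - t0
cli.close()

print(f"RTT {rtt * 1e6:.1f} us -> one-way estimate {rtt / 2 * 1e6:.1f} us")
```

As the article notes, dividing RTT by two assumes the two directions are symmetric, which is exactly the kind of assumption that special test gear exists to avoid.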

Network switch vendors measure the time from the head of a frame entering one switch to the head of the frame exiting the switch. What this ignores is the impact of bandwidth on application performance. For example, if a DBMS cluster is moving 32 KB data blocks around, they will reach their destination a lot faster through a 10 Gbps pipe than through a 1 Gbps one.
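
The arithmetic behind that claim is simple serialization delay. A quick calculation (ignoring framing overhead) shows the 32 KB block clearing a 10 Gbps wire in roughly a tenth of the time:

```python
# Serialization delay for a 32 KB data block at two link speeds,
# ignoring protocol framing overhead.
BLOCK_BITS = 32 * 1024 * 8

def wire_time_us(bits, gbps):
    """Microseconds for `bits` to clear a link of `gbps` Gbps."""
    return bits / (gbps * 1e9) * 1e6

t_1g = wire_time_us(BLOCK_BITS, 1)     # roughly 262 us
t_10g = wire_time_us(BLOCK_BITS, 10)   # roughly 26 us
print(f"1 GbE: {t_1g:.1f} us, 10 GbE: {t_10g:.1f} us")
```

Both figures dwarf the sub-microsecond per-switch latencies vendors quote for small frames, which is the article's point: for bulk block traffic, pipe size dominates.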

However, scalable support of high-performance applications running on server grids is driving the need for dynamic, fine-grained calibration of end-to-end latency within data centers. Millisecond-scale latency can be measured across the WAN, but that is nowhere near good enough in the data center. Under very controlled conditions, with very expensive test equipment, the latency across a data center network can be measured one port at a time. In the real world, however, nobody knows what the latency of their data center network is. Server-to-server pings can generate round-trip time (RTT) latencies, but microsecond-scale, let alone nanosecond-scale, one-way latencies cannot be measured without special test gear. Typically, all that can be done is to measure the application response time and guess where the time is being consumed.

The importance of latency is also often debated. In the view of application vendors, the latency worth worrying about is measured in hundreds of milliseconds, so fixating on LAN latency is, to them, a waste of time. What they fail to take into account is that an application-level transaction may include hundreds of read/write acknowledgements. In such cases, high “stack-up” latencies can reduce application performance.

Latency is becoming more important because, in scale-out datacenters, the LAN is effectively replacing the system bus of large multiprocessor systems. Further, in the grid computing vision, servers across the enterprise network can, in theory, be dynamically added to a DBMS cluster. But this will only deliver acceptable performance if the combined latency across the multiple switch hops is consistently low.

The need to optimize end-to-end network availability and performance, including latency, is also being driven by the evolution of server virtualization to enable data center automation. For example, VMware’s VMotion capability allows application-ready software modules to be moved seamlessly between physical and virtual computing resources, dynamically provisioned on one or more servers, autonomously updated and patched according to user-definable compliance and security policies, and scheduled, executed, and tracked according to logical sequences, events, dependencies, and geographic hierarchies.

During a VMotion migration, the memory contents of the running virtual machine are transferred over the network to the target server. Having a robust, high-performance network available for this task is critical for ensuring that VMotion operations can complete in a timely manner and result in a successful migration. The bandwidth, latency, and availability of the network determine the effectiveness of the dynamic load-balancing and stateful failover capability enabled by VMotion.
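
A back-of-the-envelope calculation shows why bandwidth matters here. The 4 GiB memory size is hypothetical, and the model ignores protocol overhead and the dirty-page retransmission that lengthens real migrations, but the tenfold gap between link speeds carries straight through:

```python
# Time to transfer a running VM's memory image at line rate, as a
# lower bound on migration time (real migrations also re-send pages
# dirtied during the copy).
def migration_seconds(mem_gib, gbps):
    bits = mem_gib * 2**30 * 8
    return bits / (gbps * 1e9)

for gbps in (1, 10):
    t = migration_seconds(4, gbps)   # a hypothetical 4 GiB virtual machine
    print(f"{gbps:>2} Gbps: {t:.1f} s")
```

Cutting a bulk copy from tens of seconds to a few seconds is the difference between a migration window operators can schedule casually and one they have to plan around.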

High Cost of Bandwidth

Moore’s Law doesn’t exactly predict performance improvements, but a corollary to transistors doubling every 24 months does. Between clock rate improvements and architectural improvements, CPU performance has typically doubled about every 18 months. Although clock rate improvements are slowing down, multi-core processors and better chip I/O are helping keep performance gains on the same trajectory. What isn’t so obvious is that this means that every five years or so, processor performance goes up 10X.
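
The "10X every five years" figure falls directly out of the 18-month doubling period:

```python
# Compound performance growth: doubling every 18 months means
# 60 months compounds to 2 ** (60 / 18), just over 10x.
def perf_gain(months, doubling_period=18):
    return 2 ** (months / doubling_period)

print(f"5-year gain: {perf_gain(60):.1f}x")
```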

The last big change in Ethernet speed got well underway by 1999, when IT departments were deploying Gigabit Ethernet server connections using the recently ratified 1000BASE-T standard for Category 5 UTP cable. That year, several GbE startups were hitting their stride as newly minted public companies or business units of incumbent vendors, their LAN switch sales growing at dizzying rates. Although it was not until 2003 that shipments of Gigabit Ethernet LAN switch ports equaled those of 100 Mbps Ethernet, by then most IT organizations had plenty of servers connected with Gigabit Ethernet.

In servers, the transition to GbE NICs is already complete, while 10GbE remains in the early stages of adoption, with a range of incumbent and startup vendors offering 10GbE server adapters. While 10GbE NIC shipments are growing from a small base, prices are falling rapidly, and the impending volume shipments of 10GBaseT and 10GbE-based blade server products foretell a dynamic and high-growth market.

However, other than the market leader, LAN switch suppliers likely lack the ability to invest in the R&D needed to develop the new architectures, chips, and software required to bring 10GbE price/performance to the point where it becomes a viable alternative for converged data center networking. The high per-port cost of incumbents’ enterprise-class LAN switches is inherited from their architectures, which are based either on buffered single-stage crossbars or on a WAN-oriented 10GbE switch design heritage incorporating sophisticated multi-class quality of service (QoS) capabilities and large, high-performance buffers for handling longer distances.

With every generation of LAN switching, new products have come from startups. In fact, the market leader has bought at least one such company every time. But the dotcom/telecom bubble aftermath has stifled 10GbE LAN switch startup investments. For all practical purposes there aren’t any.

Next-Gen 10GbE LAN Switching For Dynamic, Scale-Out Computing

While a decade ago Ethernet switching was promoted as the answer to the latency, complexity, and cost problems caused by using Layer 3 routers to connect compute farms, LAN switch market incumbents have come to advocate Layer 3 routing over Ethernet switching, likely to gain higher revenues per connection and to protect their turf from smaller competitors. The strongest arguments in favor of routing are that it eliminates broadcast storms and avoids the spanning tree protocol-related problems described above.

But Layer 3 protocols add their own complexity and restrict the flexibility to dynamically reconfigure servers and networks that utility computing visionaries promote. For example, it is a lot easier to add a server to an application cluster, or to enable dynamic migration of virtual servers, within the same Layer 2 subnet. A proliferation of subnets leads to redundant servers for every subnet scattered around the datacenter.

What is needed is a new approach, coming most likely from innovative startups. Incumbents are too focused on protecting their revenue streams and proprietary turf, which is why the adventuresome are, by process of elimination, looking outside the Ethernet world for the capabilities they need.

One of the start-ups looking to address the limitations of 10GbE LAN switching technology is Woven Systems, a Santa Clara, CA-based startup developing an Ethernet-based mesh network product.

“Woven’s focus is to deliver the best features of Fibre Channel and InfiniBand on a 10G Ethernet fabric,” says Dan Maltbie, founder and Sr. VP of Engineering, Woven Systems.

Woven is developing Layer 2-based 10G Ethernet data center switches that use special algorithms to deliver a resilient multipath Ethernet fabric, combining the low latency and scalability of InfiniBand with the reliability of Fibre Channel. “Multiple paths can be established among switches in the fabric, allowing bandwidth to be allocated more dynamically over those paths, since traffic lanes are not shut down as in spanning tree-based Ethernet,” Maltbie says.

In summary, IT capacity and requirements are on a collision course for high-performance computing and internet datacenters. Industry trends such as multi-core CPUs, blade computing, and virtualization are significant advances that have increased datacenter capacity. But the demand for IT compute and storage capabilities is expected to increase at an even faster rate. Ethernet, the staple of datacenter networking infrastructure, has kept pace with the availability of a range of 10GbE server networking adapters, but 10GbE switches have until recently lacked the performance, affordability, scalability and reliability required for pervasive 10GbE deployment within enterprise data centers.

—–

About the Author

Saqib Jang is founder and principal at Margalla Communications, a Woodside, CA-based strategic and technical marketing consulting firm focused on storage and server networking. He can be contacted at [email protected].
