February 24, 2010
The Rise of HPC Cluster Computing
While the HPC market is expected to show a revenue dip for 2009, growth should resume in 2010, and HPC will remain a bright spot in the overall IT market. The most important feature of this growth trend is that it will continue to be fueled primarily by purchases of Linux cluster systems priced under $250,000. Cluster computing systems, built from separate compute nodes using standard component technologies, have caused disruptive changes in the HPC market.
As cluster component technologies have improved and buyers have become more confident running clusters, they have inevitably redirected capital once earmarked for large custom systems toward larger cluster systems. These much larger clusters, often with thousands of processors, present opportunities for huge gains through improved parallel performance, and with them an order-of-magnitude improvement in return on investment (ROI). Algorithm and application tuning is often required to realize these benefits, but so are the right cost, bandwidth, message rate, and latency characteristics in the cluster interconnect.
One consequence of this range of networking requirements is that the leading interconnects in HPC are Gigabit Ethernet (based on the ubiquitous Ethernet networking standard) and InfiniBand (delivering upwards of 10X the performance of GbE). Both show significant deployment in HPC: the latest TOP500 list of HPC systems includes 259 Gigabit Ethernet-based systems compared to 181 InfiniBand-connected systems. Deployment of 10 Gigabit Ethernet (10GbE) as a cluster interconnect is now emerging, and its price has been falling as shipment volumes grow. The combination of 10X the performance of GbE and the ease of deployment that comes with its Ethernet heritage positions 10GbE for a bright future as a cluster interconnect.
As cluster systems have grown, so has the total amount of data in play in the average parallel HPC application, with significant implications for HPC storage systems. Storage systems need to deliver the best possible bandwidth and latency characteristics. HPC storage systems have themselves become increasingly clustered and parallel, as well as network-attached and accessible from all nodes of the cluster through the interconnect. In this context, demand for interconnect solutions that support a converged storage and cluster interconnect fabric is expected to grow significantly.
10GbE iWARP Overview and Value Proposition
For years, Ethernet has been the de facto standard LAN for connecting users to each other and to network resources. Ethernet sales volumes make it unquestionably the most cost-effective datacenter fabric to deploy and maintain. The latest generation of Ethernet, 10 Gigabit Ethernet (10GbE), offers a 10 Gbps data rate, which simplifies growth for existing data networking applications while removing the bandwidth barriers to deployment for highest-performance HPC clustering and storage networking.
Achieving 10GbE performance for latency-sensitive HPC communications has required solving Ethernet's long-standing overhead problems, problems that in slower Ethernet generations were adequately overcome by steadily increasing CPU clock speeds.
Enter 10GbE iWARP
The iWARP extensions to TCP/IP focus on eliminating the three major sources of networking overhead -- transport (TCP/IP) processing, intermediate buffer copies, and application context switches -- that collectively account for nearly 100 percent of CPU overhead related to networking. Specifically, iWARP implements a number of mechanisms to provide a low-latency means of passing RDMA over Ethernet.
The iWARP extensions reduce CPU overhead, memory bandwidth utilization, and latency through a combination of offloading TCP/IP processing from the CPU, eliminating unnecessary buffering, and dramatically reducing expensive operating system calls and context switches. In effect, data management and network protocol processing move onto an accelerated RDMA over TCP/IP 10 Gigabit Ethernet adapter, or R-NIC.
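To make these mechanisms concrete, the sketch below shows a one-sided RDMA write through the OpenFabrics verbs API, the standard Linux programming interface exposed by iWARP R-NICs (and InfiniBand HCAs). It is a minimal illustration, not code from the white paper: connection setup (for example via librdmacm) is assumed to have already produced the connected queue pair, the peer's buffer address and rkey are assumed to have been exchanged out of band, and post_rdma_write and its parameters are hypothetical names used only for this example.

/* Minimal sketch of zero-copy RDMA through the OpenFabrics verbs API.
 * Assumes a connected queue pair `qp` and an out-of-band exchange of the
 * peer's registered buffer address and rkey.  Build with: gcc ... -libverbs */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                    uint64_t remote_addr, uint32_t remote_rkey)
{
    const size_t len = 4096;
    char *buf = malloc(len);
    memset(buf, 0x5a, len);

    /* Register the buffer so the adapter can DMA directly from it --
     * this is what removes the intermediate kernel buffer copy. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return -1; }

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = len,
        .lkey   = mr->lkey,
    };

    /* One-sided RDMA write: the adapter places the data directly into the
     * remote application's registered memory, with no receive-side CPU
     * involvement and no context switch on the data path. */
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = remote_rkey;

    struct ibv_send_wr *bad_wr = NULL;
    if (ibv_post_send(qp, &wr, &bad_wr)) { perror("ibv_post_send"); return -1; }

    /* Reap the completion from the send completion queue (busy-poll for brevity). */
    struct ibv_wc wc;
    while (ibv_poll_cq(qp->send_cq, 1, &wc) == 0)
        ;
    ibv_dereg_mr(mr);
    free(buf);
    return wc.status == IBV_WC_SUCCESS ? 0 : -1;
}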
R-NICs can reduce CPU utilization for 10 Gbps transfers to less than 10 percent and can reduce the host component of end-to-end latency to as little as 5-10 microseconds. High port-count 10GbE switches are available that deliver HPC-class latency performance in the hundreds of nanoseconds.
InfiniBand Overview and Value Proposition
InfiniBand is an I/O architecture designed to increase the communication speed between CPUs, devices within servers and subsystems located throughout a network. The original goal behind the release of the InfiniBand specification by the InfiniBand Trade Association was to address the mismatch between the speed of CPUs and the PCI I/O bus, as well as other deficiencies of the PCI bus, including bus sharing, scalability, and fault tolerance.
InfiniBand is a point-to-point, switched I/O fabric architecture. The devices at each end of a link have full access to the communication path. To traverse the network beyond a single link, switches come into play: by adding switches, multiple points can be interconnected to create a fabric. As more switches are added to a network, the aggregate bandwidth of the fabric increases. By adding multiple paths between devices, switches also provide a greater level of redundancy.
A single InfiniBand link supports 2.5 Gbps in each direction per connection. InfiniBand also supports double (DDR) and quad data rate (QDR) speeds, for 5 Gbps or 10 Gbps respectively, at the same data-clock rate. InfiniBand links use 8B/10B encoding -- every 10 bits sent carry 8 bits of data -- which means the net data transmission rate is four-fifths the raw rate. Thus single, double, and quad data rate links carry 2, 4, or 8 Gbps of useful data, respectively.
A quad data rate 12X link therefore carries 120 Gbps raw, or 96 Gbps of useful data. At present, most systems use 4X links at 10 Gbps (SDR), 20 Gbps (DDR), or 40 Gbps (QDR). However, InfiniBand QDR performance is bounded by the 26 Gbps PCIe Gen2 throughput limitation.
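For reference, all of these figures follow from one relation (a worked example added here, not from the original article): the useful data rate is the lane count times the per-lane signaling rate, scaled by the 8B/10B coding efficiency.

\[
\text{data rate} = N_{\text{lanes}} \times R_{\text{lane}} \times \tfrac{8}{10},
\qquad \text{e.g.}\quad 4 \times 10\ \text{Gbps} \times 0.8 = 32\ \text{Gbps (4X QDR)},\quad
12 \times 10\ \text{Gbps} \times 0.8 = 96\ \text{Gbps (12X QDR)}.
\]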
Latency performance of InfiniBand SDR and DDR switch chips is around 200 nanoseconds. InfiniBand Host Channel Adapters (HCAs) are rated at 1-3 microseconds (although effective application-level performance is a different matter).
High-end clustering architectures have provided the main opportunity for InfiniBand deployment. Using the InfiniBand fabric versus Gigabit Ethernet as the cluster inter-process communications (IPC) interconnect typically boosts cluster performance and scalability while improving application response times. InfiniBand also provides exceptional scalability and failover in comparison to Gigabit Ethernet. In short, compared to Gigabit Ethernet, InfiniBand stands out in providing the mechanisms necessary to support the demanding requirements of high-end clustering.
iWARP and InfiniBand Comparative Review
Because it is layered on top of TCP, iWARP is fully compatible with existing datacenter infrastructure: existing Ethernet switching equipment can carry iWARP traffic out of the box. In comparison, deploying InfiniBand requires environments where two separate network infrastructures are installed and managed, along with specialized InfiniBand-to-Ethernet gateways for bridging between the two infrastructures.
10GbE infrastructure is available from a range of incumbent and startup vendors. Intel, Broadcom, and Chelsio provide 10GbE iWARP adapters, while 10GbE switches are available from a broad range of vendors including Cisco, HP, IBM, BLADE Network Technologies, Extreme, Force10, Arista, and Voltaire. InfiniBand host channel adapter and switch silicon is only available from two vendors (Mellanox and QLogic), who in turn have signed up a number of OEMs to carry adapter and switching systems.
Both interconnects offer equivalent operating system support. The OpenFabrics software stack, which is fully integrated into the Linux distributions from Novell and Red Hat, fully supports both 10GbE iWARP and InfiniBand.
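As a small illustration of that common software interface (a sketch added for this article, not from the white paper), the same verbs call enumerates iWARP and InfiniBand devices alike; the transport shows up only as an attribute of the device:

/* List RDMA-capable devices and their transport type via libibverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        const char *transport =
            devs[i]->transport_type == IBV_TRANSPORT_IWARP ? "iWARP" :
            devs[i]->transport_type == IBV_TRANSPORT_IB    ? "InfiniBand" :
                                                             "unknown";
        printf("%-16s %s\n", ibv_get_device_name(devs[i]), transport);
    }
    ibv_free_device_list(devs);
    return 0;
}

On a host with both adapter types installed, both would simply appear in the list, and application code written to the verbs interface runs over either.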
10GbE iWARP leverages its Ethernet heritage to also accelerate emerging Ethernet-based storage protocols, including file storage via NFS-RDMA, which is fully supported by the Linux OFED stack. The Linux OFED stack also enables 10GbE iWARP to support Lustre networking (LNET) out of the box. In addition, 10GbE iWARP adapters can provide concurrent, native support for standard Ethernet protocols such as NFS, CIFS, and iSCSI. In comparison, InfiniBand has seen minimal deployment for server-to-storage communications, whether for file or block storage.
Regarding pricing, major server vendors are starting to add a 10 Gigabit Ethernet chip to the motherboard, known as LAN-on-Motherboard (LOM). NIC prices will continue to drop as LOM technology lets NIC vendors reach the high volumes they need to keep costs down, which in turn will drive switch port prices down as well. InfiniBand, on the other hand, has reached a mature market position, so reductions in the pricing of InfiniBand products will be relatively gradual.
Large-scale clusters built using 10GbE iWARP technology and high port-count 10GbE switches are gaining ground, and cluster scalability is no longer viewed as inhibiting 10GbE deployment. InfiniBand technology is an established interconnect for building large node-count clusters.
From a roadmap standpoint, the Ethernet market is moving forward aggressively to develop and implement 40 Gigabit and 100 Gigabit standards. These standards are expected to be ratified during 2010, and initial implementations based on them should be shipping from a range of vendors in the blade server and Ethernet networking switch markets within the next 2 to 3 years.
Converged Enhanced Ethernet
The IEEE has been developing standards collectively referred to as "Data Center Bridging" (DCB) or "Converged Enhanced Ethernet" (CEE). This refers to high-speed Ethernet (currently 10 Gbps, with a clear path to 40 Gbps and 100 Gbps), plus a number of new features. The main new features are priority-based flow control, per-class bandwidth management, and congestion notification.
The first two features allow an Ethernet link to be split into multiple "virtual links" that operate independently: bandwidth can be reserved for a given virtual link, and per-virtual-link flow control ensures that certain traffic classes do not overrun their buffers, thus avoiding dropped packets. The congestion notification capability means senders can be told to slow down, which avoids the congestion spreading that such flow control can otherwise cause.
CEE was developed primarily for use with Fibre Channel over Ethernet (FCoE). FC requires a very reliable network -- it simply does not work if packets are dropped because of congestion -- so CEE provides the ability to segregate FCoE traffic onto a "no drop" virtual link.
The roadmap initiatives in the InfiniBand space consist of QDR, EDR (2011), and RDMA over CEE. However, these roadmap initiatives suffer from the same limitations that have been a traditional challenge for InfiniBand, namely, limited vendor support.
RoCEE Overview and Value Proposition
Mellanox, the leader in the InfiniBand market, is behind the emerging RDMA over Converged Enhanced Ethernet (RoCEE) protocol proposal. RoCEE is designed to allow the deployment of RDMA semantics on Converged Enhanced Ethernet fabric by running the IB transport protocol using Ethernet frames.
Mellanox's RoCEE proposal was motivated by the desire to create a protocol analogous to FCoE for Ethernet-based cluster networking: to take the InfiniBand transport layer and package it into Ethernet frames, rather than use the iWARP protocol for Ethernet-based high-performance cluster networking. But there are a number of challenges associated with this proposal:
First, one of the major motivations behind the RoCEE proposal is that it offers the fastest path forward for an Ethernet-based alternative to InfiniBand. However, this ignores the fact that iWARP adapters are already shipping from multiple vendors, including Intel, Chelsio, and Broadcom. In addition, iWARP will automatically leverage the performance benefits of CEE, since CEE support will be ubiquitous across 10GbE server adapter and LOM implementations, iWARP and non-iWARP alike.
Second, the idea that an InfiniBand over Ethernet (IBoE) specification will be quick or easy to develop flies in the face of the experience with FCoE. While FCoE sounded simple in concept, the standards work turned out to take at least three years. IBoE is more complicated to specify, and fewer resources are available for it, so a realistic view is that a true standard is very far away.
Last, RoCEE proponents point to the performance overhead of iWARP's reliance on the TCP/IP protocol. However, this does not take into account the efficiency of silicon-based implementations of 10 Gbps TCP/IP. Moreover, as noted above, iWARP is positioned to automatically take advantage of CEE as that technology gains ubiquity in 10GbE server LOM and adapters.
In summary, RoCEE is unproven, and its deployment faces significant hurdles, including standardization and adoption by applications and upper-layer protocols. In addition, RoCEE depends on the deployment of 10GbE CEE infrastructure; currently only one vendor (Cisco) offers CEE switches, and at relatively high price points.
About the Author
Saqib Jang is founder and principal at Margalla Communications, a Woodside, Calif.-based strategic and technical marketing consulting firm focused on storage and server networking.
This article is an excerpt from a Margalla Communications white paper entitled High-speed Remote Direct Memory Access (RDMA) Networking for HPC: Comparative Review of 10GbE iWARP and InfiniBand available at www.margallacomm.com.