Network-based Processing Versus Host-based Processing: Lessons Learned

By Gilad Shainer

May 17, 2010

Introduction

CPU clock speeds have remained essentially constant over the last several years, so the number of CPUs used in high-end systems has been rising rapidly to keep up with the performance boosts expected from Moore's law. System size on the TOP500 list has grown accordingly: in November 2009 the top ten systems averaged 134,893 cores, with five systems larger than 100,000 cores. This rapid increase in system size, and the associated increase in the number of compute elements used in a single user job, makes it ever more urgent to deal with the system characteristics that impede application scalability.

By providing low latency, high bandwidth and extremely low CPU overhead, InfiniBand has become the most widely deployed high-speed interconnect, replacing proprietary or low-performance solutions. The InfiniBand Architecture (IBA) is an industry-standard fabric designed to provide high scalability and efficient utilization of compute processing resources. InfiniBand scalability has already been proven on multiple large-scale systems on the TOP500 list: LANL “Roadrunner” (4K nodes and 130K cores), NASA (more than 7K nodes and 56K cores), NUDT “TianHe” (3K nodes and 72K cores), Jülich JuRoPa and HPC-FF (3K nodes and 30K cores), TACC (4K nodes and 63K cores) and Sandia “Red Sky” (5.4K nodes and 43K cores) are a few examples. All of them use an InfiniBand solution that provides network-based processing.

Network-based Processing Versus Host-based Processing

In general, connectivity solutions can be divided into multiple categories: standard (such as InfiniBand and Ethernet) versus proprietary (such as SeaStar and Quadrics), high speed versus low speed, and offloading (i.e., network-based processing) versus onloading (i.e., host-based processing). With offloading network solutions, the entire network transport is handled by the NIC or adapter, including error handling, data retransmission for reliable data transfer, and even more sophisticated communication operations such as MPI. Onloading network solutions rely on the host central processing units (CPUs) to perform every task related to data transfer between servers, or between servers and storage: data gathering, data packet creation, transport checks, reliability, physical-to-virtual memory translation, process protection (i.e., security) and more. Put simply, offloading networks free the CPU from handling server-to-server communications so that most cycles can be dedicated to the user applications, while onloading networks are no more than the proverbial string and two metal cans that we played with as children.

Why Onloading Solutions?

The motivation for onloading solutions, or for using a string and two metal cans, is the simplicity of building them. Since all network processing is done by the host, the NIC or adapter needs to include only a bridge between the host interface (in most cases today, PCI Express) and the network interface (InfiniBand, Ethernet, etc.), plus a buffer for shock absorption (which protects the network from bursts of data). As such, no major new technology needs to be developed, making such solutions less costly to build. The big drawback is the scalability and performance such solutions can provide within a system, measured in terms of overall system performance and productivity. As more overhead processing is done by the CPU, less CPU time is available for user applications, resulting in lower system performance and scalability. One example is a comparison between Ethernet and InfiniBand on the TOP500 list: since most Ethernet solutions require TCP (i.e., the transport) to be handled by the CPU, Ethernet-connected systems achieve on average only 50 percent efficiency, meaning that 50 percent of the system's capability cannot be utilized and is effectively wasted. InfiniBand-connected systems on the TOP500 list demonstrate up to 96 percent efficiency, thereby maximizing the CPU cycles available to the user application and hence the overall system return on investment.
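To put those efficiency numbers in perspective, the back-of-the-envelope sketch below shows how interconnect efficiency translates into delivered performance and into the extra peak capacity an onloaded system would need to match an offloaded one. The 100 Tflops peak figure is an illustrative assumption, not taken from any specific TOP500 entry.

    /* Back-of-the-envelope sketch (illustrative assumptions, not article data):
     * how interconnect efficiency translates into delivered performance. */
    #include <stdio.h>

    int main(void)
    {
        double peak_tflops = 100.0;  /* assumed peak performance of the system   */
        double eff_onload  = 0.50;   /* ~average Ethernet/onload efficiency cited */
        double eff_offload = 0.96;   /* best InfiniBand/offload efficiency cited  */

        double sustained_onload  = peak_tflops * eff_onload;
        double sustained_offload = peak_tflops * eff_offload;

        /* Extra peak capacity (nodes, power, space) the onloaded system would
         * need to match the offloaded system's sustained output. */
        double extra_factor = eff_offload / eff_onload;

        printf("Sustained (onload):  %.1f Tflops\n", sustained_onload);
        printf("Sustained (offload): %.1f Tflops\n", sustained_offload);
        printf("Peak capacity needed to match offload: x%.2f\n", extra_factor);
        return 0;
    }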

Offloading Solutions: Balance the System

Offloading network solutions eliminate the CPU overhead related to process-to-process communications, data transfer reliability, memory translation, process protection (or security), and data segmentation and reassembly. Moreover, offloading is the only way to counter the effect of system noise and jitter on application performance and scalability (e.g., by offloading MPI collective communications), and the only way to allow true overlap between computation and communication within the server.
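As a rough illustration of that overlap, here is a minimal MPI sketch (assuming a standard MPI library; the ring exchange and buffer sizes are arbitrary choices, not taken from the article) in which nonblocking transfers are posted and the CPU computes while an offload-capable adapter can progress the communication in hardware:

    /* Minimal sketch: overlapping computation with communication using
     * nonblocking MPI point-to-point calls. With an offloading adapter the
     * transfer can progress in hardware while the loop below runs; with an
     * onloading adapter the same CPU must also drive the transfer. */
    #include <mpi.h>
    #include <stdio.h>

    #define N (1 << 20)

    static double sendbuf[N], recvbuf[N], work[N];

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int next = (rank + 1) % size;          /* simple ring exchange */
        int prev = (rank - 1 + size) % size;
        MPI_Request reqs[2];

        MPI_Irecv(recvbuf, N, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Useful computation that does not depend on the incoming data;
         * ideally it hides the transfer time completely. */
        for (int i = 0; i < N; i++)
            work[i] = work[i] * 1.0001 + 1.0;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        if (rank == 0)
            printf("exchange complete on %d ranks\n", size);
        MPI_Finalize();
        return 0;
    }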

Scientific simulation codes frequently use collective communications. Offloading networks typically also include programmable capabilities that can be used to add special features, such as offloaded collective operations, and to address the needs of future simulation problems.

With the increasing demand for higher performance and scalability, offloading solutions are required in order to balance the growing number of CPU cores and to provide a solution that can maximize the platform's compute capability. Offloading solutions require sophisticated technology and advanced simulation in NIC or adapter design. Therefore, not many vendors have the knowledge and capabilities required to produce offloading networks.

The System Latency

User applications reside in user space, where no protection can be guaranteed for process data. Data movement therefore needs to involve an entity that ensures data from one process does not overwrite the memory space of another process by mistake, which would result in data corruption and security issues. That entity can be either the host CPU in kernel space or the networking adapter. If it is done by the CPU in kernel space, any user data needs to be copied there before being sent to the wire (i.e., a buffer copy), a user-to-kernel system call needs to be triggered, and a CPU interrupt is required as well. Furthermore, copying large messages adds further negative performance effects due to cache thrashing, TLB pressure and so on. All of this implies higher latency for data transfers, as can be seen in Figure 1, which compares the latencies of “write” transactions between two servers. The latency of onloading solutions can be up to 700 percent higher than that of offloading solutions (in this case, InfiniBand).
Figure 1: Latency comparison of “write” transactions between two servers.
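For readers who want to reproduce this kind of measurement, the sketch below shows the classic ping-pong loop commonly used to measure point-to-point latency between two servers. It is an illustrative example, not the article's actual benchmark, and the iteration count is an arbitrary assumption.

    /* Illustrative ping-pong latency loop; run with exactly 2 ranks. Whether
     * each message crosses the kernel (buffer copy, system call, interrupt)
     * or is handled entirely by the adapter is what separates the onload and
     * offload numbers in Figure 1. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;
        char byte = 0;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)   /* one-way latency = round-trip time / 2 */
            printf("latency: %.2f us\n", (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }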

One can question the RDMA write latency difference in light of the data provided by the different vendors for MPI latency. Both offloading-based and onloading-based solution providers promote about 1us latency for MPI transactions. As offloading solutions demonstrate around 1us latency for RDMA write and send transactions, it is obvious that their MPI latency will be in the same range. On the other hand, onloading solutions demonstrate 7us latency for RDMA write transactions, so how can vendors promote figures around 1us for MPI latency? The reason is that with onloading solutions, such MPI latency benchmarks send the data directly from user space to the network and write the data back from the network to user space, avoiding the buffer copy and the kernel-space memory mapping. While this can be done for artificial benchmarks, avoiding memory checking and process isolation in production use can result in data reliability and security issues. Those issues are critical in systems hosting many users, e.g., in cloud computing.

Network Message Rate

One of the well-known benchmarks, besides latency and throughput, is the network message rate, which is basically the network throughput divided by the message size, for small message sizes. In the case of onloading networks, this benchmark tests the ability of the CPU cores to create a network packet and send it through the two metal cans and the string. Assuming that the bridging between the host interface (PCI Express) and the network interface (InfiniBand, for example) is good enough to provide the maximum data speed of those interfaces, the more CPU cores are used for network packet creation, the more messages will be sent on the wire. One must remember, first, that in such benchmarks all of the CPU resources are being used for network packet creation, so no CPU is available for the user applications; and, second, that the same network packet is being sent to the wire over and over again, which does not reflect the real application situation where the data on the wire differs from one network packet to the next. In simple words, for onloading networks, message rate is a CPU benchmark, not really a network benchmark.
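The arithmetic behind the benchmark is simple, as the short sketch below shows. The 32 Gb/s figure is an assumed wire data rate (roughly the QDR InfiniBand data rate), used only to illustrate the throughput-divided-by-message-size relation.

    /* Sketch of the relation described above: message rate equals throughput
     * divided by message size for small messages. The link speed is an
     * illustrative assumption. */
    #include <stdio.h>

    int main(void)
    {
        double link_bytes_per_s = 32e9 / 8.0;   /* assumed wire throughput      */
        int sizes[] = { 2, 8, 64, 256, 1024 };  /* small message sizes in bytes */

        for (int i = 0; i < 5; i++) {
            double rate = link_bytes_per_s / sizes[i];   /* messages per second */
            printf("%5d-byte messages: %8.1f million msg/s at wire speed\n",
                   sizes[i], rate / 1e6);
        }
        return 0;
    }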

For offloading networks, the message rate benchmark really measures the network's ability to create data packets and send them to the target. In this case the CPU is not involved in the data transfer and therefore remains free for the user applications. Information about CPU availability is not mentioned in the various message rate benchmark results used in publications, as shown in Figure 2. Figure 2 compares the message rate of an InfiniBand offloading solution versus an InfiniBand onloading solution, and, as can be seen, no data point for CPU availability is provided. In fact, the CPU availability in this case is nearly zero for the onloading solution, which translates into no capability to run user applications, since all of the CPU cycles are being used to create network packets and send them out.
Figure 2: Message rate of an InfiniBand offloading solution versus an InfiniBand onloading solution.

Figures 3 and 4 provide the InfiniBand message rate comparison based on CPU availability, using comparison points of 50 percent and 85 percent CPU availability. At 50 percent, half of the CPU cycles were reserved for the user application and the other half were used for the network processing of the onloading network solution; this resembles the average capability of Ethernet networks, which provide around 50 percent CPU availability, or efficiency. In the 85 percent case, 85 percent of the CPU cycles were dedicated to the user applications, while only 15 percent could be used for packet creation; this represents the average capability of InfiniBand networks. As can be seen, the InfiniBand offloading solution maintains the same message rate capabilities, since it does not demand CPU cycles for network processing, while the message rate capability of the onloading solution is reduced dramatically. In production environments, CPU availability is critical for efficient usage of compute systems and for delivering the needed scalability. Offloading networks are critical to guarantee that those requirements are met.
Figure 3: InfiniBand message rate comparison based on CPU availability.

Figure 4: InfiniBand message rate comparison based on CPU availability.
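The behavior shown in Figures 3 and 4 can be approximated with a toy model: the onloading message rate scales with the CPU fraction left over for networking, while the offloading rate stays flat. The peak rates in the sketch below are assumed values for illustration, not the measured data behind the figures.

    /* Toy model (assumed numbers, not the article's measurements): if packet
     * creation on an onloading network consumes CPU cycles, the achievable
     * message rate scales with the CPU fraction left for networking, while an
     * offloading adapter keeps its full rate regardless. */
    #include <stdio.h>

    int main(void)
    {
        double offload_rate = 40.0;  /* assumed peak rate, million msg/s            */
        double onload_rate  = 40.0;  /* assumed peak rate with all cores networking */
        double app_cpu[] = { 0.0, 0.50, 0.85 };  /* CPU share reserved for the app  */

        for (int i = 0; i < 3; i++) {
            double net_cpu = 1.0 - app_cpu[i];
            printf("app CPU %2.0f%%: offload %.1f M msg/s, onload %.1f M msg/s\n",
                   app_cpu[i] * 100.0, offload_rate, onload_rate * net_cpu);
        }
        return 0;
    }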

Scalability and Productivity

Scalability and productivity are the real goals: to scale the system to meet the compute needs of today and tomorrow, and to maximize the return on investment, or the productivity, of the system. When one invests in the latest CPU technologies and a fast connection to host memory, it is critical to ensure that those resources can be fully utilized, and to connect them via high-performance, offloaded networking solutions.

The ability of adapters to offload MPI collective communications is extremely important for HPC applications based on MPI. Collective communications, which have a crucial impact on an application's scalability, are frequently used by scientific simulation codes: broadcasts for distributing initial input data, reductions for consolidating data from multiple sources, and barriers for global synchronization. Every collective communication executes a global communication operation by coupling all processes in a given group, and this coupling tends to have the most significant negative impact on the application's scalability.
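The collective patterns described above map directly onto standard MPI calls. The following minimal sketch (not from the article) shows a broadcast of initial input, a reduction of partial results and a barrier for global synchronization, i.e., exactly the operations that offload-capable adapters aim to take off the CPU.

    /* Minimal sketch of the collective patterns the text describes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double input = 0.0, partial, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0)
            input = 3.14;                       /* initial input data on the root */
        MPI_Bcast(&input, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        partial = input * rank;                 /* each rank's local contribution */
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        MPI_Barrier(MPI_COMM_WORLD);            /* global synchronization point   */

        if (rank == 0)
            printf("reduced total over %d ranks: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }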

In addition, the explicit and implicit communication coupling used in high-performance implementations of collective algorithms tends to magnify the effects of system noise on application performance, further hampering application scalability. Some adapters address the collective communication scalability problem by offloading a sequence of data-dependent communications to the Host Channel Adapter (HCA). This provides the mechanism needed to support computation and communication overlap, allowing the communications to progress asynchronously in hardware while computations are processed by the CPU. It is also a way to reduce the effect of system noise and application skew on application scalability. Needless to say, those capabilities cannot be provided by onloading solutions. Onloading solutions do the opposite: they eliminate any way to overlap computation and communication cycles, and thus magnify the effects of system noise and jitter on application performance.
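The overlap argued for here can be expressed with nonblocking collectives, which were standardized later in MPI-3 (after this article was written); the sketch below is therefore a forward-looking illustration rather than something available in the MPI of the day. With hardware collective offload, the broadcast can progress in the adapter while the loop computes.

    /* Illustrative sketch using an MPI-3 nonblocking collective. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 4096

    int main(int argc, char **argv)
    {
        double data[N] = { 0 }, local[N] = { 0 };
        int rank;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Ibcast(data, N, MPI_DOUBLE, 0, MPI_COMM_WORLD, &req);

        for (int i = 0; i < N; i++)             /* independent computation that */
            local[i] += 0.5 * i;                /* hides the broadcast time     */

        MPI_Wait(&req, MPI_STATUS_IGNORE);      /* data[] is now valid everywhere */

        if (rank == 0)
            printf("broadcast overlapped with computation\n");

        MPI_Finalize();
        return 0;
    }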

Summary

As the tests show, network offloading solutions are critical for high-performance system scalability, performance and productivity. Onloading solutions can negatively affect system efficiency, and are therefore not recommended for systems with the above requirements. The main (and probably only) argument for onloading solutions is their price. Surprisingly, according to public market surveys, there is no real price gap between onloading and offloading solutions in the InfiniBand market, so for a given system the decision between them should be very easy. When price gaps do exist, one should always review the entire system cost (i.e., taking into account both capital and operational expenses), and the desired return on investment, in order to make the right decision.

From the performance results shown in Figures 2-4, one can see that offloading networks (in this case InfiniBand) provide the needed scalability for a large number of system cores while ensuring maximum core performance for user applications. One can argue that the frequency of the NIC or adapter is not as fast as the CPU's, but such speed is not required. Offloading adapters need to be able to handle all incoming and outgoing data at wire speed, and, since this is done in a highly parallel way, they can maintain the needed scalability and high performance without running at CPU-like frequencies. As the number of cores grows, the adapters continue to deliver the required throughput. Using adapters that can handle all network data at wire speed, as in a full offloading architecture, is the secret to scalable systems.

About the Author

Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterization. He is a senior director of HPC and technical computing at Mellanox Technologies and the chairman of the HPC Advisory Council. He holds an M.Sc. degree (2001, cum laude) and a B.Sc. degree (1998, cum laude) in electrical engineering from the Technion - Israel Institute of Technology, and also holds patents in the field of high-speed networking.
