Freed of the CPU overhead and networking latency that have kept it from roles beyond basic networking, Ethernet is finally ready to take on the most demanding of data center applications: high-performance clustering. When fully implemented, iWARP Ethernet achieves low latency, low CPU utilization, high throughput, and high bandwidth on par with proprietary clustering fabrics. It simplifies connectivity, lowers total cost of ownership (TCO), and delivers at last on the promise of one data center fabric that can do it all.
Every data center today supports Ethernet for networking, and many use Fibre Channel for block storage. As clustering and grid computing grow in mainstream popularity, many are also adding a proprietary fabric, such as InfiniBand, designed specifically to meet the performance demands of fine-grained parallel processing applications. These three-fabric configurations work, but each network must be maintained and managed separately, and together they strain the inherent space, power, and cooling constraints of blade systems. A goal of many data center managers is to reduce the complexity and number of fabrics without compromising networking performance.
iWARP Ethernet achieves high performance with Ethernet channel adapters
iWARP is a set of standards developed by the RDMA Consortium and the IETF that enables TCP/IP-based Ethernet to address the three major sources of networking overhead: transport (TCP/IP) processing, intermediate buffer copies, and application context switching. Fully implemented in a new class of networking device, the Ethernet channel adapter (ECA), iWARP enables Ethernet to meet the performance requirements of every data center fabric by:
— offloading TCP/IP transport processing from the CPU
— using RDMA and direct data placement (DDP) to move data directly to/from application memory, eliminating overhead due to unnecessary data buffering
— providing user-level direct access (ULDA), also known as OS bypass, which lets applications control data movement through the ECA directly, without operating system involvement.
Together these features allow applications to take full advantage of 10 Gb/s of network bandwidth while consuming a minimum of server resources for networking support, as the sketch below illustrates.
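To make those mechanics concrete, here is a minimal data-path sketch in C using the generic RDMA verbs API (libibverbs) that iWARP adapters expose under Linux. Connection setup through the RDMA connection manager is omitted, and the function and variable names are illustrative rather than taken from any particular vendor's ECA software. The three iWARP ideas appear in miniature: the buffer is registered so the adapter can move it by DMA with no intermediate copies, the RDMA write is posted from user space with no system call, and the completion is reaped by polling, again without kernel involvement.

#include <stdint.h>
#include <infiniband/verbs.h>

/* Data-path sketch only: assumes a queue pair (qp), its completion queue (cq),
 * and a protection domain (pd) were already created and connected, typically
 * through the RDMA connection manager (librdmacm); error handling is minimal. */
static int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                              struct ibv_cq *cq, void *buf, size_t len,
                              uint64_t remote_addr, uint32_t rkey)
{
    /* Register the application buffer so the adapter can DMA it directly:
     * the zero-copy / direct data placement path, with no intermediate
     * kernel buffering. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    /* Post an RDMA WRITE from user space: the adapter places the payload
     * into the peer's registered memory; neither CPU copies the data and
     * no system call is made (OS bypass). */
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    int rc = ibv_post_send(qp, &wr, &bad_wr);

    if (rc == 0) {
        /* Reap the completion by polling from user space, again with no
         * kernel involvement on the data path. */
        struct ibv_wc wc;
        while (ibv_poll_cq(cq, 1, &wc) == 0)
            ;
        if (wc.status != IBV_WC_SUCCESS)
            rc = -1;
    }

    ibv_dereg_mr(mr);
    return rc;
}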
Partial implementations of 1 Gb and 10 Gb iWARP Ethernet, available as TCP offload engines (TOEs) and kernel-mode RDMA NICs (RNICs), offer some relief from overhead and latency, but not enough to support the tenfold performance increase of 10 Gb Ethernet. ECAs offer a full iWARP implementation and are distinguished from conventional NICs and RNICs by their inclusion of ULDA, a key feature of the iWARP standards.
Performance comparisons of current InfiniBand host channel adapters (HCAs), standard 10 GbE NICs, and 10 GbE iWARP ECAs bear this out: when fully implemented, iWARP allows Ethernet to achieve low latency, high bandwidth, and low CPU utilization comparable to or better than those of proprietary clustering fabrics.
One fabric. One adapter. Lower TCO.
iWARP ECAs enable data centers to use a single building block for any networking topology with no compromise in bandwidth, throughput, CPU utilization, or latency. Proprietary clustering fabrics can be replaced by iWARP Ethernet, and block-based storage fabrics can be replaced by iSCSI and iSER (RDMA-enabled iSCSI) over iWARP Ethernet, all while maintaining compatibility with ubiquitous Ethernet networking.
One fabric greatly simplifies connectivity. In any server, three adapters can be reduced to one, and the three switches those adapters connected to are likewise reduced to one. In the data center of the very near future, every server and every node in an HPC cluster needs only two connections: one for power and one for everything else. The need for separate spares and separate training is eliminated, and network management software returns to familiar Ethernet.
The iWARP specifications allow TCP/IP-based Ethernet to achieve performance equal to or better than that of today's clustering and storage fabrics. iWARP ECAs simplify connectivity, lower total cost of ownership (TCO), and deliver on the promise that one data center fabric can do it all.
Terry Hulett is VP of Silicon Engineering for NetEffect, Inc., an Austin, Texas-based company offering iWARP ECAs for 1 GbE and 10 GbE fabrics.