Tag: Chelsio

RoCE Fails to Scale – the Dangers of Sliding Down the RoCE Path

Nov 30, 2015 |

A recent paper published by a public cloud vendor and others reveals the trials and tribulations of their RoCE experience, and the scale of their deployment woes [1]. The authors, who include the RoCE vendor and some of the staunchest RoCE proponents, detail the severity of the problems faced, citing “poor application performance,” “head-of-line blocking,” “unfairness,” Read more…

New Enhancements Boost Chelsio’s 40Gbps T5 Adapter Capabilities

Jul 27, 2015 |

The release of three new software enhancements adds even more capabilities to Chelsio Communications’ powerful Terminator 5 (T5) ASIC. The T5 is a fifth-generation, high-performance 2x40Gbps/4x10Gbps server adapter engine with Unified Wire capability, enabling offloaded storage, compute and networking traffic to run simultaneously. T5-based adapters are high-performance drop-in replacements for Fibre Channel Read more…

Comparing Lustre RDMA Performance over Ethernet vs. FDR InfiniBand

Jun 15, 2015 |

Two distinct solutions yielding nearly identical results, but with a significant difference in cost and management: these are the key findings of a recent study conducted by Chelsio Communications comparing the performance of Lustre RDMA (Remote Direct Memory Access) over Ethernet vs. FDR InfiniBand. Lustre is the popular, scalable, secure, high-availability HPC Read more…

Chelsio Looks to Close Ethernet-InfiniBand Gap

Jan 24, 2013 |

This week Chelsio Communications unveiled its latest Ethernet adapter ASIC, which brings 40 gigabit speeds to its RDMA over TCP/IP (iWARP) portfolio. The fifth-generation silicon, dubbed Terminator T5, brings bandwidth and latency within spitting distance of FDR InfiniBand, and according to Chelsio, will actually outperform its IB competition on real-world HPC codes.

Chelsio T4 Adapters Deliver Industry Leading Performance for High Performance Computing

Jul 18, 2011 |

High Performance Computing cluster architectures are moving away from proprietary, expensive networking technologies toward Ethernet as TCP/IP performance and latency continue to improve. InfiniBand, the once-dominant interconnect for HPC applications leveraging Message Passing Interface (MPI) and remote direct memory access (RDMA), has now been supplanted as the preferred networking protocol in these environments.