The release of three new software enhancements adds even more capabilities to Chelsio Communications’ powerful Terminator 5 (T5) ASIC. The T5 is a fifth-generation, high-performance 2x40Gbps/4x10Gbps server adapter engine with Unified Wire capability, enabling offloaded storage, compute and networking traffic to run simultaneously. T5-based adapters are high-performance drop-in replacements for Fibre Channel…
This week Chelsio Communications unveiled its latest Ethernet adapter ASIC, which brings 40-gigabit speeds to its RDMA over TCP/IP (iWARP) portfolio. The fifth-generation silicon, dubbed Terminator T5, puts bandwidth and latency within spitting distance of FDR InfiniBand and, according to Chelsio, will actually outperform its InfiniBand competition on real-world HPC codes.
High-performance computing cluster architectures are moving away from proprietary, expensive networking technologies toward Ethernet as TCP/IP performance and latency continue to improve. InfiniBand, once the dominant interconnect for HPC applications leveraging the Message Passing Interface (MPI) and remote direct memory access (RDMA), has now been supplanted as the preferred networking protocol in these environments.
Tom Statchura of Intel attended IDF 2010 in San Francisco, where he helped demonstrate an ideal scenario for HPC in the cloud—what is most frequently referred to as “bursting” to gain additional capacity.
Chipmaker places bets on 10GE and QPI.
Cluster computing systems have caused disruptive changes in the HPC market. One consequence of the range of requirements for cluster networking is that the leading interconnects in HPC are Gigabit Ethernet (GbE), based on the Ethernet networking standard, and InfiniBand, which delivers upwards of 10X the performance of GbE. Both show significant deployment in HPC.
We have developed something of a tradition at HPCwire in the weeks leading up to each year’s SC conference: we interview the chairman of the OpenFabrics Alliance (OFA). Jim Ryan of Intel has been the OFA’s chair all these years, and our annual interview with Jim was as interesting as ever.
The upcoming IEEE standard for Data Center Bridging — a.k.a. converged enhanced Ethernet — could pave the way for a new low-latency RDMA over Ethernet protocol that leaves iWARP in the dust and provides a seamless way to integrate InfiniBand into the datacenter.
Intel has acquired the assets of NetEffect, an Austin-based company that makes iWARP-capable adapters. Intel will inherit NetEffect’s product portfolio, which includes 1 and 10 GbE accelerated adapters, 10 GbE adapters for blade configurations as well as a 10 GbE ASIC.