Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Tag: iWARP

New Enhancements Boost Chelsio’s 40Gbps T5 Adapter Capabilities

Jul 27, 2015 |

The release of three new software enhancements adds even more capabilities to Chelsio Communications’ powerful Terminator 5 (T5) ASIC. The T5 is a fifth-generation, high-performance 2x40Gbps/4x10Gbps server adapter engine with Unified Wire capability, enabling offloaded storage, compute, and networking traffic to run simultaneously. T5-based adapters are high-performance drop-in replacements for Fibre Channel…

Chelsio Looks to Close Ethernet-InfiniBand Gap

Jan 24, 2013 |

This week Chelsio Communications unveiled its latest Ethernet adapter ASIC, which brings 40 gigabit speeds to its RDMA over TCP/IP (iWARP) portfolio. The fifth-generation silicon, dubbed Terminator T5, brings bandwidth and latency within spitting distance of FDR InfiniBand, and according to Chelsio, will actually outperform its IB competition on real-world HPC codes.

Chelsio T4 Adapters Deliver Industry Leading Performance for High Performance Computing

Jul 18, 2011 |

High Performance Computing cluster architectures are moving away from proprietary, expensive networking technologies toward Ethernet as TCP/IP performance and latency continue to improve. InfiniBand, the once-dominant interconnect for HPC applications leveraging the Message Passing Interface (MPI) and remote direct memory access (RDMA), has now been supplanted as the preferred networking protocol in these environments.

Powering HPC Locally and in the Cloud with 10 GbE

Oct 5, 2010 |

Tom Statchura of Intel attended IDF 2010 in San Francisco this year, where he helped demonstrate an ideal scenario for HPC in the cloud: what is most often referred to as “bursting” to gain additional capacity.

Intel’s 10 Gigabit Ethernet Boost Pushes out InfiniBand

May 3, 2010 |

Chipmaker places bets on 10GE and QPI.

Remote Direct Memory Access Networking for HPC: Comparative Review of 10GbE iWARP and InfiniBand

Feb 24, 2010 |

Cluster computing systems have caused disruptive changes in the HPC market. One consequence of the range of requirements for cluster networking is that the two leading interconnects in HPC are Gigabit Ethernet (GbE), which is based on the Ethernet networking standard, and InfiniBand, which delivers upwards of 10X the performance of GbE. Both see significant deployment in HPC.

OpenFabrics Alliance Weaves Its Story at SC09

Nov 15, 2009 |

We have developed something of a tradition at HPCwire in the weeks leading up to each year’s SC conference; we interview the chairman of the OpenFabrics Alliance (OFA). Jim Ryan of Intel has been the OFA’s chair all these years, and our annual interview with Jim was as interesting as ever.

An Ethernet Protocol for InfiniBand

May 21, 2009 |

The upcoming IEEE standard for Data Center Bridging — a.k.a. converged enhanced Ethernet — could pave the way for a new low-latency RDMA over Ethernet protocol that leaves iWARP in the dust and provides a seamless way to integrate InfiniBand into the datacenter.

Intel Grabs NetEffect Assets, Becomes iWARP Player

Oct 15, 2008 |

Intel has acquired the assets of NetEffect, an Austin-based company that makes iWARP-capable adapters. Intel will inherit NetEffect’s product portfolio, which includes 1 and 10 GbE accelerated adapters, 10 GbE adapters for blade configurations as well as a 10 GbE ASIC.