Two distinct solutions yield nearly identical results, but with a significant difference in cost and management. These are the key findings of a recent study conducted by Chelsio Communications comparing the performance of Lustre RDMA (Remote Direct Memory Access) over Ethernet with FDR InfiniBand. Lustre is the popular, scalable, secure, high-availability HPC Read more…
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/Chelsio_logo_120x.jpg" alt="Chelsio logo" width="93" height="92" />This week Chelsio Communications unveiled its latest Ethernet adapter ASIC, which brings 40 gigabit speeds to its RDMA over TCP/IP (iWARP) portfolio. The fifth-generation silicon, dubbed Terminator T5, puts bandwidth and latency within spitting distance of FDR InfiniBand and, according to Chelsio, will actually outperform its IB competition on real-world HPC codes.
High Performance Computing cluster architectures are moving away from proprietary, expensive networking technologies toward Ethernet as the performance and latency of TCP/IP continue to improve. InfiniBand, the once-dominant interconnect technology for HPC applications leveraging Message Passing Interface (MPI) and remote direct memory access (RDMA), is now being supplanted as the preferred networking protocol in these environments.