Mellanox has developed a new architecture for high performance InfiniBand. Known as Connect-IB, this is the company’s fourth major InfiniBand adapter redesign, following in the footsteps of its InfiniHost, InfiniHost III and ConnectX lines. The new adapters double the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps.
High performance computing cluster architectures continue to move away from proprietary and expensive networking technologies toward standards-based interconnects, as the performance and latency of TCP/IP over Ethernet continue to lag. InfiniBand, with its support for the Message Passing Interface (MPI) and remote direct memory access (RDMA), has become the dominant interconnect technology and the preferred networking protocol in these environments.
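To make the MPI and RDMA workload concrete, here is a minimal MPI ping-pong sketch in C (an illustrative example only, not code from any vendor mentioned here): two ranks exchange a small message, the latency-sensitive pattern that RDMA-capable interconnects such as InfiniBand are designed to accelerate.

```c
/* Minimal MPI ping-pong sketch (illustrative; assumes an MPI library such
 * as MPICH or Open MPI is installed and the program is run with 2 ranks). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Rank 0 sends a small message, then waits for the echo. */
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0: ping-pong complete\n");
    } else if (rank == 1) {
        /* Rank 1 receives the message and echoes it back. */
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Timing many iterations of this exchange is the usual way to measure the end-to-end latency differences between TCP/IP over Ethernet and RDMA-based fabrics.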
Despite the still-modest showing of 10 Gigabit Ethernet (10GbE) technology in high performance computing deployments, vendors at SC10 were showcasing a wide array of performance-laden Ethernet products. IT Brand Pulse Labs analyst Tim Dales takes a look at the prospects for 10GbE in high performance computing, the migration pattern from GbE to 10GbE, and some application areas that seem especially suitable for the technology.
In general, connectivity solutions can be divided into multiple categories: standard (such as InfiniBand and Ethernet) versus proprietary (such as SeaStar and Quadrics), high speed versus low speed, and offloading (network-based processing) versus onloading (host-based processing).
Intel has acquired the assets of NetEffect, an Austin-based company that makes iWARP-capable adapters. Intel will inherit NetEffect’s product portfolio, which includes 1 GbE and 10 GbE accelerated adapters, 10 GbE adapters for blade configurations, as well as a 10 GbE ASIC.