InfiniBand carried a slight majority of the TOP500 share this year at ISC, a trend that Mellanox says will continue, both in HPC and beyond. We discussed InfiniBand’s reach and efficiencies with the company’s Gilad Shainer to better understand where Ethernet and InfiniBand are…
Mellanox wants to move the world away from closed-code Ethernet switches. The “Generation of Open Ethernet” initiative has been months in the planning. Here’s why Mellanox wants to do it…
But interconnect vendor gets reality check in Q4.
It’s been a good year for interconnect maker Mellanox. The company has been riding high in 2012, thanks in large part to its dominant position in the InfiniBand marketplace and the surge in FDR sales over the last several months. But with Intel now eyeing the lucrative high performance interconnect market, Mellanox may soon face a formidable challenge to its reign as InfiniBand kingpin.
With the rollout of high performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel’s Romley platform, InfiniBand is having a big year.
Mellanox has developed a new architecture for high performance InfiniBand. Known as Connect-IB, this is the company’s fourth major InfiniBand adapter redesign, following in the footsteps of its InfiniHost, InfiniHost III and ConnectX lines. The new adapters double the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps.
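For a rough sense of where the 100 Gbps figure comes from, here is a back-of-the-envelope sketch. The dual-port assumption and lane parameters are ours, based on standard 4x FDR link rates, not a published Connect-IB spec sheet:

```python
# Back-of-the-envelope InfiniBand throughput arithmetic (illustrative only;
# assumes a dual-port adapter with standard 4x FDR links).
LANE_RATE_GBPS = 14.0625   # FDR signaling rate per lane
ENCODING = 64 / 66         # FDR uses 64b/66b line encoding
LANES_PER_PORT = 4         # standard 4x link width

port_rate = LANE_RATE_GBPS * ENCODING * LANES_PER_PORT
print(f"Single FDR 4x port: ~{port_rate:.1f} Gbps")      # ~54.5 Gbps
print(f"Dual-port adapter:  ~{2 * port_rate:.1f} Gbps")  # ~109 Gbps, beyond 100
```

Two such ports, in other words, is roughly what it takes to push past the 100 Gbps mark with FDR-era signaling.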
The San Diego Supercomputer Center’s ‘Gordon’ supercomputer was built specifically for handling large data-intensive compute tasks. This cluster uses a unique dual-rail 3D torus topology built on hardware and software provided by Mellanox Technologies. The successful deployment of this cluster highlights the flexible topology options available today over the InfiniBand high-speed interconnect.
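For readers unfamiliar with the topology, the sketch below shows the defining property of a torus: links wrap around at the edges, so every node has the same number of neighbors. This is a generic illustration with hypothetical dimensions, not Gordon’s actual fabric configuration or Mellanox’s routing software (and it shows a single rail; a dual-rail design simply runs two independent copies of the fabric):

```python
# Minimal sketch of neighbor computation in a 3D torus (illustrative only).
from itertools import product

DIMS = (4, 4, 4)  # hypothetical torus dimensions, not Gordon's actual layout

def torus_neighbors(node, dims=DIMS):
    """Return the neighbors of `node`, wrapping around in each dimension."""
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            coord = list(node)
            coord[axis] = (coord[axis] + step) % size  # wrap at the torus edge
            neighbors.append(tuple(coord))
    return neighbors

# In a torus every node has exactly 2 * len(dims) neighbors; in a plain
# mesh, nodes on the boundary would have fewer links.
for node in product(*(range(d) for d in DIMS)):
    assert len(torus_neighbors(node)) == 6
```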
Appro is doing a brisk business over at the Department of Energy. After winning the DOE’s second Tri-Lab Linux Capacity Cluster contract back in June, Appro has been tapped once again to provide Los Alamos National Laboratory (LANL) with yet another high performance computing cluster. The new Mustang supercomputer, installed there last month, will give the lab another 353 teraflops of number crunching capacity.
Vendors in the high performance cloud space were put in the hot seat during last week’s ISC Cloud event in Mannheim, Germany. Representatives from twelve companies, including HP, Intel, SGI, Bull and others, took part in a “game show” event that featured tough questions and a competitive incentive to answer them thoroughly.