But the interconnect vendor gets a reality check in Q4.
It's been a good year for interconnect maker Mellanox. The company has been riding high in 2012, thanks in large part to its dominant position in the InfiniBand marketplace and the surge in FDR sales over the last several months. But with Intel now eyeing the lucrative high performance interconnect market, Mellanox may soon face a formidable challenge to its position as InfiniBand kingpin.
With the rollout of high performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel's Romley platform, InfiniBand is having a big year.
Mellanox has developed a new architecture for high performance InfiniBand. Known as Connect-IB, this is the company's fourth major InfiniBand adapter redesign, following in the footsteps of its InfiniHost, InfiniHost III and ConnectX lines. The new adapters double the throughput of the company's FDR InfiniBand gear, supporting speeds beyond 100 Gbps.
The San Diego Supercomputer Center's 'Gordon' supercomputer was built specifically for handling large data-intensive compute tasks. The cluster uses a unique dual-rail 3D torus topology built with hardware and software provided by Mellanox Technologies. The successful deployment of this cluster highlights the flexible topology options available today over the InfiniBand high-speed interconnect.
Appro is doing a brisk business over at the Department of Energy. After winning the DOE's second Tri-Lab Linux Capacity Cluster contract back in June, Appro has been tapped once again to provide Los Alamos National Laboratory (LANL) with yet another high performance computing cluster. The new Mustang supercomputer, installed there last month, will give the lab another 353 teraflops of number crunching capacity.
Vendors in the high performance cloud space were put in the hot seat during last week’s ISC Cloud event in Mannheim, Germany. Representatives from twelve companies, including HP, Intel, SGI, Bull and others, took part in a “gameshow” event that featured tough questions and a competitive reason to answer them thoroughly.
The Texas Advanced Computing Center (TACC) has revealed plans to deploy a cutting-edge petascale supercomputer courtesy of a $27.5 million NSF award. Built by Dell, the system will pair 2 petaflops of Sandy Bridge-EP processors with an additional 8 petaflops of Intel's Many Integrated Core (MIC) coprocessors. The machine is scheduled to boot up in late 2012 and be ready for production in January 2013.
It was a bit of a surprise when QLogic beat out Mellanox as the interconnect vendor on the NNSA's Tri-Lab Linux Capacity Cluster 2 contract. Not only was Mellanox the incumbent on the original Tri-Lab contract, but it is widely considered to have the more complete InfiniBand solution set. Nevertheless, QLogic managed to win the day, and did so with somewhat unconventional technologies.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the TeraGrid effort to support the Japanese research community; NNSA’s ‘Supercomputing Week’ coverage; Mellanox’s new double-duty switch silicon; Platform’s latest Symphony; and the Oracle Sun Server-based Sandia Red Sky/Red Mesa supercomputer upgrades.