At the Optical Fiber Conference, taking place March 22-24 in Anaheim, Calif., Mellanox is announcing an “important milestone” on the road to High Data Rate (HDR) 200 Gb/s InfiniBand and Ethernet networks. At the trade show, the company is demonstrating 50 Gb/s silicon photonics optical modulators and detectors, which will serve as key elements of 200 Gb/s and 400 Gb/s LinkX cables and transceivers.
Improving data communication performance in HPC has turned out to be one of the most difficult challenges for system designers. As a result, the topic is getting a lot of attention from academic researchers around the world. Some of that work will be presented at this year’s ISC High Performance conference in Frankfurt, Germany.
InfiniBand carried a slight majority of the TOP500 interconnect share this year at ISC, a trend that Mellanox says will continue, both in HPC and beyond. We discussed IB’s reach and efficiencies with the company’s Gilad Shainer to better understand where Ethernet and InfiniBand are…
This week Chelsio Communications unveiled its latest Ethernet adapter ASIC, which brings 40 gigabit speeds to its RDMA over TCP/IP (iWARP) portfolio. The fifth-generation silicon, dubbed Terminator T5, puts bandwidth and latency within spitting distance of FDR InfiniBand and, according to Chelsio, will actually outperform its IB competition on real-world HPC codes.
Networking sage talks about Moore’s Law, switch buffers and merchant chips.
With the rollout of high performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel’s Romley platform, InfiniBand is having a big year.
Chief scientist discusses memory stacks, interconnects, and US technology leadership.
Myricom and Emulex are teaming up to bring a series of network offerings to market targeted at high performance applications. The partnership will kick off with Emulex reselling Myricom 10GbE products into selected application domains, but the end game is to go after the high-flying InfiniBand market with products based on Emulex’s Ethernet ASICs and Myricom’s high performance software.
Pent-up demand for network bandwidth at both the core and edge of the datacenter is good news for suppliers of 100 Gigabit Ethernet (GbE) routers. And although Brocade was not the first vendor to market with such gear, it has quickly become one of the largest providers of 100 GbE ports, a lot of which are ending up in science and research networks. Organizations such as CERN, Indiana University and the Howard Hughes Medical Institute are already employing the technology to power performance-demanding applications.
With 2011 officially in the books, it’s time to offer a few predictions about the upcoming year in HPC. In general, I expect 2012 to continue the major trends we’ve seen over the past couple of years, namely the increased adoption of GPU computing into the mainstream and greater parity in HPC capability around the world, as exemplified by China. There may, however, be one or two new trends popping up.