Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them


Tag: mellanox

Arista Poised for Ethernet Expansion

Jun 9, 2014 |

In the wake of the Arista Networks IPO news on Friday, there has been a great deal of conversation about the future of software-defined networking, not just in its obvious role in hyperscale datacenters, but in other IT segments as well, including HPC. To be clear, other than having a Read more…

What Drives Investment in the Middle of HPC?

May 15, 2014 |

When it comes to covering supercomputers, most of the attention falls on the front-runners of the Top 500. A closer look at the tail end of the rankings, however, reveals some rather interesting use cases, not to mention courses of development, system design, and user-driven requirements for future build-out. The University of Florida is home to Read more…

InfiniBand Snaps Up Strong Super Share

Jul 8, 2013 |

InfiniBand carried a slight majority of the Top 500 share this year at ISC, a trend that Mellanox says will continue, both in HPC and beyond. We discussed InfiniBand’s reach and efficiencies with the company’s Gilad Shainer to better understand where Ethernet and InfiniBand are…

On the Verge of Cloud 2.0

Apr 26, 2013 |

Last week Mellanox debuted a new network adapter, ConnectX-3 Pro, the first interconnect solution to feature cloud offload engines for overlay networks. Like its predecessor, ConnectX-3, it supports both 10/40 Gigabit Ethernet and 56 Gb/s InfiniBand, but it’s the overlay offload engine that is poised to unlock cloud’s potential.
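For a rough sense of what an overlay offload engine has to parse on every packet, here is a minimal Python sketch of the VXLAN header layout from RFC 7348. It is grounded in the RFC only, not in anything ConnectX-3 Pro specific, and the helper name and example VNI are purely illustrative.

```python
# Minimal sketch: packing the 8-byte VXLAN header defined in RFC 7348.
# Overlay offload engines parse and build this (plus the outer Ethernet,
# IP, and UDP headers) in hardware; nothing here is ConnectX-3 Pro specific.
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" bit: VNI field is valid

def vxlan_header(vni):
    """Return the 8-byte VXLAN header for a 24-bit VNI (illustrative helper)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    word0 = VXLAN_FLAG_VNI_VALID << 24  # flags byte + 24 reserved bits
    word1 = vni << 8                    # 24-bit VNI + 8 reserved bits
    return struct.pack("!II", word0, word1)

# Each tenant frame gains roughly 50 bytes of outer headers
# (Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8), which is the per-packet
# encapsulation work a NIC offload can take off the CPU.
print(len(vxlan_header(5001)))  # -> 8
```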

Mellanox Plots Death of Proprietary Ethernet

Mar 5, 2013 |

Mellanox wants to move the world away from closed-code Ethernet switches. The “Generation of Open Ethernet” initiative has been months in the planning. Here’s why Mellanox wants to do it…

Mellanox Rode InfiniBand to New Heights in 2012

Jan 3, 2013 |

But the interconnect vendor gets a reality check in Q4.

Intel Fabrics Could Put the Squeeze On Mellanox

Sep 25, 2012 |

It’s been a good year for interconnect maker Mellanox. The company has been riding high in 2012, thanks in large part to its dominant position in the InfiniBand marketplace and the surge in FDR sales over the last several months. But with Intel now eyeing the lucrative high performance interconnect market, Mellanox may soon face a formidable challenge to its position as InfiniBand kingpin.

Mellanox Roars Through Second Quarter As InfiniBand Revenue Takes Off

Jul 24, 2012 |

With the rollout of high performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel’s Romley platform, InfiniBand is having a big year.

Mellanox Cracks 100 Gbps with New InfiniBand Adapters

Jun 18, 2012 |

Mellanox has developed a new architecture for high performance InfiniBand. Known as Connect-IB, this is the company’s fourth major InfiniBand adapter redesign, following in the footsteps of its InfiniHost, InfiniHost III and ConnectX lines. The new adapters double the throughput of the company’s FDR InfiniBand gear, supporting speeds beyond 100 Gbps.
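For readers who want to see what link rate an adapter has actually negotiated, the sketch below reads the per-port rate strings that the Linux RDMA stack exposes under /sys/class/infiniband. It assumes a Linux host with the drivers loaded and the standard sysfs layout; the function name is an illustrative invention, not part of any Mellanox tooling.

```python
# Minimal sketch, assuming a Linux host with RDMA drivers loaded: list each
# InfiniBand port's negotiated link rate (e.g. "56 Gb/sec (4X FDR)") as
# reported by the kernel under /sys/class/infiniband.
from pathlib import Path

def infiniband_port_rates():
    """Map '<device>/port<N>' to the rate string reported by the driver."""
    rates = {}
    root = Path("/sys/class/infiniband")
    if not root.is_dir():          # no RDMA devices (or not a Linux host)
        return rates
    for dev in sorted(root.iterdir()):
        for port in sorted((dev / "ports").glob("*")):
            rate_file = port / "rate"
            if rate_file.is_file():
                rates[f"{dev.name}/port{port.name}"] = rate_file.read_text().strip()
    return rates

if __name__ == "__main__":
    for name, rate in infiniband_port_rates().items():
        print(name, rate)
```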

3D Torus Topology with InfiniBand at San Diego Supercomputer Center

Jan 30, 2012 |

The San Diego Supercomputer Center’s ‘Gordon’ supercomputer was built specifically for handling large, data-intensive compute tasks. The cluster uses a unique dual-rail 3D torus topology built with hardware and software provided by Mellanox Technologies. The successful deployment of this cluster highlights the flexible topology options that are available today over the InfiniBand high-speed interconnect.
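To illustrate the wrap-around property that makes a torus attractive for this kind of traffic, here is a small Python sketch of the minimal hop count between two nodes on a 3D torus. The 4x4x4 dimensions and the function name are assumptions for illustration, not a description of Gordon’s actual switch geometry.

```python
# Illustrative sketch of 3D torus distance: along each axis, traffic can
# travel in either direction thanks to the wrap-around links, so the hop
# count per axis is at most half the axis length.
from itertools import product

def torus_hops(a, b, dims):
    """Minimal hop count between coordinates a and b on a torus of size dims."""
    total = 0
    for x, y, d in zip(a, b, dims):
        diff = (x - y) % d
        total += min(diff, d - diff)   # go whichever way around is shorter
    return total

dims = (4, 4, 4)                                   # assumed, for illustration
nodes = list(product(*(range(d) for d in dims)))
diameter = max(torus_hops(a, b, dims) for a in nodes for b in nodes)
print(diameter)  # -> 6: at most 2 hops per axis on a 4x4x4 torus
```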