Intelligence and integration are the watchwords of an era in which the insatiable demand for faster, more powerful computers can no longer ride the coattails of a strong Moore's Law. These are also the hallmarks of co-design, an approach championed by interconnect fabric vendor Mellanox Technologies and others in the community as essential for supercomputing…
We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.
Mellanox wants to move the world away from closed-code Ethernet switches. The "Generation of Open Ethernet" initiative has been months in the planning. Here's why Mellanox wants to do it…
When it comes to Titan's final acceptance testing, ORNL says not so fast.
With the rollout of high-performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel's Romley platform, InfiniBand is having a big year.
Mellanox has developed a new architecture for high performance InfiniBand. Known as Connect-IB, this is the company's fourth major InfiniBand adapter redesign, following in the footsteps of its InfiniHost, InfiniHost III and ConnectX lines. The new adapters double the throughput of the company's FDR InfiniBand gear, supporting speeds beyond 100 Gbps.
This August, the IEEE is hosting its annual symposium on high-performance interconnects, known as Hot Interconnects. Now in its 20th year, the event focuses on the latest developments in the field with a special emphasis on how the technology is advancing in the realm of supercomputing and large-scale datacenters. The event covers both chip-to-chip interconnects as well as networking fabrics that bind whole systems and datacenters together.
IEEE's Hot Interconnects symposium, which kicks off later this month, should be a real treat for the HPC crowd. The event focuses exclusively on cutting-edge developments in the interconnect arena, covering everything from the latest commodity networking technologies to the K supercomputer's custom "Tofu" network. We asked the technical chairs of the event to share their perspectives on the commodity-versus-proprietary interconnect dichotomy in high performance computing and in the industry at large.
It was a bit of a surprise when QLogic beat out Mellanox as the interconnect vendor on the NNSA’s Tri-Lab Linux Capacity Cluster 2 contract. Not only was Mellanox the incumbent on the original Tri-Lab contract, but it is widely considered to have the more complete solution set for InfiniBand. Nevertheless, QLogic managed to win the day, and did so with somewhat unconventional technologies.