Mellanox Advances ‘In-Network Computing’ with ConnectX-5 Adapter

By Tiffany Trader

June 16, 2016

Networking specialist Mellanox has announced ConnectX-5, the next generation of its 100G InfiniBand and Ethernet adapter line. The company says the new device will help organizations take advantage of real-time data processing for high performance computing (HPC), data analytics, machine learning, national security and ‘Internet of Things’ applications.

ConnectX-5 was designed to connect with any computing infrastructure – x86, Power, GPU, ARM, and FPGA – and it employs a variety of offload engines, which fall into two camps. The more established offloading capability supports network functions such as RDMA, transport offload, and SR-IOV. There is also a new generation of acceleration engines that run data algorithms directly on the adapter, essentially making the ConnectX-5 a coprocessor.
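
To see concretely what the more established offloads buy, consider a minimal MPI sketch of communication/computation overlap: when the adapter handles the transport (segmentation, reliability, delivery), the host CPU is free to compute while an exchange is in flight. The buffer sizes and the compute loop below are placeholder assumptions for illustration, not anything Mellanox specifies.

    /* Sketch: the communication/computation overlap that transport
       offload enables -- the NIC progresses the transfer while the
       CPU keeps working.  Sizes and compute step are placeholders. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 1 << 20;
        double *sendbuf = malloc(N * sizeof(double));
        double *recvbuf = malloc(N * sizeof(double));
        double *work    = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) { sendbuf[i] = rank; work[i] = 0.0; }

        int peer = rank ^ 1;          /* pair up ranks 0-1, 2-3, ... */
        if (peer < size) {
            MPI_Request reqs[2];
            /* Post the exchange; with full transport offload the
               adapter carries it forward without CPU involvement. */
            MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

            /* CPU computes on an independent buffer while data moves. */
            for (int i = 0; i < N; i++)
                work[i] = work[i] * 0.5 + i;   /* placeholder compute */

            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        }

        free(sendbuf); free(recvbuf); free(work);
        MPI_Finalize();
        return 0;
    }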

Significant for HPC, ConnectX-5 continues the approach begun with Switch-IB2 and moves more MPI capabilities into the network. While Switch-IB2 offloads MPI collective operations onto the switch, ConnectX-5 adds support for MPI Tag Matching and MPI AlltoAll operations in the adapter, as well as advanced dynamic routing.
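
Tag matching is the receive-side bookkeeping of MPI: each arriving message must be matched to a posted receive by source, tag, and communicator, work that otherwise consumes host CPU cycles. The sketch below shows the matching semantics the hardware must preserve; the tags, counts, and rank roles are arbitrary illustrations, not a Mellanox API.

    /* Sketch: receive-side tag matching.  Rank 0 pre-posts tagged
       receives; rank 1 sends in reverse order, and each message must
       land in the buffer posted for its tag -- the match-list walk
       that ConnectX-5 can perform in hardware. */
    #include <mpi.h>

    #define NBUFS 4
    #define LEN   1024

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        static double bufs[NBUFS][LEN];   /* zero-initialized */
        MPI_Request reqs[NBUFS];

        if (rank == 0) {
            /* Pre-post receives with distinct tags; with hardware tag
               matching the adapter, not the host CPU, steers each
               arriving message to the right buffer. */
            for (int t = 0; t < NBUFS; t++)
                MPI_Irecv(bufs[t], LEN, MPI_DOUBLE, 1, t,
                          MPI_COMM_WORLD, &reqs[t]);
            MPI_Waitall(NBUFS, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            /* Sends issued in reverse order; matching is by tag. */
            for (int t = NBUFS - 1; t >= 0; t--)
                MPI_Send(bufs[t], LEN, MPI_DOUBLE, 0, t, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }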

With ConnectX-5 and Switch-IB2, 60 percent of the MPI algorithms are now being executed on the network, said Mellanox’s Gilad Shainer. “Looking ahead, we’re probably going to see the entire MPI moved to the network as part of the co-design approach,” he added.

ConnectX-5 also exposes what Mellanox is referring to as in-network memory: a small memory address space on the network device, accessible to applications, where data can be stored or staged so that different endpoints can reach it more quickly.
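
Mellanox has not published a programming interface for in-network memory, but the model it describes, memory that remote endpoints read and write without involving the owner’s CPU on the data path, is loosely analogous to MPI one-sided communication. A rough sketch of that analogy (sizes and values are arbitrary):

    /* Analogy only: MPI one-sided windows expose memory for direct
       remote access, with the data path handled by the network
       hardware rather than the target's CPU. */
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *base;
        MPI_Win win;
        /* Each rank exposes 1024 doubles for remote access. */
        MPI_Win_allocate(1024 * sizeof(double), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);

        MPI_Win_fence(0, win);
        if (rank == 1) {
            double val = 42.0;
            /* Write into rank 0's exposed memory; rank 0's CPU does
               not service this put. */
            MPI_Put(&val, 1, MPI_DOUBLE, /*target=*/0, /*disp=*/0,
                    1, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }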

Mellanox positions the offloading approach as part of the larger transition to co-design principles that mine synergies between software and hardware or between the different hardware components. “The way to solve the performance bottlenecks that are now emerging is by running different algorithms in different places,” said Shainer. “ConnectX-5 is the first adapter that brings the co-design architecture into the NIC side.”

“Ten years ago, process runtime or MPI collective approaches were running at hundreds of microseconds of latency,” he went on to explain. “Network device latencies were in the range of tens of microseconds, so the network was a big part of the overall latency. Fast forward to today: process latencies are in the range of tens of microseconds and network device latency is running about 100 nanoseconds. The question we’re addressing is how do you make another performance improvement in the process latency – moving from 10 microseconds to low single-digit microseconds – when CPU frequency isn’t getting any faster.”
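
Using illustrative round numbers rather than figures Mellanox supplied: a decade ago, 30 microseconds of network device latency inside a 300-microsecond operation was about 10 percent of the total, a target worth attacking. Today, 100 nanoseconds against a 10-microsecond operation is about 1 percent, so shaving the wire further gains little; nearly all of the remaining time is software running on the CPU, which is exactly the portion offloading removes.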

“Computing within network devices makes sense when multiple nodes need to act on the same data,” observed Addison Snell, CEO of analyst firm Intersect360 Research. “In essence, it’s the complement to pushing a computation all the way to a GPU with something like RDMA, where you don’t have to move the data off of the GPU in order to compute on it. If something’s extremely local, it can be – at one end of the spectrum – handled all the way down at the processing element on the node; at the other end of the spectrum, where it’s something that’s shared between nodes, it can be more effective to do it in the network as opposed to in the microprocessor.”

The offloading approach that Mellanox is championing and delivering on stands in direct contrast to the CPU-centric approach espoused by Intel. Mellanox believes offloading is essential to getting more out of the CPU, while Intel is essentially following a system-on-a-chip strategy that, as part of its Scalable System Framework, offers the simplicity of a tightly integrated hardware-software stack. Today’s system architecture is still very CPU-centric, but Mellanox and others are advancing a different architectural approach based on specialized, best-of-breed components.

Intel’s position is that everything will work better together if it’s integrated onto a single chip, observed Snell. “Now Mellanox is countering by giving powerful counterexamples of how things can be engineered for higher performance when they’re not integrated onto the chip, things like in-network computing or their MCM features; those argue against having things all integrated onto a chip. Which approach the market will prefer is certainly yet to be determined.”

“Omni-Path is certainly a formidable announcement that Mellanox has to compete against,” he continued, “but Mellanox has interesting differentiation. I don’t think they’re just going to get mowed over by Intel; I think this will come down to user preference and how they like to see their system architected.”

A 100 Gigabit-per-second NIC, ConnectX-5 enables a reported 600 nanoseconds end-to-end latency within the datacenter (the adapter itself contributes latency in the range of 100 nanoseconds). From the previous generation, ConnectX-5 takes message rate performance from 150 million messages per second to 200 million messages per second, a roughly 33 percent increase. In terms of how this stacks up with the competition, Shainer claimed a 2x performance advantage over the first-generation Omni-Path Architecture (OPA) adapters, which he notes are capable of 89 million messages per second, based on a benchmark released by Intel earlier this year. Intel product literature puts architecture maximums for the OPA adapter technology at 160 million messages per second.
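
As a back-of-envelope check on those message-rate figures: 200 million messages per second at a 100 Gb/s line rate works out to roughly 62 bytes per message, which is why such benchmarks use very small messages; in that regime the adapter’s packet-processing rate, not raw bandwidth, is the binding constraint.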

ConnectX-5 also has some other new features, including support for PCIe 4.0 (expected next year). There’s an integrated PCIe switch for connecting multiple PCIe devices or SSDs to the network adapter. Notably, there are also capabilities for enabling different datacenter topologies. As one example of this flexibility, an organization can daisy-chain multiple adapters into a ring, creating a small cluster without using a switch.

Beyond HPC, ConnectX-5 adds acceleration engines for cloud infrastructures. The adapter includes an embedded switch, so when a host runs multiple virtual machines or guest OSes, traffic between them can be routed within the NIC rather than making a round trip to an external switch. It also brings offloads for NVMe to support NVMe over Fabrics, RDMA, and other capabilities, according to Mellanox.

“This is the next logical step in Mellanox’s roadmap,” said Snell of the new Mellanox adapter. “They’re moving everything to consistent 100 Gigabit capability whether you’re on InfiniBand or Ethernet across these different networking cards, components and switches. Everything has to be able to connect at that high-bandwidth speed or else the data doesn’t move across the system well enough. And if the data doesn’t move across the system fast enough, then it doesn’t matter how fast your processor is, it just sits there starved waiting for data.”

Mellanox says it will start shipping ConnectX-5 in Q3 of this year.
