Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them
Last week Mellanox debuted a new network interconnect adapter, ConnectX-3 Pro. Like the ConnectX-3 architecture that it is based on, this multi-protocol device supports InfiniBand and Ethernet connectivity, but its big advantage is providing hardware offloads for overlay networks commonly used in cloud infrastructures.
In fact, Mellanox is framing this new card as the world’s first cloud offload interconnect solution. According to Gilad Shainer, Mellanox vice president of marketing, the upgraded device is poised to unlock cloud’s potential.
From Cloud 1.0 to 2.0
The origins of cloud can be traced to the 1960s, when organizations like IBM began to offer technology-based services via mainframes, largely considered the forerunner of today's Software as a Service. In the 1980s, personal computers emerged that combined hardware and software in an easy-to-use form factor, bringing the benefits of computing to non-technical users. From the late 90s into the new millennium, companies such as Salesforce and Amazon.com brought Internet-enabled services to the masses. And in 2009, Google broke from decades of tradition by providing an alternative to on-premises enterprise apps. This marked the pinnacle of Cloud 1.0, an era characterized by small-scale clouds and virtualization, notes Shainer.
At the transition point between Cloud 1.0 and 2.0, private clouds start replacing legacy enterprise datacenters, and more public clouds are being built for small-scale usage. The Cloud 2.0 era is marked by the movement toward truly scalable clouds, according to Shainer, and this is driving the development of overlay technologies such as NVGRE and VXLAN.
Unlocking the Promise
The underlying technology that enables enterprise cloud computing is virtualization. It allows administrators to run multiple virtual machines on the same physical server, so a single machine can serve multiple users, each with their own virtual machine. This sets the stage for clouds that provide services to many tenants. The promise of cloud computing is massive scalability: the ability to build datacenters all over the world and link them together to offer compute and storage services that end users can reach from anywhere, without caring where the resources physically reside. But cloud sizes have been limited by the application isolation technology used to keep data safe: VLANs.
The problem is that historically, the interconnect infrastructure could support only about 4,000 VLANs (the 802.1Q VLAN ID is a 12-bit field, allowing 4,096 values, of which 4,094 are usable), limiting the number of tenants and applications. This also limits resource mobility: placing a user's data in more than one location consumes additional VLANs, so applications are confined to a single location to conserve them.
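The scale gap between VLANs and the overlay identifiers discussed below is simple arithmetic: the 802.1Q VLAN ID is 12 bits wide, while overlay protocols such as VXLAN use a 24-bit segment ID. A quick illustration:

```python
# The 802.1Q VLAN tag reserves 12 bits for the VLAN ID, so a
# traditional Layer 2 network can carry at most 2**12 = 4096
# distinct segments (IDs 0 and 4095 are reserved, leaving 4094).
vlan_ids = 2 ** 12           # 4096
usable_vlans = vlan_ids - 2  # 4094

# Overlay protocols such as VXLAN widen the segment identifier to
# 24 bits (the VXLAN Network Identifier, or VNI), allowing roughly
# 16 million isolated tenant networks over the same physical fabric.
vxlan_vnis = 2 ** 24         # 16777216

print(usable_vlans, vxlan_vnis)
```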
“In this situation, the full promise of the cloud is limited, and that is what we call Cloud 1.0,” says Shainer. “If I really want to achieve the full promise of cloud, that means enabling unlimited scale, so I can build any number of datacenters around the world wherever I want and have availability of resources between the datacenters. This is Cloud 2.0.”
Overlay Networks
One of the underlying technologies of Cloud 2.0, in Shainer's view, is the overlay network (also known as tunneling), which has been driven by two vendors: VMware, which created VXLAN, and Microsoft, which created NVGRE. One targets the VMware environment and the other Windows, but they do essentially the same thing. Both solutions create "floating" virtual domains on top of a datacenter interconnect, which decouples a workload's location from its network address. The virtual connection makes disparate resources appear to be on the same network, which allows applications to run anywhere and receive real-time resources as needed.
But there is no free lunch; the downside of abstraction is degraded performance. Until now, the use of overlay networks has been limited by the high CPU overhead they impose on cloud resources. Because the NIC cannot see past the tunnel encapsulation, many of the traditional offload capabilities (such as checksum offload and TCP segmentation offload, or TSO) cannot be carried out in hardware. Cloud performance suffers: latency is higher and bandwidth is reduced.
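Checksum offload is one concrete example of that lost hardware assist: when the NIC cannot parse the inner headers, the ones'-complement Internet checksum (RFC 1071) must be computed by the CPU for every packet. A minimal sketch of that per-packet software work:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as the CPU must compute
    it in software when NIC checksum offload is unavailable."""
    if len(data) % 2:
        data += b"\x00"  # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

Looping over every 16-bit word of every packet is exactly the kind of per-byte CPU cost that hardware offload is meant to eliminate.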
Mellanox believes it has found the solution to this dilemma. ConnectX-3 Pro implements these overlay technologies within the interconnect hardware itself, allowing the NIC to parse the encapsulated traffic and perform its offloads on the inner packets, which brings performance back to native levels. The result is that clouds can take advantage of virtually unlimited scalability and resource mobility without the CPU overhead.
Because the offload engine enables the cloud infrastructure to support more users (and more applications), it follows that the cost per application will likewise decrease and ROI will increase.
According to Shainer, there are three main elements to Cloud 2.0: overlay networks, interconnect offload engines, and open platforms (Open Ethernet and software-defined networking). And yes, Mellanox is involved in all three.
The company is not ready to release hard metrics on the benefits of interconnect offloading, but says more information will be available in the coming weeks. Asked to offer his best estimate, Shainer reported that the technology is reducing overheads to virtually none.
The adapters debuted last week and are already shipping. Pricing is dynamic and quantity-dependent, but the MSRP ranges from $700 to $1,000.