With the rollout of high-performance, lossless Ethernet products over the last few years, more than a few analysts predicted the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff on some investments made by Mellanox, and pent-up demand for server and storage deployment now being alleviated by Intel’s Romley platform, InfiniBand is having a big year. And not just in its traditional HPC stronghold.
In evidence are the latest financial results from Mellanox, which last week reported record revenue of $133.5 million for Q2. That’s 50 percent better than it did in Q1, and more than double the revenue from the second quarter of 2011. Net profit for the quarter was an impressive $32 million. For Q3, the company expects to do even better, with revenues of between $150 million and $155 million.
Much of Mellanox’s recent success can be attributed to Intel’s launch of the Romley server platform (Xeon E5-2600, aka Sandy Bridge EP) back in March. The release of that processor touched off an industry-wide server refresh across HPC, Web 2.0, cloud and other server-centric businesses, which had been waiting to deploy new equipment since last fall, when Intel was originally expected to make the chips generally available.
The Romley Xeons have built-in support for PCI-Express Gen3, the new bus standard, which can deliver I/O at speeds of more than 100 gigabits per second (Gbps) off the motherboard. That’s more than enough bandwidth to support the latest fourteen data rate (FDR) InfiniBand products from Mellanox. In fact, at 56 Gbps, FDR InfiniBand is arguably the only standard interconnect technology that can take advantage of the additional performance headroom afforded by PCIe’s latest and greatest. (Mellanox has even announced a dual-port PCIe-based FDR adapter, using its new Connect-IB technology, which can deliver more than 100 Gbps from a single adapter.) Ironically, Intel, which now owns QLogic’s InfiniBand portfolio, lacks a corresponding FDR product that would take advantage of its own server chips.
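The figures above follow from the raw signaling rates and line-code overheads of each standard. As a back-of-the-envelope sketch (the helper function here is purely illustrative, not any vendor tool), the effective data rate is the per-lane signaling rate, times the lane count, times the encoding efficiency:

```python
def effective_gbps(lane_rate_gtps, lanes, encoded_bits, payload_bits):
    """Effective data rate in Gbps after line-code overhead."""
    return lane_rate_gtps * lanes * payload_bits / encoded_bits

# PCIe Gen3 x16 slot: 8 GT/s per lane, 128b/130b encoding
pcie3_x16 = effective_gbps(8.0, 16, 130, 128)

# FDR InfiniBand 4x link: 14.0625 Gbps per lane, 64/66b encoding
fdr_4x = effective_gbps(14.0625, 4, 66, 64)

print(f"PCIe Gen3 x16:     {pcie3_x16:.1f} Gbps")  # roughly 126 Gbps
print(f"FDR 4x InfiniBand: {fdr_4x:.1f} Gbps")     # roughly 54.5 Gbps
```

Which is why a Gen3 x16 slot (~126 Gbps) comfortably feeds one FDR port (~54.5 Gbps effective, marketed as 56 Gbps raw), and why a dual-port FDR adapter is about the first device that can saturate the slot.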
The importance of fourteen data rate InfiniBand to Mellanox was underscored by the fact that 54 percent of the company’s total revenue in the second quarter was the result of FDR sales (silicon, adapters, and switches). That’s up from 31 percent of revenue in Q1, when Sandy Bridge EP was released, and 14 percent in Q4 2011, when just a handful of supercomputers were deployed with pre-launch Sandy Bridge silicon. Since the FDR adapters and switches are high-end products and have no direct competition from either Intel’s InfiniBand business or the best Ethernet vendors have to offer, Mellanox is free to sell these products at a hefty premium.
Some large FDR-based HPC deployments in the quarter established the foundation for the record Q2 revenue. And for the first time, InfiniBand surpassed Ethernet in the number of systems on the TOP500 list, a symbolic milestone, to be sure, but one that underlines the technology’s steady adoption in the HPC space at the expense of Ethernet.
High performance computing will continue to be Mellanox’s bread-and-butter for the foreseeable future, but during the Q2 investor call last week, company CEO Eyal Waldman made a big deal about the company’s penetration into the Web 2.0 and cloud markets. With its virtual protocol interconnect (VPI) hardware and its work with Microsoft and VMware to make its InfiniBand portfolio more enterprise-friendly, Mellanox has been working hard to expand the reach of its flagship technology.
The company’s enthusiasm is being driven by the similarity of the underlying infrastructure of HPC with that of web and cloud datacenters. Even though the application sets are different, the majority of these systems are being constructed from the same ultra-dense compute and storage hardware componentry, which simultaneously needs big, fast network pipes and the capability to scale out to hundreds or even thousands of nodes.
To illustrate how that’s being played out, the company pointed to a large-scale deployment of InfiniBand for a Web 2.0 customer, which accounted for a large chunk of the Q2 revenue. Although that particular customer shall remain nameless, IaaS provider ProfitBricks recently revealed it was using Mellanox FDR for its cloud hosting infrastructure. Waldman implied we’ll see more such deployments in the coming quarters.
In addition to expanding its market reach, Mellanox also pointed to specific application growth areas like Big Data and financial services. The latter has been instrumental in sales of Mellanox Ethernet gear, which, although representing only 7 percent of the Q2 revenue, is a product set that Waldman and company have high hopes for. The fact that this Ethernet revenue is hooked into the lucrative and growing high frequency trading market certainly doesn’t hurt.
Another growth area where Mellanox is seeing increased traction is storage. In part, that’s being driven by the industry’s move to denser and more scalable storage platforms, which puts high performance interconnect capabilities at a premium. In particular, the growing use of flash memory in these systems matches up especially well with the latency and bandwidth characteristics of InfiniBand.
According to Waldman, the storage segment accounts for about 15 to 20 percent of the company’s total revenue, but a number of design wins with InfiniBand as the internal fabric suggest a steady pipeline of future deployments. “InfiniBand is becoming the interconnect of choice for storage systems,” he told investors.
From his perspective, this all adds up to a growing market acceptance of the InfiniBand product set, as HPC, Web 2.0, storage, cloud, and Big Data environments all gravitate toward high performance interconnect fabrics. Although the latest Ethernet products are getting close, InfiniBand still has the performance edge, along with a straighter and faster roadmap to 100 Gbps. That perspective is certainly shared by the Taneja Group, an analyst firm that coincidentally delivered a report (PDF) last week, detailing the advantages of InfiniBand across much of the datacenter landscape. Their conclusion:
Many of the “extreme” requirements leading to InfiniBand used to only apply to HPC and a few other specialized needs, but moving forward hundreds of mainstream mission-critical applications will be hosted on denser, virtualized clouds of infrastructure with similar interconnect requirements. In a way InfiniBand is being pulled up by its bootstraps by the exigencies of virtualization, cloud and Big Data.
If true, the only dark cloud that looms on Mellanox’s InfiniBand horizon would be Intel, which certainly has the wherewithal to build a competitive portfolio from the technology it inherited from QLogic. In a couple of years perhaps, by the time four-lane 100 Gbps InfiniBand technology comes to fruition, Intel could indeed have a solution that could go head-to-head with Mellanox. With its superior chip-baking expertise and ability to blend interconnect smarts onto a general-purpose processor, Intel could make up for a lot of lost time.