Chip giant Intel has penned an agreement to acquire QLogic’s assets related to its InfiniBand product line. The move is in line with Intel’s strategy to have a broader, more diverse set of offerings for the datacenter, especially in the lucrative network and interconnect segment. The chipmaker also sees the acquisition as key to fulfilling its promise to deliver exascale technology by 2018.
According to Kirk Skaugen, vice president and general manager of Intel’s Data Center and Connected System Group, the company was looking to add a high performance network fabric to its stable of HPC technologies and approached QLogic with an offer. The deal, which Skaugen described as a “win-win for both companies,” sends $125 million in cash to QLogic, allowing it to focus on its core competencies in converged and storage networking. Meanwhile, Intel gets an HPC fabric technology and product line that it intends to leverage to the max for its supercomputing business.
Given the high margins of the Xeon server CPUs and the large numbers of chips required in supercomputers, the HPC business is already a big money maker for Intel. In 2010, HPC was 15 percent of the company’s Xeon sales. And according to Skaugen, that business is going to get even more attractive. “By 2018, the top 100 supercomputers in the world will represent the same [processor] volume as half of the world’s server market as we know it today,” he says.
From Skaugen’s perspective, leadership in HPC fabrics is going to help drive that along a couple of different dimensions. To begin with, he says the company is committed to QLogic’s existing line of InfiniBand adapters and switches and will continue to sell them in the HPC cluster marketplace. At the same time, it will use the technology as a foundation to help build an integrated fabric architecture for its future manycore processors. “In both cases, we plan to invest to be number one in high performance computing fabrics,” Skaugen told HPCwire. Intel would also be free to include the adapter chips on its recently launched HPC server boards or even sell the InfiniBand ASICs to third-party storage and network vendors, although Skaugen did not elaborate on either of those possibilities.
The chipmaker’s plans to use the technology for delivering on its exascale ambitions have less to do with InfiniBand, per se, than with owning a fabric technology (patents, IP, and engineering talent) geared for high performance computing. On the compute side, Intel has pinned its hopes on its Many Integrated Core (MIC) architecture to bring the x86 architecture into the exaflops realm, but shoehorning a whole exaflop system inside the oft-mentioned 20MW power budget is going to require on-chip integration of the interprocessor communication fabric. In the QLogic technology, Intel sees a scalable fabric that can be integrated into future x86 microprocessors and deliver the requisite performance per watt.
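To put that power budget in perspective, a quick back-of-the-envelope calculation (the figures are the article’s, the arithmetic is ours) shows the efficiency target implied by an exaflop machine in a 20MW envelope:

```python
# Efficiency implied by one exaflop (10^18 FLOPS) inside a 20 MW budget.
exaflop = 1e18        # floating-point operations per second
power_watts = 20e6    # 20 MW power envelope

gflops_per_watt = exaflop / power_watts / 1e9
print(f"{gflops_per_watt:.0f} GFLOPS per watt")  # prints "50 GFLOPS per watt"
```

Fifty gigaflops per watt is well beyond what discrete adapters and switch hops could accommodate at the time, which is the argument for pulling the fabric onto the processor die.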
Needless to say, the QLogic acquisition would appear to pit Intel against Mellanox, the current leader in all things InfiniBand. Skaugen, though, reminds us that Mellanox is an Intel partner too, selling both Ethernet and InfiniBand adapter parts atop Intel server boards. And since Intel already has an Ethernet play via its Fulcrum and NetEffect acquisitions, the QLogic deal would fall into the same category.
That’s not to say Intel is always going to play nice with Mellanox. When you aim to be “number one in high performance computing fabrics,” you can’t avoid bumping into the InfiniBand leader from time to time. But, unlike QLogic and Fulcrum, which had to go head-to-head with Mellanox and other interconnect vendors to succeed, Intel can also derive value from the QLogic technology through packaging and integration with its CPU-based products.
Internally, Intel will segregate its Ethernet and InfiniBand products along traditional lines, with the 10GbE offerings aimed at the commercial enterprise business, cloud computing, and the lower rungs of the HPC space. InfiniBand, meanwhile, will be mostly reserved for supercomputing, which in this case refers to systems above the departmental cluster level. If vendors with specialized appliances, like Oracle’s Exadata machine or other companies’ business intelligence boxes, display a desire for InfiniBand-level latency and bandwidth, Intel will pursue those opportunities as well.
The acquisition of the QLogic technology does put Intel into somewhat of an embarrassing position, inasmuch as its Sandy Bridge Xeon (E5) processors come with built-in support for PCIe 3.0, the interface required to fully support Fourteen Data Rate (FDR) InfiniBand. As it currently stands, only Mellanox adapters can take advantage of that capability, since the QLogic technology does not yet offer FDR support. Skaugen wouldn’t say when Intel plans to move the QLogic architecture to FDR, hinting only that he believes QDR is going to be the predominant InfiniBand speed in HPC systems for the next year or so.
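The link between FDR and PCIe 3.0 comes down to raw bandwidth. A rough comparison, using published per-lane signaling rates and encoding overheads rather than anything from the article, sketches why a PCIe 2.0 slot can’t keep up with a 4x FDR port:

```python
# Usable one-direction bandwidth of a PCIe link, in Gb/s:
# per-lane signaling rate x line-encoding efficiency x lane count.
def link_gbps(rate_gtps, encoding_efficiency, lanes):
    return rate_gtps * encoding_efficiency * lanes

# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (80% efficient)
pcie2_x8 = link_gbps(5.0, 8 / 10, 8)          # 32 Gb/s
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding (~98.5% efficient)
pcie3_x8 = link_gbps(8.0, 128 / 130, 8)       # ~63 Gb/s
# 4x FDR InfiniBand: 14.0625 Gb/s per lane, 64b/66b encoding
fdr_4x = link_gbps(14.0625, 64 / 66, 4)       # ~54.5 Gb/s

print(f"PCIe 2.0 x8: {pcie2_x8:.1f} Gb/s")
print(f"PCIe 3.0 x8: {pcie3_x8:.1f} Gb/s")
print(f"4x FDR IB:   {fdr_4x:.1f} Gb/s")
```

The FDR port’s ~54.5 Gb/s overruns a PCIe 2.0 x8 connection (32 Gb/s) but fits comfortably in PCIe 3.0 x8, which is why the E5’s native PCIe 3.0 matters to InfiniBand vendors.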
The acquisition will close by the end of the current quarter, assuming the deal doesn’t run afoul of regulators or encounter a revolt from big stockholders. Intel has made offers to QLogic employees associated with the InfiniBand business and expects to get most if not all of them on board when the acquisition completes.