With two major announcements this week, IBM continues to drive the build-out of its advanced scale ecosystem, aligning industry heavyweights to jointly develop open technologies that tackle the perennial problems of latency and bandwidth, and of bringing processing power into balance with data access within and among servers. In so doing, IBM and the companies alongside it have joined battle with Intel (some of those companies are in the Intel camp as well) for the soul of the new analytics-driven data center/web services/hyperscale world.
This morning, many of the same companies are involved in the launch of the OpenCAPI Consortium and the development of an open CAPI (Coherent Accelerator Processor Interface) standard that takes aim at the standard PCIe bus. It’s designed to provide an open, high-speed pathway for advanced memory, accelerators, networking and storage to tightly integrate their functions within servers.
Promising a 10X performance improvement over existing servers, the consortium said the new interface will be capable of a 25Gbit/s data rate, compared with the 16Gbit/s maximum transfer rate of the current PCIe specification. OpenCAPI’s “data-centric” approach to server design puts compute power closer to the data and is intended to remove inefficiencies and system bottlenecks.
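Plugging in the cited per-lane rates makes the raw gap concrete. A rough back-of-the-envelope sketch follows; the two rates are the announcement's headline figures, and real-world throughput would further depend on lane count, encoding and protocol overhead, none of which are specified here:

```python
# Compare the raw signaling rates cited in the announcement.
# These are headline per-lane figures, not effective throughput.
OPENCAPI_GBPS_PER_LANE = 25.0   # OpenCAPI 25Gbit/s link
PCIE_GBPS_PER_LANE = 16.0       # current PCIe spec maximum cited


def aggregate_gbps(per_lane_gbps: float, lanes: int) -> float:
    """Raw aggregate signaling rate across a multi-lane link."""
    return per_lane_gbps * lanes


if __name__ == "__main__":
    for lanes in (1, 8, 16):
        oc = aggregate_gbps(OPENCAPI_GBPS_PER_LANE, lanes)
        pcie = aggregate_gbps(PCIE_GBPS_PER_LANE, lanes)
        print(f"x{lanes}: OpenCAPI {oc:.0f} Gb/s vs PCIe {pcie:.0f} Gb/s "
              f"({oc / pcie:.2f}x)")
```

At equal lane counts the cited rates give OpenCAPI a 1.56x raw advantage per lane; the consortium's 10X system-level claim rests on the broader data-centric design, not on link speed alone.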
The consortium brings together AMD, Dell EMC, Google, HPE, IBM, Mellanox Technologies, Micron and Xilinx. According to the consortium, OpenCAPI specs will be released by the end of the year, and servers and related products based on the new standard are expected in the second half of 2017.
“OpenCAPI is a new standard to enable very high performance accelerators like FPGAs, graphics, network and storage accelerators that perform functions the datacenter server’s general purpose CPU isn’t optimized for,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “Acceleration is what all the cool kids are doing,” he said, citing recent statements from Google about its Tensor Processing Unit, from Microsoft about FPGAs, from Apple about its GPU accelerator farm and Intel’s acquisitions of FPGA maker Altera and machine learning start-up Nervana Systems.
“Unless you are living under a rock,” said Moorhead, you realize that AI, machine learning, big data, analytics, security and HPC “are all the hottest spots driving technology today. Accelerators need a very high-performance, low-latency, cache coherent bus to connect to, and OpenCAPI was designed literally years ago to do just this.”
He added that OpenCAPI’s openness is a virtue. “While there has been a lot of debate on what ‘open’ means,” he said, “what everyone in technology can agree on is that open standards are one of the key drivers of industry growth and prosperity. Open technology standards enable profitable growth by allowing companies to share common specifications so everyone isn’t recreating the wheel.”
The OpenCAPI spec is the product formerly known as New CAPI (see IBM Advances Against x86 with Power9). While OpenCAPI has been positioned as a successor to CAPI 2.0, the two protocols will coexist in POWER9, IBM’s latest POWER CPU, with OpenCAPI running on the 25Gb/s link and CAPI 2.0 running over the 16Gb/s PCIe link.
IBM shared that OpenCAPI runs on a faster electrical interface than its CAPI forerunners and is architecture-agnostic. In addition, accelerator design (FPGA, GPU, etc.) is simplified compared to CAPI 2.0 in that a PSL (Power Service Layer) logic layer is not required.
Roger Kay, founder and president of Endpoint Technologies, said OpenCAPI could “give PCIe a run for its money. Up to now there hasn’t been much of a challenge to that bus, nor has there been a high-performance open standard to serve as an alternative.”
He said that “IBM makes the case pretty well that Moore’s Law is failing to deliver system-level performance the way it used to, and I think that’s reasonably fair. You haven’t had clock rate increases really in years; you’ve had proliferation of cores, but that’s been the way performance has been handled: trying to gang up programming tasks and execute them in parallel to get them done faster. But serial computing, which is at the bottom of it all, hasn’t changed all that much.”
The additions of Dell EMC and HPE “are big wins,” he said, because “those guys are traditionally x86 houses. It shows some willingness to adopt this technology by members of the x86 ecosystem.”
In its announcement, the consortium said that Zaius, the new server Google and Rackspace are developing, will leverage POWER9 processor technology and will provide the OpenCAPI interface in its design.
Kay said OpenCAPI, and OpenPOWER systems in general, have potential in the hyperscale and cloud services provider markets, as well as in China.
“Another piece of it is the ‘Google-ish’ thing,” he said. “I think that’s where HPE and Dell fit in as well because this begins to hint at the major cloud suppliers…. Those guys are just looking for sheer performance at the center of their clouds. So guys like Google are expected to get millions of these for themselves, and you can expect someone like Dell selling them to Google and Amazon, for example…. Those guys are just huge suppliers in and of themselves, and they just want top performance stuff, so this will appeal to them.”
He also said POWER has been doing well in the Chinese market and that the openness of OpenCAPI will be welcomed there.
“I think that’s kind of the ace in the hole for both these consortia (Gen-Z and OpenCAPI),” Kay said. “China’s very anxious to have a fully indigenous stack, top to bottom, and they don’t mind that they didn’t invent it as long as they have complete rights to use it. So under the open schema they can take both POWER, which they are working on now to make processors and to build systems based on them, and also institute the OpenCAPI bus as well. So they’re free to run with that. I think it’s going to have appeal in China.”