Shelved by Intel in 2019, Omni-Path faced an uncertain future. But under new custodian Cornelis Networks, Omni-Path Architecture (OPA) is mounting a comeback as an independent high-performance interconnect. According to the company, a “significant refresh” – called Omni-Path Express – is coming later this year.
Cornelis Networks formed last September as a spinout of Intel’s Omni-Path division. The new stewards of the Omni-Path networking brand have been making the virtual conference rounds to showcase the technology and preview what’s to come. The company is led by Co-founder and CEO Phil Murphy, previously a director at Intel, where he was responsible for fabric platform technology and business development. Murphy was also an executive with QLogic’s Network Solutions Group, and he was one of the founding members of the OpenIB Alliance (which became the OpenFabrics Alliance). You could say the arc of Murphy’s career has been bending toward Omni-Path and its current development path all along.
At Supercomputing Frontiers Europe (SCFE) this week, Murphy provided a brief update on the Omni-Path portfolio that is aiming to win the performance and price-performance wars in the HPC interconnect marketplace. Currently, Nvidia’s Mellanox InfiniBand products have a dominant position. Like Mellanox, the born-again Omni-Path portfolio supports a variety of industry CPUs and accelerators, including AMD and Intel platforms. HPE’s proprietary (Cray) Slingshot interconnect also supports a range of devices – however, Slingshot is only available with HPE systems.
Cornelis’ mission is to deliver an industry-leading fabric for traditional modeling and simulation, high-performance data analytics and artificial intelligence. Even within traditional HPC, the trend is toward adding deep learning capabilities, Murphy noted: in some parts of the code, heavy-duty arithmetic can be replaced with little or no loss of fidelity, he added.
“To really unlock the power of the compute infrastructure, you need a corresponding powerful interconnect – that means very low latency, so that the communication amongst the nodes is as rapid as possible. But also you need a very high message injection rate throughout the entire network, and highly scalable bandwidth becomes ever more important with artificial intelligence,” said Murphy.
The name Cornelis can (somewhat circuitously) be traced to a book by AI pioneer Douglas Hofstadter, called Gödel, Escher, Bach: An Eternal Golden Braid. “On the surface [the book] talked about these three important figures, but in its essence, it was really laying the foundation for what intelligence was all about,” said Murphy. “And given our focus on artificial intelligence we thought we’d give a shout out to that book – and the ‘C’ in M.C. Escher stands for Cornelis, so that’s how we came up with our name.”
Cornelis Networks’ Omni-Path portfolio draws from investments made by the OpenFabrics Alliance, QLogic, Cray’s Aries interconnect and, of course, the Intel Omni-Path project. Under Intel, the Omni-Path fabric was closely tied to the (now-defunct) Phi effort (OPA was integrated into the Phi Knights Landing processor), and it was part of the original Aurora pre-exascale supercomputer design (which was recast as a very different exascale-class machine).
Cornelis is still shipping the Omni-Path 100Gbps (OPA100) products, developed under Intel, and it is planning to launch 400Gbps products late next year, with broader availability slated for the first quarter of 2023. The OPA400 product will support bifurcation to 200Gbps.
There are 800Gbps solutions farther out on the Cornelis Omni-Path roadmap.
“There’s lots of different ways we can go,” said CEO Murphy. “We’re evaluating all the tradeoffs between cost and performance, optics versus copper, [and] going wide or going more narrow.”
Gone from the roadmap is the OPA200 product that had previously been promised by Intel. However, Cornelis is working on what it says is a significant enhancement to the OPA100 product, called Omni-Path Express (OPX).
Omni-Path Express is powered by highly optimized host software that supports OpenFabrics Interfaces (OFI), developed under the OpenFabrics Alliance. The Alliance describes OFI as “a collection of libraries and applications used to export fabric services.” Cornelis added a native OFI provider to the Omni-Path software stack, plugging directly into the OFI libfabric layer. “The goal of OFI, and libfabric specifically, is to define interfaces that enable a tight semantic map between applications and underlying fabric services,” notes the project.
The result for Omni-Path, according to Cornelis, is superior performance via increased message rates and lower latency. Just as important, says the company, the enhanced stack now supports a much broader range of application environments than was possible through Verbs and PSM2. (See chart above.) Popular MPI implementations, such as OpenMPI and MVAPICH, are supported, as well as PGAS programming models using SHMEM and Chapel. Also supported are AI frameworks such as TensorFlow and PyTorch, as well as object and file storage, and there is “generic support” for GPUs, said Murphy.
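As a sketch of how the OFI layer surfaces in practice, the commands below show libfabric provider discovery and provider selection for an MPI run using stock libfabric and Open MPI tooling. The provider name `opx` and the binary `./my_mpi_app` are assumptions for illustration; the exact provider exposed depends on the installed Omni-Path host software.

```shell
# List the libfabric providers available on this node
# (fi_info is a diagnostic utility that ships with libfabric).
fi_info -l

# Show the endpoint types and capabilities a specific provider exports.
# "opx" is assumed here as the Omni-Path Express provider name; on a
# system without Omni-Path hardware this reports no matching fabric.
fi_info -p opx

# Steer an OFI-aware MPI (e.g. Open MPI built with libfabric support)
# toward that provider via libfabric's standard filter variable.
# ./my_mpi_app is a placeholder for your MPI binary.
export FI_PROVIDER=opx
mpirun -np 2 ./my_mpi_app
```

Because MPI libraries negotiate transports at startup, pinning `FI_PROVIDER` is a simple way to verify which fabric path an application is actually taking.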
The benchmarking slides below show message rate improvements moving from Omni-Path (with PSM2) to Omni-Path Express (with OFI) on both Intel and AMD platforms.
Cornelis is also demonstrating latency improvements on AMD hardware. “We’ve seen some cases where latency dropped almost 30 percent on small messages, and these are the main predictors of application performance in the long term. The average reduction in latency across the small messages was 16 percent,” said Murphy.
We were not shown head-to-head performance benchmarks against competing interconnect solutions. However, a competitive benchmark overview was provided in May at the RMACC HPC Symposium that put dual-rail Omni-Path OPA100 against InfiniBand HDR200. (Slide below.)
Cornelis emphasizes that its technology is independent and vendor-agnostic, not tied technically or commercially to any specific processor or accelerator technology. Although Cornelis’ Omni-Path is a generation (or two) behind Nvidia’s Mellanox division (which is getting ready to ship its NDR 400Gbps products), recent shifts in the HPC interconnect landscape may provide an opportunity for Cornelis.
Industry analyst Addison Snell, CEO of Intersect360 Research, makes just this point. “A few years ago, InfiniBand was the de facto open standard for scalable, low-latency systems,” he said in a recent statement. “Cornelis’ investment in Omni-Path, combined with the Nvidia acquisition of Mellanox, suddenly flips the conversation. Now it is InfiniBand that can potentially be viewed as proprietary to a single processor vendor, whereas Omni-Path Express is an open, multi-vendor solution.”
Dan Olds, Chief Research Officer of Intersect360, further remarked on the caliber and experience of the Cornelis team. “These are not starry-eyed idealists trying to put together a new interconnect,” Olds told HPCwire. “They are ex-Intel and come from other companies and they know what they are doing. I think they have a shot to be disruptive if they hit their milestones and execute.
“I’ll want to see more head-to-head comparisons about where they are now and where they are going to be, and where they are vis-a-vis InfiniBand,” Olds added.