Our Earth may be round, but the universe appears to be relatively flat. At least that’s the word from the Stephen Hawking Centre for Theoretical Cosmology at the University of Cambridge, home of the COSMOS supercomputer.
As you might expect, arriving at these cosmic conclusions requires massive amounts of computing power. The COSMOS supercomputer was the first very large (over 10 terabytes) single-image, shared-memory system to incorporate Intel Xeon Phi coprocessors. COSMOS has been named an Intel Parallel Computing Center.
If you attended the recent ISC15 in Frankfurt, you got a look at another new Intel technology making these astronomical calculations possible. In the company’s booth, a COSMOS simulation was running on a demo of Intel’s forthcoming Xeon Phi Knights Landing processors supported by the first public pre-release demo of the Intel Omni-Path Architecture.
It’s Omni-Path that supplied the huge bandwidth – 100 Gbps – and low latency needed to run applications of this magnitude. The advanced fabric will be released in the fourth quarter of this year.
According to Joe Yaworski, Intel Director of Fabric Marketing for the HPC Group, Omni-Path is primarily an evolutionary product that incorporates some notable revolutionary designs. It builds on the best features of the company’s five-year-old True Scale Fabric and adds new capabilities that will take Omni-Path into the Exascale era.
Says Yaworski, “We saw the limitations of InfiniBand and understood that these were major barriers to achieving Exascale. So we decided to do something about it.”
Omni-Path has a solid technical foundation, built on leading-edge IP from Intel acquisitions combined with Intel’s own in-house innovations. The result is an architecture that can cost-effectively scale to tens of thousands, and eventually hundreds of thousands, of nodes.
In a recent webinar, Yaworski discussed the architecture at length. Here are a few highlights from his presentation (you can view the webinar here).
A major Intel strategy is to drive the fabric increasingly closer to the processor until Omni-Path essentially becomes an extension of the CPU. “This is not happening on day one,” comments Yaworski, “but in the not too distant future, the CPU and the fabric will be indistinguishable from one another. As we move into the next generations of Xeon processors, Omni-Path will become basically an extension of memory – part of the addressing scheme. This will provide ample opportunity for improving performance and scalability in tomorrow’s high end HPC systems.”
Omni-Path also leverages some of the best features of Intel’s True Scale Fabric. True Scale combines a very high MPI message rate with latency that stays low even at scale, thanks to the architecture’s connectionless design. Unlike a traditional implementation, which maintains per-connection address information in the adapter’s cache, a connectionless design keeps no connection state between nodes, cores, or processes. As a result, latency remains consistent regardless of scale or the number of messaging partners. This approach offers greater potential to scale performance across a large node/core-count cluster while maintaining low end-to-end latency as the application spreads across the cluster.
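Why connectionless designs keep latency flat can be seen in a toy model. This is a sketch under assumptions, not Intel code: the cache size and latency figures below are invented purely to illustrate the effect of caching per-peer connection state in an adapter.

```python
# Toy model (illustrative numbers only, not measured Omni-Path figures):
# a connection-oriented adapter caches per-peer state; once the number of
# messaging partners exceeds the cache, average latency climbs.

CACHE_SLOTS = 4              # hypothetical adapter cache capacity
HIT_NS, MISS_NS = 1.0, 5.0   # assumed latencies for cache hit vs. miss

def connection_oriented_latency(peers: int) -> float:
    """Average send latency when per-peer state must fit in a small cache."""
    if peers <= CACHE_SLOTS:
        return HIT_NS
    hit_rate = CACHE_SLOTS / peers          # crude uniform-access model
    return hit_rate * HIT_NS + (1 - hit_rate) * MISS_NS

def connectionless_latency(peers: int) -> float:
    """No per-peer state is kept, so latency ignores the partner count."""
    return HIT_NS

for peers in (2, 8, 64, 1024):
    print(peers, connection_oriented_latency(peers), connectionless_latency(peers))
```

In the model, the connection-oriented curve degrades as the peer count grows, while the connectionless one stays constant — the behavior True Scale and Omni-Path aim for at scale.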
Each Omni-Path switch port can support up to 195 million messages/second; given the 48-port design of the fabric’s switch infrastructure, that comes to roughly 9.4 billion messages/second per switch. Port-to-port latency is low – some 100 nanoseconds – and that figure includes error detection and correction. At these speeds and scales, errors in the fabric are inevitable, so the architecture ensures that they do not trigger an end-to-end retry, making them transparent to the application. The packet integrity protection feature catches and corrects single- and multi-bit errors within the fabric, eliminating the additional latency of end-to-end retries.
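As a quick sanity check, the aggregate per-switch rate follows directly from multiplying the quoted per-port figure by the port count:

```python
# Aggregate message rate: quoted per-port rate times the switch port count.
port_rate = 195e6   # messages/second per port (figure cited above)
ports = 48
print(f"{port_rate * ports / 1e9:.2f} billion messages/second")
# prints: 9.36 billion messages/second
```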
“Traffic flow optimization is another major feature incorporated into Omni-Path,” says Yaworski. “We have extremely fine-grained control of traffic moving through the fabric – down to a 65-bit element. So every 65 bits we can make a priority decision. This means that high-priority MPI traffic doesn’t get blocked by low-priority storage traffic hogging the pipeline.”
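The idea behind that priority decision can be sketched in a few lines. This is a simplified model, not Omni-Path’s actual arbitration logic: it just shows how granting high-priority traffic the next transmission slot, at fine granularity, keeps latency-sensitive MPI messages from queuing behind bulk storage transfers.

```python
# Sketch (assumed behavior, not Intel internals): per-slot priority
# arbitration in the spirit of traffic flow optimization's 65-bit elements.
from collections import deque

def arbitrate(high: deque, low: deque, slots: int) -> list:
    """For each transmission slot, send a high-priority element if waiting."""
    out = []
    for _ in range(slots):
        if high:
            out.append(high.popleft())   # MPI traffic takes the slot...
        elif low:
            out.append(low.popleft())    # ...otherwise storage traffic goes
    return out

mpi = deque(["mpi1", "mpi2"])
storage = deque(["sto1", "sto2", "sto3", "sto4"])
print(arbitrate(mpi, storage, 6))
```

High-priority elements always drain first, so a long storage transfer already in flight cannot hog the pipeline for more than one arbitration interval.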
The architecture’s dynamic lane scaling feature ensures that cables – whether copper or optical – will fail gracefully without impacting reliability or application stability. Operations continue even if one or more lanes of a 4x link fail.
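Graceful lane degradation amounts to the link continuing at reduced bandwidth rather than going down. A minimal sketch, assuming an even 25 Gbps per lane of the 100 Gbps link (an illustrative split, not a published per-lane figure):

```python
# Toy model of dynamic lane scaling on a 4x link (assumed numbers).
LANES = 4
LANE_GBPS = 25.0   # illustrative: 100 Gbps aggregate / 4 lanes

def link_bandwidth(failed_lanes: int) -> float:
    """Remaining bandwidth after lane failures; the link stays up
    as long as at least one lane survives."""
    working = LANES - failed_lanes
    if working <= 0:
        raise RuntimeError("link down")
    return working * LANE_GBPS

print(link_bandwidth(0))  # 100.0 Gbps, all lanes healthy
print(link_bandwidth(1))  # 75.0 Gbps, degraded but operational
```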
Omni-Path’s adaptive routing makes up to hundreds of real-time adjustments per second per switch to optimize traffic flow. And the architecture’s dispersive routing feature defines alternate routes that disperse traffic flow for redundancy, performance, and load balancing.
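The interplay of the two features can be sketched as follows. This is a hypothetical model, not the Omni-Path routing algorithm: dispersive routing supplies a set of precomputed alternate paths, and an adaptive step steers each new flow onto the least-loaded one.

```python
# Sketch (hypothetical): adaptive selection among dispersive alternate routes.
def pick_route(routes: dict) -> str:
    """routes maps route name -> current load (0.0-1.0); pick the lightest."""
    return min(routes, key=routes.get)

# Assumed snapshot of per-path load on three alternate routes:
loads = {"path_a": 0.9, "path_b": 0.2, "path_c": 0.5}
print(pick_route(loads))  # the least-loaded alternate path
```

If one path fails or congests, the remaining alternates absorb its traffic — the redundancy and load-balancing benefit described above.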
These are just a few of the many evolutionary and revolutionary aspects of this next generation fabric. A good overview is available here.
Notes Yaworski, “The Omni-Path Architecture has been designed specifically for the rapidly evolving field of high performance computing. It will support HPC systems ranging from entry level – say 16 nodes – up to extreme scales incorporating multiples of tens of thousands of nodes. It’s a fabric designed to provide cost-effective performance and scaling required for today’s and tomorrow’s systems as we move toward Exascale.”
As the COSMOS simulation at Intel’s ISC booth demonstrated, Omni-Path is helping to move HPC forward at warp speed.