Today’s high-performance computing (HPC) and artificial intelligence (AI) users value high-performing clusters, and the higher the performance their systems can deliver, the better.
Deploying a higher-performing fabric might seem, to an outsider, an obvious win. However, the decision to switch to a different interconnect fabric is not just about performance, price, or features. Operators and system administrators value enhancements that require minimal installation effort and little retraining for their user base. If a new system demands a large investment of staff time and resources, and sits atop a steep learning curve, the performance gains it delivers may not, in the end, be worth it.
HPE and Intel are both familiar with this concern.
This is why Intel Omni-Path Architecture (Intel OPA) drivers are built on top of OpenFabrics Alliance software. This approach allows Intel OPA to be compatible with the base of Intel True Scale Fabric and Mellanox InfiniBand applications that have been developed over the years. It also provides access to the set of mature protocols contained in OpenFabrics Alliance releases.
HPE systems with Intel Omni-Path Architecture fabric deliver:
– High-performance clusters built for HPC and AI workloads pushing the frontiers of scale-out computing
– Broad portfolio of HPE high-performance systems, seamlessly integrated with Intel OPA 100 Gbps fabric, to meet your needs
– Advanced fabric features that improve scalability, increase density, and reduce latency, cost and power
Intel OPA prioritizes ease of use and integration for users migrating to it for the first time. It includes both command-line and GUI cluster management and operations tools that are powerful and easy to use. Intel OPA’s fabric management, drivers, and fabric tools are all open source, supported, and provided at no additional charge to current Intel OPA customers. Additionally, major Linux distributions today include Intel OPA in their libraries of drivers and system software.

While common interfaces are maintained, Intel OPA has been architected specifically to address HPC and AI workload needs, delivering high performance, low latency, excellent scalability, and quality of service, even for the largest clusters. Intel and HPE have also partnered to develop, deliver, and support a robust set of Intel OPA 100 Gbps switches and host fabric interface adapters to enhance HPE’s systems portfolio. HPE’s broad range of HPC and AI systems offers seamless integration with Intel OPA, providing an optimal integrated HPC-fabric package.
For example, HPE Apollo 6000 systems with Intel OPA offer:
- Tight integration: Intel OPA connects directly into the HPE Apollo 6000 Gen10 system backplane
- System and switch density: HPE Apollo k6000 chassis supports up to two Intel OPA 48-port switches
- High-performance, energy-efficient design: priced to meet the needs of entry-level to large-scale HPC customers
Intel OPA is simple and straightforward to integrate into any type of HPE cluster, on premises or in a hybrid cloud environment. HPC and AI system administrators can switch to Intel OPA with relative ease and still enjoy the advanced fabric features and the increased performance that Intel OPA delivers.
Advanced Fabric Features
Intel OPA has been designed to overcome the challenges associated with modern clusters. It comes with the following enhanced interconnect features:
1) High Message Rate. Intel OPA uses an innovative design built around a communications library called Performance Scaled Messaging (PSM). This library is optimized for high-speed MPI communications and high message rates, a key factor that can determine overall application performance and scaling.
2) 48-Port Switch ASIC. Intel OPA’s 48-port switch ASIC provides improved fabric scalability, increased density, and reduced latency, cost, and power.
3) Deterministic Latency with Traffic Flow Optimization. High-priority messages can bypass lower-priority large messages, even after transmission of the lower-priority message has begun, helping maintain consistent latency even when large messages are being transmitted simultaneously in the fabric.
4) Enhanced End-to-End Reliability. Intel OPA’s Packet Integrity Protection enables recovery of errors within the fabric, between host and switch, and between switches. This helps eliminate the need for transport level timeouts and end-to-end retries.
5) Dynamic Lane Scaling. Helps keep the fabric running in the event of a physical fault, avoiding the need to restart the application or return to a previous checkpoint.
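To make the Traffic Flow Optimization idea above concrete, here is a minimal toy simulation, not Intel's implementation: the link sends one flit per tick, and at every flit boundary the highest-priority message with data remaining wins the link, so a small high-priority message preempts a large transfer already in flight. The `Message` and `transmit` names are purely illustrative.

```python
import heapq
from dataclasses import dataclass, field

# Toy model only: one flit per tick, arbitration at every flit boundary.
# Real Intel OPA arbitration is implemented in switch hardware.

@dataclass(order=True)
class Message:
    priority: int            # lower number = higher priority
    seq: int                 # tie-breaker: arrival order
    name: str = field(compare=False)
    flits_left: int = field(compare=False)

def transmit(messages):
    """messages: list of (arrival_tick, Message). Returns completion order."""
    pending = sorted(messages, key=lambda m: m[0])
    ready, done, tick = [], [], 0
    while pending or ready:
        # Admit everything that has arrived by the current tick.
        while pending and pending[0][0] <= tick:
            heapq.heappush(ready, pending.pop(0)[1])
        if not ready:                # idle link: jump to next arrival
            tick = pending[0][0]
            continue
        msg = heapq.heappop(ready)   # highest priority wins this flit slot
        msg.flits_left -= 1          # send one flit
        if msg.flits_left == 0:
            done.append(msg.name)
        else:
            heapq.heappush(ready, msg)
        tick += 1
    return done

# A 1000-flit bulk transfer starts at tick 0; a 4-flit control message
# arrives at tick 10 with higher priority and completes first.
bulk = Message(priority=1, seq=0, name="bulk", flits_left=1000)
ctrl = Message(priority=0, seq=1, name="ctrl", flits_left=4)
print(transmit([(0, bulk), (10, ctrl)]))  # ['ctrl', 'bulk']
```

Without priority arbitration the control message would wait behind the remaining ~990 flits of the bulk transfer; with it, the control message is delayed only until the next flit boundary, which is the deterministic-latency behavior the feature describes.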
HPE offers the unique HPE Enhanced Hypercube interconnect technology for Intel OPA. This technology enables effective scaling without the use of any external switches. Combined with the HPE SGI 8600, it reduces operational costs through superior power and cooling efficiency, coupled with advanced power management that can limit power use at the job, node, rack, and system level.
HPE serves customers of any size and meets their needs with custom-fit systems. Our diverse portfolio of Apollo systems, ProLiant Gen10 servers, and the HPE SGI 8600 system, coupled with high-density storage solutions, provides customers the opportunity to choose the right solution for their needs. For example, in the HPE Apollo system family, including the HPE Apollo 6500, 6000, 2000, and sx40, you can find an ideal HPC or AI system, whether CPU-intensive or integrated CPU/GPU, along with a wide variety of system features, configuration options, and form factors. The HPE SGI 8600 achieves both high density and high scalability in an energy-efficient, water-cooled system. HPE also offers highly secure, industry-standard, rack-optimized servers with its ProLiant DL line.
Best of all, within all of these systems, the Intel OPA fabric fits seamlessly.
To discover how an HPE high-performance cluster with Intel OPA fabric can change the game for your HPC system, visit our website, or contact your local HPE or Intel sales representative or business partner.