InfiniBand is pervasively used in high-performance computing (HPC) to remove data exchange bottlenecks, delivering very high throughput and very low latency. As HPC goes mainstream and is embraced by enterprise users, those users need assurance that performance is optimized. They also need enterprise-class availability: disruptions of organization-critical production workloads are not acceptable.
Meeting these requirements of improving HPC cluster performance and uptime can be challenging. When most InfiniBand HPC clusters were in supercomputing labs and academic data centers, those facilities typically had experts who could manage and fine-tune the cluster using command-line interfaces.
Enterprise users might not have the expertise or the time to spend on these tasks. They need easy-to-use management tools that intuitively help them optimize workflows and ensure those workflows run with minimal or no downtime.
Technology that makes this possible
The ultimate goal of using HPC resources is to speed the time to insight or discovery. Beyond raw compute power, this requires optimizing the performance of HPC clusters to ensure high-throughput workloads run as fast as possible and that the execution of various workloads can be sustained without interruptions.
As such, there is a need for better insight into what is happening in the HPC cluster and for simpler ways to manage the cluster and optimize its performance.
Fortunately, several available technologies can help, covering the core infrastructure, easier management of the network, and fault-tolerance capabilities. They come from Mellanox Technologies, the leading provider of InfiniBand technology, and Fabriscale Technology, a provider of fabric management, network intelligence, and analytics solutions.
The specific technologies from these companies that help organizations improve the performance of their HPC clusters and increase uptime include:
Infrastructure: Efficient high-performance computing systems require high-bandwidth, low-latency connections, both between thousands of multi-processor nodes and to high-speed shared storage systems.
Mellanox interconnect solutions provide low latency, high bandwidth, a high message rate, and transport offload for extremely low CPU overhead, along with Remote Direct Memory Access (RDMA) and advanced intelligent communication and computation offloads. These high-speed interconnect solutions are commonly used for large-scale simulations, replacing proprietary or low-performance alternatives. Mellanox InfiniBand solutions provide the scalability, efficiency, and performance that HPC and artificial intelligence systems need today.
Fabric management: Fabriscale Wingman is new fabric management software that ensures more efficient and reliable operation of HPC clusters based on InfiniBand technology. Wingman provides efficient routing and optimized fault tolerance, ensuring improved performance, fast failover, and graceful degradation in the case of faults.
Whenever a fault occurs in the network (e.g., a link failure), Wingman automatically detects the problem and quickly reconfigures the network to use a pre-computed path, reducing the time it takes to handle network faults from several minutes with existing solutions to less than a second. This means less downtime and better utilization of the cluster.
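The general technique behind this kind of fast failover can be sketched in a few lines: backup forwarding tables are computed ahead of time, so reacting to a fault is a table swap rather than a full route recomputation. The sketch below is purely illustrative (the route tables and the apply_routes stub are invented for this example) and is not Wingman's actual implementation.

```python
# Hypothetical sketch of pre-computed-path failover, illustrating the general
# technique described above; it is not Wingman's actual implementation.

# Routing tables are computed ahead of time: one primary table, plus one
# backup table per link that might fail. On a fault, failover is a lookup
# and a swap rather than a full (minutes-long) route recomputation.

primary_routes = {("node-a", "node-b"): ["sw1", "sw2"]}  # illustrative paths
backup_routes = {
    ("sw1", "sw2"): {("node-a", "node-b"): ["sw1", "sw3", "sw2"]},
}

def apply_routes(routes):
    """Push a forwarding table to the fabric (stub for illustration)."""
    print(f"installing {len(routes)} route entries")

def on_link_failure(failed_link):
    """Handle a link-failure event by swapping in pre-computed paths."""
    new_routes = dict(primary_routes)
    new_routes.update(backup_routes.get(failed_link, {}))
    apply_routes(new_routes)  # sub-second: no path recomputation needed

on_link_failure(("sw1", "sw2"))
```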
Wingman also supports dynamic partitioning, which makes it possible to set up and tear down InfiniBand partitions on the fly. This feature can be integrated with job scheduling or virtual machine provisioning to provide on-demand network isolation at the job, user, or virtual cluster level.
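To illustrate how dynamic partitioning could tie into job scheduling, the sketch below shows a hypothetical partition manager that assigns each job an InfiniBand partition key (P_Key) when the job starts and tears the partition down when the job ends. The PartitionManager class and its methods are invented for this illustration; Fabriscale's actual integration points may differ.

```python
# Hypothetical illustration of job-level network isolation via dynamic
# partitioning. The PartitionManager API below is invented for this sketch.

class PartitionManager:
    """Tracks InfiniBand partitions (identified by 16-bit P_Keys)."""

    def __init__(self):
        self.partitions = {}     # job_id -> (pkey, member nodes)
        self.next_pkey = 0x8000  # full-membership bit set (illustrative)

    def create_partition(self, job_id, nodes):
        """Allocate a P_Key and isolate the job's nodes in a partition."""
        pkey = self.next_pkey
        self.next_pkey += 1
        self.partitions[job_id] = (pkey, set(nodes))
        print(f"job {job_id}: partition 0x{pkey:04x} over {sorted(nodes)}")

    def remove_partition(self, job_id):
        """Tear the job's partition down once the job completes."""
        pkey, _ = self.partitions.pop(job_id)
        print(f"job {job_id}: partition 0x{pkey:04x} torn down")

# A scheduler prolog/epilog could drive this on job start and completion:
pm = PartitionManager()
pm.create_partition("job-42", ["node-a", "node-b"])  # job starts
pm.remove_partition("job-42")                        # job ends
```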
Network monitoring and analytics: Fabriscale Hawk-eye is cluster interconnect monitoring software that provides visual insight into the status of an InfiniBand cluster. Hawk-eye provides an overview of performance, visualizes the topology, and lets an operator drill down into statistics, alerts, and key metrics. Hawk-eye also integrates seamlessly with workload managers, leveraging job scheduling information to visualize jobs in the cluster, identify potential job-specific network bottlenecks, and conduct job management.
With Hawk-eye, monitoring of an InfiniBand network is automated, and the system raises alarms (e.g., for link failures, port error rates, or congestion) only when the operator's attention is required. When such an event happens, the operator is quickly pointed to where the problem occurred, supported by relevant metrics, statistics, and strong analytics. This saves the operator time, speeds error recovery, reduces strain on key operator resources, and helps reduce downtime for the cluster.
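The "alarm only when attention is required" model can be illustrated with a minimal threshold check over per-port counters, as sketched below. The counter names and thresholds here are illustrative assumptions, not Hawk-eye's actual alerting rules.

```python
# Minimal sketch of threshold-based alerting over fabric port counters,
# illustrating the "alarm only when attention is required" idea above.
# Counter names and thresholds are assumptions made for this example.

THRESHOLDS = {
    "symbol_errors_per_min": 10,   # link-quality problems
    "port_xmit_wait_ratio": 0.05,  # possible congestion
}

def evaluate(port, samples):
    """Return alerts for any counter exceeding its threshold."""
    alerts = []
    for counter, limit in THRESHOLDS.items():
        value = samples.get(counter, 0)
        if value > limit:
            alerts.append(f"ALERT {port}: {counter}={value} (limit {limit})")
    return alerts

# Example: one noisy port raises an alarm, a healthy one stays silent.
for port, samples in {
    "sw1/p7": {"symbol_errors_per_min": 42, "port_xmit_wait_ratio": 0.01},
    "sw1/p8": {"symbol_errors_per_min": 0, "port_xmit_wait_ratio": 0.0},
}.items():
    for alert in evaluate(port, samples):
        print(alert)
```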
Synergies: Using the Fabriscale tools on top of a Mellanox InfiniBand infrastructure simplifies network management and helps an organization fine-tune the performance of the HPC cluster. These tools help the operator quickly identify problems and give the cluster the resiliency to overcome link failures and other network problems. Together, these attributes make an HPC cluster more robust, ensuring optimal use of HPC resources.
Summary
InfiniBand deployments have increased, expanding into the enterprise. Managing these networks can be challenging for system and network administrators, who need tools to understand performance and optimize the data flow in the network.
Using Fabriscale Hawk-eye and Fabriscale Wingman with Mellanox InfiniBand technology makes it easier to get the most use out of an HPC cluster’s resources and helps avoid cluster downtime.
This combination of technologies allows jobs to run faster and more jobs to be executed in a given time frame. As a result, organizations can do more work with their compute resources and maximize data center and cluster productivity.
For more information, visit: www.fabriscale.com and www.mellanox.com