DENVER, Colo., Nov. 19 — Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that NVIDIA GPUDirect RDMA technology, supported on NVIDIA Tesla K40 and K20 series GPU accelerators, is now supported on Mellanox’s Connect-IB InfiniBand adapters. The combined solution of Mellanox’s Connect-IB FDR InfiniBand adapters, NVIDIA GPUDirect RDMA technology and Tesla GPU accelerators provides industry-leading application performance and efficiency for GPU-accelerated high-performance clusters.
With full support in the MVAPICH2-2.0b Message Passing Interface (MPI) release from The Ohio State University, the following features and capabilities are enabled:
- Multi-rail capabilities for NVIDIA GPUDirect RDMA with MVAPICH2
- 67 percent reduction in small-message latency and 10 percent reduction in large-message latency
- 5X bandwidth improvement for small messages with Connect-IB
- Support for RDMA over InfiniBand and over Converged Ethernet (RoCE)
“Using enhanced MVAPICH2-2.0b with NVIDIA GPUDirect RDMA-based designs, end-users will now see a significant reduction in latency for small messages and an increase in bandwidth for large messages,” said Professor Dhableswar K. (DK) Panda of The Ohio State University. “The MVAPICH2-2.0b design with NVIDIA GPUDirect RDMA support is able to deliver excellent performance for K40 GPUs using Connect-IB FDR adapters.”
“We see increased adoption of FDR InfiniBand and NVIDIA GPUDirect RDMA technology by leading commercial partners, government agencies, as well as academia and research institutions,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox’s FDR InfiniBand solutions with NVIDIA GPUDirect RDMA are providing the highest level of application performance, scalability and efficiency for GPU-based clusters.”
“With 12GB of ultra-fast GDDR5 memory and support for PCIe Gen 3 interconnect technology, the new Tesla K40 accelerators are ideal for ultra-large scale scientific and commercial workloads,” said Ian Buck, vice president of Accelerated Computing at NVIDIA. “When coupled with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions unlock new levels of performance for HPC customers by enabling direct memory access from the GPU across the InfiniBand fabric.”
Beta-level support for NVIDIA GPUDirect RDMA and MVAPICH2-2.0b-GDR will be publicly available this quarter with the upcoming MLNX_OFED 2.1 release. For more information, please email email@example.com.
Visit Mellanox Technologies at SC’13 (November 18-21, 2013)
Visit Mellanox Technologies at SC’13 (booth #2722) to see live NVIDIA GPUDirect RDMA demonstrations by OSU and rCUDA, as well as the full suite of Mellanox’s end-to-end high-performance InfiniBand and Ethernet solutions. For more information on Mellanox’s event and speaking activities at SC’13, please visit http://www.mellanox.com/sc13.
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.
Source: Mellanox Technologies