SAN JOSE, Calif., March 28, 2018 — WekaIO, a leader in high-performance, scalable file storage for data-intensive applications, today announced that its Matrix software outperforms legacy file systems and local-drive NVMe for GPU-based workloads, delivering an exceptional performance boost and cost savings for high-performance AI applications when coupled with Mellanox Technologies’ InfiniBand intelligent interconnect solutions.
WekaIO Matrix achieved 5GB/s of throughput per client running the NVIDIA TensorRT inference optimizer over Mellanox EDR 100Gb/s InfiniBand on 8 NVIDIA® Tesla® V100 GPUs, meeting the performance that deep learning networks require. Customers with demanding AI workloads can expect 11GB/s of 256K read performance from a single client host, a benchmark achieved on a 6-host cluster with 12 Micron 9200 drives. Additionally, benchmark results showed that customers will see a performance boost over local-drive NVMe even when link speeds are reduced to 25Gb/s. Together, WekaIO Matrix and Mellanox InfiniBand outperform local NVMe for customers who need higher throughput from single DGX-1 servers.
The work with Mellanox demonstrates WekaIO Matrix software’s ability to distribute data across multiple GPU nodes to achieve higher performance, greater scalability, lower latency, and greater cost savings for machine learning and technical computing workloads.
Mellanox is the leading supplier of high-performance interconnect solutions for GPU clusters used for deep learning workloads. When customers couple Mellanox’s ConnectX-series InfiniBand with WekaIO Matrix software, they realize significant performance improvements for data-hungry workloads without making any modifications to their existing network.
“We are very excited by the results we are achieving with WekaIO,” said Gilad Shainer, VP of Marketing at Mellanox. “By taking full advantage of our smart InfiniBand acceleration engines, WekaIO Matrix delivers world-leading storage performance for artificial intelligence applications.”
“In deep learning environments we see large compute nodes, almost universally augmented with GPUs, where customers need performance scaling so that they can train their large neural networks faster. Local file systems with NVMe fall short of the 5GB/s of storage performance required to leverage the processing power of GPUs—leaving expensive GPU resources underutilized,” said Liran Zvibel, co-founder and CEO at WekaIO. “Our work with Mellanox demonstrates that WekaIO with InfiniBand provides the best infrastructure for GPU environments, delivering superior performance and economics for our customers.”
See WekaIO Matrix demonstrated in Booth #624 at the NVIDIA GPU Technology Conference, March 27-29, 2018, in San Jose, Calif.
WekaIO leapfrogs legacy infrastructures and improves IT agility by delivering software-centric data storage solutions that unlock the true promise of the cloud. WekaIO Matrix software is ideally suited for performance intensive workloads such as Web 2.0 application serving, financial modeling, life sciences research, media rendering, Big Data analytics, log management and government or university research. For more information, visit www.weka.io, email us at [email protected], or watch our latest video here.