Many companies in a wide variety of industries are increasingly looking to use artificial intelligence (AI) to provide innovative services, make faster discoveries, enhance their business operations, and even improve the quality of customer engagements. With AI going mainstream, companies need new infrastructures designed to explore vast amounts of data and deliver actionable results quickly.
Systems must deliver the required capabilities, scale easily, and provide the flexibility to support new software and technologies with the latest enhancements and optimizations. While other compute-intensive applications have similar needs, AI workloads have additional requirements that must be addressed in the solution’s architecture.
With AI in particular, the compute power needed differs from that required for other business applications and even many HPC applications. Most AI solutions today leverage the compute power of NVIDIA GPUs. GPUs are ideally suited for an expanding list of AI operations, from data preparation to neural network training algorithms. For such applications, GPUs run AI workloads in a massively parallel fashion, speeding computations dramatically compared with traditional CPUs.
The I/O requirements are different, too. Efficient AI solutions require extreme data throughput to feed the GPUs. High-bandwidth shared storage is critical to delivering data for training or inference. However, I/O requirements – data types and file sizes – can vary significantly across the total AI data pipeline. Data preparation and classification can take as much time as model development, if not more. The solution must also scale easily, with low operating costs, to accommodate the large amounts of data used to train and run AI systems.
Software-defined storage provides the performance and flexibility to address the needs of the AI data pipeline. Properly sizing high-performance flash storage with the right networking ensures the data throughput needed to keep the GPUs saturated, with the agility to meet requirements across the entire AI data pipeline.
Among the highest-performing storage solutions available in the market today, IBM Spectrum Scale™ on NVMe flash with high-performance InfiniBand interconnect technology delivers best-in-class scalable storage performance. This has been clearly demonstrated in the converged solution, IBM SpectrumAI with NVIDIA DGX.
Addressing the Infrastructure Requirements of AI
Much of the HPC infrastructure technology needed for AI workloads is new to most businesses. It is challenging enough to select the right GPU platform, storage, and interconnect technologies. Even harder is bringing these elements together into a complete, tested, and proven solution for AI.
Organizations have many choices in these technology areas to build a system that matches their needs. Unfortunately, because of AI’s unique data requirements, most businesses have neither the experience to assemble such a system nor the expertise to manage it.
At a high level, most companies adopting AI are looking for:
- An easily deployed AI infrastructure that will support their business
- A system that delivers deterministic performance as they grow
- A system that accelerates time-to-insight and is ready for the latest AI software
- A simplified post-deployment support experience that is enterprise grade and covers the entire hardware and software solution stack
IBM SpectrumAI with NVIDIA DGX – An Infrastructure Solution for Scalable AI
To ease the transition and meet these requirements, a group of industry-leading solution providers has developed IBM SpectrumAI with NVIDIA DGX, a reference-architecture infrastructure solution built upon their work in AI supercomputing.
The key players include:
- NVIDIA, the leader in GPU computing and purpose-built systems for AI workloads
- IBM, a leading provider of scalable, high-performance storage solutions
- Mellanox Technologies, which offers leading advanced interconnect technology for both HPC and AI
This validated solution offers a fast, streamlined design-to-deployment experience, combined with a simplified support model. It prescriptively integrates NVIDIA DGX-1 servers with IBM NVMe-powered Elastic Storage Server (ESS) and Mellanox networking.
The solution deploys on a single converged InfiniBand network, is optimized for all leading AI frameworks and NVIDIA libraries such as NCCL, and is supported by IBM storage. The result is a significant reduction in investment cost without sacrificing superior performance.
Another key technology that accelerates many AI workloads supported on IBM SpectrumAI with DGX is GPUDirect™ RDMA (remote direct memory access). Co-developed by Mellanox and NVIDIA, GPUDirect RDMA places data into remote GPU memory directly from the network, bypassing both the operating system and the host processor, making transfers substantially faster than approaches that stage data through host memory.
The Validated Platform for AI initiative builds upon the work IBM, Mellanox, and NVIDIA have done for the largest and smartest supercomputers in the world, tailored for enterprises and the latest NVIDIA DGX systems. It brings the benefits of state-of-the-art technologies and takes the guesswork out of the investment and performance equation for a company new to AI.
The group’s work means that businesses do not have to master these technologies or know how to integrate them. These companies have been technology partners for years, and they bring their joint expertise to the table, giving businesses that are new to AI, or looking to scale their efforts, the resources to reap the benefits quickly. Rather than spending months researching technologies and solutions, any business can use the guidance provided by the validated platform effort to jump ahead and deploy a suitable system quickly.
Learn more about IBM SpectrumAI with NVIDIA DGX: