A Year of AI Transformation

By Tutiya Teevan

December 20, 2018

Artificial Intelligence (AI) is poised to shift from emergent technology to mainstream, omnipresent capability. Two years ago, investment in AI had already reached nearly $40 billion worldwide.

With AI adoption rates soaring,[1] technology vendors have been racing to develop and offer high performance computing (HPC) infrastructure solutions that make implementing AI easier, lower risk, and more affordable. IBM is no exception. In 2018, IBM established a brisk cadence of AI-related reference architectures and solutions, culminating in the recent announcement of IBM Spectrum AI with NVIDIA DGX, a high performance solution that combines industry-acclaimed IBM Spectrum Scale software-defined storage with NVIDIA DGX-1 systems connected by Mellanox InfiniBand networking. The new solution offers the highest-performance storage of any comparable system[2] and supports any GPU-accelerated server, including the IBM AC922.

To achieve acceptable levels of insight and accuracy, AI applications require access to immense amounts of training data and processing power.[3] Such requirements can make the infrastructure transformation necessary to enable AI complex, high risk, and costly, hindering adoption. To address these challenges, in June 2018 IBM announced the IBM Systems Reference Architecture for AI. Based on IBM PowerAI, IBM Spectrum Computing (the HPC-focused component), and IBM storage, this reference architecture offers a proven solution for AI computing and deep learning that simplifies complex operations and reduces deployment and operational risk. Developed through real-world customer experience, the IBM Reference Architecture for AI provides a comprehensive guide to help organizations create successful AI infrastructure proofs-of-concept, expand these into production, and then scale the solutions as needed to accommodate AI application and data growth.

In August, IBM added fuel to its 2018 AI fire by announcing the IBM Power Systems Accelerated Computing Platform (IBM Power ACP). Currently, the two most powerful AI-enhanced supercomputers on the planet – Summit at Oak Ridge National Laboratory and Sierra at Lawrence Livermore National Laboratory – are built from IBM Power ACP elements. A key point about these installations is that they were assembled using only the same commercially available components found in the IBM Power ACP offering. It’s a complete solution that includes IBM POWER9 servers; IBM Elastic Storage Server (ESS); networking, development, and runtime software; and professional services designed to help any organization easily build the on-premises infrastructure needed to support AI, HPC, and other compute-intensive workloads.

Headlining the IBM Power ACP offering is the IBM Power System Accelerated Compute Server – the AC922 – the same server used in the Summit and Sierra CORAL supercomputers. The AC922 is designed for enterprise AI. With up to 1 TB of RAM, two 20-core IBM POWER9 processors, and up to four NVIDIA Tesla GPUs connected via NVLink, the AC922 server can handle the full range of demanding AI and HPC workloads.
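To give a small sense of how that multi-GPU topology looks from software, here is a minimal sketch in PyTorch (an illustrative choice of framework; the article does not prescribe one) that enumerates the GPUs on such a server and checks whether direct peer-to-peer access is available between a device pair, the path that NVLink-connected systems use to avoid staging transfers through host memory.

```python
import torch

# List the GPUs visible on this server (up to four Tesla GPUs on an AC922).
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

# Check whether direct GPU-to-GPU (peer) access is available between a device pair;
# on NVLink-connected systems this avoids copying through host memory.
if torch.cuda.device_count() >= 2:
    print("Peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))
```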

By October, IBM Storage had formalized the systems it had already been providing to car manufacturers into a solution called IBM Storage for Autonomous Driving. Advanced driver assistance systems and autonomous driving (AD) initiatives all have one thing in common – miles and miles of data. Sources include sensor data, weather data, satellite data, behavioral and other personal data, diagnostic data, and more. Each connected car generates anywhere from a few megabytes of data per day to terabytes per day when that car is a test vehicle used to train AD models. Across entire fleets, these connected-car initiatives can generate storage demand for upwards of 200 exabytes of data daily.[4] To accommodate these enormous data streams, the IBM Storage for AD solution is based on IBM Cloud Object Storage, IBM Spectrum Discover to enhance and manage file metadata, and IBM Spectrum Scale – the same storage software found in Summit and Sierra – to provide comprehensive storage management and data services.

Finally, in mid-December IBM announced IBM Spectrum AI with NVIDIA DGX. The solution provides the ready-to-deploy, robust infrastructure and software that AI projects need to ramp up quickly and grow confidently. Built from NVIDIA DGX-1 systems, IBM Spectrum Scale software-defined storage, and Mellanox networking, IBM Spectrum AI with NVIDIA DGX can be configured to meet current and growing organizational requirements. The NVIDIA DGX software stack includes access to the latest NVIDIA-optimized containers via the NGC container registry, plus the new RAPIDS framework to accelerate data science workflows.
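To give a concrete sense of what RAPIDS brings to a data science workflow, here is a minimal, illustrative sketch using the RAPIDS cuDF library shipped in NGC containers; the file and column names are hypothetical, but the pandas-style calls shown execute on the GPU rather than the CPU.

```python
import cudf  # RAPIDS GPU DataFrame library, available in NGC RAPIDS containers

# Load a CSV directly into GPU memory (the file name is purely illustrative).
df = cudf.read_csv("telemetry.csv")

# Familiar pandas-style operations run on the GPU.
summary = df.groupby("sensor_id")["reading"].mean()
print(summary.head())
```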

IBM Spectrum Scale can be deployed in configurations ranging from a single IBM ESS supporting a few GPU-accelerated servers to a full IBM Spectrum AI with NVIDIA DGX rack of nine servers with 72 NVIDIA V100 Tensor Core GPUs, and multi-rack configurations are possible as well. Unlike traditional storage arrays, highly parallel IBM Spectrum Scale can grow almost linearly to feed multiple GPUs. The result is a solution that delivers AI workload performance from shared storage comparable to that of local RAM disk. In an IBM Spectrum AI system, IBM Spectrum Scale on IBM NVMe flash has demonstrated 120 GB/s of data throughput,[5] enough to support multiple users and multiple AI models simultaneously.
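The published throughput figure comes from IBM's FIO-based lab testing described in footnote [5]. As a rough illustration of the same idea, and not the benchmark methodology behind that number, the sketch below times concurrent sequential reads from several worker processes against a shared filesystem and reports aggregate bandwidth; the mount point and file names are placeholders.

```python
import os
import time
from multiprocessing import Pool

MOUNT = "/gpfs/data"   # placeholder path for a Spectrum Scale (GPFS) mount
FILES = [os.path.join(MOUNT, f"shard_{i}.bin") for i in range(8)]  # illustrative files
CHUNK = 4 * 1024 * 1024  # read in 4 MiB chunks

def read_file(path):
    """Sequentially read one file and return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    return total

if __name__ == "__main__":
    start = time.time()
    with Pool(len(FILES)) as pool:
        total_bytes = sum(pool.map(read_file, FILES))
    elapsed = time.time() - start
    print(f"Aggregate read throughput: {total_bytes / elapsed / 1e9:.1f} GB/s")
```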

 

To learn more about IBM Spectrum AI with NVIDIA DGX, visit this webpage and join us for the January 29th webinar “Building your AI Data Pipeline with IBM SpectrumAI.”

IBM Spectrum Scale provides the flexibility to address storage requirements across the entire AI data pipeline – from ingest; through data classification, transformation, analytics, and model training; to data archiving. It can also provide storage services across different storage choices, including AWS public cloud. IBM Spectrum Scale can share data with IBM Cloud Object Storage and tape, with shared metadata services provided by IBM Spectrum Discover.
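As one illustrative example of the archive stage: IBM Cloud Object Storage exposes an S3-compatible API, so a model checkpoint sitting on the Spectrum Scale filesystem could be copied off to object storage with a standard S3 client. The sketch below works under that assumption; the endpoint, credentials, bucket, and paths are placeholders, not values from the article.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible object store
# such as IBM Cloud Object Storage; substitute values for your deployment.
cos = boto3.client(
    "s3",
    endpoint_url="https://s3.example-region.cloud-object-storage.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Archive a trained model checkpoint from the shared filesystem to an object bucket.
cos.upload_file(
    "/gpfs/models/checkpoint_final.pt",  # placeholder path on the shared filesystem
    "ai-archive-bucket",                 # placeholder bucket name
    "models/checkpoint_final.pt",        # object key in the bucket
)
```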

When it comes to transforming AI from dreams into reality, 2018 was a big year at IBM. The leader in AI technology announced new AI reference architectures, released multiple AI infrastructure solutions across IBM Systems and now with NVIDIA, and even accelerated the development of autonomous vehicles. Previously, AI remained mostly the province of early technology adopters; that’s no longer the case.


[1] Datafloq: “Why the Adoption Rate of AI is Increasing,” May 2018, https://datafloq.com/read/adoption-rate-ai-increasing/5044
[2] FIO data throughput testing of 120 GB/s and AI workload performance compared to the self-reported results of other NVIDIA reference architecture business partners, https://www.nvidia.com/en-us/data-center/dgx-reference-architecture/
[3] TechTarget WhatIs definition: “Deep Learning,” https://searchenterpriseai.techtarget.com/definition/deep-learning-deepneural-network
[4] IBM Solution Brief: “IBM Storage solutions for advanced driver assistance systems and autonomous driving,” October 2018, https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=34019934USEN
[5] IBM lab testing of IBM Spectrum Scale on 3 NVMe arrays using 4K random reads driven by FIO on 9 NVIDIA DGX-1 systems connected with Mellanox EDR InfiniBand

 
