IBM today announced the addition of the TensorFlow and Chainer deep learning frameworks to its PowerAI suite of deep learning tools, which already includes popular offerings such as Caffe, Theano, and Torch. It's another step in IBM's efforts to lay claim to leadership in the nascent deep learning market. Offering supported distributions of popular frameworks, said Sumit Gupta, IBM vice president, High Performance Computing and Analytics, is a natural next step in expanding and commercializing deep learning use.
“What we did with PowerAI is create a software distribution for deep learning and machine learning. The insight to do that came from the Linux world. Most enterprise clients don’t go to Linux.org to get their Linux; they go to Red Hat or SUSE,” said Gupta. “Today deep learning is completely an open source community, with users going to TensorFlow.org or Caffe.org, etc., to download software. But we have clients saying they would prefer to get a supported distribution. So we created PowerAI, a pre-curated, pre-bundled binary that has all the deep learning frameworks. The problem we’re solving is that downloading and installing these frameworks is hard. TensorFlow, for example, depends on 100 different packages.”
“I want to emphasize that in a sense we have become the Red Hat of deep learning. As Red Hat is to Linux, IBM PowerAI is to deep learning.”
TensorFlow, of course, was originally created by Google and then put into the open source community. “TensorFlow is quickly becoming a viable option for companies interested in deploying deep learning for tasks ranging from computer vision, to speech recognition, to text analytics,” said Rajat Monga, engineering leader for TensorFlow. “IBM’s enterprise offering of TensorFlow will help organizations deploy this framework — we’re glad to see this support.”
According to the IBM release, “IBM Technology Support Services will build upon its hardware support services by investing and launching a new innovative enterprise software support offering for the PowerAI stack for a competitive advantage. Further, IBM Global Business Solutions established a deep learning design and development team as part of its Cognitive Business Solutions practice to help build solutions on the PowerAI platform, while making use of popular frameworks such as TensorFlow.”
“Every enterprise is looking at emerging artificial intelligence methods to take advantage of the data they now have access to,” said Ken King, IBM general manager for OpenPOWER. “Our PowerAI software offering curates packages and provides enterprise-level support for the major deep learning frameworks like TensorFlow, to enable enterprises to easily use these new AI methods to build new computer models for analyzing their data.”
Selling Power-based servers is also a big goal. In September, IBM introduced several new Power8-based machines, including the Minsky platform, which pairs the Power8+ chip with NVLink for communication with NVIDIA’s P100 GPUs. PowerAI has been optimized for Minsky, and Gupta said TensorFlow, for example, “runs 30% faster* on Minsky (Power System S822LC for HPC) compared to an x86 system with PCIe GPUs, so we have shown the value of NVLink between the CPU and GPU (details of the compared systems appear below).”
Gupta characterized TensorFlow as becoming one of the most popular deep learning frameworks in the U.S., while Chainer is the most popular in Japan. The PowerAI suite now includes Caffe, Chainer, TensorFlow, Theano, Torch, cuDNN, NVIDIA DIGITS, and several other machine and deep learning frameworks and libraries. The IBM PowerAI roadmap includes the addition of supported versions of the Microsoft Cognitive Toolkit (previously called CNTK) and Amazon’s MXNet, said Gupta.
The newest edition of the PowerAI software is available now for download. It will also be available on the cloud of HPC specialist Nimbix, which offers high-end Minsky machines with NVLink and P100 GPUs. “A lot of customers right now are using the Minsky cloud that they put up a few months ago,” Gupta said of Nimbix.
Market traction for the Minsky platform has been especially strong, said Gupta, although he declined to offer numbers or identify major wins.
*Details behind the 30% advantage supplied by IBM:
Achieved 30% more images/sec on TensorFlow 0.12 training when compared to a system with four NVIDIA Tesla P100s attached through conventional PCIe when running the Inception v3 model, a popular image recognition model.
Results are based on IBM Internal Measurements running TensorFlow 0.12 (model: Inception v3, dataset: ImageNet2012) training for 500 iterations.
Power System S822LC for HPC: 20 cores (2 x 10c chips), POWER8; 3.95 GHz (peak); 512GB RAM; 4x NVIDIA Tesla P100 (NVLink) GPU; Ubuntu 16.04, TensorFlow 0.12.0
Competitive System: Xeon E5-2640 v4; 20 cores (2 x 10c chips), Broadwell; 3.4 GHz; 512GB RAM; 4x NVIDIA Tesla P100 (PCIe) GPU; Ubuntu 16.04, TensorFlow 0.12.1
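The cited 30% advantage is a simple throughput ratio between the two systems. As a minimal sketch of how such a figure is derived (the images/sec values here are hypothetical placeholders, not IBM's raw measurements):

```python
# Sketch of how a throughput advantage like IBM's "30% more images/sec"
# is computed. NOTE: the example numbers are hypothetical, not IBM data.

def speedup_percent(images_per_sec_a: float, images_per_sec_b: float) -> float:
    """Percentage throughput advantage of system A over system B."""
    return (images_per_sec_a / images_per_sec_b - 1.0) * 100.0

# Hypothetical example: if the NVLink-attached system sustained 650
# images/sec and the PCIe-attached system 500 images/sec, the advantage
# would be 30%.
print(f"{speedup_percent(650.0, 500.0):.0f}%")
```

The percentage compares sustained training throughput (images processed per second averaged over the 500 iterations), which is why identical model, dataset, and framework versions matter for a fair comparison.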