At IBM’s inaugural Think 2018 conference and third US-based OpenPower summit, the tech giant is trumpeting swelling momentum for its OpenPower ecosystem and traction for Power9, including Google’s disclosure that its custom-built (by Rackspace) Power9 “Zaius” servers are “Google strong,” i.e., market ready. IBM is also welcoming the Power servers, backed by Nvidia V100 GPUs, into its cloud.
In a blog post on “bringing security and AI to the IBM cloud” Ross Mauri, general manager, IBM Z, and John Considine, general manager, IBM Cloud Infrastructure, revealed the availability of the Power9-based AC922 (“Witherspoon”) systems in the company’s cloud. The new servers with two 16-core Power9 chips and four Nvidia Tesla V100 GPUs are said to provide a 4x speedup for deep learning training compared to competitive Intel hardware, according to internal testing by IBM*.
While hailing the advantages of its Power servers’ unique support for the CPU-to-GPU NVLink interface (which offers a 5-10x speedup in communications over PCIe gen3), IBM has been slow to bring HPC-class Power architectures into its own cloud, even ceding first-mover bragging rights to partner and HPC cloud vendor Nimbix, which became the first public cloud provider to deploy the Power8 “Minsky” technology in 2016. [Nimbix is also announcing Power9 on its cloud, which we cover in-depth here.]
IBM was an early adopter of Nvidia’s high-end Pascal and Volta GPU gear, but used the PCIe variants within x86 Intel servers. Why IBM held out on putting Power in its cloud is a bit of a head-scratcher since the company believes it has the superior technology (with Power) and strategy (with OpenPower) to put a dent in Intel’s game. If IBM’s cloud customers weren’t ready, that’s understandable, but for the sake of optics alone, it’s surprising IBM didn’t make the move earlier.
At any rate, now there is some critical mass happening for Power/OpenPower (covered here) and IBM Cloud is moving to embrace it. PowerAI will also be brought into the IBM Cloud, with planned availability for next quarter. The suite of artificial intelligence tools supporting popular frameworks like TensorFlow, Torch and Caffe was developed and optimized for IBM’s accelerated Power servers, so it’s only natural that it would be included.
In addition, IBM announced it is integrating SAP HANA on Power into the IBM Cloud as an SAP-certified managed application service. “Through this integrated approach of hardware and software, IBM is the first to offer this level of cloud support for massive workloads of up to 24 TB by using IBM Power Systems and IBM Storage,” said Mauri and Considine.
In a press briefing at IBM Think, Senior Vice President of IBM Cognitive Systems Bob Picciano said the traditional approach to high-performance computing is undergoing disruption from the fast-growing GPU-accelerated approach. “We’re democratizing hardware-software integration,” Picciano told HPCwire. He cited Snap ML on Power9 as an example, referring to the new machine learning library developed by IBM Research Zurich that “trains models faster than you can snap your fingers.”
Using an online advertising (click stream) dataset released by Criteo Labs with over 4 billion training examples, the Zurich team trained a logistic regression classifier across four IBM Power System AC922 servers, each with four Nvidia Tesla V100 GPUs. Training was accomplished in just 91.5 seconds, a 46x speedup over the previous best result reported by Google, which used TensorFlow on Google Cloud Platform to train the same model in 70 minutes (more info here). “We’re seeing that pattern repeatedly with machine learning and deep learning once you move into Power9 on the GPU stack,” said Picciano.
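The workload in that benchmark is conceptually simple even if the scale is not: fit a binary logistic-regression click predictor to labeled examples. As a minimal sketch (this is plain NumPy gradient descent on a small synthetic dataset, not Snap ML’s API or the distributed, GPU-accelerated training the Zurich team actually ran):

```python
# Hedged sketch: binary logistic regression for click prediction, trained by
# full-batch gradient descent on synthetic data. The real Criteo dataset has
# over 4 billion examples; Snap ML distributes training across GPUs/servers.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)            # hidden "true" weights (illustrative)
y = (X @ w_true > 0).astype(float)     # 1 = click, 0 = no click

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)                 # predicted click probabilities
    grad = X.T @ (p - y) / n           # gradient of the average logistic loss
    w -= lr * grad

acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.3f}")
```

The 46x speedup claim is not about the math above, which any library implements, but about how fast the gradient computations can be streamed through GPUs over NVLink at billions-of-examples scale.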
While IBM rolls out Power9/OpenPower in the enterprise, it has already made a splash in leadership supercomputing with the DOE Power9-based CORAL machines Summit and Sierra. On track for acceptance this summer, the systems, which cost $325 million to develop and manufacture, will be the largest in the United States; and Summit, with an expected 200 petaflops of peak performance, is in the running to be the world’s fastest supercomputer, as measured by the long-running Linpack benchmark.
* Results are based on IBM internal measurements running the CUDA H2D Bandwidth Test. IBM hardware: Power AC922; 32 cores (2 x 16c chips), POWER9 with NVLink 2.0; 2.25 GHz; 1024 GB memory; 4x Tesla V100 GPU; Ubuntu 16.04. S822LC for HPC; 20 cores (2 x 10c chips), POWER8 with NVLink; 2.86 GHz; 512 GB memory; Tesla P100 GPU. Competitive hardware: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory; 4x Tesla V100 GPU; Ubuntu 16.04.