Driven by surging demand for HPC and AI compute power, the lag between the introduction of high-end GPUs and their adoption by cloud vendors is shrinking. With the ink on the Nvidia V100 launch barely dry and other big cloud vendors still working on Pascal-generation rollouts, Amazon Web Services has become the first cloud giant to offer the Tesla Volta GPUs, beating out competitors Google and Microsoft.
Google had been the first of the big three to offer P100 GPUs, but now we learn that Amazon is skipping Pascal entirely and going directly to Volta with the launch of V100-backed P3 instances that include up to eight GPUs connected by NVLink. It was only one year ago that AWS deployed its Tesla K80 “P2” instances, two years after the Kepler-generation silicon debuted at SC14. Microsoft, which has been hoisting the HPC flag in its Azure cloud (most recently via its Cycle Computing acquisition in August and this week’s deal to co-locate Cray supercomputers), has said it will deploy Tesla P100 gear by year’s end.
Amazon’s P3 instances employ customized Intel Xeon E5-2686 v4 processors running at up to 2.7 GHz and come in three sizes: p3.2xlarge with one GPU; p3.8xlarge with four GPUs; and p3.16xlarge with eight GPUs. Each GPU comprises 5,120 CUDA cores and 640 Tensor cores, providing a theoretical peak of 125 teraflops of mixed-precision, 15.7 teraflops of single-precision, and 7.8 teraflops of double-precision performance. High-speed NVLink interconnect on the four- and eight-GPU instances allows the GPUs to communicate directly without going through the CPU or the PCI-Express fabric.
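For the curious, the NVLink wiring is easy to inspect from a running instance. A minimal sketch, assuming the Nvidia driver and nvidia-smi are installed (as they are on GPU-enabled AMIs): GPU pairs connected by NVLink show up as NV1/NV2 entries in the topology matrix, while PCIe-only paths appear as PHB or PIX.

```python
import subprocess

# Print the GPU topology matrix on a multi-GPU instance; NVLink
# connections between GPU pairs appear as NV1/NV2 entries.
print(subprocess.check_output(["nvidia-smi", "topo", "-m"]).decode())
```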
The boost to deep learning workloads provided by the Tensor cores, as well as the other HPC-focused attributes of Volta (enhanced cache, HBM2 memory and NVLink), helps justify Amazon’s quick embrace of the V100 GPUs. With those Tensor cores delivering 125 peak tensor teraflops per GPU, the V100 provides up to 12 times the throughput of P100 FP32 operations for deep learning training, and up to six times the throughput of P100 FP16 operations for deep learning inference. (Of note, Nvidia and Baidu published research earlier this month showing how they achieve FP32-level accuracy at mixed-precision speed.)
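The core of that mixed-precision recipe is simple enough to sketch. The toy below is a minimal NumPy illustration, not Nvidia’s implementation: it does the arithmetic in FP16 (the format the Tensor cores accelerate) while keeping an FP32 “master” copy of the weights, and it scales gradients so small values survive FP16’s narrow range. All names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(256, 32).astype(np.float16)
true_w = rng.randn(32, 1).astype(np.float32)
y = (X.astype(np.float32) @ true_w).astype(np.float16)

w_master = np.zeros((32, 1), dtype=np.float32)  # FP32 master weights
lr, loss_scale = 0.1, 128.0

for _ in range(200):
    w16 = w_master.astype(np.float16)           # FP16 copy for compute
    err = X @ w16 - y                           # forward pass in FP16
    # FP16 backward pass; the 1/n normalization is fused with the
    # loss scale so small per-sample gradients aren't flushed to
    # zero in FP16 ...
    g16 = X.T @ (err * np.float16(loss_scale / len(X)))
    # ... and the scale is removed again before the FP32 update.
    w_master -= lr * (g16.astype(np.float32) / loss_scale)

print(float(np.abs(w_master - true_w).max()))   # shrinks toward FP16 noise
```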
In a press statement, Matt Garman, vice president of Amazon EC2, commented on the speedup over AWS’s K80-backed P2 instances and the traction for GPU computing. “When we launched our P2 instances last year, we couldn’t believe how quickly people adopted them,” he said. “Most of the machine learning in the [AWS] cloud today is done on P2 instances, yet customers continue to be hungry for more powerful instances.” He added that P3 instances offer up to 14 times better performance than P2 instances for training machine learning models and 2.7 times the double-precision floating point performance for HPC applications.
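Those ratios line up with the published peak numbers. A quick back-of-envelope check, using per-board K80 peaks (with GPU Boost) and per-GPU P100/V100 peaks:

```python
# Published peak throughput figures, in teraflops.
v100_tensor, v100_fp64 = 125.0, 7.8
p100_fp32, p100_fp16 = 10.6, 21.2   # Tesla P100 (SXM2) peaks
k80_fp64 = 2.91                      # Tesla K80 board peak with GPU Boost

print(round(v100_tensor / p100_fp32))   # ~12x, deep learning training
print(round(v100_tensor / p100_fp16))   # ~6x, deep learning inference
print(round(v100_fp64 / k80_fp64, 1))   # ~2.7x FP64, P3 vs. P2
```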
The P3 instances are available now in four regions: US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). They can be purchased via On-Demand, Reserved, or Spot pricing.
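Spinning one up looks the same as any other EC2 launch. A minimal sketch using boto3, with error handling omitted and a placeholder AMI ID (not a real image):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # US East (N. Virginia)
resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: substitute a real AMI
    InstanceType="p3.2xlarge",   # 1x V100; p3.8xlarge / p3.16xlarge also exist
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```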
Amazon is also releasing new deep learning Amazon Machine Images (AMIs) configured with CUDA 9 for Volta. The AMIs come preinstalled with popular frameworks like Google’s TensorFlow and Caffe2, optimized for the V100 GPU and the P3 instance family.
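On one of these AMIs, a quick sanity check that the preinstalled framework actually sees the V100s might look like the following (a sketch against the TensorFlow 1.x API of the day):

```python
from tensorflow.python.client import device_lib

# List the devices TensorFlow can use and keep only the GPUs.
gpus = [d for d in device_lib.list_local_devices() if d.device_type == "GPU"]
for g in gpus:
    print(g.name, g.physical_device_desc)  # e.g. "device: Tesla V100-SXM2-16GB"
```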
In related news, Nvidia announced that its AI cloud container registry is generally available. Previewed at GTC17 in May, the Nvidia GPU Cloud (NGC) distributes the company’s GPU-optimized Docker containers and is touted as “purpose built” for developing deep learning models on GPUs. Similar to the AWS AMIs, the NGC service provides access to common GPU-accelerated frameworks, such as TensorFlow, Caffe, Microsoft Cognitive Toolkit (CNTK) and Torch, as well as CUDA for application development.
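Pulling one of those framework containers onto a P3 instance takes only a couple of calls. A sketch using the Docker SDK for Python, assuming an NGC account (logged in via `docker login nvcr.io`) and the Nvidia container runtime are already set up; the image tag is illustrative:

```python
import docker

client = docker.from_env()
# Pull Nvidia's TensorFlow container from the NGC registry.
client.images.pull("nvcr.io/nvidia/tensorflow", tag="17.10")
# Run nvidia-smi inside it to confirm the container sees the GPUs.
out = client.containers.run(
    "nvcr.io/nvidia/tensorflow:17.10",
    "nvidia-smi",
    runtime="nvidia",   # route through the Nvidia container runtime
    remove=True,
)
print(out.decode())
```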
AWS is the first public cloud to interface with the NGC tool, which is free to users. Nvidia says it plans to expand support to other cloud platforms soon.