Not long after stirring attention in the deep learning/AI community by revealing the details of its Tensor Processing Unit (TPU), Google last week announced that its second-generation TPU (TPU v2) will soon be available on Google Compute Engine, with broader Google Cloud availability to follow. The folks in the lab are clearly busy at Google. The new TPU is said to deliver 180 teraflops of floating-point performance.
“Powerful as these TPUs are on their own, though, we designed them to work even better together. Each TPU includes a custom high-speed network that allows us to build machine learning supercomputers we call ‘TPU pods.’ A TPU pod contains 64 second-generation TPUs and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model,” wrote Jeff Dean, Google senior fellow, and Urs Hölzle, senior vice president of Google Cloud infrastructure, in a blog post (Build and train machine learning models on our new Google Cloud TPUs) last week.
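The pod figure is consistent with the per-chip number. A quick back-of-the-envelope check, using only the 64-chip and 180-teraflop figures quoted above:

```python
# Back-of-the-envelope check of the TPU pod figure quoted in the blog post:
# 64 second-generation TPUs at 180 teraflops each.
TFLOPS_PER_TPU = 180
TPUS_PER_POD = 64

pod_teraflops = TFLOPS_PER_TPU * TPUS_PER_POD   # 11,520 teraflops
pod_petaflops = pod_teraflops / 1000            # convert to petaflops

print(f"{pod_petaflops:.2f} petaflops per pod")  # → 11.52 petaflops per pod
```

That lands at 11.52 petaflops, which Google rounds to the "up to 11.5 petaflops" quoted in the announcement.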
Google says that using these TPU pods has already produced dramatic improvements in training times. “One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod,” wrote Dean and Hölzle.
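To get a rough sense of the claimed speedup in device-hours: the blog post gives only "a full day" on 32 GPUs versus "an afternoon" on one eighth of a pod (8 TPUs). The 24-hour and 6-hour durations below are assumptions for illustration, not figures from Google:

```python
# Rough device-hours comparison for the translation-model example.
# "A full day" and "an afternoon" are not precisely specified in the blog
# post; 24 h and 6 h are assumed here purely for illustration.
gpu_count, gpu_train_hours = 32, 24   # 32 GPUs for an assumed 24 hours
tpu_count, tpu_train_hours = 8, 6    # 1/8 pod = 8 TPUs for an assumed 6 hours

gpu_device_hours = gpu_count * gpu_train_hours   # 768 GPU-hours
tpu_device_hours = tpu_count * tpu_train_hours   # 48 TPU-hours

ratio = gpu_device_hours / tpu_device_hours
print(f"~{ratio:.0f}x fewer device-hours")       # → ~16x fewer device-hours
```

Under those assumed durations, the TPU run consumes roughly 16 times fewer device-hours to reach the same accuracy.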
NVIDIA CEO Jensen Huang, in a lengthy blog post this week on the AI/deep learning revolution, paid tribute to Google: “It’s great to see the two leading teams in AI computing race while we collaborate deeply across the board – tuning TensorFlow performance, and accelerating the Google cloud with NVIDIA CUDA GPUs. AI is the greatest technology force in human history.”
TPUs, of course, aren’t new to Google data centers, but the company only recently began discussing them publicly, in a blog post and a technical paper, “In-Datacenter Performance Analysis of a Tensor Processing Unit,” detailing the design and performance characteristics of the TPU.
According to that paper, Google’s TPU was 15 to 30 times faster at inference than NVIDIA’s K80 GPU and Intel’s Haswell CPU in a Google benchmark test. On a performance-per-watt basis, the TPU was 30 to 80 times more efficient than the CPU and GPU (with the caveat that those are older designs). (See the HPCwire/Datanami article by Alex Woodie, Groq This: New AI Chips to Give GPUs a Run for Deep Learning Money.)
In last week’s blog, the authors note, “We’re bringing our new TPUs to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs. You can program these TPUs with TensorFlow, the most popular open-source machine learning framework on GitHub, and we’re introducing high-level APIs, which will make it easier to train machine learning models on CPUs, GPUs or Cloud TPUs with only minimal code changes.”
Link to Google blog: https://www.blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/
Link to NVIDIA blog: https://blogs.nvidia.com/blog/2017/05/24/ai-revolution-eating-software/