The rise of deep learning (DL) has been fueled by improvements in accelerators. Accelerators allow DL models to crunch large amounts of data, which is vital for achieving high accuracy. In fact, AlexNet, the famous winner of the ILSVRC 2012 competition, was trained on GPUs. The GPU remains the most widely used accelerator for DL applications, thanks to several of its features: high performance, continued improvements in its architecture and software stack, ease of programming through high-level languages such as CUDA, and ready availability in the cloud.
“Accelerating DL models” is chasing a moving target
As DL models become more pervasive and accurate, their compute and memory requirements grow tremendously. For example, training a deep neural network (DNN) takes a large amount of time; a 100-epoch training of ResNet-50 on the ImageNet dataset requires 14 days on a single M40 GPU. Similarly, during inference, meeting latency targets while achieving high data reuse and throughput is a major challenge.
Extracting the last bit of performance from the GPU
While treating the GPU as a black box is a convenient abstraction for DL researchers, even simple architectural optimizations can boost GPU performance significantly. For example, since the input data to a DNN remains unchanged, it can be stored in the constant cache. The weights can be loaded into shared memory to avoid the penalty of accessing global memory, and partial sums can be kept in the register file for efficient accumulation.
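As a concrete illustration, here is a minimal CUDA sketch that applies these three ideas to a small fully-connected layer y = W·x. The names and sizes (fc_forward, IN_DIM, TILE, the one-block-per-output-neuron launch) are illustrative assumptions, not code from any particular framework:

```
#include <cuda_runtime.h>

#define IN_DIM  1024   // input features (4 KB, fits easily in constant memory)
#define OUT_DIM 1024   // output neurons
#define TILE    256    // weights staged into shared memory per step

// Unchanging input vector kept in constant memory, per the suggestion above.
__constant__ float c_input[IN_DIM];

// One thread block computes one output neuron: output[row] = dot(W[row,:], x).
__global__ void fc_forward(const float* __restrict__ weights, float* output)
{
    __shared__ float s_w[TILE];   // weight tile staged in shared memory
    int row = blockIdx.x;
    float acc = 0.0f;             // partial sum held in a register

    for (int base = 0; base < IN_DIM; base += TILE) {
        // Cooperative, coalesced load of this row's weight tile from
        // global memory into shared memory.
        for (int i = threadIdx.x; i < TILE; i += blockDim.x)
            s_w[i] = weights[row * IN_DIM + base + i];
        __syncthreads();

        // Each thread accumulates a strided slice of the dot product.
        for (int i = threadIdx.x; i < TILE; i += blockDim.x)
            acc += s_w[i] * c_input[base + i];
        __syncthreads();
    }
    // Combine per-thread partial sums (a tree reduction would be faster;
    // atomicAdd keeps the sketch short).
    atomicAdd(&output[row], acc);
}
```

Before launch, the host would copy the input into constant memory with cudaMemcpyToSymbol(c_input, h_input, sizeof(float) * IN_DIM), zero the output buffer with cudaMemset, and invoke, say, fc_forward<<<OUT_DIM, 128>>>(d_weights, d_output).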
In fact, architecture-oblivious techniques run the risk of losing their theoretical benefits. For example, although weight pruning is expected to improve performance by reducing the model size of a DNN, on GPUs it actually hurts performance. This is because weight pruning makes the DNN sparse, which requires sparse matrix multiplication (MM), and optimizations such as memory coalescing and matrix tiling cannot be applied effectively to sparse MM. To address this inefficiency, researchers suggest performing “node pruning,” rather than “weight pruning,” on GPUs.
Node pruning does not make the network sparse; although it yields a smaller reduction in model size than weight pruning, it achieves higher throughput by utilizing the massive resources of GPUs more effectively.
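To see why sparsity defeats these optimizations, consider a minimal sparse matrix-vector kernel over the standard CSR format (a hedged sketch; the names csr_spmv, row_ptr, etc. are illustrative). The column indices are data-dependent, so the loads of x[col[j]] scatter across memory and cannot be coalesced or tiled the way the dense kernel above can:

```
// y = A * x for a sparse matrix A in CSR form: one thread per row.
__global__ void csr_spmv(const int* row_ptr, const int* col,
                         const float* val, const float* x,
                         float* y, int n_rows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;

    float acc = 0.0f;
    for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
        acc += val[j] * x[col[j]];   // irregular, uncoalesced access to x
    y[row] = acc;
}
```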
Similarly, optimizing data layouts, batching, and data reuse is important for high performance. Also, since convolution can be performed in multiple ways, such as FFT, Winograd, lowering (to matrix multiplication), or direct convolution, choosing the right strategy is essential. The recent survey paper I’ve written with Shraiysh Vaishay reviews many techniques for optimizing DL on GPUs.
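One practical way to make this choice is to benchmark the alternatives, which cuDNN’s algorithm-search API can do. The sketch below (layer sizes are made-up, error checking is trimmed, and it assumes the cuDNN v7-style API) asks the library to time the available forward-convolution algorithms for one layer and report the fastest:

```
#include <cudnn.h>
#include <stdio.h>

int main()
{
    cudnnHandle_t h;
    cudnnCreate(&h);

    // Input: NCHW batch of 32 RGB 224x224 images; filters: 64 x 3 x 3 x 3.
    cudnnTensorDescriptor_t x, y;
    cudnnFilterDescriptor_t w;
    cudnnConvolutionDescriptor_t conv;
    cudnnCreateTensorDescriptor(&x);
    cudnnCreateTensorDescriptor(&y);
    cudnnCreateFilterDescriptor(&w);
    cudnnCreateConvolutionDescriptor(&conv);

    cudnnSetTensor4dDescriptor(x, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               32, 3, 224, 224);
    cudnnSetFilter4dDescriptor(w, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW,
                               64, 3, 3, 3);
    cudnnSetConvolution2dDescriptor(conv, 1, 1, 1, 1, 1, 1,
                                    CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);

    // Derive the output shape from the layer parameters.
    int n, c, ho, wo;
    cudnnGetConvolution2dForwardOutputDim(conv, x, w, &n, &c, &ho, &wo);
    cudnnSetTensor4dDescriptor(y, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               n, c, ho, wo);

    // Benchmark the available algorithms; results come back sorted by time.
    cudnnConvolutionFwdAlgoPerf_t perf[8];
    int found = 0;
    cudnnFindConvolutionForwardAlgorithm(h, x, w, conv, y, 8, &found, perf);
    printf("fastest algo: %d (%.3f ms)\n", (int)perf[0].algo, perf[0].time);

    cudnnDestroy(h);
    return 0;
}
```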
Utilizing both CPU memory and GPU memory
DNN training requires a significant amount of memory, which may exceed the capacity of a single GPU. For example, training VGG-16 with a batch size of 256 requires 28 GB of memory, more than the 12 GB capacity of a Titan X.
To alleviate this memory bottleneck, the memory resources of the CPU can be used. In the back-propagation algorithm, the feature maps of a layer, produced during the forward-propagation phase, are reused later during the backward-propagation phase of the same layer. Since current machine-learning frameworks allocate memory to accommodate the needs of all the layers, these feature maps sit in GPU memory for a long time without being used. To remove this inefficiency, feature maps not required by the current layer in the forward-propagation phase can be offloaded to CPU memory and released from GPU memory; during the backward-propagation phase, they are fetched back into GPU memory just before the processing of that layer. Evidently, GPU memory-management techniques and high-bandwidth interconnects such as NVLink can play a significant role in accelerating the training of DNN workloads.
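The following CUDA host-side sketch shows the offload/prefetch idea, assuming one feature-map buffer per layer (the layer count, buffer size, and stream setup are illustrative). A dedicated copy stream moves feature maps to pinned CPU memory during forward propagation and fetches them back just before the corresponding backward step:

```
#include <cuda_runtime.h>

int main()
{
    const int layers = 4;
    const size_t fmap_bytes = 64ull << 20;   // 64 MB feature map per layer

    float *d_fmap[layers], *h_fmap[layers];
    for (int l = 0; l < layers; ++l) {
        cudaMalloc((void**)&d_fmap[l], fmap_bytes);
        // Pinned host memory enables fast, asynchronous DMA transfers.
        cudaHostAlloc((void**)&h_fmap[l], fmap_bytes, cudaHostAllocDefault);
    }

    cudaStream_t compute, copy;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);

    // Forward pass: once layer l's feature map is produced, offload it to
    // CPU memory on the copy stream, overlapping with later layers' compute.
    for (int l = 0; l < layers; ++l) {
        // ... launch layer l's forward kernel on `compute` ...
        cudaMemcpyAsync(h_fmap[l], d_fmap[l], fmap_bytes,
                        cudaMemcpyDeviceToHost, copy);
        // d_fmap[l] can be released or reused once this copy completes.
    }

    // Backward pass: fetch each layer's feature map back just before use.
    for (int l = layers - 1; l >= 0; --l) {
        cudaMemcpyAsync(d_fmap[l], h_fmap[l], fmap_bytes,
                        cudaMemcpyHostToDevice, copy);
        cudaStreamSynchronize(copy);   // ensure the fetch has landed
        // ... launch layer l's backward kernel on `compute` ...
    }

    cudaDeviceReset();
    return 0;
}
```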
HPC is vital for AI
Distributed computing over a cluster of GPUs can reduce DNN training time significantly. For example, researchers from SenseTime Research and Nanyang Technological University, Singapore, have trained AlexNet on the ImageNet dataset in just 1.5 minutes, using a cluster of 64 machines, each with 8 Volta GPUs. They also perform a range of optimizations at all levels of abstraction, such as using NVIDIA’s NCCL communication library and storing parameters and gradients in half precision (FP16). Further, they overlap the communication of one layer’s gradients with the backward propagation of subsequent layers, combine multiple allreduce operations into one to reduce memory-copy overhead, and intelligently transmit only those gradients that exceed a threshold.
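Two of these ideas, FP16 gradients and fused allreduce, can be sketched with NCCL directly. Below, the gradients of several layers are assumed to be packed into one contiguous half-precision buffer per GPU, so a single ncclAllReduce call replaces many small ones (single-node setup; the buffer size and GPU count are illustrative):

```
#include <nccl.h>
#include <cuda_runtime.h>
#include <cuda_fp16.h>

int main()
{
    const int nGPUs = 8;
    const size_t fusedCount = 1 << 24;   // all layers' gradients, packed

    ncclComm_t comms[nGPUs];
    int devs[nGPUs];
    for (int i = 0; i < nGPUs; ++i) devs[i] = i;
    ncclCommInitAll(comms, nGPUs, devs);   // one communicator per GPU

    half* grads[nGPUs];
    cudaStream_t streams[nGPUs];
    for (int i = 0; i < nGPUs; ++i) {
        cudaSetDevice(i);
        cudaMalloc((void**)&grads[i], fusedCount * sizeof(half));
        cudaStreamCreate(&streams[i]);
    }

    // One fused, in-place FP16 allreduce instead of one call per layer;
    // the group wraps the per-GPU calls so NCCL launches them together.
    ncclGroupStart();
    for (int i = 0; i < nGPUs; ++i)
        ncclAllReduce(grads[i], grads[i], fusedCount, ncclHalf, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < nGPUs; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```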
Similarly, researchers from Sony Corporation have trained ResNet-50 in just 2 minutes using 3,456 Volta GPUs. This “race to train DNNs” is no less exciting than the “race to the moon” of the 1960s! On a more serious note, DNN training performance can be a more meaningful metric for HPC systems than peak-performance metrics such as exaflops. This has already led to the creation of benchmarks such as DAWNBench and MLPerf.
The future of AI accelerators promises to be exciting
While the general-purpose nature of the GPU makes it useful for a broad range of applications, it also precludes thorough optimization of the GPU architecture for AI applications. In this regard, custom-made AI accelerators such as Google’s tensor processing unit (TPU) hold a vantage position. It remains to be seen whether the future trajectory of GPU architecture will see revolutionary or evolutionary changes. It will also be interesting to see how well next-generation GPUs strike a balance between the conflicting goals of special-purpose and general-purpose computing, and how well they compete with other AI accelerators.
Sparsh Mittal received the B.Tech. degree in electronics and communications engineering from IIT Roorkee, India, and the Ph.D. degree in computer engineering from Iowa State University (ISU), USA. He worked as a Post-Doctoral Research Associate at Oak Ridge National Laboratory (ORNL), USA, for three years and is currently an assistant professor at IIT Hyderabad, India. He graduated at the top of his B.Tech batch and has received a fellowship from ISU and a performance award from ORNL. Sparsh has published more than 70 papers in top conferences and journals. His research interests include accelerators for machine learning, non-volatile memory, and GPU architectures. His webpage is http://www.iith.ac.in/~sparsh/