Calculating solutions as precisely as a computer can is sometimes a waste of CPU resources. Deep learning is a case in point. In the early stages of training a deep neural network (DNN), a lot of guesswork goes on. The algorithm assigns random values to the weights and computes the error. But that error is enormous at the beginning, and the weight values are a long way from the ones selected at the end. Representing weights as 32-bit floating-point numbers is costly in terms of processing, yet most of the mantissa bits are not needed in early training. As training progresses and the weight values are honed, greater precision becomes important to optimize the solution.
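A quick way to see why precision matters more late in training than early on: once updates become tiny, a correction that a 32-bit float can hold may round away entirely in a 16-bit format. The snippet below is a minimal illustration in NumPy, using IEEE half precision as a stand-in 16-bit format; it is not tied to any particular framework.

```python
import numpy as np

# A large early-training error dwarfs rounding noise, but a tiny late-stage
# update can vanish when the weight is stored with too few mantissa bits.
w32 = np.float32(1.0) + np.float32(1e-4)   # float32 keeps the update: ~1.0001
w16 = np.float16(1.0) + np.float16(1e-4)   # float16 rounds it away: 1.0
print(w32, w16)
```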
Using reduced-precision floating-point formats offers benefits in memory footprint, bandwidth, and processing time, which can translate into power savings. These savings could be significant if the benefits scale out to the training of massive DNNs. But will less precision affect the overall accuracy of training?
A great deal of research into reduced precision for AI training and inferencing has gone on over the last year. Across Europe and the U.S., industry, academia, and research institutions, including the U.S. national labs, Google, and Microsoft, are looking at this aspect of AI. So far, the work has resulted in papers, proposals, and some code. Google’s experiments with DNNs have shown that reducing the mantissa of 32-bit floating-point numbers for certain DNN calculations is acceptable, “as long as you can represent tiny values closer to zero as part of the summation of small differences during training” (https://en.wikichip.org/wiki/brain_floating-point_format).
Google integrated the bfloat16 format, which keeps the same 8-bit exponent as the IEEE standard 32-bit floating-point format (float32) but shrinks the mantissa from 23 bits to 7, into some of its products. Bfloat16 is also being implemented in a range of future Intel processors for AI deep learning applications.
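Because bfloat16 is essentially the top 16 bits of a float32 (sign, the full 8-bit exponent, and 7 mantissa bits), it keeps float32’s range for tiny values near zero while giving up mantissa precision. Here is a minimal NumPy sketch of that truncation, rounding the dropped bits to nearest even (NaN/Inf handling omitted):

```python
import numpy as np

def float32_to_bfloat16(x):
    """Round float32 values to bfloat16 precision by keeping the sign bit,
    the full 8-bit exponent, and only the top 7 mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    bias = ((bits >> 16) & 1) + np.uint32(0x7FFF)   # round to nearest even
    return ((bits + bias) & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([1.2345678, 1e-38], dtype=np.float32)
print(float32_to_bfloat16(x))
# -> [~1.2344, ~1e-38]: the mantissa is coarse, but the tiny value near zero
#    survives, whereas it would underflow to 0 in IEEE half precision.
```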
Intel has integrated reduced-precision support into the Vector Neural Network Instructions (VNNI), part of Intel Deep Learning Boost (DL Boost), which were added to the Intel Advanced Vector Extensions 512 instruction set in 2nd Generation Intel Xeon Scalable processors. These instructions accelerate low-precision integer (INT8 and INT16) multiply-accumulate operations.
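VNNI targets integer rather than floating-point data: it fuses the multiply of 8-bit (or 16-bit) integers with accumulation into 32-bit integers. The sketch below shows that int8-multiply/int32-accumulate pattern in plain NumPy; the quantization helpers and scales are illustrative assumptions, not Intel APIs.

```python
import numpy as np

def quantize_int8(x, scale):
    """Map float values to int8 with a simple symmetric scale (illustrative)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_dot(a_q, w_q, a_scale, w_scale):
    """Multiply int8 operands and accumulate in int32 -- the pattern a VNNI
    fused instruction performs in hardware -- then rescale to a float result."""
    acc = np.dot(a_q.astype(np.int32), w_q.astype(np.int32))
    return acc * (a_scale * w_scale)

rng = np.random.default_rng(0)
a, w = rng.standard_normal(64), rng.standard_normal(64)
a_s, w_s = np.abs(a).max() / 127, np.abs(w).max() / 127
print(int8_dot(quantize_int8(a, a_s), quantize_int8(w, w_s), a_s, w_s), a @ w)
```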
But the jury is still out on which numerical format or code is best to use at different stages of training and for inferencing. What benefits can be gained, in terms of processing performance and power, from the different formats? And what conditions tell a developer which format or code to use, and when? These are all areas of great interest to Marc Casas, Senior Researcher at Barcelona Supercomputing Center (BSC).
“We believe dynamic numerical precision approaches offer the best benefit to training and inferencing,” stated Casas. “We are evaluating the applications of many formats and codes, including Intel DL Boost (such as VNNI and others), 32-bit and 64-bit floating point, Flexpoint, and integer formats, at various phases of training neural networks and inferencing.” Flexpoint is a tensor format proposed by Intel that will be integrated into its Nervana Neural Network Processors.
Casas and his team, including John Haiber Osorio Rios and Marc Ortiz of BSC, expect to identify at which phase of training it is best to apply different numerical representations and how they benefit the network’s evolution without loss of accuracy. They will also study the impact of these formats on processor performance and power consumption on Intel hardware. But understanding when to use an appropriate format, and its impact on the hardware, is only one aspect.
“We propose to not only develop innovative ways to exploit the potential of DL Boost and these numerical representations, but to dynamically adjust the Flexpoint/bfloat16 formats to determine which DL Boost instructions to apply at different phases of training,” added Casas. “We will develop an algorithm to drive these dynamic adjustments based on different proxies describing the network evolution. These adaptive and dynamic schemes used for the learning or inferencing phases of DNNs will make it possible to switch across different precisions at runtime.”
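As a rough illustration of what such a dynamic scheme could look like, the sketch below switches between bfloat16 and float32 based on one simple proxy, the recent rate of loss improvement. The proxy, the thresholds, and the helper names are assumptions for illustration only, not the algorithm BSC is developing.

```python
import numpy as np

def choose_precision(loss_history, window=5, tol=0.01):
    """Toy proxy: keep the cheap format while the loss is still improving
    quickly, and switch to higher precision once progress flattens."""
    if len(loss_history) < 2 * window:
        return "bfloat16"
    recent = np.mean(loss_history[-window:])
    earlier = np.mean(loss_history[-2 * window:-window])
    relative_gain = (earlier - recent) / max(earlier, 1e-12)
    return "bfloat16" if relative_gain > tol else "float32"

# Drive the rule with a synthetic, flattening loss curve to show the switch.
losses = []
for epoch in range(40):
    losses.append(2.0 * np.exp(-0.3 * epoch) + 0.05)  # stand-in for a real loss
    print(epoch, choose_precision(losses))
```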
Casas says their baseline models are built on AlexNet and ResNet using the ImageNet data set. The project will start with software emulation and will eventually be applied and evaluated on Intel hardware designed to implement these numerical formats, as next-generation Intel silicon becomes available.
In 2017, BSC installed MareNostrum4, a large supercomputing cluster from Lenovo built on Intel Xeon Scalable processors and Intel Omni-Path Architecture fabric. Casas and his team will use MareNostrum4 to help them answer these questions.
“Understanding the use of dynamic numerical formats and developing schemes to apply them will change the way industry trains networks,” concluded Casas. “Our work will shed light on how to enable a more flexible training mechanism. We will look for ways to apply it to DNN frameworks, like Intel’s version of Caffe and TensorFlow, so everyone can use it.”