BSC Researchers Shrink Floating Point Formats to Accelerate Deep Neural Network Training

By Ken Strandberg

April 15, 2019

Computing solutions at a machine’s full precision sometimes wastes CPU resources. Deep learning is a case in point. In the early stages of training a deep neural network (DNN), much of the work is guesswork: the algorithm assigns random values to the weights and computes the error. That error is enormous at first, and the weights are a long way from their final values. Representing each weight as a 32-bit floating-point number is costly in processing terms, yet most of the mantissa bits are unnecessary in early training. As training progresses and the weight values are honed, greater precision becomes important for optimizing the solution.

Reduced-precision floating-point formats offer benefits in memory footprint, bandwidth, and processing time, which can translate into power savings. These savings could be significant if the benefits scale out to the training of massive DNNs. But will lower precision affect the overall accuracy of training?

Considerable research into reduced precision for AI training and inferencing has taken place over the last year. Across Europe and the U.S., industry, academia, and research institutions are examining this aspect of AI, including the U.S. national laboratories, Google, and Microsoft. Thus far, the work has produced papers, proposals, and some code. Google’s experiments with DNNs have shown that reducing the mantissa of 32-bit floating-point numbers is acceptable for certain DNN calculations, “as long as you can represent tiny values closer to zero as part of the summation of small differences during training” (https://en.wikichip.org/wiki/brain_floating-point_format).

Google has integrated the bfloat16 format, which keeps the same 8-bit exponent as the IEEE standard 32-bit format (float32) but shrinks the mantissa to 7 bits, into some of its products. Bfloat16 is also being implemented in a range of future Intel processors for AI deep learning applications.
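The relationship between the two formats can be shown in a short sketch: a bfloat16 value is simply the top 16 bits of a float32 (sign, 8-bit exponent, 7-bit mantissa), so a conversion by truncation is just a bit shift. (This sketch ignores the rounding modes real hardware may apply.)

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to a bfloat16 bit pattern by keeping
    the top 16 bits: sign, 8-bit exponent, 7-bit mantissa."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 by zero-padding
    the low 16 mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# bfloat16 keeps only ~2-3 decimal digits of precision,
# but the full float32 exponent range:
pi_bf16 = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159265))
# pi_bf16 == 3.140625
```

Because the exponent field is unchanged, bfloat16 can represent the same tiny values near zero as float32, which is exactly the property the Google experiments above depend on.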

Intel has integrated a reduced-precision representation into the Vector Neural Network Instructions (VNNI), part of Intel Deep Learning Boost (DL Boost), added to the Intel Advanced Vector Extensions 512 (AVX-512) instruction set in 2nd-generation Xeon Scalable processors.
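The idea behind these instructions can be illustrated with a plain-Python sketch (not Intel’s actual intrinsics): quantize floating-point operands to 8-bit integers, multiply and accumulate them into a wide integer, as VNNI does into 32-bit lanes, then rescale. The scale factors below are arbitrary values chosen for illustration.

```python
def quantize(xs, scale):
    """Map floats to int8 via simple symmetric quantization
    (illustrative; real frameworks calibrate scales per tensor)."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_dot(a, b):
    """Multiply 8-bit operands and accumulate into a wide integer,
    modeling what VNNI hardware does in 32-bit accumulator lanes."""
    return sum(x * y for x, y in zip(a, b))

w = [0.5, -1.25, 0.75]   # hypothetical weights
x = [1.0, 2.0, -0.5]     # hypothetical activations
sw, sx = 0.01, 0.02      # hypothetical per-tensor scales
q = int8_dot(quantize(w, sw), quantize(x, sx))
approx = q * sw * sx  # dequantized result, close to the float dot product
```

With well-chosen scales the dequantized result tracks the full-precision dot product closely, while the inner loop touches only 8-bit data.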

But the jury is still out on which number format or code is best at different stages of training and for inferencing. What are the performance and power benefits of each format? And what conditions tell a developer which format or code to use, and when? These are all areas of great interest to Marc Casas, a senior researcher at the Barcelona Supercomputing Center (BSC).

“We believe dynamic numerical precision approaches offer the best benefit to training and inferencing,” stated Casas. “We are evaluating the applications of many formats and codes, including Intel DL Boost (such as VNNI and others), 32-bit and 64-bit floating point, Flexpoint, and integer formats, at various phases of training neural networks and inferencing.” Flexpoint is a tensor format proposed by Intel that will be integrated into its Nervana Neural Network Processors.

Casas and his team, including John Haiber Osorio Rios and Marc Ortiz of BSC, expect to identify at which phase of training it is best to apply each numerical representation and how it benefits network evolution without loss of accuracy. They will also study the impact on processor performance and power consumption on Intel hardware. But understanding when to use an appropriate format, and its impact on the hardware, is only one aspect.

“We propose not only to develop innovative ways to exploit the potential of DL Boost and these numerical representations, but also to dynamically adjust the Flexpoint/bfloat16 formats to determine which DL Boost instructions to apply at different phases of training,” added Casas. “We will develop an algorithm to drive these dynamic adjustments based on different proxies describing the network’s evolution. These adaptive and dynamic schemes for the learning and inferencing phases of DNNs will make it possible to switch across different precisions at runtime.”
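As a toy illustration of such a dynamic scheme (not BSC’s actual algorithm), the sketch below minimizes a simple quadratic loss while emulating bfloat16 weight storage, then switches to full precision when a loss-improvement proxy stalls. The tolerance and learning rate are arbitrary values for the example.

```python
import struct

def to_bf16(x: float) -> float:
    """Emulate bfloat16 weight storage by truncating the float32
    bit pattern to its top 16 bits."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0] & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def train(steps=200, lr=0.1, switch_tol=1e-3):
    """Minimize (w - 1.7)^2 by gradient descent, starting in emulated
    bfloat16 and switching to full precision when the per-step loss
    improvement (a proxy for network evolution) falls below a tolerance."""
    w, prev_loss, low_precision = 0.0, float("inf"), True
    for _ in range(steps):
        loss = (w - 1.7) ** 2
        if low_precision and prev_loss - loss < switch_tol:
            low_precision = False  # precision no longer sufficient: switch
        grad = 2 * (w - 1.7)
        w = w - lr * grad
        if low_precision:
            w = to_bf16(w)  # emulate reduced-precision weight storage
        prev_loss = loss
    return w, low_precision

w, still_low = train()  # w converges close to 1.7 after the switch
```

The early, coarse steps run entirely in the cheap format; only the final fine-tuning pays for full precision, which is the intuition behind the dynamic approaches described above.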

Casas says their baseline models are built on AlexNet and ResNet using the ImageNet dataset. The project will begin with software emulation and eventually be applied and evaluated on Intel hardware that implements these numerical formats, as next-generation Intel silicon becomes available.

In 2017, BSC installed MareNostrum4, a large supercomputing cluster from Lenovo built on Intel Xeon Scalable processors and Intel Omni-Path Architecture fabric. Casas and his team will use MareNostrum4 to help them answer these questions.

“Understanding the use of dynamic numerical formats and developing schemes to apply them will change the way industry trains networks,” concluded Casas. “Our work will shed light on enabling a more flexible training mechanism. We will look for ways to apply it to DNN frameworks, like Intel’s versions of Caffe and TensorFlow, so everyone can use it.”
