Automated Optimization Boosts ResNet50 Performance by 1.77x

By Tiffany Trader

October 23, 2018

From supercomputers to cell phones, every hardware and software system in our digital panoply has a growing number of settings that, if left untuned, constrain performance, wasting precious cycles and watts.

In the fast-growing field of AI, optimized systems yield faster training times and require less infrastructure. But the tuning process can be tedious and requires specialized skills. Startup Concertio, creator of performance optimization toolkit Optimizer Studio, is asking the question, “can we relieve data scientists from the need to understand their specific underlying infrastructure and from the need to optimize the performance of their models?”

In short: can the tuning process be automated?

Using Concertio’s optimization tool, Intel was able to accelerate TensorFlow implementations of three popular deep learning models, including ResNet50, which saw a speedup of 1.77x over baseline. The result, described by Intel and Concertio, was achieved automatically, without any manual effort, and produced a speedup comparable to hand tuning by Intel’s engineers. “What took tens of hours of manual labor was now done automatically in just two hours,” reported Concertio Co-founder and CEO Tomer Morad in a blog post published today.

“Concertio’s Optimizer Studio was able to leverage the tunables of TensorFlow and Intel Xeon Scalable Processors to further accelerate deep learning workloads,” shared Dr. Arjun Bansal, vice president of AI Software and Research at Intel. “Optimizer Studio is able to relieve engineers from the task of finding optimal system settings, as it achieves at least comparable performance to manual tuning – but without the manual effort.”

Concertio’s Optimizer Studio tool (profiled by EnterpriseTech earlier this year) navigates the broad parameter space of system and application settings on today’s devices, searching for the combination that maximizes performance. Settings can live anywhere in the system: in the processor, the firmware, the operating system, and in applications and application frameworks such as TensorFlow. Optimizer Studio runs the workload iteratively until it finds a configuration that performs well. The two parameters Intel had Optimizer Studio zero in on for its TensorFlow ResNet50 workload are called intra_op and inter_op, which control the threading within individual TensorFlow operations and the number of operations allowed to run concurrently, respectively.
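
For readers who want to see where these knobs live, here is a minimal sketch, assuming the TensorFlow 1.x API that was current at the time; the thread counts shown are placeholders, not the tuned values from the study.

    import tensorflow as tf  # TensorFlow 1.x API

    # intra_op_parallelism_threads: threads used to parallelize work *inside*
    # a single operation (e.g., one large matrix multiply).
    # inter_op_parallelism_threads: how many independent operations may run
    # concurrently.
    config = tf.ConfigProto(
        intra_op_parallelism_threads=16,  # placeholder, not the tuned value
        inter_op_parallelism_threads=4,   # placeholder, not the tuned value
    )

    with tf.Session(config=config) as sess:
        # Build and run the training graph (e.g., ResNet50) under this session.
        pass

Whatever values the search converges on are applied through these same two fields.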

Morad explains that tuning these parameters can greatly accelerate training throughput, but there is a tradeoff: higher values increase parallelism while also amplifying contention for shared resources such as main memory and on-chip caches. At some point the benefit of added parallelism is canceled out by the slowdown caused by the extra contention. With inter_op values ranging from 1 to 28 and intra_op taking even values from 10 to 56, there are 672 possible configurations to explore, so finding the optimal combination by hand requires extensive experimentation that can take tens of hours.
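
To appreciate the scale of that manual effort, here is a small sketch of the 672-point grid a brute-force sweep would have to measure (this is not Concertio’s search algorithm, which works iteratively rather than exhaustively); benchmark_resnet50 is a hypothetical stand-in for a full throughput measurement, which is what makes each trial expensive.

    import itertools

    def benchmark_resnet50(inter_op, intra_op):
        """Hypothetical stand-in: configure a session with these thread counts,
        run a few ResNet50 training steps, and return images/sec. The cost of
        this measurement is what makes an exhaustive sweep take tens of hours."""
        raise NotImplementedError

    inter_op_values = list(range(1, 29))      # 28 candidates: 1..28
    intra_op_values = list(range(10, 57, 2))  # 24 candidates: even values 10..56

    grid = list(itertools.product(inter_op_values, intra_op_values))
    print(len(grid))  # 672 configurations

    # A naive exhaustive sweep -- the tedious manual process being automated:
    # best = max(grid, key=lambda cfg: benchmark_resnet50(*cfg))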

The team of Intel AI engineers, led by Dr. Jayaram Bobba, performed the optimization using Concertio Optimizer Studio version 1.12 on an Intel Xeon Platinum 8180 processor (28 cores, 2.50GHz) with 384GB of RAM. It took two hours and eight minutes to identify the optimal values for inter_op and intra_op (found to be 2 and 28, respectively).

[Graph: ResNet50 relative performance during optimization]

The first model Intel evaluated, ResNet50, is a variant of Deep Residual Networks, the deep convolutional neural network architecture created by Microsoft. The Intel team extended its assessment to include GNMT (Google’s Neural Machine Translation system) and DeepSpeech, an open-source speech-to-text engine implemented in TensorFlow. In this round of testing, Intel wanted to see whether tuned OS and CPU settings would provide further performance gains on top of manual optimization of the TensorFlow tunables. Using Optimizer Studio on the same Xeon test platform led to the discovery of settings that improved performance by 8.3 percent for GNMT and 8 percent for DeepSpeech.

Morad told HPCwire that Concertio and the Intel AI team crossed paths a while back when Concertio was meeting with another group at Intel in Hillsboro, Ore. The Intel AI team is constantly looking for ways to improve TensorFlow performance on Intel Architectures, so it was natural for them to explore a tool that promised to automate the tedious task of searching for optimal configurations. The Intel engineers downloaded the Optimizer Studio software and conducted the experiments. Along the way, they provided feedback to Concertio that went into improving the product.

Concertio expects that TensorFlow users who have not been doing regular system tuning will see a sizable speedup from Optimizer Studio. “Since the effort involved in manual tuning on a regular basis is significant, we see that in the vast majority of cases it just never happens,” said Morad. “This is one of the main advantages of using automation for this purpose — the tool allows integrating performance optimization into the CI/CD pipeline so that every software version that comes out is always performing at its best.”

“Our aim was to make Optimizer Studio as intuitive as possible to use, and it is satisfying to see that the majority of users are able to see results in the first day of use without requiring assistance,” said Morad. “That said, we love being engaged with our users and assisting them, and we do so through various channels, including via Slack.”

Developers can purchase annual licenses of Optimizer Studio directly from Concertio or through value-added distributors. High-performance computing users can also get these tools from Red Barn Technology Group, an HPC systems integrator headquartered in Binghamton, NY.

Read the blog post for the full study, including configuration details and disclaimers.
