SLIDE: Rice Researchers Create Algorithm That Trains Deep Neural Nets Faster on CPUs than GPUs

March 3, 2020

AUSTIN, Texas, March 3, 2020 — Rice University computer scientists have overcome a major obstacle in the burgeoning artificial intelligence industry by showing it is possible to speed up deep learning technology without specialized acceleration hardware like graphics processing units (GPUs).

Computer scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as part of the machine learning systems conference MLSys.

Rice University’s Anshumali Shrivastava led a team that demonstrated how to implement deep learning technology without specialized acceleration hardware like graphics processing units. Image courtesy Jeff Fitlow/Rice University.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth-quarter revenues compared with the previous year.

Rice researchers created a cost-saving alternative to GPUs: an algorithm called the “sub-linear deep learning engine” (SLIDE) that uses general-purpose central processing units (CPUs) without specialized acceleration hardware.

“Our tests show that SLIDE is the first smart algorithmic implementation of deep learning on CPU that can outperform GPU hardware acceleration on industry-scale recommendation datasets with large fully connected architectures,” said Anshumali Shrivastava, an assistant professor in Rice’s Brown School of Engineering who invented SLIDE with graduate students Beidi Chen and Tharun Medini.

SLIDE doesn’t need GPUs because it takes a fundamentally different approach to deep learning. The standard “back-propagation” training technique for deep neural networks requires matrix multiplication, an ideal workload for GPUs. With SLIDE, Shrivastava, Chen and Medini turned neural network training into a search problem that could instead be solved with hash tables.
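To make the contrast concrete, the sketch below is written in C++ like the team’s own code but is not drawn from it; it is only an illustrative outline. It compares a dense forward pass, which evaluates every neuron via a matrix-style computation, with a hash-table lookup that returns a small set of candidate neurons and evaluates only those. The Neuron, HashTable, denseForward and sparseForward names are hypothetical.

```cpp
// Conceptual sketch only -- not the published SLIDE code.
#include <cstddef>
#include <unordered_map>
#include <utility>
#include <vector>

struct Neuron {
    std::vector<float> weights;  // one weight per input feature
    float bias = 0.0f;
};

// Dense approach: every neuron's dot product is computed for every input,
// which is exactly the matrix-multiplication workload GPUs excel at.
std::vector<float> denseForward(const std::vector<Neuron>& layer,
                                const std::vector<float>& input) {
    std::vector<float> out(layer.size(), 0.0f);
    for (std::size_t n = 0; n < layer.size(); ++n) {
        float sum = layer[n].bias;
        for (std::size_t i = 0; i < input.size(); ++i)
            sum += layer[n].weights[i] * input[i];
        out[n] = sum;
    }
    return out;
}

// SLIDE-style idea (sketch): a hash of the input selects a bucket of
// candidate neuron indices, and only those neurons are evaluated.
using HashTable = std::unordered_map<std::size_t, std::vector<std::size_t>>;

std::vector<std::pair<std::size_t, float>>
sparseForward(const std::vector<Neuron>& layer,
              const std::vector<float>& input,
              const HashTable& table,
              std::size_t inputHash /* produced by an LSH function */) {
    std::vector<std::pair<std::size_t, float>> active;
    auto bucket = table.find(inputHash);
    if (bucket == table.end()) return active;    // no candidates for this hash
    for (std::size_t n : bucket->second) {       // only the retrieved neurons
        float sum = layer[n].bias;
        for (std::size_t i = 0; i < input.size(); ++i)
            sum += layer[n].weights[i] * input[i];
        active.emplace_back(n, sum);
    }
    return active;                               // work scales with bucket size
}
```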

This radically reduces the computational overhead for SLIDE compared to back-propagation training. For example, a top-of-the-line GPU platform like the ones Amazon, Google and others offer for cloud-based deep learning services has eight Tesla V100s and costs about $100,000, Shrivastava said.

“We have one in the lab, and in our test case we took a workload that’s perfect for V100, one with more than 100 million parameters in large, fully connected networks that fit in GPU memory,” he said. “We trained it with the best (software) package out there, Google’s TensorFlow, and it took 3 1/2 hours to train.

“We then showed that our new algorithm can do the training in one hour, not on GPUs but on a 44-core Xeon-class CPU,” Shrivastava said.

Deep learning networks were inspired by biology, and their central feature, the artificial neuron, is a small piece of computer code that can learn to perform a specific task. A deep learning network can contain millions or even billions of artificial neurons, and working together they can learn to make human-level, expert decisions simply by studying large amounts of data. For example, if a deep neural network is trained to identify objects in photos, it will employ different neurons to recognize a photo of a cat than it will to recognize a school bus.

“You don’t need to train all the neurons on every case,” Medini said. “We thought, ‘If we only want to pick the neurons that are relevant, then it’s a search problem.’ So, algorithmically, the idea was to use locality-sensitive hashing to get away from matrix multiplication.”

Hashing is a data-indexing method invented for internet search in the 1990s. It uses numerical methods to encode large amounts of information, like entire webpages or chapters of a book, as a string of digits called a hash. Hash tables are lists of hashes that can be searched very quickly.
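Signed random projections are one common locality-sensitive hash family and give a simple picture of the idea: each random hyperplane contributes one bit, and vectors pointing in similar directions tend to receive the same bit string, so they land in the same hash-table bucket. The toy sketch below assumes that family purely for illustration; SLIDE’s actual hash functions are detailed in the paper, and the function names here are hypothetical.

```cpp
// Toy illustration of locality-sensitive hashing via signed random
// projections -- a sketch of the general idea, not SLIDE's specific hashes.
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Draw one random hyperplane per signature bit.
std::vector<std::vector<float>> randomPlanes(std::size_t bits, std::size_t dim) {
    std::mt19937 gen(42);
    std::normal_distribution<float> gauss(0.0f, 1.0f);
    std::vector<std::vector<float>> planes(bits, std::vector<float>(dim));
    for (auto& p : planes)
        for (auto& v : p) v = gauss(gen);
    return planes;
}

// Each hyperplane contributes one bit: similar vectors tend to get the same
// signature and therefore fall into the same bucket of a hash table.
std::uint32_t lshSignature(const std::vector<float>& x,
                           const std::vector<std::vector<float>>& planes) {
    std::uint32_t sig = 0;
    for (std::size_t b = 0; b < planes.size(); ++b) {
        float dot = 0.0f;
        for (std::size_t i = 0; i < x.size(); ++i)
            dot += planes[b][i] * x[i];
        if (dot >= 0.0f) sig |= (1u << b);
    }
    return sig;   // used as the bucket key
}
```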

“It would have made no sense to implement our algorithm on TensorFlow or PyTorch because the first thing they want to do is convert whatever you’re doing into a matrix multiplication problem,” Chen said. “That is precisely what we wanted to get away from. So we wrote our own C++ code from scratch.”

Shrivastava said SLIDE’s biggest advantage over back-propagation is that it is data parallel.

“By data parallel I mean that if I have two data instances that I want to train on, let’s say one is an image of a cat and the other of a bus, they will likely activate different neurons, and SLIDE can update, or train on these two independently,” he said. “This is a much better utilization of parallelism for CPUs.

“The flipside, compared to GPU, is that we require a big memory,” he said. “There is a cache hierarchy in main memory, and if you’re not careful with it you can run into a problem called cache thrashing, where you get a lot of cache misses.”
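A minimal sketch of that per-sample independence, written in the spirit of the team’s C++ implementation but not taken from it: each training sample carries its own set of active neurons, and a worker thread updates only those rows. The Sample, Params and updateSample names and the placeholder gradient are assumptions, and the sketch assumes the samples’ active sets are disjoint (the common case Shrivastava describes), so no locking is shown; its scattered memory accesses also hint at why careless use of the cache hierarchy can lead to thrashing.

```cpp
// Sketch of data-parallel, per-sample updates -- not the authors' code.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Sample {
    std::vector<float> features;
    std::vector<std::size_t> activeNeurons;  // chosen via the hash tables
};

struct Params {
    std::vector<std::vector<float>> weights;  // one row per neuron
};

// Hypothetical per-sample update: touches only this sample's active neurons.
// The "gradient" here is a placeholder to keep the sketch short.
void updateSample(Params& params, const Sample& s, float lr) {
    for (std::size_t n : s.activeNeurons)
        for (std::size_t i = 0; i < s.features.size(); ++i)
            params.weights[n][i] -= lr * s.features[i];
}

// Samples that activate different neurons can be trained on independently,
// so a batch can be spread across CPU threads (one thread per sample here,
// purely for illustration). Assumes disjoint active sets, so no locking.
void trainBatch(Params& params, const std::vector<Sample>& batch, float lr) {
    std::vector<std::thread> workers;
    for (const Sample& s : batch)
        workers.emplace_back(updateSample, std::ref(params), std::cref(s), lr);
    for (auto& w : workers) w.join();
    // The scattered per-neuron writes above are the kind of irregular memory
    // access that can cause cache misses if the cache hierarchy is ignored.
}
```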

Shrivastava said his group’s first experiments with SLIDE produced significant cache thrashing, but their training times were still comparable to or faster than GPU training times. So he, Chen and Medini published the initial results on arXiv in March 2019 and uploaded their code to GitHub. A few weeks later, they were contacted by Intel.

“Our collaborators from Intel recognized the caching problem,” he said. “They told us they could work with us to make it train even faster, and they were right. Our results improved by about 50% with their help.”

Shrivastava said SLIDE hasn’t yet come close to reaching its potential.

“We’ve just scratched the surface,” he said. “There’s a lot we can still do to optimize. We have not used vectorization, for example, or built-in accelerators in the CPU, like Intel Deep Learning Boost. There are a lot of other tricks we could still use to make this even faster.”

Shrivastava said SLIDE is important because it shows there are other ways to implement deep learning.

“The whole message is, ‘Let’s not be bottlenecked by matrix multiplication and GPU memory,’” Chen said. “Ours may be the first algorithmic approach to beat GPU, but I hope it’s not the last. The field needs new ideas, and that is a big part of what MLSys is about.”

Additional co-authors include James Farwell, Sameh Gobriel and Charlie Tai, all of Intel Labs.

The research was supported by the National Science Foundation (NSF-1652131, NSF-BIGDATA 1838177), the Air Force Office of Scientific Research (FA9550-18-1-0152), Amazon and the Office of Naval Research.

Links and resources:

A copy of the MLSys paper is available at: https://www.cs.rice.edu/~as143/Papers/SLIDE_MLSys.pdf

About Rice University 

Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,962 undergraduates and 3,027 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 4 for quality of life by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger’s Personal Finance.


Source: Jade Boyd, Rice University 
