Data-Hungry Algorithms and the Thirst for AI

By Tiffany Trader

March 29, 2017

At Tabor Communications’ Leverage Big Data + EnterpriseHPC Summit in Florida last week, esteemed HPC professional Jay Boisseau, chief HPC technology strategist at Dell EMC, engaged the audience with his presentation, “Big Computing, Big Data, Big Trends, Big Results.”

Trends around big computing and big data are converging in powerful ways, including the Internet of Things (IoT), artificial intelligence (AI) and deep learning. Innovation and competition now hinge on big, scalable computing and big, fast data analytics – and “those with the tools and talent will reap the big rewards,” Boisseau said.

Prior to joining Dell EMC (then Dell Inc.) in 2014, Boisseau made his mark as the founding director of the Texas Advanced Computing Center (TACC). Under his leadership the site became a center of HPC innovation, a legacy that continues today under Director Dan Stanzione.


“I’m an HPC person who’s fascinated by the possibilities of augmenting intelligence with deep learning techniques; I’ve drunk the ‘deep learning Kool-Aid,’” Boisseau told the crowd of advanced computing professionals.

AI as a field goes back to the 1950s, Boisseau noted, but the current proliferation of deep learning using deep neural networks has been made possible by three advances: “One is that we actually have big data; these deep learning algorithms are data hungry. Whereas we sometimes lament the growth of our data sizes, these deep neural networks are useless on small data. Use other techniques if you have small data, but if you have massive data and you want to draw insights – where you’re not even sure how to formulate the hypothesis ahead of time – these neural network-based methods can be really, really powerful.

“Parallelizing the deep learning algorithms was another one of the advances, and having sufficiently powerful processors is another one,” Boisseau said.
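Boisseau’s first point – that deep networks starve on small data – is easy to demonstrate. Below is a minimal, illustrative sketch (ours, not from the talk), assuming scikit-learn is installed, that trains a small neural network on progressively larger slices of the bundled digits dataset and watches test accuracy climb with data volume:

```python
# Illustrative sketch: a small neural network's test accuracy as a
# function of training-set size, using scikit-learn's digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train on progressively larger slices of the training data.
for n in (50, 200, 800, len(X_train)):
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                        random_state=0)
    clf.fit(X_train[:n], y_train[:n])
    acc = clf.score(X_test, y_test)
    print(f"{n:5d} training samples -> test accuracy {acc:.3f}")
```

On a dataset this tiny the effect is modest, but the direction is the one Boisseau described: more data, better network.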

AI, big data, cloud and deep learning are all intertwined, and they are driving rapid expansion of the market for HPC-class hardware. Boisseau mines for correlations with the aid of Google Trends; the fun-to-play-with Google tool elucidates the contemporaneous rise of big data, deep learning and IoT. Boisseau goes a step further, showing how Nvidia stock floats up on these tech trends.
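For readers who want to reproduce the comparison, here is a minimal sketch. Google Trends has no official API, so this assumes the unofficial pytrends package (pip install pytrends); the keyword list and timeframe are our own choices, not Boisseau’s:

```python
# Minimal Google Trends sketch, assuming the unofficial pytrends package.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["big data", "deep learning", "internet of things"],
    timeframe="2010-01-01 2017-03-01")

# interest_over_time() returns a pandas DataFrame of relative (0-100)
# search interest per keyword.
trends = pytrends.interest_over_time()
print(trends.tail())

# Correlation between the three trend lines.
print(trends[["big data", "deep learning", "internet of things"]].corr())
```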

The narrow point here is that deep learning/big data is an engine for GPU sales; the larger point is that these multiple related trends are driving silicon specialization and impacting market dynamics. As Boisseau points out, we’re only at the beginning of this trend cluster, and we’re already seeing silicon developed specifically for AI workloads as hardware vendors compete to establish an early lead in this emerging field.

Another deep learning champion, Nvidia CEO Jen-Hsun Huang, refers to machine learning as HPC’s first consumer killer app. When Nvidia’s CUDA-based ecosystem for HPC application acceleration launched in 2006, it kick-started an era of heterogeneity in HPC (we’ll give the IBM-Sony Cell BE processor some cred here too, even if the processor design was an evolutionary dead end). Fast forward to 2013-2014, and the emerging deep learning community found a friend in GPUs. With Nvidia, they could get their foot in the DL door with an economical gaming board and work their way up the chain to server-class Tesla GPUs for maximum bandwidth and FLOPS.

Optimizations for single-precision (32-bit) processing, and support for half-precision (16-bit) on Nvidia’s newer GPUs, translate into faster computation for most AI workloads, which, unlike many traditional HPC applications, do not require full 64-bit precision. Intel is incorporating variable-precision compute into its next-gen Phi product, the Knights Mill processor (due out this year).
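The arithmetic behind the precision argument is easy to see on any machine. The sketch below (illustrative only, using NumPy on the CPU) times the same matrix multiply at 64- and 32-bit precision and prints each precision’s per-operand memory footprint; note that commodity CPUs lack fast half-precision hardware, so the float16 row is included mainly for the memory numbers:

```python
# Rough sketch of the precision trade-off: the same matrix multiply in
# 64-, 32- and 16-bit floating point with NumPy, plus each precision's
# per-operand memory footprint.
import time
import numpy as np

n = 2048
for dtype in (np.float64, np.float32, np.float16):
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    t0 = time.perf_counter()
    _ = a @ b
    dt = time.perf_counter() - t0
    # CPUs lack fast FP16 arithmetic, so the float16 timing mostly
    # reflects software emulation; its memory savings are still real.
    print(f"{np.dtype(dtype).name:8s} "
          f"{a.nbytes / 1e6:6.1f} MB/operand  {dt:.3f} s")
```

On GPUs with dedicated FP32/FP16 paths, the speed advantage of reduced precision is larger still, which is precisely the hardware trend Boisseau describes.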

Boisseau observed that starting about two decades ago, HPC began its swing toward commodity architectures, with the invention of commodity-grade Beowulf clusters by Thomas Sterling and Donald Becker in 1994. Benefiting from PC-based economies of scale, these x86 server-based Linux clusters became the dominant architecture in HPC. In turn, this spurred the movement toward broader enterprise adoption of HPC.

Although Xeon-flavored x86 is something of a de facto standard in HPC (with > 90 percent share), the pendulum appears headed back toward greater specialization and greater “disaggregation of technology,” to use a phrase offered by industry analyst Addison Snell (CEO, Intersect360 Research). Examples include IBM’s OpenPower systems; GPU-accelerated computing (and Power+GPU); ARM (now in server variants with HPC optimizations); AMD’s Zen/Ryzen CPU; and Intel’s Xeon Phi line (also its Altera FPGAs and imminent Xeon Skylake).

A major driver of all this: a gathering profusion of data.

“In short, HPC may be getting diverse again, but much of the forcing function is big data,” Boisseau observed. “Very simply, we used to have no digital data, then a trickle, but the ubiquity of computers, mobile devices, sensors, instruments and user/producers has produced an avalanche of data.”

Buzz terminology aside, big data is a fact of life now, “a forever reality,” and those who can use big data effectively (or just “data,” if the “big” tag drops off) will be in a position to out-compete, Boisseau added.

When data is your prime directive and principal advantage, opportunity accrues to whoever holds the data – and that would be the hyperscalers, said Boisseau. Google, Facebook, Amazon, et al. are investing heavily in AI, amassing AI-friendly hardware like GPUs but also innovating ahead with even more efficient AI hardware (e.g., Tensor Processing Units at Google, FPGAs at Microsoft). On the tool side are about a dozen popular frameworks, TensorFlow (Google), MXNet (Amazon) and CNTK (Microsoft) among them.
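As a flavor of what these frameworks look like in practice, here is a minimal TensorFlow sketch. It assumes TensorFlow 2.x (the framework’s API has evolved considerably since this article’s 2017 timeframe) and trains a small dense network on random stand-in data:

```python
# Minimal TensorFlow 2.x sketch: a small dense network trained on
# random stand-in data (illustrative only).
import numpy as np
import tensorflow as tf

# Stand-in data: 1,000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=1)
```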

Tech giants are advancing quickly with AI strategies too, Boisseau noted. Intel has made a quick succession of acquisitions (Nervana, Movidius, Saffron, Mobileye); IBM has its acquisition-enhanced Watson; Apple bought Turi.

“You [also] have companies like Graphcore, Wave Computing, and KnuPath that are designing special silicon with lower precision and higher performance,” said Boisseau. “There was a fourth one, Nervana, and Intel liked that company so much they bought it. So there were at least four companies making silicon dedicated to deep learning. I’m really eager to see whether Nvidia – and I don’t have inside knowledge on this – further optimizes its technology for deep learning and removes some of the circuitry that’s still heritage graphics-oriented, as well as how the special silicon providers fare competing against Intel and Nvidia, and how Intel’s Nervana shapes up.”

Adding to the cloud/hyperscaler mix is the quickly expanding world of IoT, which is driving big data. The Internet of Things is enabling companies to operate more efficiently; it’s facilitating smart buildings, smart manufacturing and smart products, said Boisseau. But as the spate of high-profile DDoS attacks attests, there’s a troubling security gap. The biggest challenge for IoT is “security, security, security,” Boisseau emphasized.

Another top-level point Boisseau made is that over half of HPC systems are now sold to industry, notably across manufacturing, financial services, life sciences, energy, EDA, weather and digital content creation. “Big computing is now as fundamental to many industries as it is in research,” Boisseau said. Half of the high performance computing TAM (total addressable market), estimated at nearly $30 billion, is now in enterprise/industry, and there’s still a lot of untapped potential, in Boisseau’s opinion.

Market projections for AI are even steeper. Research houses are predicting that AI will grow to tens of billions of dollars a year (IDC predicts a surge past $4 billion in 2020; IBM expects the market to reach $2 trillion over the next decade; Tractica plots $36.8 billion in revenue by 2025).

Boisseau is confident that the world needs big data AND deep learning, citing the following reasons/scenarios:

  • Innovation requires ever more capability: to design, engineer, manufacture, distribute, market and produce new/better products and services.
  • Modeling and simulation enable design in accordance with physics/natural laws, as well as virtual engineering, manufacturing and testing.
  • Machine learning and deep learning enable discovery and innovation (see the sketch after this list):
    • When laws of nature don’t apply (social media, sentiment, etc.) or are non-linear/difficult to simulate accurately over time (e.g., weather forecasting).
    • And they may be quicker and/or less costly, depending on simulation scale and complexity versus data completeness.
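The distinction is easy to caricature in a few lines of Python. In this toy sketch (ours, not Boisseau’s), the governing equation of a projectile is known, so we simulate it directly; we then pretend the law is unknown and recover it purely by fitting noisy observations:

```python
# Toy contrast: when the governing equation is known, simulate it;
# when it isn't, fit a model to observed data.
import numpy as np

# Modeling & simulation: the law is known (projectile under gravity),
# so compute the trajectory directly.
g, v0 = 9.81, 30.0                      # m/s^2, initial vertical speed
t = np.linspace(0, 2 * v0 / g, 50)
height_simulated = v0 * t - 0.5 * g * t**2

# Machine learning stand-in: pretend the law is unknown and we only
# have noisy observations; fit a quadratic purely from the data.
rng = np.random.default_rng(0)
height_observed = height_simulated + rng.normal(0, 1.0, t.shape)
coeffs = np.polyfit(t, height_observed, deg=2)
print("recovered coefficients:", coeffs)   # approx [-g/2, v0, 0]
```

Where the law is known, the simulation is exact and the fit merely rediscovers it; where no law exists (sentiment, social media), only the data-driven route is open.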

“When we understand the laws of nature, when we understand the equations, it gives us an ability to model and simulate highly accurately,” said Boisseau. “But for crash simulations, we still don’t want to drive a car that’s designed with data analysis; we need modeling and simulation to truly understand structural dynamics and fluid flow, and even then data analysis can be used in the interpretation.

“There will be times where data mining over all those crash simulations adds to the modeling and simulation accuracy. So modeling and simulation will always remain important, at least as long as the universe is governed by visible laws, especially in virtual engineering and manufacturing testing, but machine learning and deep learning enable discovery in other ways, especially when the laws of nature don’t apply.”

“If you’ve adopted HPC great, but deep learning is next,” Boisseau told the audience. “It might not be next year for some of you, it might be two years, five years, but I suspect it’s sooner than you think.”
