The Machine Learning Hype Cycle and HPC

By Dairsie Latimer

June 14, 2018

Like many other HPC professionals I’m following the hype cycle[1] around Machine Learning/Deep Learning with interest. I subscribe to the view that we’re probably approaching the ‘peak of inflated expectations’ but not quite yet starting the descent into the ‘trough of disillusionment.’

Even so, there is a good chance that we are witnessing the emergence of a truly disruptive presence in the HPC space – but perhaps not for the reasons you might expect. We’ve already seen how the dominance of GPUs in training today’s ML/DL models has powered Nvidia to record revenues in the datacenter.

But is that hegemony set to be challenged? At last count, there were 25 or more start-ups emerging from stealth, or already within a few quarters of shipping, hardware implementations aimed directly at accelerating aspects of training and inference.

They will be looking to capture market share from the current incumbents (Intel and Nvidia) as well as positioning themselves for the expected growth in ML/DL for edge computing applications. These companies are also going up against several of the hyperscalers and behemoths of the consumer market that are rolling their own inference engines (though admittedly mostly aimed at the mobile/edge space).

Gartner Hype Cycle shows five key phases of a technology’s life cycle (source: Gartner)

Since we seem to have accepted that HPC and big data are two elements of the same problem, how will the fact that research and development for ML/DL (regardless of domain) is often carried out on HPC systems skew procurements in the next few years? Looking at the latest crop of petascale and exascale pathfinders, their performance stems mostly from Nvidia’s V100s. However, smaller-scale, more general-purpose systems are still predominantly homogeneous in composition, with modest if any GPU deployment.

What’s interesting about this is that accelerators are now mainstream at the upper end of the market. While both CPUs and GPUs work well with the existing ML frameworks, it’s clear that the new entrants are likely to bring significant advantages in performance and power efficiency, even when measured against Nvidia’s mighty V100. What odds on Nvidia having to split their Tesla line to produce pure ML/DL-targeted accelerators? How will this affect the way in which we procure heterogeneous HPC systems?

I personally think ML/DL methodology is having, and will continue to have, a more immediate practical impact at the ‘edge’ than in scientific simulation (and there are lots of reasons for this), but there is no doubt that ML/DL will cohabit with more traditional HPC applications on many research systems.

Can we please stop abusing the term AI?

Like many, I have a pet peeve: the tendency to conflate the traditional meaning of Artificial Intelligence (AI) with ML and DL. If we must use the term AI to encompass the various techniques by which machines can build models that approximate, and in some cases outperform, humans expert in a problem area, can we at least start using the term Artificial General Intelligence (AGI) more widely? There’s a useful primer on the subject on EnterpriseTech which saved me from having to write it myself.

So what will AI be good for in HPC and Big Data?

There are of course many arrows in the AI quiver, and many are already successfully deployed as part of various HPC workflows, but most are essentially used to automate data analysis and visualization tasks that can be performed by humans (or at least by programs written by humans). The models have been conceived, built and trained by humans to replicate or improve upon some data analytics task.

The pursuit of new knowledge from discrete data is still very much beyond us in the field of AGI, let alone AI, and it also speaks to the method of scientific enquiry and human nature.

When we run simulations for a well-understood, or at least well-defined, scientific domain, we already know how to extract value from the data that is generated. We’ve set up the numerical simulation, after all, so we know what to expect within certain bounds, and we can interpret the results within that framework and mental model.

For new science we often don’t know the right questions to pose in advance, and as a result we can’t set up a precise or well-defined process to extract value from the data. The discovery process takes the form of a dialogue with the data, in which a series of ‘what if’ questions are posed and the results scrutinized to see what value or insights they deliver. It is by nature an iterative process, and it still requires a human to judge the value of the results.

If we could conceivably turn the automation of this process over to an AI, it would bump up against a significant issue: an AI model almost certainly won’t solve a problem in the same way as a scientist. The scientist would not necessarily be able to build a mental model that allows the transfer of knowledge, and as a result the AI becomes an unverifiable black box. In science this acts as a red flag, and if a process is not well understood then someone will inevitably set out to document it and postulate a theory that can be confirmed by experimental observation.

Now, the computational scientists I have spoken to about this accept that we routinely deploy fudge factors, or approximations, which we know are imperfect but serve a purpose; we console ourselves that there is usually published science behind their use. As humans we are actually quite limited in the scope of the information we can process in pursuit of a solution, and this is precisely what DL models are exceedingly good at.

Now take the case of a DL model that has been trained to approximate some computationally expensive part of a time-critical simulation. We know what data went into training it, though we may not understand the significance of some of it. We have observed the outputs, and at some point they will meet a set criterion which means they are ‘good enough’ to use. But all models have corner cases; you can call them bugs if you like. In the event that a DL model produces a result that trips some sanity check, how do you debug or verify a DL model, especially one whose creation a human hasn’t explicitly guided?
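To make the scenario concrete, here is a minimal sketch of how such a surrogate might be deployed defensively, with a runtime sanity check and a fallback to the trusted (but expensive) kernel. Everything here is hypothetical: the surrogate, the kernel and the norm-based invariant are illustrative stand-ins, not any particular production scheme.

```python
import numpy as np

# Hypothetical stand-ins: in a real workflow these would be the trained
# DL surrogate and the expensive numerical kernel it approximates.
def dl_surrogate(state: np.ndarray) -> np.ndarray:
    return 0.99 * state  # placeholder for model.predict(state)

def expensive_kernel(state: np.ndarray) -> np.ndarray:
    return state * np.exp(-0.01)  # placeholder for the full solver step

def step(state: np.ndarray, tol: float = 1e-3) -> np.ndarray:
    """Advance one timestep, preferring the cheap surrogate but falling
    back to the trusted kernel when a sanity check trips."""
    candidate = dl_surrogate(state)
    # Sanity check: an assumed invariant that a timestep never increases
    # the norm of the state (e.g. a dissipative system). A violation is
    # exactly the kind of corner case described above.
    if np.linalg.norm(candidate) > np.linalg.norm(state) + tol:
        candidate = expensive_kernel(state)  # recompute via the slow path
    return candidate

state = np.ones(4)
for _ in range(10):
    state = step(state)  # surrogate output used only while it stays sane
```

The fallback keeps the simulation trustworthy, but note that it only defers the question in the text: each tripped check is a corner case someone still has to debug in a model they may not understand.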

It’s not so much that these models won’t be able to do the job, but we will naturally start to question how comfortable we are, as scientists, relying on a model that we don’t understand or can’t verify. Like most scientists and engineers, I prefer to have a mental model of a process that is a bit more sophisticated than ‘it just works.’

As a result, I do think that the uptake of AI in HPC will be tempered by the natural reluctance of many to see too many black boxes in their workflows. Perhaps there will be moves to ensure that the AI frameworks support some sort of human-verifiable intermediate representation, rather than us just making the leap of faith that the AI is right.
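Human-verifiable intermediate representations are not something the mainstream frameworks offer today, but cruder inspection techniques do exist. As one illustration (my example, not a technique proposed above), a permutation-importance test gives a human-readable summary of which inputs a model actually leans on, which is at least a first step away from pure faith:

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Error increase when each input column is shuffled: a crude but
    human-readable summary of which features the model actually uses."""
    rng = np.random.default_rng(seed)
    base = np.mean((model(X) - y) ** 2)  # baseline mean squared error
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j only
            increases.append(np.mean((model(Xp) - y) ** 2) - base)
        importances[j] = np.mean(increases)
    return importances  # large value => model leans on that feature

# Toy check: a 'model' that only uses feature 0 should show it clearly.
X = np.random.default_rng(1).normal(size=(500, 3))
y = 2.0 * X[:, 0]
print(permutation_importance(lambda A: 2.0 * A[:, 0], X, y))
```

If a surrogate trained for a physics code turned out to lean heavily on an input a domain scientist considers irrelevant, that would be exactly the kind of red flag discussed above.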

As humans we also rely on intuition, which often requires an equivalent leap of faith, but as scientists we’re on the brink of creating systems whose operation we don’t understand and can’t trace. The power of deep learning models, with their ability to ingest prodigious quantities of widely differing data and provide insights, can’t be ignored, but the temptation to waive the explainability requirement should be resisted.

[1] https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/

About the Author

Dairsie Latimer, Technical Advisor at Red Oak Consulting, has a somewhat eclectic background, having worked in a variety of roles on the supplier side and client side across the commercial and public sectors as a consultant and software engineer. Following an early career in computer graphics, micro-architecture design and full-stack software development, he has over twelve years’ specialist experience in the HPC sector, ranging from developing low-level libraries and software for novel computing architectures to porting complex HPC applications to a range of accelerators. Dairsie joined Red Oak Consulting (@redoakHPC) in 2010, bringing his wealth of experience to both the business and customers.
