The Machine Learning Hype Cycle and HPC

By Dairsie Latimer

June 14, 2018

Like many other HPC professionals, I’m following the hype cycle[1] around Machine Learning/Deep Learning (ML/DL) with interest. I subscribe to the view that we’re probably approaching the ‘peak of inflated expectations’ but not quite yet starting the descent into the ‘trough of disillusionment.’

This still raises the probability that we are seeing the emergence of a truly disruptive presence in the HPC space – but perhaps not for the reasons you might expect. We’ve already seen how the dominance of GPUs in training ML/DL models has powered Nvidia to record datacenter revenues.

But is that hegemony set to be challenged? At last count there were 25 or more start-ups emerging from stealth or already within a few quarters of shipping hardware implementations aimed directly at accelerating aspects of training and inference.

They will be looking to capture market share from the current incumbents (Intel and Nvidia) as well as positioning themselves for the expected growth in ML/DL for edge computing applications. These companies are also going up against several of the hyperscalers and behemoths of the consumer market that are rolling their own inference engines (though admittedly mostly aimed at the mobile/edge space).

Gartner Hype Cycle shows five key phases of a technology’s life cycle (source: Gartner)

Since we seem to have accepted that HPC and big data are two elements of the same problem, how will the fact that research and development for ML/DL (regardless of domain) is often carried out on HPC systems skew procurements in the next few years? Looking at the latest crop of petascale and exascale pathfinders, their performance stems mostly from Nvidia’s V100s. However, smaller-scale, more general-purpose systems are still predominantly homogeneous in composition, with modest GPU deployment, if any.

What’s interesting about this is that accelerators are now mainstream at the upper end of the market. While both CPUs and GPUs work well with the existing ML frameworks, it’s clear that the new entrants are likely to bring significant advantages in performance and power efficiency, even when measured against Nvidia’s mighty V100. What odds on Nvidia having to split its Tesla line to produce accelerators targeted purely at ML/DL? How will this affect the way in which we procure heterogeneous HPC systems?

I personally think ML/DL methodology is having, and will continue to have, a more immediate practical impact at the ‘edge’ than in scientific simulation (and there are lots of reasons for this), but there is no doubt that ML/DL will cohabit with more traditional HPC applications on many research systems.

Can we please stop abusing the term AI?

Like many, I have a pet peeve: the tendency to conflate the traditional meaning of Artificial Intelligence (AI) with ML and DL. If we must use the term AI to encompass the various techniques by which machines can build models that approximate, and in some cases outperform, human experts in a problem area, can we at least start using the term Artificial General Intelligence (AGI) more widely for the traditional sense? There’s a useful primer on the subject on EnterpriseTech, which saved me from having to write it myself.

So what will AI be good for in HPC and Big Data?

There are of course many arrows in the AI quiver, and many are already successfully deployed as part of various HPC workflows, but most are essentially used to automate data analysis and visualization tasks that can be performed by humans (or at least by programs written by humans). The models have been conceived, built and trained by humans to replicate or improve upon some data analytics task.


The pursuit of new knowledge from discrete data is still very much beyond today’s AI, let alone something we could hand to an AGI that does not yet exist, and it also speaks to the method of scientific enquiry and human nature.

When we run simulations for a well-understood, or at least well-defined, scientific domain, we already know how to extract value from the data that is generated. We’ve set up the numerical simulation, after all, so we know what to expect within certain bounds, and we can interpret the results within that framework and mental model.

For new science we often don’t know the right questions to pose in advance, and as a result we can’t set up a precise or well-defined process to extract value from the data. The discovery process takes the form of a dialogue with the data, in which a series of ‘what if’ questions are posed and the results scrutinized to see what value or insights they deliver. It is by nature an iterative process, and it still requires a human to judge the value of the results.

If we could conceivably turn the automation of this process over to an AI, it would bump up against a significant issue: an AI model almost certainly won’t solve a problem in the same way as a scientist. The scientist would not necessarily be able to build a mental model that allows the transfer of knowledge, and as a result the AI becomes an unverifiable black box. In science this acts as a red flag, and if a process is not well understood then someone will inevitably set out to document it and postulate a theory that can be confirmed by experimental observation.

The computational scientists I have spoken to about this accept that we routinely deploy fudge factors, or approximations, which we know are imperfect but serve a purpose; we console ourselves that there is usually published science behind their use. As humans we are actually quite limited in the scope of the information we can process in pursuit of a solution, and this is exactly what DL models are exceedingly good at.

Now take the case of a DL model that has been trained to approximate some computationally expensive part of a time-critical simulation. We know what data went into training it, though we may not understand the significance of some of it. We have observed the outputs, and at some point they will meet a set criterion which means they are ‘good enough’ to use. But all models have corner cases; you can call them bugs if you like. In the event that a DL model produces a result that trips some sanity check, how do you debug or verify the model, especially one whose creation a human hasn’t explicitly guided?
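To make the scenario concrete, here is a minimal sketch of what such a guarded surrogate might look like. This is purely illustrative Python with hypothetical names (`surrogate_predict`, `full_solver`, and the energy-drift check are my assumptions, not anything prescribed by the article): the fast DL approximation is used by default, and a cheap physics-based sanity check triggers a fallback to the trusted numerical routine.

```python
import numpy as np

def energy_drift(state_in, state_out):
    """Cheap invariant check: relative change in a proxy 'energy'
    (here simply the sum of squares of the state vector)."""
    e_in = np.sum(state_in ** 2)
    e_out = np.sum(state_out ** 2)
    return abs(e_out - e_in) / max(e_in, 1e-12)

def step(state, surrogate_predict, full_solver, tolerance=1e-3):
    """Advance one timestep, preferring the fast DL surrogate but
    falling back to the verified numerical solver when the sanity
    check trips."""
    candidate = surrogate_predict(state)
    if energy_drift(state, candidate) > tolerance:
        # Corner case detected: record it for later investigation
        # of the model, then recompute with the trusted solver.
        print("sanity check tripped; falling back to full solver")
        candidate = full_solver(state)
    return candidate
```

The awkward part is what to do with the recorded failures: with a conventional routine you can step through the code, but with a trained model there is no comparable line-by-line account to inspect.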

It’s not so much that these models won’t be able to do the job, but we will naturally start to question how comfortable we are as scientists relying on a model that we don’t understand or can’t verify. Like most scientists and engineers, I prefer to have a mental model of a process that is a bit more sophisticated than ‘it just works.’

As a result, I do think that the uptake of AI in HPC will be tempered by the natural reluctance of many to see too many black boxes in their workflows. Perhaps there will be moves to ensure that the AI frameworks support some sort of human-verifiable intermediate representation, rather than us just making the leap of faith that the AI is right.
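What such a human-verifiable representation might look like is an open question. One modest, long-established example is input sensitivity analysis; the sketch below is hypothetical code (not from any particular framework) that perturbs each input feature of a black-box model and reports which inputs the prediction actually depends on, giving a domain scientist at least something to compare against their own mental model.

```python
import numpy as np

def input_sensitivity(model, x, eps=1e-4):
    """Finite-difference sensitivity of a black-box model's scalar
    output to each input feature: a crude but human-readable
    summary of which inputs the prediction actually depends on."""
    base = model(x)
    sens = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps          # perturb one feature at a time
        sens[i] = (model(xp) - base) / eps
    return sens

# Usage: if the model claims to approximate, say, a viscosity term,
# a scientist can at least check that the largest sensitivities
# correspond to the physically relevant inputs.
# sens = input_sensitivity(trained_model, sample_input)
```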

As humans we also rely on intuition, which often requires an equivalent leap of faith, but as scientists we’re on the brink of creating systems whose operation we don’t understand and can’t trace. The power of deep learning models, with their ability to ingest prodigious quantities of widely differing data and provide insights, can’t be ignored, but the temptation to waive the requirement for explainability should be resisted.

[1] https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/

About the Author

Dairsie Latimer, Technical Advisor at Red Oak Consulting, has a somewhat eclectic background, having worked in a variety of roles on both the supplier and client side, across the commercial and public sectors, as a consultant and software engineer. Following an early career in computer graphics, micro-architecture design and full-stack software development, he has over twelve years’ specialist experience in the HPC sector, ranging from developing low-level libraries and software for novel computing architectures to porting complex HPC applications to a range of accelerators. Dairsie joined Red Oak Consulting (@redoakHPC) in 2010, bringing his wealth of experience to both the business and its customers.
