Like many other HPC professionals I’m following the hype cycle around Machine Learning/Deep Learning with interest. I subscribe to the view that we’re probably approaching the ‘peak of inflated expectations’ but not quite yet starting the descent into the ‘trough of disillusionment.’
This still raises the probability that we are seeing the emergence of a truly disruptive presence in the HPC space – but perhaps not for the reasons you might expect. We’ve already seen how the current dominance of GPUs in the training of current ML/DL techniques has powered Nvidia to record revenues in the datacenter.
But is that hegemony set to be challenged? At last count there were 25 or more start-ups emerging from stealth or already within a few quarters of shipping hardware implementations aimed directly at accelerating aspects of training and inference.
They will be looking to capture market share from the current incumbents (Intel and Nvidia) as well as positioning themselves for the expected growth in ML/DL for edge computing applications. These companies are also going up against several of the hyperscalers and behemoths of the consumer market that are rolling their own inference engines (though admittedly mostly aimed at the mobile/edge space).
Since we seem to have accepted that HPC and big data are two elements of the same problem, how will the fact that research and development for ML/DL (regardless of domain) is often carried out on HPC systems skew procurements in the next few years? Looking at the latest crop of petascale and exascale pathfinders, their performance stems mostly from Nvidia’s V100s. However, smaller-scale, more general-purpose systems are still predominantly homogeneous in composition, with modest if any GPU deployment.
What’s interesting about this is that accelerators are now mainstream at the upper end of the market. While both CPUs and GPUs work well with the existing ML frameworks, it’s clear that the new entrants are likely to bring significant advantages in performance and power efficiency, even when measured against Nvidia’s mighty V100. What odds on Nvidia having to split their Tesla line to produce pure ML/DL-targeted accelerators? How will this affect the way in which we procure heterogeneous HPC systems?
I personally think ML/DL methodology has, and will continue to have, a more immediate practical impact at the ‘edge’ than in scientific simulation (and there are lots of reasons for this), but there is no doubt that ML/DL will cohabit with more traditional HPC applications on many research systems.
Can we please stop abusing the term AI?
Like many, I have a pet peeve, which is the tendency to conflate the traditional meaning of Artificial Intelligence (AI) with ML and DL. If we must use the term AI to encompass the various techniques by which machines can build models that approximate, and in some cases outperform, human experts in a problem area, can we at least start using the term Artificial General Intelligence (AGI) more widely for the traditional meaning? There’s a useful primer on the subject on EnterpriseTech which saved me from having to write it myself.
So what will AI be good for in HPC and Big Data?
There are, of course, many arrows in the AI quiver, and many are already successfully deployed as part of various HPC workflows, but most are essentially used to automate data analysis and visualization tasks that can be performed by humans (or at least by programs written by humans). The models have been conceived, built and trained by humans to replicate or improve upon some data analytics task.
The pursuit of new knowledge from discrete data is still very much beyond current AI, and firmly in the territory of AGI; it also speaks to the method of scientific enquiry and human nature.
When we run simulations for a well understood, or at least well defined, scientific domain, we already know how to extract value from the data that is generated. We’ve set up the numerical simulation, after all, so we know what to expect within certain bounds and we can interpret the results within that framework and mental model.
For new science we often don’t know the right questions to pose in advance, and as a result we can’t set up a precise or well defined process to extract value from it. The discovery process is more in the form of a dialog with the data, where a series of ‘what if’ questions are posed and the results scrutinized to see what value or insights they deliver. It is by nature an iterative process and it still requires a human to judge the value of the results.
If, conceivably, we could turn over the automation of this process to an AI, it would bump up against a significant issue, which is that an AI model almost certainly won’t solve a problem in the same way as a scientist. The scientist would not necessarily be able to build a mental model that allows the transfer of knowledge, and as a result the AI becomes an unverifiable black box. In science this acts as a red flag, and if a process is not well understood then someone will inevitably set out to document it and postulate a theory that can be confirmed by experimental observation.
The computational scientists I have spoken to about this accept that we routinely deploy fudge factors, or approximations, which we know are imperfect but serve a purpose; we console ourselves that there is usually published science behind their use. As humans we are actually quite limited by the scope of the information we can process in pursuit of a solution, and this is precisely what DL models are exceedingly good at.
Now take the case of a DL model that has been trained to approximate some computationally expensive part of a time-critical simulation. We know what data went into training it, though we may not understand the significance of some of it. We have observed the outputs, and at some point they will meet a set criterion which means they are ‘good enough’ to use. But all models have corner cases; you can call them bugs if you like. In the event that a DL model produces a result that trips some sanity check, how do you debug or verify a DL model, especially one that a human hasn’t explicitly guided the creation of?
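To make the ‘sanity check’ idea concrete, here is a minimal sketch of how such a surrogate might be guarded at runtime: the surrogate is used when its output passes a cheap plausibility test, and otherwise the code logs the offending inputs and falls back to the trusted but expensive calculation. This is not taken from any particular framework; the names (`expensive_kernel`, `surrogate`, `sanity_check`) and the bound used in the check are purely illustrative assumptions.

```python
# Illustrative sketch only: guarding a trained surrogate with a runtime sanity
# check and a fallback to the full calculation. All functions are stand-ins.
import logging
import numpy as np

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("surrogate-guard")

def expensive_kernel(x: np.ndarray) -> np.ndarray:
    """Stand-in for the costly part of the simulation we want to approximate."""
    return np.sin(x) * np.exp(-0.1 * x)

def surrogate(x: np.ndarray) -> np.ndarray:
    """Stand-in for a trained DL model; here a crude series expansion that
    is deliberately poor for large |x| (a 'corner case')."""
    return x - (x ** 3) / 6.0

def sanity_check(y: np.ndarray) -> bool:
    """Cheap physical plausibility test, e.g. the quantity must stay bounded."""
    return bool(np.all(np.abs(y) <= 1.0))

def guarded_step(x: np.ndarray) -> np.ndarray:
    y = surrogate(x)
    if sanity_check(y):
        return y
    # The surrogate tripped the check: record the inputs for later debugging
    # and fall back to the trusted (but expensive) calculation.
    log.warning("surrogate output failed sanity check; falling back (x=%s)", x)
    return expensive_kernel(x)

if __name__ == "__main__":
    print(guarded_step(np.array([0.1, 0.5])))  # surrogate output accepted
    print(guarded_step(np.array([4.0, 6.0])))  # corner case, fallback used
```

Of course, a fallback only papers over the problem the article raises: it tells you *that* the surrogate misbehaved on those inputs, not *why*, and the debugging question remains open.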
It’s not so much that these models won’t be able to do the job, but we will naturally start to question how comfortable we are as scientists relying on a model that we don’t understand or can’t verify. Like most scientists and engineers, I prefer to have a mental model of a process that is a bit more sophisticated than ‘it just works.’
As a result, I do think that the uptake of AI in HPC will be tempered by the natural reluctance of many to see too many black boxes in their workflows. Perhaps there will be moves to ensure that the AI frameworks support some sort of human-verifiable intermediate representation, rather than us just making the leap of faith that the AI is right.
As humans we also rely on intuition, which often requires an equivalent leap of faith, but as scientists we’re on the brink of creating systems whose operation we don’t understand and can’t trace. The power of deep learning models, and their ability to ingest prodigious quantities of widely different data and provide insights, can’t be ignored, but the temptation to waive the explainability requirement should also be resisted.
About the Author
Dairsie Latimer, Technical Advisor at Red Oak Consulting, has a somewhat eclectic background, having worked in a variety of roles on the supplier side and client side across the commercial and public sectors as a consultant and software engineer. Following an early career in computer graphics, micro-architecture design and full-stack software development, he has over twelve years’ specialist experience in the HPC sector, ranging from developing low-level libraries and software for novel computing architectures to porting complex HPC applications to a range of accelerators. Dairsie joined Red Oak Consulting (@redoakHPC) in 2010, bringing his wealth of experience to both the business and its customers.