The Masters of Uncertainty

By Nicole Hemsoth

September 13, 2013

According to Dr. Houman Owhadi and Dr. Clint Scovel, both of the California Institute of Technology, Bayesian methods are becoming more prevalent as advances in high performance computing continue. In this special audio-based feature interview, we talk with them about what these methods will contribute to a range of research and enterprise endeavors, what computational requirements arise as we move toward more advanced questions, and how the field is evolving – and will continue to evolve with exascale (or even quantum) class systems.

HPCwire: In the context of high performance computing, the two of you have recently argued that Bayesian methods are becoming more popular than ever for quantifying uncertainty in both science and industry. What is it about these methods that makes them necessary – and even more necessary – as we move toward ever more advanced systems?

Owhadi: Bayesian inference goes all the way back to a formula discovered by the Reverend Thomas Bayes; Pierre-Simon Laplace later took that formula and developed it much further, and this started a controversy that is now about 250 years old. What is the controversy about? In Bayesian inference, you have some prior about what reality could be, and then you condition that prior on some data that you observe. That is basically Bayes' rule.

Now Laplace took that one step further. He said, OK, I don't really need to have an exact prior – an exact measure corresponding to what reality could be. I could just make up a prior, a belief about what reality could be, and then use Bayes' formula to update my belief. And this started the field that we know today as Bayesian inference.

In the 50’s, Bayesian inference was mainly looked at as a curiosity because we couldn’t really compute those Bayesian posteriors for complex systems. But now with the advent of high performance computing, we can actually compute those posterior probabilities. Since Bayesian inference is also an elegant and simple way of combining information with beliefs, it’s becoming increasingly popular.
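
As a minimal illustration of the update Owhadi describes – a prior conditioned on observed data via Bayes' rule – here is a sketch using a conjugate Beta prior for an unknown success probability. The prior and the data are invented for illustration only.

```python
# A minimal sketch of Bayes' rule with a conjugate Beta prior (illustrative numbers).
from scipy import stats

prior_a, prior_b = 2.0, 2.0      # prior belief about an unknown success rate p
successes, trials = 7, 10        # hypothetical observed data

# With a Beta prior and binomial data, Bayes' rule reduces to a parameter update.
posterior = stats.beta(prior_a + successes, prior_b + (trials - successes))

print("posterior mean:", round(float(posterior.mean()), 3))   # ~0.643
print("95% credible interval:", posterior.interval(0.95))
```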

HPCwire: Dr. Scovel, to build on what he just said – HPC is so often considered to be about increasing fidelity and resolution, moving as close to reality as possible. So where does uncertainty fit in with the next generation of systems and applications?

Scovel: Well, certainly no one believes that those systems compute anything exactly. There's always an error, so having some confidence in the results is always going to be of interest. There's another way these things fit together: not only is uncertainty quantification useful for high performance computing, but high performance computing is useful for uncertainty quantification, because the computation he was describing – the kind that has only become feasible since the '50s – rests on our ability to do these numerical computations, in particular Markov chain Monte Carlo simulations to compute these posteriors.
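
For posteriors with no closed form, the Markov chain Monte Carlo simulations Scovel mentions are the usual workhorse. Below is a minimal random-walk Metropolis sketch for a toy model (a normal likelihood with known variance and a normal prior on its mean); the model and numbers are illustrative, not the speakers' application.

```python
# Random-walk Metropolis for a toy posterior over the mean mu of normal data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.5, 1.0, size=50)         # synthetic observations

def log_posterior(mu):
    log_prior = -0.5 * mu**2                 # N(0, 1) prior on mu
    log_lik = -0.5 * np.sum((data - mu)**2)  # N(mu, 1) likelihood
    return log_prior + log_lik

samples, mu = [], 0.0
for _ in range(20000):
    proposal = mu + rng.normal(0.0, 0.3)     # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal                        # accept; otherwise keep current mu
    samples.append(mu)

burned = np.array(samples[5000:])            # drop burn-in
print("posterior mean of mu:", round(burned.mean(), 3))   # close to the sample mean
```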

HPCwire: Let's talk about which areas of industry and scientific computing are most important here. Where is this most valuable?

Owhadi: Risk analysis. Climate modeling. Take, for instance, Boeing. When Boeing is developing a new plane, most of the budget goes into the safety assessment of the new model. What you have to understand about that industry is that they have to certify that their new model of airplane has a probability of a catastrophic event smaller than 10 to the power of minus nine per hour of flight. Now, that is really small.

And of course, they cannot fly one billion airplanes and just see how many crash, so they have to assess the safety of their system with limited information. Take another example: you are JPL, you are designing a new satellite that you want to send around a planet in the solar system, and you are spending a lot of money on it. How do you certify that your system is not going to crash? One way is to build 1,000 of these satellites and just count how many crash, but that would be too costly. So you have to do it with a very limited amount of data. This has created an emerging field called uncertainty quantification, which sits at the interface of probability, statistics, and computer science.

It mainly has to do with engineering systems characterized by a low number of samples and complex information. The way we see it being pushed forward is basically to process information in an optimal way – to assess risk in an optimal way – without making assumptions that may not be true, and without ignoring relevant information.

So let me explain – at the end of the day, our point of view is that you cannot really say whether a piece of information or data is accurate unless you test it. But once that information is given to you, the best thing you can do is to process it in an optimal way. Basically, what we are striving to do is develop an algorithmic framework that allows us to do just that: process information in an optimal way. Now, you can imagine that there are plenty of places where you can apply this.
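
To make the certification numbers above concrete, here is a hedged back-of-the-envelope calculation of how many failure-free test hours direct testing alone would need to certify a 10^-9 per-hour failure rate. It uses the classical binomial bound for zero observed failures (the "rule of three") and is purely illustrative.

```python
# How many failure-free test hours would direct testing need to certify a failure
# rate below 1e-9 per hour at 95% confidence, with zero observed failures?
# Exact binomial bound: (1 - p)^n = 1 - confidence, i.e. roughly n = 3 / p.
import math

target_rate = 1e-9       # certification target, per flight hour
confidence = 0.95

n_required = math.log(1 - confidence) / math.log(1 - target_rate)
print(f"failure-free flight hours required: {n_required:.3e}")   # ~3.0e9
```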

HPCwire: Dr. Scovel, I believe this leads into your pet project right now, which is the scientific computation of optimal estimators. Where does that fit into the conversation we're having? Can you describe it more thoroughly?

Scovel: Yes. When he was talking about doing this in an optimal way, the first question is what that means. Instead of jumping straight to solutions, the first thing we do is formulate a problem that incorporates everything – the customer's objectives, the available information, what we know about that information, what the domain experts know about it, et cetera – and then you formulate this optimization problem, which essentially defines what it means to be an optimal solution to the question you've asked – like how reliable that satellite is going to be.

Where this is new is that, historically, what has been done is that you provide some model for the process and you see what happens with the model. Ours is different. We're saying we want to formulate the problem that we're actually trying to solve, and we're going to use our computing capability – in particular, high performance computing – to solve it. I think the history here is very similar to that of Bayesian methods. Historically, the reason people didn't go down this path is that we didn't have the computing power to do it, but I think we now do have the computing power to actually solve these problems – optimal estimation problems, or optimal prediction problems, where "optimal" means optimal over some set of assumptions that we're all willing to agree on.
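
As a heavily simplified illustration of the kind of formulation Scovel describes, the sketch below writes down a toy version: the agreed-upon information is just a support interval and a bound on the mean of a performance variable, and the quantity of interest is the probability of exceeding a failure threshold. The constants and function names are invented for illustration; this is not the authors' actual framework.

```python
# Toy formulation of an "optimal bound" problem: the only agreed-upon information
# about a performance variable X is its support and a bound on its mean; the
# quantity of interest is P[X >= t]. The optimization is over probability measures,
# represented here as lists of (atom, weight) pairs. All names and numbers invented.

SUPPORT = (0.0, 1.0)      # agreed support of X
MEAN_BOUND = 0.1          # agreed bound on E[X]
THRESHOLD = 0.5           # failure threshold t

def admissible(measure):
    """True if a discrete measure respects the agreed-upon information."""
    lo, hi = SUPPORT
    return (all(lo <= x <= hi for x, _ in measure)
            and abs(sum(w for _, w in measure) - 1.0) < 1e-9
            and sum(x * w for x, w in measure) <= MEAN_BOUND + 1e-9)

def quantity_of_interest(measure):
    """P[X >= t] under a discrete measure."""
    return sum(w for x, w in measure if x >= THRESHOLD)

candidate = [(0.0, 0.8), (0.5, 0.2)]          # one admissible two-point measure
print(admissible(candidate), quantity_of_interest(candidate))   # True 0.2
```

The "optimal" certification bound is then the supremum of this quantity of interest over every admissible measure, which is exactly the kind of optimization over measures discussed below.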

HPCwire: I noticed that both of you have, in your body of research, talked about solving exascale-class problems. Those are the types of systems you're looking at to be able to do this at a much higher level – is that correct?

Owhadi: Yes. I could offer another analogy here – 200 years ago, if I were to ask you to solve a partial differential equation, you wouldn't use a computer, you would probably use your brain. You would probably not come up with a quantitative estimate of the solution, but only qualitative estimates.

Now, if I ask you the same question today, you will not use your brain to solve the partial differential equation – you will use a computer. But you will still use your brain to program the computer that will crunch numbers for you to solve the partial differential equation. This paradigm shift can be traced back to seminal work by John von Neumann and Herman Goldstine in the '50s, and to humans organized as computers at the beginning of the previous century.

Now, if I ask you today to find a statistical estimator, or to find the best possible climate model, or to find a test that will tell me whether some data I'm observing is corrupted or not – you're not going to use a computer to do that. You're going to use your brain and guesswork. What we want to do here is basically turn this guesswork into an algorithm that we'll be able to implement on a high performance cluster.

If you look at the mathematics behind this problem of turning the process of scientific discovery into an algorithm, you can basically translate it into an optimization problem, but the optimization variables are not discrete. They're not zeros and ones – they're basically infinite-dimensional objects. What we have found is a new form of calculus that allows us to turn these infinite-dimensional optimization problems into finite-dimensional optimization problems that we can start solving on computers.

Even after reduction, these optimization problems are extremely large, which is why we believe we'll probably need petascale or exascale machines to solve these kinds of problems for complex systems.
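
As a toy counterpart of that reduction, the sketch below discretizes the toy certification problem sketched earlier: the search over all probability measures on [0, 1] with a bounded mean becomes a finite linear program over weights on a grid of atoms, and the result can be checked against the closed-form answer given by Markov's inequality. This is only a caricature of the reduction Owhadi describes, not his calculus.

```python
# Toy reduction: weights on a fixed grid of atoms turn the search over all probability
# measures on [0, 1] with E[X] <= m into a finite linear program. The closed-form
# answer (Markov's inequality: sup P[X >= t] = m / t) provides a sanity check.
import numpy as np
from scipy.optimize import linprog

MEAN_BOUND, THRESHOLD = 0.1, 0.5
atoms = np.linspace(0.0, 1.0, 201)                 # candidate support points

c = -(atoms >= THRESHOLD).astype(float)            # maximize P[X >= t] -> minimize -P
A_ub, b_ub = atoms[np.newaxis, :], [MEAN_BOUND]    # E[X] <= m
A_eq, b_eq = np.ones((1, atoms.size)), [1.0]       # weights sum to one

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("discretized worst-case P[X >= t]:", round(-res.fun, 4))    # 0.2
print("closed-form bound m / t         :", MEAN_BOUND / THRESHOLD)
```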

HPCwire: What's interesting in that conversation about the systems required: I recently talked with what I believe is the only, quote, "quantum computer company," D-Wave, and this exact type of optimization problem – this best-of-all-worlds solution – is exactly the sweet spot for quantum computers. Do you believe those machines actually exist, and if so, are they a good fit for the types of problems you're seeking to solve?

Owhadi: This is interesting, because with respect to exascale machines, some people believe there are two ways to approach high performance computing. The first way is just to solve the same kinds of problems, but bigger ones. For instance, if you're interested in climate modeling, you're still going to do climate modeling: you're still going to run your model, but with a finer mesh, a finer resolution, and hopefully, instead of predicting the weather four days ahead, you'll be able to do it five days out.

What we envision here is some kind of paradigm shift where instead of numerically solving a bigger model, you actually use your computer to find the model itself. So yes, if there are quantum computers out there, that would be a great thing for this new kind of framework.

Scovel: The more computing power we have, the better our success is going to be.

Owhadi: Information doesn't necessarily come in the form of zeros and ones. If you look at the underlying optimization problems, they involve optimization variables that are probability measures and functions, and these objects live in infinite-dimensional spaces. Calculus on a computer is necessarily discrete and finite. So the first step of the technology is mathematical: you have to come up with a new form of calculus that is able to manipulate these infinite-dimensional objects. This is basically what we've done. This new form of calculus allows us to take these huge optimization problems and turn them into something discrete and finite that we can start solving on a computer.

Scovel: It's also more than that, in the sense that it's not just a question of computing power. Part of this program, and the paradigm shift we're talking about, is actually coming up with the formulation of what it means to be an optimal solution to these things. That's a big part of the program. It's not just, I know what I want to compute and I need more computing power. It's actually, what do we want to compute, and what does it mean to be the best?

I think the complexity involved in formulating these problems – formulating what it means to be an optimal predictor or an optimal estimator – requires communication at all levels of the effort, from the customer to the project leader to the domain experts – the materials science experts, the statisticians. That's instead of what happens in many places, where you do a bunch of runs, a bunch of modeling, a bunch of stuff, and then you hand all the results off to a statistician and expect them to put it together.

We're saying, no, you need to do all of that together. You need everybody communicating, so that you formulate not only the objectives you're interested in but also which pieces of information you have good confidence in, and those establish a quantitative set of realistic assumptions. Then the optimization proceeds as: OK, now we have this huge optimization problem – how do we reduce it analytically, how do we take those reduced analytic problems and implement them on the computer, and how do we know we're done? That's sort of the picture.

Owhadi: Let me give you two examples here.

The first one is investing in the stock market. Question: is there an optimal way to invest in the stock market? Currently, it's not clear how to turn this into an optimization problem, but with the framework we are developing, we think we'll be able to do that and reduce it to something we can start solving with a high performance cluster. The idea is to invest in an optimal way given the limited information you have at hand.
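
One heavily simplified baseline for "investing optimally given limited information" is robust allocation: choose weights that maximize the worst-case expected return over a small set of candidate return models. The sketch below is illustrative only – the scenario numbers are invented, and it falls far short of the framework Owhadi envisions.

```python
# Toy robust allocation: maximize the worst-case expected return over a few
# candidate return models (all numbers invented for illustration).
import numpy as np
from scipy.optimize import linprog

scenarios = np.array([[0.06, 0.02, 0.04, 0.01],    # candidate expected-return models
                      [0.01, 0.05, 0.03, 0.02],
                      [0.03, 0.03, 0.01, 0.05]])
n_assets = scenarios.shape[1]

# Variables [w_1 .. w_4, t]: maximize t subject to scenarios @ w >= t, sum(w) = 1, w >= 0.
c = np.append(np.zeros(n_assets), -1.0)                        # minimize -t
A_ub = np.hstack([-scenarios, np.ones((len(scenarios), 1))])   # t - s @ w <= 0
b_ub = np.zeros(len(scenarios))
A_eq = np.append(np.ones(n_assets), 0.0)[np.newaxis, :]        # weights sum to one
b_eq = [1.0]
bounds = [(0, None)] * n_assets + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("weights:", np.round(res.x[:n_assets], 3))
print("worst-case expected return:", round(res.x[-1], 4))
```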

Let me give you another example. Consider the game of chess. We have computers playing chess, and they play chess very well. Question: can you have a computer play an information game – an information war? For instance, you have two armies playing this information war – or you have two companies, each with limited information about the other, that have decisions to make with respect to what the other company could or could not know and could or could not do. You can look at this as an information game, but the board is not composed of just 64 squares, and the moves are not discrete – they could be anything.

If you want to start addressing these kinds of problems, you need some new mathematics that allows you to manipulate pieces of information that do not live on a chessboard. Developing this new mathematics is basically the first step in our technology.
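
For contrast, the discrete baseline case – a computer playing a small zero-sum matrix game – can already be handled with classical tools, as the sketch below shows using fictitious play on a made-up 3x3 payoff matrix. The continuous, information-rich games described above are exactly what this baseline does not cover.

```python
# Fictitious play on a made-up 3x3 zero-sum game: each player repeatedly best-responds
# to the opponent's empirical mixture; the empirical frequencies approach optimal
# mixed strategies (here, roughly uniform play).
import numpy as np

payoff = np.array([[ 0.0,  1.0, -1.0],     # row player's payoff matrix (illustrative)
                   [-1.0,  0.0,  1.0],
                   [ 1.0, -1.0,  0.0]])

row_counts = np.array([1.0, 0.0, 0.0])     # arbitrary opening moves
col_counts = np.array([1.0, 0.0, 0.0])

for _ in range(20000):
    row_counts[np.argmax(payoff @ (col_counts / col_counts.sum()))] += 1
    col_counts[np.argmin((row_counts / row_counts.sum()) @ payoff)] += 1

print("row mixed strategy:", np.round(row_counts / row_counts.sum(), 3))
print("col mixed strategy:", np.round(col_counts / col_counts.sum(), 3))
```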

HPCwire: Where do you see this maybe revolutionizing certain approaches to computing down the road?

Owhadi: I think that eventually we will use computers to help the process of scientific discovery – not just to solve mathematical equations, but to develop the mathematical equations themselves. We see this going into the field of machine learning. In machine learning, if you want to develop an intelligent agent, you basically decompose the tasks you ask the computer to do into small steps, and you pre-chew the work for the computer.

What this technology we are developing will allow us to do – this is basically a long-term vision – is give a computer the ability to develop a model of reality, act on that model, and update the model based on the feedback it receives from reality. This is basically a change in machine learning.
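
A minimal sketch of that loop – keep a model of reality, act on it, update it from feedback – is a Bayesian bandit agent using Thompson sampling. The two-armed "reality" below is invented for illustration and is far simpler than what Owhadi has in mind.

```python
# Thompson-sampling agent: maintain Beta posteriors over each arm's success rate,
# act on a sampled model, and update the model from the observed feedback.
import numpy as np

rng = np.random.default_rng(0)
true_success_rates = [0.35, 0.55]              # hidden "reality" (illustrative)
alpha = np.ones(2); beta = np.ones(2)          # Beta(1, 1) priors on each arm

for _ in range(2000):
    sampled_models = rng.beta(alpha, beta)     # draw one plausible model of reality
    arm = int(np.argmax(sampled_models))       # act on that model
    reward = rng.random() < true_success_rates[arm]   # feedback from "reality"
    alpha[arm] += reward                       # Bayesian update of the model
    beta[arm] += 1 - reward

print("posterior means:", np.round(alpha / (alpha + beta), 3))
print("pulls per arm:", (alpha + beta - 2).astype(int))
```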
