The Masters of Uncertainty

By Nicole Hemsoth

September 13, 2013

According to Dr. Houman Owhadi and Dr. Clint Scovel, both from the California Institute of Technology, Bayesian methods are becoming more prevalent as high performance computing advances continue. In this special audio-based feature interview, we talk with both researchers about what these methods will contribute to a number of research and enterprise endeavors, what computational requirements exist as we move toward more advanced questions, and how the field is evolving, and will continue to evolve, with exascale (or even quantum) class systems.

HPCwire: In the context of high performance computing, the two of you have argued recently that Bayesian methods are becoming more popular than ever before for quantifying uncertainty in both science and industry. What is it about these methods that makes them necessary, and even more so, as we move toward ever more advanced systems?

Owhadi: Bayesian inference goes all the way back to a formula discovered by the Reverend Thomas Bayes; Pierre-Simon Laplace later took that formula and developed it further into the field we now call Bayesian inference. This started a controversy that is now 250 years old. What is the controversy about? In Bayesian inference, you have some prior about what reality could be, then you condition your prior on the data that you observe. This is basically Bayes' rule.

Now, Pierre-Simon Laplace took that one step further. He said: OK, I don't really need an exact prior representing what reality could be – an exact measure corresponding to reality. I can just make up a prior, a belief about what reality could be, and then use Bayes' formula to update my belief. And this started the field that we know today as Bayesian inference.
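
To make Bayes' rule concrete, here is a minimal illustrative sketch (not part of the interview) of Bayesian updating for the bias of a coin: a Beta prior, a handful of observed flips, and the posterior that Bayes' formula produces. The prior and the data are made up for illustration.

```python
# Toy illustration of Bayes' rule: prior belief + observed data -> posterior belief.
# Example: estimating the bias p of a coin with a conjugate Beta prior.
# The prior parameters and the observed flips below are invented for illustration.

def posterior_beta(alpha_prior, beta_prior, heads, tails):
    """Beta prior + binomial likelihood -> Beta posterior (conjugate update)."""
    return alpha_prior + heads, beta_prior + tails

# Prior belief: the coin is probably close to fair, encoded as Beta(2, 2).
alpha0, beta0 = 2.0, 2.0

# Observed data: 7 heads and 3 tails.
heads, tails = 7, 3

alpha1, beta1 = posterior_beta(alpha0, beta0, heads, tails)
posterior_mean = alpha1 / (alpha1 + beta1)
print(f"posterior is Beta({alpha1}, {beta1}); posterior mean bias = {posterior_mean:.3f}")
```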

In the 1950s, Bayesian inference was mainly looked at as a curiosity because we couldn't really compute those Bayesian posteriors for complex systems. But now, with the advent of high performance computing, we can actually compute those posterior probabilities. Since Bayesian inference is also an elegant and simple way of combining information with beliefs, it's becoming increasingly popular.

HPCwire: Dr. Scovel, to build on what he just said – HPC is so often considered to be about increasing fidelity and resolution, moving as close to reality as possible. So where does uncertainty fit into the next generation of systems and applications?

Scovel: Well, certainly no one believes that those systems compute anything exactly. There's always an error, so having some confidence in the results is always going to be of interest. There's another way these things fit together: not only is uncertainty quantification useful for high performance computing, but high performance computing is useful for uncertainty quantification, because the computation he was describing, which has only become practical since the 1950s, rests on our ability to do these numerical computations – in particular, Markov chain Monte Carlo simulations to compute these posteriors.
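
As a rough sketch of the kind of computation Scovel is referring to, the following is a minimal random-walk Metropolis sampler for a one-dimensional posterior. The target density and the tuning constants are illustrative assumptions only, not anything specified in the interview.

```python
# Minimal random-walk Metropolis (MCMC) sampler for a 1-D posterior.
# The target (a standard normal density, up to a constant) and the step size
# are illustrative assumptions only.

import math
import random

def log_posterior(x):
    # Unnormalized log-density of the assumed posterior: standard normal.
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0, seed=42):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)                  # symmetric random-walk proposal
        log_accept = log_posterior(proposal) - log_posterior(x)
        if rng.random() < math.exp(min(0.0, log_accept)):    # Metropolis acceptance test
            x = proposal
        samples.append(x)
    return samples

chain = metropolis(50_000)
mean = sum(chain) / len(chain)
print(f"posterior mean estimate = {mean:.3f} (true value 0 for this toy target)")
```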

HPCwire: Let's talk about which areas of industry and scientific computing matter most here. Where is this most valuable?

Owhadi: Risk analysis. Climate modeling. Take, for instance, Boeing. When Boeing is developing a new plane, most of the budget goes into the safety assessment of the new model. What you have to understand about that industry is that they have to certify that the new model of airplane has a probability of a catastrophic event smaller than 10 to the power of minus nine per hour of flight. Now, that is really small.

And of course, they cannot fly one billion airplanes and just see how many crash, so they have to assess the safety of their system with limited information. Take another example: you are JPL, you want to design a new satellite, you want it to go around a planet in the solar system, and you are spending a lot of money on it. How do you certify that your system is not going to crash? One way is to build 1,000 of these satellites and just count how many crash, but that would be too costly. So you have to do it with a very limited amount of data. This has created an emerging field called uncertainty quantification, which sits at the interface of probability, statistics, and computer science.

It mainly has to do with engineering systems characterized by a low number of samples and complex information. The way we see the field being pushed forward is to process information in an optimal way – to assess risk in an optimal way – without making assumptions that may not be true and without ignoring relevant information.

So let me explain. At the end of the day, our point of view is that you cannot really say whether a piece of information or data is accurate unless you test it. But once that information is given to you, the best thing you can do is process it in an optimal way. What we are striving to do is develop an algorithmic framework that allows us to do just that: process information in an optimal way. Now, you can imagine that there are plenty of places where you can apply this.
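
One classical ingredient for certification with very little data is a concentration-of-measure inequality. The sketch below applies McDiarmid's inequality to bound a failure probability from an assumed performance margin and assumed per-variable sensitivities; the numbers are invented, and this is not a description of any company's actual certification process.

```python
# Certification-style bound from limited information using McDiarmid's inequality:
# if G = g(X_1, ..., X_n) changes by at most D_i when X_i alone is varied, then
#     P[G <= threshold] <= exp(-2 * (E[G] - threshold)^2 / sum_i D_i^2)
# whenever E[G] > threshold.  The margin and diameters below are invented numbers.

import math

def mcdiarmid_failure_bound(mean_performance, threshold, diameters):
    margin = mean_performance - threshold
    if margin <= 0:
        return 1.0  # no useful bound without a positive performance margin
    return math.exp(-2.0 * margin**2 / sum(d * d for d in diameters))

# Assumed inputs: mean performance 5.0, failure threshold 3.0, and three input
# variables whose individual influence on G is bounded by these diameters.
bound = mcdiarmid_failure_bound(mean_performance=5.0, threshold=3.0,
                                diameters=[0.8, 1.1, 0.6])
print(f"certified failure probability <= {bound:.2e}")
```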

HPCwire: Dr. Scovel, I believe this leads into your pet project right now, which is the scientific computation of optimal estimators. Where does that fit into the conversation we're having? Can you describe it more thoroughly?

Scovel: Yes. When he was talking about doing this in an optimal way, the first question is: what does that mean? Instead of providing solutions, the first thing we do is formulate a problem that incorporates everybody's inputs – the customer's objectives, the available information, what we know about that information, what the domain experts know about it, et cetera – and then you formulate an optimization problem which essentially defines what it means to be an optimal solution to the question you've asked, such as how reliable that satellite is going to be.

Where this is new is that, historically, you would provide some model for the process and see what happens with the model. Our approach is different. We're saying we want to formulate the problem we're trying to solve, and we're going to use our computing capability – in particular, high performance computing – to solve it. I think the history here is very similar to that of Bayesian methods. Historically, the reason people didn't go down this path is that we didn't have the computing power to do it, but I think we now do have the computing power to actually solve these problems – optimal estimation problems, or optimal prediction problems, where 'optimal' means optimal over some set of assumptions that we're all willing to agree on.
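
Here is a toy version (an assumed setup, not an example from the interview) of what it means for an optimization problem to define the optimal estimator: candidate estimators are scored by their worst-case risk over a set of assumptions everyone agrees on, and the "optimal" one is whichever minimizes that worst case.

```python
# Toy version of "the optimal estimator is the solution of an optimization problem".
# Invented setup: we observe y = theta + noise, noise ~ N(0, 1), and everyone agrees
# only that the unknown theta lies in [-1, 1].
# Candidate estimators: linear shrinkage  delta_c(y) = c * y.
# Risk of delta_c at theta:  E[(c*y - theta)^2] = c^2 + (c - 1)^2 * theta^2.
# The "optimal" estimator minimizes the worst-case risk over the agreed set.

def risk(c, theta):
    return c * c + (c - 1.0) ** 2 * theta * theta

def worst_case_risk(c, thetas):
    return max(risk(c, t) for t in thetas)

thetas = [i / 100.0 for i in range(-100, 101)]   # grid over the agreed set [-1, 1]
cs = [i / 1000.0 for i in range(0, 1001)]        # grid over candidate estimators

best_c = min(cs, key=lambda c: worst_case_risk(c, thetas))
print(f"minimax shrinkage factor = {best_c:.3f}, "
      f"worst-case risk = {worst_case_risk(best_c, thetas):.3f}")
# Analytically the answer is c = 0.5 with worst-case risk 0.5.
```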

HPCwire: I noticed that both of you have, in your body of research, talked about solving exascale-class problems. Those are the types of systems you're looking at to be able to do this at a much higher level, is that correct?

Owhadi: Yes. I could use another analogy here – 200 years ago, if I were to ask you to solve a partial differential equation, you wouldn't use a computer, you would probably use your brain. You would probably not come up with a quantitative estimate of the solution, but only qualitative estimates.

Now, if I ask you the same question today, you will not use your brain to solve the partial differential equation – you will use a computer. But you will still use your brain to program the computer that crunches the numbers for you. This paradigm shift can be traced back to the seminal work of John von Neumann and Herman Goldstine in the 1950s, and to humans organized as computers at the beginning of the previous century.

Now, if I ask you today to find a statistical estimator, or the best possible climate model, or a test that will tell me whether some data I'm observing is corrupted – you're not going to use a computer to do that. You're going to use your brain and guesswork. What we want to do here is turn that guesswork into an algorithm that we'll be able to implement on a high performance cluster.

If you look at the mathematics behind this problem of turning the process of scientific discovery into an algorithm, you can basically translate it into an optimization problem, but the optimization variables are not discrete. They're not zeros and ones – they're infinite-dimensional objects. What we have found is a new form of calculus that allows us to turn these infinite-dimensional optimization problems into finite-dimensional optimization problems that we can start solving on computers.

Even after reduction, these optimization problems are extremely large, which is why we believe we'll probably need petascale or exascale machines to solve these kinds of problems for complex systems.
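
A toy problem gives the flavor of that reduction. The sketch below (an invented example) maximizes a failure probability over every probability measure on [0, 1] with a prescribed mean; classical results on moment problems allow the search to be restricted to measures with at most two support points, collapsing an infinite-dimensional problem to three numbers.

```python
# Toy illustration of reducing an optimization over measures to finite dimensions.
# Invented problem: maximize P[X >= a] over ALL probability measures on [0, 1]
# with E[X] = m.  A classical reduction says the extremizers can be sought among
# measures with at most two support points, so the infinite-dimensional search
# collapses to two atom locations and one weight.

def max_failure_probability(m, a, grid=200):
    xs = [i / grid for i in range(grid + 1)]
    best = 0.0
    for x1 in xs:
        for x2 in xs:
            if x2 == x1:
                continue
            w = (m - x1) / (x2 - x1)          # weight on atom x2 fixed by the mean constraint
            if not (0.0 <= w <= 1.0):
                continue
            prob = w * (1.0 if x2 >= a else 0.0) + (1.0 - w) * (1.0 if x1 >= a else 0.0)
            best = max(best, prob)
    return best

m, a = 0.2, 0.5
print(f"worst-case P[X >= {a}] with mean {m}: {max_failure_probability(m, a):.3f}")
# Analytically this equals m / a = 0.4 (Markov's inequality is attained here).
```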

HPCwire: What's interesting, in that conversation about the systems required: I recently talked with D-Wave, which I believe is the only, quote, "quantum computer company," and this exact type of optimization problem – this best-of-all-worlds solution – is exactly the sweet spot for quantum computers. Do you believe those actually exist, and if so, are they a good fit for the types of problems you're seeking to solve?

Owhadi: This is interesting because, with respect to exascale machines, some people believe there are two ways to approach high performance computing. The first way is just to solve the same kind of problems, but bigger. So, for instance, if you're interested in climate modeling, you're still going to do climate modeling. You're still going to run your model, but with a finer mesh, a finer resolution, and hopefully, instead of predicting the weather four days ahead, you'll be able to do it five days out.

What we envision here is some kind of paradigm shift where instead of numerically solving a bigger model, you actually use your computer to find the model itself. So yes, if there are quantum computers out there, that would be a great thing for this new kind of framework.

Scovel: The more computing power we have, the better our success is going to be.

Owhadi: Information doesn't necessarily come in the form of zeros and ones. If you look at the underlying optimization problems, they involve optimization variables that are probability measures and functions, and these objects live in infinite-dimensional spaces. Calculus on a computer is necessarily discrete and finite. So the first step of the technology is mathematical. You have to come up with a new form of calculus that is able to manipulate these infinite-dimensional objects. This is basically what we've done. This new form of calculus allows us to take these huge optimization problems and turn them into something discrete and finite that we can start solving on a computer.

Scovel: It's also more than that, in the sense that it's not just a question of computing power. Part of this program and this paradigm shift we're talking about is actually coming up with a formulation of what it means to be an optimal solution to these things. That's a big part of the program. It's not just: I know what I want to compute and I need more computing power. It's: what do we want to compute, and what does it mean to be the best?

I think the complexity involved in formulating these problems – formulating what it means to be an optimal predictor or an optimal estimator – requires communication across all levels of the effort: from the customer to the project leader to the domain experts, the materials-science experts, the statisticians. That's in contrast to what happens in many places, where you do a bunch of runs, a bunch of modeling, a bunch of stuff, and then you hand all the results off to the statisticians and expect them to put it all together.

We're saying: no, you need to do all of that together. You need everybody communicating, so that you formulate not only the objectives you're interested in but also which pieces of information you have good confidence in, and those establish a quantitative set of realistic assumptions. Then the optimization proceeds as: OK, now we have this huge optimization problem – how do we reduce it analytically, how do we take those reduced analytic problems and implement them on the computer, and how do we know when we're done? That's the picture.

Owhadi: Let me give you two examples here.

The first one is investing in the stock market. Question: is there an optimal way to invest in the stock market? Currently, it's not clear how to turn this into an optimization problem, but with the framework we are developing, we think we'll be able to do so and reduce it to something we can start solving with a high performance cluster. The idea is to invest in an optimal way given the limited information you have at hand.
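
As a very rough illustration of what "invest optimally given limited information" could mean once it is phrased as an optimization problem, the sketch below chooses portfolio weights that maximize the worst-case return over a small set of plausible return scenarios. The assets, scenarios, and numbers are all invented; this is not the authors' framework.

```python
# Toy "invest optimally given limited information" as a worst-case optimization.
# Two assets, and the only information is a handful of plausible return scenarios
# (all numbers invented).  Choose the weight on asset 1 that maximizes the
# worst-case portfolio return over those scenarios.

scenarios = [            # (return of asset 1, return of asset 2) in each scenario
    (0.08, 0.02),
    (-0.05, 0.03),
    (0.02, -0.01),
]

def worst_case_return(w):
    return min(w * r1 + (1.0 - w) * r2 for r1, r2 in scenarios)

weights = [i / 1000.0 for i in range(1001)]      # candidate weights on asset 1
best_w = max(weights, key=worst_case_return)
print(f"weight on asset 1 = {best_w:.3f}, "
      f"guaranteed return = {worst_case_return(best_w):.4f}")
```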

Let me give you another example. Consider the game of chess. We have computers playing chess, and they play very well. Question: can you have a computer play an information game – an information war? For instance, you have two armies playing this information war – or you have two companies, each with limited information about the other, and each has decisions to make with respect to what the other company could or could not know and could or could not do. You can look at this as just an information game, but the board is not composed of 64 squares and the moves are not discrete – they could be anything.

If you want to start addressing these kinds of problems, you need new mathematics that allows you to manipulate pieces of information that do not live on a chessboard. Developing this new mathematics is basically the first step in our technology.

HPCwire: Where do you see this maybe revolutionizing certain approaches to computing down the road?

Owhadi: I think that eventually we will use computers to help the process of scientific discovery, not just to solve mathematical equations but to develop the mathematical equations themselves. We see this going into the field of machine learning. In machine learning, if you want to develop an intelligent agent, you basically decompose the tasks you ask the computer to do into small steps; you chew up the work for the computer.

What this technology we are developing will allow us to do – this is a long-term vision – is give a computer the ability to develop a model of reality, act on that model, and update the model based on the feedback it receives from reality. This is basically a change in machine learning.
