The Masters of Uncertainty

By Nicole Hemsoth

September 13, 2013

According to Dr. Houman Owhadi and Dr. Clint Scovel, both from the California Institute of Technology, Bayesian methods are becoming more prevalent as high performance computing advances continue. In this special audio-based feature interview, we talk with both about what these methods will contribute to a number of research and enterprise endeavors, what computational requirements exist as we move toward more advanced questions, and how the field is evolving—and will continue to evolve with exascale (or even quantum) class systems.

HPCwire: In the context of high performance computing, the two of you have recently argued that Bayesian methods are becoming more popular than ever for quantifying uncertainty in both science and industry. What is it about these methods that makes them necessary, and even more so, as we move toward ever more advanced systems?

Owhadi: Bayesian inference goes all the way back to a formula discovered by the Reverend Thomas Bayes, and after he found that formula, Pierre-Simon Laplace took his research and developed it further into the field of Bayesian inference. This started a 250-year-old controversy. What is the controversy about? In Bayesian inference, you have some prior about what reality could be, then you condition that prior on some data that you observe. This is basically Bayes' rule.

Now, Pierre-Simon Laplace took that one step further. He said, OK, I don't really need to have an exact prior – an exact measure corresponding to what reality could be. I could just make up a prior, a belief about what reality could be, and then use Bayes' formula to update my belief. And this started the field that we know today as Bayesian inference.

In the '50s, Bayesian inference was mainly regarded as a curiosity because we couldn't really compute those Bayesian posteriors for complex systems. But now, with the advent of high performance computing, we can actually compute those posterior probabilities. Since Bayesian inference is also an elegant and simple way of combining information with beliefs, it's becoming increasingly popular.
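
To make the prior-to-posterior update concrete, here is a minimal sketch of Bayes' rule for a coin-flip example with a conjugate Beta prior. This is an editorial illustration rather than part of the interview; the prior parameters and the observed counts are hypothetical.

```python
# Sketch: Bayes' rule as prior belief + observed data -> updated belief.
# A Beta prior on a coin's probability of heads is conjugate to the Bernoulli
# likelihood, so this toy posterior has a closed form and needs no heavy computation.

alpha_prior, beta_prior = 2.0, 2.0   # hypothetical prior: roughly a fair coin
heads, flips = 7, 10                 # hypothetical observed data

# Conjugate update: Beta(alpha, beta) prior + Bernoulli data
# -> Beta(alpha + heads, beta + tails) posterior.
alpha_post = alpha_prior + heads
beta_post = beta_prior + (flips - heads)

prior_mean = alpha_prior / (alpha_prior + beta_prior)
posterior_mean = alpha_post / (alpha_post + beta_post)

print(f"Prior mean P(heads):     {prior_mean:.3f}")      # 0.500
print(f"Posterior mean P(heads): {posterior_mean:.3f}")  # 0.643
```

For complex systems the posterior has no closed form like this, which is where the high performance computing Owhadi mentions comes in.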

HPCwire: Dr. Scovel, to build on what he just said – HPC is so often considered to be about increasing fidelity and resolution, moving as close to reality as possible. So where does uncertainty fit in the next generation of systems and applications?

Scovel: Well, certainly no one believes that those systems compute anything exactly. There's always an error, so having some confidence in the results is always going to be of interest. There's another way these things fit together: not only is uncertainty quantification useful for high performance computing, but high performance computing is useful for uncertainty quantification, because the computation he mentioned, which only became practical after the '50s, is basically about our ability to do these numerical computations – in particular, Markov chain Monte Carlo simulations to compute these posteriors.
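
As a rough illustration of the Markov chain Monte Carlo computation Scovel refers to, here is a minimal random-walk Metropolis sketch that draws samples from a posterior known only up to a normalizing constant. The target density and proposal step size are hypothetical stand-ins, not anything from the speakers' work.

```python
# Sketch: random-walk Metropolis sampling of an unnormalized posterior.
# The target below stands in for log(prior(theta) * likelihood(data | theta));
# a standard normal is used purely for illustration.

import math
import random

def log_unnormalized_posterior(theta: float) -> float:
    return -0.5 * theta ** 2  # hypothetical target, known only up to a constant

def metropolis(n_samples: int, step: float = 1.0, seed: int = 0) -> list:
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)          # symmetric proposal
        log_accept = (log_unnormalized_posterior(proposal)
                      - log_unnormalized_posterior(theta))
        if rng.random() < math.exp(min(0.0, log_accept)):  # accept/reject step
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis(50_000)
print("Posterior mean estimate:", sum(draws) / len(draws))
```

Running many long chains like this for high-dimensional posteriors is exactly the kind of workload where HPC resources matter.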

HPCwire: Let's talk about which areas of industry and scientific computing matter most here. Where is this most valuable?

Owhadi: Risk analysis. Climate modeling. Take, for instance, Boeing. When Boeing is developing a new plane, most of the budget goes into the safety assessment of the new model. What you have to understand about that industry is that they have to certify that their new model of airplane has a probability of a catastrophic event smaller than 10 to the power of minus nine per hour of flight. Now, that is really small.

And of course, they cannot fly one billion airplanes and just see how many crash, so they have to assess the safety of their system with limited information. Take another example: you are JPL, you want to design a new satellite, you want that satellite to go around a planet in the solar system, and you are spending a lot of money on it. How do you certify that your system is not going to crash? One way is to build 1,000 of these satellites and just count how many crash, but that would be too costly. So you have to do it with a very limited amount of data. This has created an emerging field called uncertainty quantification, which sits at the interface of probability, statistics, and computer science.
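
To see why a one-in-a-billion-per-hour failure rate cannot be certified by testing alone, here is a small editorial sketch using the classical one-sided binomial bound: after n failure-free trials, the 95 percent upper confidence bound on the failure probability is 1 - 0.05^(1/n), roughly 3/n (the "rule of three"). The trial counts are hypothetical.

```python
# Sketch: what failure probability can n failure-free trials certify?
# Exact one-sided binomial (Clopper-Pearson) bound with zero observed failures:
# at confidence level c, p <= 1 - (1 - c)**(1/n), roughly 3/n at 95% confidence.

import math

def upper_bound_zero_failures(n_trials: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the failure probability after n failure-free trials."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} failure-free trials -> p <= {upper_bound_zero_failures(n):.2e}")

# Trials needed to certify p <= 1e-9 at 95% confidence by testing alone:
n_needed = math.ceil(math.log(0.05) / math.log(1.0 - 1e-9))
print(f"Trials needed for 1e-9: about {n_needed:,}")  # roughly 3 billion
```

This is why certification has to lean on models and on whatever other information is available, processed as well as possible, rather than on brute-force testing.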

It mainly has to do with engineering systems characterized by a small number of samples and complex information. The way we see it being pushed forward is basically to be able to process information in an optimal way – to assess the risk in an optimal way – without making assumptions that may not be true and without ignoring relevant information.

So let me explain – at the end of the day, our point of view is that you cannot really say whether a piece of information or a piece of data is accurate unless you test it. But once that information is given to you, the best thing you can do is process it in an optimal way. Basically, what we are striving to do is develop an algorithmic framework that allows us to do just that: process information in an optimal way. Now, you can imagine that there are plenty of places where you can apply these things.

HPCwire: Dr. Scovel, I believe this leads into your pet project right now, which is the scientific computation of optimal estimators. Where does that fit into the conversation we're having? Can you describe it more thoroughly?

Scovel: Yes, when he was talking about doing this in an optimal way, the first question is what that means. Instead of providing solutions, the first thing we do is actually formulate a problem that incorporates everything – the customer's objectives, the available information, what we know about that information, what the domain experts know about it, et cetera – and then you formulate an optimization problem that essentially defines what it means to be an optimal solution to the question you've asked – like, how reliable is that satellite going to be?

Where this is new is that, historically, what has been done is to provide some model for the process and see what happens with that model. Ours is different. We're saying we want to formulate the problem we're trying to solve and use our computing capability – in particular, high performance computing – to solve it. I think the history here is very similar to that of Bayesian methods. Historically, the reason people didn't go down this path is that we didn't have the computing power to do it. But I think we now do have the computing power to actually solve these problems – to define optimal estimation problems, or optimal prediction problems, where optimal means optimal over some set of assumptions that we're all willing to agree on.
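
As one small, hypothetical example of what "optimal over a set of agreed assumptions" can look like in code, consider estimating the bias of a coin from n flips when the only agreed assumption is that the bias lies in [0, 1]. Comparing estimators by their worst-case mean-squared error over that set turns "which estimator is best" into an explicit optimization. The estimators and sample size below are standard textbook choices, not the authors' method.

```python
# Sketch: ranking estimators by worst-case risk over an agreed assumption set.
# Assumption set (hypothetical): n i.i.d. Bernoulli(p) observations, p in [0, 1].
# We compare the plain sample mean with the classical "add sqrt(n)/2 successes
# and sqrt(n)/2 failures" estimator by worst-case mean-squared error.

import math

N = 100  # hypothetical sample size

def mse_sample_mean(p: float, n: int = N) -> float:
    # Unbiased, so MSE is just the variance p(1 - p) / n.
    return p * (1.0 - p) / n

def mse_shrunken(p: float, n: int = N) -> float:
    # Estimator (k + sqrt(n)/2) / (n + sqrt(n)): biased, but constant risk.
    a, b = math.sqrt(n) / 2.0, math.sqrt(n)
    bias = (a - b * p) / (n + b)
    variance = n * p * (1.0 - p) / (n + b) ** 2
    return variance + bias ** 2

grid = [i / 1000.0 for i in range(1001)]
print("Worst-case MSE, sample mean:        %.5f" % max(mse_sample_mean(p) for p in grid))
print("Worst-case MSE, shrunken estimator: %.5f" % max(mse_shrunken(p) for p in grid))
```

The shrunken estimator wins under this worst-case criterion even though the sample mean is better for many individual values of p, which is the sense in which "optimal" only means something once everyone agrees on the assumption set and the objective.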

HPCwire: I noticed that both of you, in your body of research, have talked about solving exascale-class problems. Those are the types of systems you're looking at to be able to do this at a much higher level, is that correct?

Owhadi: Yes. I could offer another analogy here – 200 years ago, if I were to ask you to solve a partial differential equation, you wouldn't use a computer, you would probably use your brain. You would probably not come up with a quantitative estimate of the solution, only qualitative estimates.

Now, if I ask you the same question today, you will not use your brain to solve the partial differential equation – you will use a computer. But you will still use your brain to program the computer that will crunch numbers for you to solve the partial differential equation. This paradigm shift can be traced back to seminal work by John von Neumann and Herman Goldstine in the '50s, and to humans organized as computers at the beginning of the previous century.

Now, today, if I asked you to find a statistical estimator, or to find the best possible climate model, or to find a test that will tell me whether some data I'm observing is corrupted or not – you're not going to use a computer to do that. You're going to use your brain and guesswork. What we want to do here is basically turn this guesswork into an algorithm that we'll be able to implement on a high performance cluster.

If you look at the mathematics behind this problem of turning the process of scientific discovery into an algorithm, you can basically translate it into an optimization problem, but the optimization variables are not discrete. They're not zeros and ones – they're basically infinite-dimensional objects. What we have found is a new form of calculus that allows us to turn these infinite-dimensional optimization problems into finite-dimensional optimization problems that we can start solving on computers.
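
Here is a small editorial sketch of the flavor of reduction Owhadi describes, on a toy problem: the worst-case probability that a quantity X in [0, 1] exceeds a threshold, over all probability distributions with a given mean. The optimization variable is a measure (infinite-dimensional), but the extreme cases are measures supported on just two points, so a search over two-point measures suffices. The numbers are hypothetical, and the answer recovers Markov's inequality.

```python
# Sketch: an optimization over all probability measures on [0, 1] with a fixed
# mean reduces to an optimization over two-point measures. We bound the
# worst-case P[X >= a] subject to E[X] = m, with m and a chosen arbitrarily.

import itertools

m, a = 0.2, 0.5                         # hypothetical mean constraint and threshold
grid = [i / 200.0 for i in range(201)]  # candidate support points in [0, 1]

best = 0.0
for x1, x2 in itertools.product(grid, repeat=2):
    if x1 == x2:
        continue
    w = (m - x2) / (x1 - x2)            # weight on x1 forced by the mean constraint
    if not 0.0 <= w <= 1.0:
        continue
    prob_exceed = w * (x1 >= a) + (1.0 - w) * (x2 >= a)
    best = max(best, prob_exceed)

print(f"Worst-case P[X >= {a}] over two-point measures with E[X] = {m}: {best:.3f}")
print(f"Markov's bound m/a for comparison: {m / a:.3f}")
```

The problems the authors describe involve many constraints and quantities of interest, so the reduced finite-dimensional problems are far larger than this, but the collapse from measures to finitely many support points is the step that makes them computable at all.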

Even after reduction, these optimization problems are extremely large, so that's why we believe that we'll probably need exascale or petascale machines to solve these kinds of problems for complex systems.

HPCwire: What's interesting, in that conversation about the systems required: I recently talked to what I believe is the only, quote, "quantum computer company," D-Wave, and this exact type of optimization problem – this best-of-all-worlds solution – is exactly the sweet spot for quantum computers. Do you believe those actually exist, and if so, are they a good fit for the types of problems you're seeking to solve?

Owhadi: This is interesting, because with respect to exascale machines, some people believe that there are two ways to approach high performance computing. The first way is just to solve the same kinds of problems, but bigger. So for instance, if you're interested in climate modeling, you're still going to do climate modeling. You are still going to run your model, but with a finer mesh, with a finer resolution, and hopefully instead of predicting the weather four days ahead, you'll be able to do it five days out.

What we envision here is some kind of paradigm shift where instead of numerically solving a bigger model, you actually use your computer to find the model itself. So yes, if there are quantum computers out there, that would be a great thing for this new kind of framework.

Scovel: The more computing power we have, the better our success is going to be.

Owhadi: Information doesn't necessarily come in the form of zeros and ones. If you look at the underlying optimization problems, they involve optimization variables that are probability measures and functions, and these objects live in infinite-dimensional spaces. Calculus on a computer is necessarily discrete and finite. So the first step of the technology is mathematical. You have to come up with a new form of calculus that is able to manipulate these infinite-dimensional objects. This is basically what we've done. This new form of calculus allows us to take these huge optimization problems and turn them into something discrete and finite that we can start solving on a computer.

Scovel: It's also more than that, in the sense that it's not just a question of computing power. Part of this program and this paradigm shift that we're talking about is actually coming up with a formulation of what it means to be an optimal solution to these things. That's actually a big part of the program. It's not just, I know what I want to compute and I need more computing power. It's actually, what do we want to compute, and what does it mean to be the best?

I think the complexity involved in formulating these problems – formulating what it means to be an optimal predictor or an optimal estimator – requires communication across all levels of the effort: from the customer to the project leader to the domain experts, the materials science experts, the statisticians. That's instead of what happens in many places, where you do a bunch of runs, a bunch of modeling, a bunch of stuff, and then you hand all the results off to a statistician and want them to put it all together.

We're saying, no, you need to do all of that together. You need everybody communicating so that you formulate not only the objectives you're interested in but also the pieces of information you have good confidence in, and those establish a quantitative set of realistic assumptions. Then the optimization proceeds: OK, now we have this huge optimization problem – how do we reduce it analytically, how do we take those reduced analytic problems and implement them on the computer, and how do we know we're done? That's sort of the picture.

Owhadi: Let me give you two examples here.

The first one is investing in the stock market. Question: is there an optimal way to invest in the stock market? Currently, it's not clear how to turn this into an optimization problem, but with the framework we are developing, we think we'll be able to turn it into an optimization problem and reduce it to something we can start solving with a high performance cluster. The idea is to invest in an optimal way given the limited information you have at hand.

Let me give you another example. Consider the game of chess. We have computers playing chess, and they play chess very well. Question: can you have a computer play an information game – an information war? So for instance, you have two armies playing this information war – or you have two companies, and each company has limited information about the other and has decisions to make with respect to what the other company could or could not know and could or could not do. You can look at this as an information game, but the board is not composed of just 64 squares, and the moves are not discrete – they could be anything.

If you want to start addressing these kinds of problems, you need new mathematics that allows you to manipulate pieces of information that do not live on a chessboard. Developing this new mathematics is basically the first step in our technology.

HPCwire: Where do you see this maybe revolutionizing certain approaches to computing down the road?

Owhadi: I think that eventually we will use computers to aid the process of scientific discovery – not just to solve mathematical equations, but to develop the mathematical equations themselves. We see this going into the field of machine learning. In machine learning, if you want to develop an intelligent agent, you basically decompose the tasks you're asking the computer to do into small steps, and you chew the work up for the computer.

What this technology we are developing will allow us to do – and this is basically a long-term vision – is give a computer the ability to develop a model of reality, act on that model, and update it based on the feedback it receives from reality. This is basically a change in machine learning.
