The Masters of Uncertainty

By Nicole Hemsoth

September 13, 2013

According to Dr. Houman Owhadi and Dr. Clint Scovel, both from the California Institute of Technology, Bayesian methods are becoming more prevalent as high performance computing advances continue. In this special audio-based feature interview, we talk with both about what these methods will contribute to a number of research and enterprise endeavors, what computational requirements exist as we move toward more advanced questions, and how the field is evolving—and will continue to evolve with exascale (or even quantum) class systems.

HPCwire: In the context of high performance computing, the two of you have recently argued that Bayesian methods are becoming more popular than ever before for quantifying uncertainty in both science and industry. What is it about these methods that makes them necessary, and even more so, as we move toward ever more advanced systems?

Owhadi: Bayesian inference goes all the way back to a formula discovered by the Reverend Thomas Bayes; Pierre-Simon Laplace later took that formula and developed it further into the field we now call Bayesian inference. That started a 250-year-old controversy. What is the controversy about? In Bayesian inference, you have some prior about what reality could be, and then you condition that prior on some data that you observe. This is basically Bayes’ rule.

Now, Pierre-Simon Laplace took that one step further. He said, OK, I don’t really need to have an exact prior, an exact measure corresponding to what reality could be. I could just make up a prior, a belief about what it could be, and then use Bayes’ formula to update that belief. And this started the field that we know today as Bayesian inference.

In the ’50s, Bayesian inference was mainly looked at as a curiosity because we couldn’t really compute those Bayesian posteriors for complex systems. But now, with the advent of high performance computing, we can actually compute those posterior probabilities. Since Bayesian inference is also an elegant and simple way of combining information with beliefs, it’s becoming increasingly popular.
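
For reference, the rule Owhadi describes, with θ standing for the unknown quantity of interest, is usually written as

```latex
P(\theta \mid \text{data})
  = \frac{P(\text{data} \mid \theta)\,P(\theta)}{P(\text{data})}
  \;\propto\; P(\text{data} \mid \theta)\,P(\theta),
```

where P(θ) is the prior belief about what reality could be and P(θ | data) is the posterior that the rest of the conversation refers to.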

HPCwire: Dr. Scovel, to build on what he just said – HPC is so often considered to be about increasing fidelity and resolution, moving as close to reality as possible. So where does uncertainty fit in the next generation of systems and applications?

Scovel: Well, certainly no one believes that those systems compute anything exactly. There’s always an error, so having some confidence in what the results are is always going to be of interest. There’s another way these things fit together: not only is uncertainty quantification useful for high performance computing, but high performance computing is useful for uncertainty quantification, because the computation he was talking about, which has only become possible since the ’50s, is basically about our ability to do these numerical computations – in particular, Markov chain Monte Carlo simulations to compute these posteriors.
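
As a concrete illustration of the computation Scovel mentions, here is a minimal random-walk Metropolis-Hastings sketch in Python for a toy one-parameter Gaussian model; the prior, likelihood, synthetic data, and tuning constants are all hypothetical choices made for the example, not anything taken from the interview.

```python
import numpy as np

# A minimal random-walk Metropolis-Hastings sketch (illustrative only).
# Toy model: unknown mean theta with a N(0, 1) prior and Gaussian
# observations with known noise level sigma -- all hypothetical choices.
rng = np.random.default_rng(0)
y = rng.normal(1.5, 0.5, size=20)      # synthetic "observed" data
sigma = 0.5                            # assumed known observation noise

def log_posterior(theta):
    log_prior = -0.5 * theta**2                        # N(0, 1) prior
    log_lik = -0.5 * np.sum((y - theta) ** 2) / sigma**2
    return log_prior + log_lik                         # up to a constant

theta, samples = 0.0, []
for _ in range(50_000):
    proposal = theta + rng.normal(0.0, 0.2)            # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                               # accept
    samples.append(theta)

posterior = np.array(samples[10_000:])                 # discard burn-in
print(posterior.mean(), posterior.std())
```

This toy posterior has a closed form, so no sampler is actually needed; the point is that the same loop works whenever the posterior can only be evaluated up to a constant, and that evaluation is what becomes expensive, and parallelizable, for complex models.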

HPCwire: Let’s talk about which areas of industry and scientific computing this matters for most. Where is this most valuable?

Owhadi: Risk analysis. Climate modeling. Take, for instance, Boeing. When Boeing is developing a new plane, most of the budget goes into the safety assessment of the new model. What you have to understand about that industry is that they have to certify that their new model of airplane has a probability of a catastrophic event smaller than 10 to the power of minus nine per hour of flight. Now, that is really small.

And of course, they cannot fly one billion airplanes and just see how many crash, so they have to assess the safety of their system with limited information. Take another example: you are JPL, you want to design a new satellite, you want that satellite to go around a planet in the solar system, and you are spending a lot of money on it. How do you certify that your system is not going to crash? One way is to build 1,000 of these satellites and just count how many crash, but that would be too costly. So you have to do it with a very limited amount of data. This has created an emerging field called uncertainty quantification, which sits at the interface of probability, statistics, and computer science.
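
A back-of-the-envelope sketch (our illustration, not a calculation from the interview) shows why the brute-force route is hopeless: with n independent failure-free trials, the one-sided 95 percent upper confidence bound on the failure probability p satisfies (1 - p)^n = 0.05, the so-called rule of three.

```python
import math

# How many failure-free flight hours would a purely empirical certification
# of a 1e-9 per-hour failure probability require at 95% confidence?
# With n independent trials and zero failures, the upper bound p solves
# (1 - p)**n = 1 - confidence, i.e. n = ln(1 - confidence) / ln(1 - p).
target_p = 1e-9
confidence = 0.95
n_needed = math.log(1 - confidence) / math.log1p(-target_p)
print(f"failure-free flight hours needed: {n_needed:.2e}")   # ~3.0e9
```

Roughly three billion failure-free flight hours would be needed before the data alone supported the claim, which is exactly why the field relies on models plus careful handling of limited information rather than raw testing.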

It mainly has to do with engineering systems characterized by a low number of samples and complex information. The way we see it being pushed forward is basically being able to process information in an optimal way, to assess risk in an optimal way, without making assumptions that may not be true and without ignoring relevant information.

So let me explain. At the end of the day, our point of view is that you cannot really say whether a piece of information or data is accurate unless you test it. But once that information is given to you, the best thing you can do is process it in an optimal way. Basically, what we are striving to do is develop an algorithmic framework that allows us to do just that: process information in an optimal way. Now, you can imagine that there are plenty of places where you can apply these things.

HPCwire: Dr. Scovel, I believe this leads into your pet project right now, which is the scientific computation of optimal estimators. Where does that fit into the conversation we’re having? Can you describe it more thoroughly?

Scovel: Yes. When he was talking about doing this in an optimal way, the first question is: what does that mean? Instead of providing solutions, the first thing we do is actually formulate a problem that incorporates everybody’s input – the customer’s objectives, the available information, what we know about that information, what the domain experts know about it, et cetera – and then you formulate an optimization problem which essentially defines what it means to be an optimal solution to the question that you’ve asked, like how reliable is that satellite going to be.

Where this is new is that historically, you provide some model for the process and you see what happens with the model. Ours is different. We’re saying we want to formulate the problem that we’re actually trying to solve, and we’re going to use our computing capability – in particular, high performance computing – to solve it. I think the history here is very similar to that of Bayesian methods. Historically, the reason people didn’t go down this path is that we didn’t have the computing power to do it, but I think we now do have the computing power to actually solve these optimal estimation problems, or optimal prediction problems, where “optimal” means optimal over some set of assumptions that we’re all willing to agree on.
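
One way to make “optimal over some set of assumptions” concrete, in the spirit of the optimal uncertainty quantification framework the two have written about, is to gather everything that is actually known about the response function f and the underlying probability measure μ into an admissible set A, and then compute the sharpest bounds that knowledge permits on the quantity of interest, say the probability of a failure event f(X) ≥ a:

```latex
\mathcal{L}(\mathcal{A}) \;=\; \inf_{(f,\mu)\in\mathcal{A}} \mu\big[f(X) \ge a\big]
\qquad\text{and}\qquad
\mathcal{U}(\mathcal{A}) \;=\; \sup_{(f,\mu)\in\mathcal{A}} \mu\big[f(X) \ge a\big].
```

Any value inside [L(A), U(A)] is consistent with the agreed assumptions and nothing outside it is; the event and threshold here are placeholders for whatever the customer, project leads, and domain experts decide to quantify.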

HPCwire: I noticed that both of you have, just in your body of research, talked about solving exascale class problems. Those are the types of systems you’re looking at to be able to do this at a much higher level, is that correct?

Owhadi: Yes. I could make another analogy here: 200 years ago, if I were to ask you to solve a partial differential equation, you wouldn’t use a computer, you would probably use your brain. You would probably not come up with a quantitative estimate of the solution, but only qualitative estimates.

Now, if I ask you the same question today, you will not use your brain to solve the partial differential equation; you will use a computer. But you will still use your brain to program the computer that will crunch numbers for you to solve the partial differential equation. This paradigm shift can be traced back to seminal work by John von Neumann and Herman Goldstine in the ’50s, and to humans organized as computers at the beginning of the previous century.

Now, today, if I asked you to find a statistical estimator, or to find the best possible climate model, or to find a test that will tell me whether some data I’m observing is corrupted or not, you’re not going to use a computer to do that. You’re going to use your brain and guesswork. What we want to do here is basically turn this guesswork into an algorithm that we’ll be able to implement on a high performance cluster.

If you look at the mathematics behind this problem of turning the process of scientific discovery into an algorithm, you can basically translate it into an optimization problem, but the optimization variables are not discrete. They’re not zeros and ones; they’re basically infinite-dimensional objects. What we have found is a new form of calculus that allows us to turn these infinite-dimensional optimization problems into finite-dimensional optimization problems that we can start solving on computers.
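
A toy instance of that reduction (our own illustration; the interval, mean constraint, and threshold are made up): suppose all you know about a scalar quantity X is that it lies in [0, 1] and has mean m, and you want the worst-case probability that X exceeds a threshold a. The feasible set is an infinite-dimensional family of probability measures, but with a single moment constraint the extremum is attained by measures supported on at most two points, so a tiny finite-dimensional search is enough:

```python
import numpy as np

# Worst-case P(X >= a) over all probability measures on [0, 1] with mean m.
# The reduction to (at most) two-point measures turns the search into a
# two-variable brute-force problem: pick one atom below a and one at/above a,
# with the weights forced by the mean constraint.
m, a = 0.25, 0.5                                   # assumed mean and threshold
best = 0.0
for x1 in np.linspace(0.0, a - 1e-6, 400):         # atom below the threshold
    for x2 in np.linspace(a, 1.0, 400):            # atom at/above the threshold
        w2 = (m - x1) / (x2 - x1)                  # weight on x2 from the mean
        if 0.0 <= w2 <= 1.0:
            best = max(best, w2)                   # P(X >= a) for this measure

print(best)   # ~0.5, matching the sharp analytic bound min(1, m / a)
```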

Even after reduction, these optimization problems are extremely large, which is why we believe we’ll probably need exascale or petascale machines to solve these kinds of problems for complex systems.

HPCwire: What’s interesting in that conversation about the systems required: I recently talked to D-Wave, I believe the only, quote, “quantum computer company,” and this exact type of optimization problem, this best-of-all-worlds solution, is exactly the sweet spot for quantum computers. Do you believe those actually exist, and if so, are they a good fit for the types of problems you’re seeking to solve?

Owhadi: This is interesting, because with respect to exascale machines, some people believe that there are two ways to approach high performance computing. The first way is just to solve the same kind of problems, but bigger. So, for instance, if you’re interested in climate modeling, you’re still going to do climate modeling. You’re still going to run your model, but with a finer mesh, with a finer resolution, and hopefully, instead of predicting the weather four days ahead, you’ll be able to do it five days out.

What we envision here is some kind of paradigm shift where instead of numerically solving a bigger model, you actually use your computer to find the model itself. So yes, if there are quantum computers out there, that would be a great thing for this new kind of framework.

Scovel: The more computing power we can have, the better our success is going to be.

Owhadi: Information doesn’t necessarily come in the form of zeros and ones. If you look at the underlying optimization problems, they involve optimization variables that are probability measures and functions, and these objects live in infinite-dimensional spaces. Calculus on a computer is necessarily discrete and finite. So the first step of the technology is mathematical: you have to come up with a new form of calculus that is able to manipulate these infinite-dimensional objects. This is basically what we’ve done. This new form of calculus allows us to take these huge optimization problems and turn them into something discrete and finite that we can start solving on a computer.

Scovel: It’s also more than that, in the sense that it’s not just a question of computing power. Part of this program and this paradigm shift that we’re talking about is actually coming up with a formulation of what it means to be an optimal solution to these things. That’s actually a big part of the program. It’s not just, I know what I want to compute and I need more computing power. It’s actually, what do we want to compute, and what does it mean to be the best?

I think the complexity involved in formulating these problems – formulating what it means to be an optimal predictor or an optimal estimator – requires communication at all levels of the effort, from the customer to the project leader to the domain experts (the materials science experts, the statisticians). That’s in contrast to what happens in many places, where you do a bunch of runs, a bunch of modeling, a bunch of stuff, and then you hand all the results off to a statistician and ask them to put it together.

We’re saying, no, you need to do that all together. You need to have everybody communicating, so you’re formulating not only the objectives you’re interested in but also the pieces of information you have good confidence about, and those establish a quantitative set of realistic assumptions. Then the optimization proceeds as: OK, now we have this huge optimization problem; how do we reduce it analytically, how do we take those reduced analytic problems and implement them on the computer, and how do we know when we’re done? That’s sort of the picture.

Owhadi: Let me give you two examples here.

The first one is investing in the stock market. Question: is there an optimal way to invest in the stock market? Currently, it’s not clear how to turn this into an optimization problem, but with the framework that we are developing, we think we’ll be able to turn it into an optimization problem and reduce it to something that we can start solving on a high performance cluster. The idea is to invest in an optimal way given the limited information that you have at hand.

Let me give you another example. Consider the game of chess. We have computers playing chess, and they play chess very well. Question: can you have a computer play an information game – an information war? So, for instance, you have two armies and they are playing this information war, or you have two companies, and each company has limited information about the other and has decisions to make with respect to what the other company could or could not know and could or could not do. You can look at this as just an information game, but the board is not composed of just 64 squares, and the moves are not discrete; they could be anything.

If you want to start addressing these kinds of problems, you need some new mathematics that allows you to manipulate these pieces of information that do not live on a chessboard. This is basically the first step in our technology: developing this new mathematics.

HPCwire: Where do you see this maybe revolutionizing certain approaches to computing down the road?

Owhadi: I think that eventually we will use computers to help the process of scientific discovery, not just to solve mathematical equations, but to develop the mathematical equations themselves. We see this going into the field of machine learning. In the field of machine learning, if you want to develop an intelligent agent, you basically decompose the tasks that you ask the computer to do into small steps; you pre-chew the work for the computer.

What this technology that we are developing will allow us to do – and this is basically a long-term vision – is give a computer the ability to develop a model of reality, act on that model of reality, and update the model based on the feedback it receives from reality. That is basically a change in machine learning.
