At SC19: What Is UrgentHPC and Why Is It Needed?

By Tiffany Trader

November 14, 2019

The UrgentHPC workshop, taking place Sunday (Nov. 17) at SC19, is focused on using HPC and real-time data for urgent decision making in response to disasters such as wildfires, flooding, health emergencies, and accidents. We chat with organizer Nick Brown, research fellow at EPCC, University of Edinburgh, to learn more.

HPCwire: Tell us about the mission and background of UrgentHPC.

Nick Brown: We started from the observation that our ability to capture data is growing at an incredible rate, as is the power of HPC machines. But at the same time, it seems that there is always some sort of disaster in the news, from wildfires, to health emergencies, to extreme weather, and it seems to me that the frequency and severity of these things is only getting worse! So I think it is natural to ask ourselves as a community whether there is a role that we can play in helping to tackle these emergencies. HPC has been able to simulate disasters after the fact for many years, but what if we could run these urgent simulations in real time, fed by the latest data streaming in from the field? What sort of opportunities could this open up to assist urgent responders, ultimately translating into more lives saved and reduced economic impact?

What I find so fascinating is not only the significant potential impact if we can get this right, but also all the technical challenges that must be overcome to reach this point! These challenges span many aspects of our and other communities, from visualisation, to data engineering, HPC system support, algorithmic techniques, and integration with current disaster response systems. One such example is the batch queues employed by all major HPC machines, which are simply not set up to support an urgent workload – it’s pointless waiting two hours for a job to run when the forest is burning right now!
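The queueing problem can be made concrete with a toy sketch (hypothetical code, not based on any real scheduler): in a plain first-come-first-served batch queue, an urgent job waits behind everything submitted before it, whereas a priority-ordered queue dispatches it next.

```python
import heapq
from itertools import count

class UrgentQueue:
    """Toy job queue where urgent jobs jump ahead of routine batch work.

    Real systems would need scheduler support (preemption, reserved
    partitions, SLAs) -- this sketch only illustrates the ordering problem.
    """

    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker keeps equal-priority jobs FIFO

    def submit(self, name, priority):
        # priority 0 = urgent, larger numbers = routine batch work
        heapq.heappush(self._heap, (priority, next(self._seq), name))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = UrgentQueue()
q.submit("climate-batch", priority=10)
q.submit("genomics-batch", priority=10)
q.submit("wildfire-urgent", priority=0)  # arrives last...
print(q.next_job())                      # ...but is dispatched first
```

In practice the hard part is not the data structure but the policy: what happens to the jobs already running when an urgent one arrives.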

HPCwire: What is urgent decision making? 

This is where a front-line responder is making real-world decisions in response to some emergency, for instance the recent California wildfires. With lives often in the balance, it is crucial for these individuals to make the correct decisions the first time, every time, and a key question for us is how real-time data and HPC can assist here.

The Center for Satellite-Based Crisis Information’s (ZKI) crisis mapping room, where earth observation data such as satellite images, aerial photos and other geodata are analyzed and used to generate up-to-date position information before, during or after catastrophic and crisis situations. Source: German Aerospace Center (DLR)

HPCwire: What are some of the use cases UrgentHPC is focused on?

From discussions with the community, it surprised us how much is going on and how many groups would like to use HPC, to some extent or another, with their disaster response applications. For instance, the keynote speaker at our workshop this year is from Technosylva, a company that develops the world’s leading wildfire simulation code. This has been used extensively during the recent Californian wildfires, and we are really excited that Joaquin will be joining us to talk about their work and the critical role played by supercomputing, along with future plans to take advantage of the even greater capability being developed by the HPC community.

Other individuals involved in the workshop are interested in areas including mosquito-borne diseases, space weather anomalies, earthquakes, tsunamis, and wildland fire and smoke progression. So a really wide range of use cases, and we are hoping to identify even more during the workshop session!

HPCwire: There’s a workshop this year at SC — what is on the agenda?

Yup, and it is great to be back at SC, as this initiative began a year ago with a BoF at SC18. At the time I was pleasantly surprised by how many people were interested in this topic and seemed to be working on related activities. Following on from this, we felt it would be beneficial to develop a community around urgent computing and HPC, and potentially consolidate efforts – hence this workshop!

We have a fantastic programme (https://www.urgenthpc.com) and I am really excited for the workshop, which is running on Sunday afternoon (17th) from 2:30pm in room 603. Besides the wildfire keynote I mentioned, we have six research papers being presented, each describing solutions and technologies for addressing different parts of the overall urgent HPC challenge. In addition, we have a panel session with some really interesting individuals lined up, who will provide their own views on the use of HPC for urgent decision making and respond to some of the themes raised in the workshop.

HPCwire: What kind of computing infrastructure is required and how would that be delivered (on-prem, in the cloud)?

That’s a very interesting question, not least because it was one of the most extensively discussed topics during our BoF last year! It is clear to us that HPC machines as they currently stand are not enough – not just because of the suitability of the batch system, but also because supercomputers often do not operate to the SLAs required by emergency responders. Certainly the cloud has a role to play, but it is not a silver bullet: while there is some elasticity there, unpredictably launching jobs that must start immediately and deliver high performance across thousands of nodes is not really a usage model these organisations have considered.

Inevitably this is going to require further development, both on the technology and policy sides, across a wide variety of infrastructure. We also believe there is a role for edge computing to play, where computation (such as data reduction) can be quickly performed at source to reduce the amount of work required centrally.

Nowadays there are quite a lot of sensors out there, satellites for instance, that provide freely available data, which we think helps quite a lot on the infrastructure side too, although of course the pipes must be in place to transmit it quickly enough for processing.
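The edge-side data reduction mentioned above can be sketched roughly as follows (hypothetical code; a real deployment would use domain-specific feature extraction rather than simple statistics): instead of streaming every raw sample over the pipes, the sensor node transmits one compact summary per window of readings.

```python
import statistics

def reduce_window(readings):
    """Collapse a window of raw sensor readings into a compact record.

    Only the summary crosses the network; the raw samples stay at the
    edge, shrinking what the central HPC workflow has to ingest.
    """
    return {
        "n": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# A window of 10,000 raw samples becomes a four-field record.
window = [20.0 + (i % 50) * 0.1 for i in range(10_000)]
summary = reduce_window(window)
print(summary["n"], round(summary["max"], 1))
```

Which statistics (or extracted features) are worth keeping depends entirely on the urgent workflow consuming them; the point is simply that the reduction happens before, not after, the data hits the network.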

HPCwire: Seems like there is a big data science element here. Is there focus on a converged stack? Bringing HPC and big data tools together?

You are absolutely right, and I think some of the opportunities opening up are down to the efforts already made around this convergence. Although there is still quite a way to go – an example is the Topology ToolKit (TTK) developed by Sorbonne University in Paris. This is an advanced feature extraction method that can be run on the raw data output of HPC codes and significantly reduces the amount of data that must then be stored or transferred, but the big challenge is that it does not currently support running across multiple nodes. So we think things like this, combining the knowledge and algorithms of other communities with our expertise in parallelism, could lead to very potent collaborations.

HPCwire: What (other) challenges exist on the technology and policy side?

Lots, and I suspect even more will be uncovered at the workshop! As a technologist myself, I tend to focus on that side, but I think the policy side is at least as challenging, if not more so. This is because fully realising the use of HPC for urgent decision making will require changes to how HPC machines are operated and used. This is one of the reasons why I think activities like our workshop are so important: if we can bring the community together and build momentum, it will make a much stronger case to the machine owners and operators to change things. Bearing in mind that some of these rules and policies have been around for decades, it is an uphill struggle!

HPCwire: What is your role and how did you come to be involved?

I first got involved in this via the VESTEC (https://vestec-project.eu/) EU-funded FET project, where we are looking to fuse HPC with real-time data for disaster response. This project involves partners from across Europe with a wide variety of expertise, from fire simulation, to in-situ visualisation, to HPC. But what’s important to say is that we appreciate that to build a community one needs to look outwards and consider the global picture, especially as there is some interesting work going on across the US and further afield. This is reflected in my fellow organisers of the workshop, who are from PNNL, NCAR, and the LEXIS EU project, and we think this mix is a strong one for encouraging good participation across the board.

HPCwire: Who is the workshop for? Who should attend?

Anyone with any interest in the topic will be more than welcome! It’s funny, the more we chat with people about this, the more we discover just how much is going on that is complementary to the central aims of what we are trying to do. So irrespective of your area of expertise, please come along and let’s see what we can learn from each other! As a frequent attendee of SC I am really excited both to have the opportunity to run a workshop at the conference and also for the conference itself – it’s going to be a great week ahead!

About Nick Brown

Dr Nick Brown is a research fellow at EPCC, University of Edinburgh, with interests in data engineering, machine learning, parallel programming language design, novel compute architectures, compilers and runtimes. He is a work package leader in VESTEC, an EU FET HPC project that aims to fuse HPC with real-time data for urgent decision making, and as part of this he is responsible for the HPC side of the project. He has worked with both industry and academia, for instance managing a project using machine learning to optimise the interpretation of well log data in the oil and gas industry. He has worked on a number of large-scale parallel codes, including developing MONC, an atmospheric model used by the UK climate and weather communities that involves novel in-situ data analytics. Nick is a course organiser on EPCC’s MSc in HPC and data science courses, and supervises MSc and PhD students.
