DesignSafe Research Platform Helps Prevent Natural Hazards from Becoming Societal Disasters

August 3, 2018

Aug. 3, 2018 — The 2017 hurricane season was one of the worst on record. Seventeen named storms caused more than 100 direct fatalities, incurred $265 billion in damages and disrupted millions of lives.

We may not be able to prevent hurricanes from occurring, but we can improve our ability to predict them, move people out of harm’s way, respond quickly to their aftermath and build our homes and infrastructure in a way that can survive the worst nature can throw at us.

A new tool called DesignSafe is helping to do so. A web-based research platform for the NHERI (Natural Hazards Engineering Research Infrastructure) Network, DesignSafe allows researchers to manage, analyze, and understand critical information about natural hazards – from earthquakes and tornadoes to hurricanes and sinkholes.

Supported by grants from the National Science Foundation (NSF) and developed at the Texas Advanced Computing Center (TACC) in collaboration with partners at The University of Texas at Austin, Rice University, and Florida Institute of Technology, DesignSafe is advancing research that will prevent natural hazard events from becoming societal disasters. This means helping engineers build safer structures in the future to withstand natural hazards and enabling emergency responders to better target their efforts.

Powering post-storm reconnaissance

The 2017 hurricane season put DesignSafe to the test and showed the promise of the platform, according to David Roueche, an assistant professor of Civil Engineering at Auburn University. Roueche was on the front lines of the season’s hurricane response, participating in reconnaissance missions to coastal Texas, the Florida Keys, Puerto Rico and several Caribbean islands in the wake of Hurricanes Harvey, Irma and Maria.

After Harvey, Roueche and his collaborators targeted clusters of single-family homes impacted by a range of wind speeds. They inspected more than 1,000 individual homes and logged more than 5,000 geotagged photographs captured by ground-based teams and unmanned aerial vehicles. They participated in similar efforts after Irma and Maria.

DesignSafe helped them in a variety of ways. They coordinated their deployments via a virtual community channel on Slack (a cloud-based collaboration tool) established by DesignSafe; they used wind map data developed by other researchers and shared on DesignSafe to decide where to focus their efforts; and once they began capturing data in the field, they uploaded it immediately to DesignSafe and used mapping and visualization software such as HazMapper and QGIS to generate maps that synthesized their own data with other teams’ collections.
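The mapping step is conceptually simple: each geotagged photo becomes a point feature that tools like QGIS or HazMapper can overlay on wind-field or surge maps. The sketch below is a minimal illustration of that idea, assuming photo coordinates have already been extracted from EXIF metadata into a CSV; the file names and fields are hypothetical and not part of the actual DesignSafe workflow.

```python
# Minimal sketch: turn a CSV of geotagged reconnaissance photos into GeoJSON
# so it can be overlaid on wind-field maps in QGIS or a web map viewer.
# Assumes a hypothetical photos.csv with columns: photo_id, lat, lon, damage_rating.
import csv
import json

def photos_to_geojson(csv_path: str, geojson_path: str) -> None:
    features = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            features.append({
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    # GeoJSON uses [longitude, latitude] order
                    "coordinates": [float(row["lon"]), float(row["lat"])],
                },
                "properties": {
                    "photo_id": row["photo_id"],
                    "damage_rating": row["damage_rating"],
                },
            })
    with open(geojson_path, "w") as out:
        json.dump({"type": "FeatureCollection", "features": features}, out, indent=2)

if __name__ == "__main__":
    photos_to_geojson("photos.csv", "photos.geojson")  # hypothetical file names
```

The resulting GeoJSON layer can be dragged into QGIS alongside other teams’ layers, which is the kind of quick synthesis the reconnaissance teams relied on.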

“We were interested in capturing data about structures before they’re destroyed, torn down and rebuilt,” Roueche said. “This is perishable data – that’s the purpose of the NSF RAPID program – to capture this perishable data before it’s lost.”

Roueche and his team found examples of houses side-by-side, built around the same time, where one was completely destroyed and the other was intact. What factors influenced survival? And how could rebuilding efforts be improved by understanding what features led some structures to stand up to storms?

DesignSafe’s integrated workflow accelerates the “resilience curve” so recommendations from natural hazard engineers can be disseminated in months rather than years. Image courtesy of TACC.

DesignSafe’s Reconnaissance Portal, which launched in 2017, provided both the computing capabilities Roueche needed for his analyses and a place to share more than 200 gigabytes of gathered data. The portal also allowed his team to immediately begin quality control and assessments on the data and rapidly generate reports that others going into the field later could use and contribute to.

Typically, it takes years for data gathered by researchers after a storm to be analyzed and reported on, which means rebuilding efforts cannot take advantage of engineers’ insights. With a system like DesignSafe, however, there is hope that the “resilience curve” can be accelerated and that recommendations can be disseminated in months.

“We want cities to be able to rebuild more resiliently,” Roueche said. “Typically, it’s one to five years before products from data are out in literature. That doesn’t allow us to help communities in rebuilding. By having a more streamlined workflow, standardizing processes, and publishing data sooner, it allows us to affect the reconstruction process and have a greater impact. That’s why I’m super excited about where this is going.”

(Roueche’s collaborators included Frank Lombardo from the University of Illinois at Urbana-Champaign, Rich Krupar from the University of Maryland, Daniel Smith from the Cyclone Testing Station at James Cook University, and Tracy Kijewski-Correa from the University of Notre Dame.)

Street-level simulations of storm impacts

In addition to enabling post-storm reconnaissance, the experience of being on the front lines of Hurricane Harvey inspired DesignSafe’s developers to create new tools to assist first responders in the future.

Dan Stanzione, TACC’s executive director and the principal investigator for DesignSafe, assisted decision-makers at the Texas State Operations Center during Hurricane Harvey.

“While there, a first responder said to me: ‘I want to know a list of addresses in Houston that have been flooded above their electrical outlet height. You have a supercomputer, you can do that right?'” Stanzione recalled. “We weren’t able to provide that assistance in the moment, but it inspired us to create such a tool.”

The components required for such an analysis — storm surge forecasts, elevation maps and home construction records — already existed. But the ability to connect these datasets and generate a list of potentially damaged homes in a reasonable amount of time did not.
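In principle, the join is straightforward once the datasets line up: for each address, compare the forecast water surface elevation against the ground elevation plus the height of the first-floor electrical outlets. The sketch below illustrates that logic with made-up field names and an assumed outlet height; it is a simplified stand-in, not the DesignSafe implementation, which couples full storm surge simulations to parcel-level data.

```python
# Minimal sketch of the address-level flood check described above.
# All record structures and the assumed outlet height are illustrative,
# not values taken from DesignSafe or FEMA.
from dataclasses import dataclass

OUTLET_HEIGHT_FT = 1.5  # assumed height of outlets above the first floor

@dataclass
class Parcel:
    address: str
    ground_elevation_ft: float    # from an elevation map (e.g., lidar DEM)
    first_floor_height_ft: float  # from home construction records

def flooded_above_outlets(parcels, surge_elevation_ft):
    """Return addresses where the forecast water level exceeds outlet height.

    surge_elevation_ft maps each address to the forecast water surface
    elevation, in the same vertical datum as the ground elevations.
    """
    flagged = []
    for p in parcels:
        water = surge_elevation_ft.get(p.address)
        if water is None:
            continue  # no surge forecast at this location
        outlet_elevation = (p.ground_elevation_ft
                            + p.first_floor_height_ft
                            + OUTLET_HEIGHT_FT)
        if water > outlet_elevation:
            flagged.append(p.address)
    return flagged

if __name__ == "__main__":
    parcels = [Parcel("101 Bayou Dr", 10.0, 1.0),
               Parcel("102 Bayou Dr", 14.0, 2.0)]
    surge = {"101 Bayou Dr": 13.2, "102 Bayou Dr": 13.2}  # hypothetical forecast
    print(flooded_above_outlets(parcels, surge))  # -> ['101 Bayou Dr']
```

The hard part, as the article notes, is not this comparison but getting gulf-scale surge simulations, elevation data and parcel records into one place quickly enough for the answer to matter.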

In the months that followed, researchers on the DesignSafe team stitched together tools to integrate simulations and data across scales, from the entirety of the Gulf of Mexico to a particular stretch of coastline, and from individual neighborhoods to specific homes.

Using data from 2008’s Hurricane Ike and from a hypothetical storm that FEMA uses for its research, the DesignSafe team showed that it is indeed possible to generate a list of potentially damaged addresses in real time. In fact, all aspects of the process can be computed within DesignSafe, using TACC’s massive supercomputers in the background.

“We can go from gulf-wide, large-scale HPC simulation to, ‘Did this house flood or not?'” Stanzione said. “It’s a long-time goal that was not technologically possible before, but that we’ve been able to get to. All of the data stays in the DesignSafe environment. The next step is determining how we can automate these workflows so they run automatically.”

When the next storm hits, thanks to DesignSafe, first responders will have the information they need for quick action at their fingertips.


Source: Aaron Dubrow, TACC
