RENCI/Dell Supercomputer Charts Hurricane Matthew’s Storm Surge

By John Russell

October 6, 2016

Hurricane Matthew, now headed toward Florida after hammering Haiti and other parts of the Caribbean, is a stark reminder of the importance of computer modeling, not only in predicting a storm’s strength and path but also in predicting and plotting the storm surge that is often its most destructive component. Right now, the Hatteras supercomputer (Dell) at the Renaissance Computing Institute (RENCI) in North Carolina is doing just that for Hurricane Matthew.

Named after North Carolina’s famous Outer Banks lighthouse, the Hatteras supercomputer is a 150-node M420 Dell cluster (full specs at the end of the article) that runs the ADCIRC storm surge model every six hours when a hurricane is active. Visualizations of the model runs appear on the Coastal Emergency Risks Assessment website. The outputs from these runs are incorporated into guidance information by the National Weather Service, the National Hurricane Center, and agencies such as the U.S. Coast Guard, the U.S. Army Corps of Engineers, FEMA, and local and regional emergency management divisions.

The models are a tool used to help make decisions about evacuations and about where to position supplies and response personnel. In Florida, Governor Rick Scott has urged about 1.5 million Floridians in the storm’s path to evacuate. Hurricane Matthew, whose winds have again reached 140 miles per hour as it nears the Florida coast, making it a Category 4 storm, has already killed more than 200 people.

The work to apply high-performance computing and data analysis to understanding dangerous storm surges is part of a long-term collaboration involving RENCI, the Coastal Resilience Center at UNC-Chapel Hill, and UNC’s Institute of Marine Sciences. Over the last 10 years, Brian Blanton, a coastal oceanographer and director of RENCI environmental initiatives, has worked closely with Rick Luettich, lead principal investigator of the Coastal Resilience Center and director of IMS, and others to enhance the ADCIRC coastal circulation and storm surge model.

“We model the way the ocean moves, particularly in the ocean and coastal areas, and so we are always trying to predict that. It moves because of tides, because of rivers that flow into it; it also moves because of the wind. So when we get these severe storms, whether they are winter Nor’easters or hurricanes like Matthew, they blow the wind around, if you will. Particularly when they blow it up onto shore, it causes flooding, and we have what we typically refer to as storm surge,” said Luettich.

Every time the Dell system at RENCI computes another storm surge model for use by the emergency response community, Blanton is busy running a series of at least nine possible storm surge scenarios on the same HPC system. The process is much like ensemble weather forecasting, where meteorologists run a large number of weather models using slightly different initial conditions in order to account for the uncertainty in such a dynamic system.
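The article does not describe how those scenarios are constructed, but the general pattern of ensemble generation – perturbing a best-estimate forecast to produce a family of plausible inputs – can be sketched roughly as follows. All parameter names and spread values here are illustrative assumptions, not ADCIRC’s actual inputs.

    import random

    # Hypothetical advisory values for the "best estimate" forecast
    # (the official NHC track); all numbers are illustrative only.
    base_scenario = {
        "track_offset_km": 0.0,     # cross-track deviation from the NHC track
        "max_wind_kt": 120.0,       # forecast peak intensity
        "forward_speed_kt": 12.0,   # translation speed along the track
    }

    def perturb(base, rng):
        """Return one ensemble member with slightly shifted inputs."""
        return {
            "track_offset_km": base["track_offset_km"] + rng.gauss(0.0, 40.0),
            "max_wind_kt": base["max_wind_kt"] + rng.gauss(0.0, 10.0),
            "forward_speed_kt": base["forward_speed_kt"] + rng.gauss(0.0, 2.0),
        }

    rng = random.Random(2016)
    # Member 0 is the unperturbed NHC forecast; the rest bracket it.
    ensemble = [base_scenario] + [perturb(base_scenario, rng) for _ in range(8)]
    for i, member in enumerate(ensemble):
        print(f"member {i}: {member}")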

The model output available on the web for Matthew can resolve coastal storm surge detail to a level of less than 200 meters. And the team’s current research could mean that next year’s storm surge models will provide even more detail and accuracy. “We are working on doing storm surge predictions the same way that meteorologists develop predictions for rain and wind speeds,” said Blanton. “It will provide high-resolution storm surge probabilities that account for uncertainty in the track and intensity of hurricane forecasts.” Blanton said the research team plans to acquire enough test simulations this year to be able to produce ensemble models regularly for the 2017 hurricane season.
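Blanton’s probabilistic guidance is not published as code, but the basic statistic behind such products – the probability that surge exceeds a threshold at each mesh node, estimated as the fraction of ensemble members exceeding it – can be sketched as below. The array shapes, random surge values, and threshold are assumptions for illustration only.

    import numpy as np

    # Hypothetical peak-surge output: one row per ensemble member,
    # one column per mesh node (meters above datum). Illustrative only.
    rng = np.random.default_rng(0)
    peak_surge = rng.gamma(shape=2.0, scale=0.8, size=(9, 5))  # 9 members, 5 nodes

    threshold_m = 2.0  # e.g., a flooding threshold at each node
    # Probability of exceedance = fraction of members above the threshold.
    p_exceed = (peak_surge > threshold_m).mean(axis=0)
    print("P(surge > 2 m) per node:", np.round(p_exceed, 2))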

ADCIRC – a system of computer programs for solving time-dependent, free-surface circulation and transport problems in two and three dimensions – was developed by Luettich and researchers at the University of Notre Dame. These programs use the finite element method in space, allowing highly flexible, unstructured grids. The researchers and developers who maintain the software and develop the visual models represent universities on the East and Gulf coasts as well as agencies such as the National Oceanic and Atmospheric Administration, the National Weather Service, the National Science Foundation, and the Department of Homeland Security.
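ADCIRC’s actual discretization is considerably more involved, but the core ingredient named here – piecewise-linear finite elements on unstructured triangular grids – reduces to barycentric interpolation of values stored at the triangle’s vertices. A minimal sketch, with made-up elevation values, might look like this.

    import numpy as np

    # One triangle of an unstructured mesh, with water-surface elevation
    # (meters) known at its three vertices. Values are illustrative.
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    eta = np.array([1.2, 0.7, 0.9])  # elevation at each vertex

    def barycentric(p, tri):
        """Barycentric coordinates of point p inside triangle tri (3x2 array)."""
        a, b, c = tri
        m = np.column_stack((b - a, c - a))   # 2x2 edge matrix
        l1, l2 = np.linalg.solve(m, p - a)    # local coordinates
        return np.array([1.0 - l1 - l2, l1, l2])

    # Linear (P1) finite elements interpolate a field inside the triangle
    # as the barycentric-weighted sum of its vertex values.
    p = np.array([0.25, 0.25])
    weights = barycentric(p, verts)
    print("interpolated elevation:", float(weights @ eta))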

In one sense, storm surge forecasting sits lower on the HPC totem pole than weather forecasting in terms of access to necessary resources. The major weather forecasting services often have access to bigger machines and modernized codes, and can sometimes be the dominant user of a resource. These agencies use ensemble modeling – sometimes looking at thousands of model runs, as well as other data sources such as observations from hurricane hunter aircraft – to develop a hand-created forecast. Even then, as the forecast extends out a couple of days, its uncertainty grows significantly.

During an event such as Hurricane Matthew, the National Weather Service uses its substantial resources to update its forecast every six hours. Keeping pace is a challenge for the storm surge forecasters. “If it takes us five and a half hours to do a run and process it and get everything displayed and out there for the public to see, then it is pretty much useless. Its relevancy window has passed. I typically think two hours is the maximum amount of time we have to stay relevant, and I am much happier if we can get results done in an hour,” said Luettich.
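Luettich’s rule of thumb translates into a simple wall-clock budget against the six-hour advisory cycle; a toy check, with thresholds taken from his quote and turnaround times invented for illustration, might look like this.

    from datetime import timedelta

    # Advisories arrive every six hours; Luettich's rule of thumb is that
    # surge guidance stays relevant only within about two hours of the
    # advisory, and ideally within one.
    CYCLE = timedelta(hours=6)
    RELEVANCY_WINDOW = timedelta(hours=2)

    def is_useful(run_and_postprocess: timedelta) -> bool:
        """A run is actionable only if it lands inside the relevancy window."""
        return run_and_postprocess <= RELEVANCY_WINDOW

    for hours in (1.0, 2.0, 5.5):
        status = "useful" if is_useful(timedelta(hours=hours)) else "too late"
        print(f"{hours:>4} h turnaround: {status}")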

Luettich’s team starts with the basic forecast provided by the National Hurricane Center and runs that through its model: “It’s the hurricane center forecast and it’s the first thing we want to go out, because that’s our best estimate of what’s likely to occur. The next question is what’s the range of things that could occur. The only way we can address that issue of range [is] using ensembles. At that point we have to do multiple runs to try to bracket it, and depending on what we have for resources we can do this either heuristically, just picking a couple of storms or a few storms to give us kind of a sensitivity study, or ideally we can get into the dozens or hundreds of storms to give us a truly statistically valid population that we can then compute statistics from and whatnot. In a nutshell that’s the challenge,” he said.

A single run on several hundred to one thousand processors may take hours. “The challenge for us, as the ocean modelers, as storm surge modelers, is to properly account for that uncertainty in the way in which we deliver forecasts of the ocean’s response. So right now we do the forecast, which is right smack down the middle of that cone of uncertainty, and then we will do a few runs which kind of bracket either the possible track variations over time or changes to the predicted intensity of the storm.”

Hatteras Supercomputer by Dell at RENCI

Perhaps not surprisingly, access to sufficient compute horsepower is a bottleneck. “We are fortunate if we can get enough computer horsepower at RENCI – and RENCI is our go-to place for in-house HPC – but realistically we can’t get enough processors there to do more than one or two runs each compute cycle. We collaborate with folks at LSU and TACC and other places, so we can typically add in a few more runs, but we are still only at the phase of being able to do the primary forecast and a few sensitivity runs around it.”

The need for speed, emphasizes Luettich, is critical; however, it’s important to note that the ADCIRC tools are also used extensively in design and hazard assessment, which are generally not time-constrained projects.

“By far these models are used, [maybe] 100X more often than for active storms, for design purposes. For example, a model we developed was used by the Army Corps to design the hurricane protection system that is now around New Orleans. [It’s] also being used to design a major levee system (the so-called Ike Dike) that might protect the Houston-Galveston area in the future. So it is very much a design tool and gets used extensively for that purpose.”

Second, the models are used to define the hazards of storm surge in coastal regions. “FEMA uses it for 100-year flood levels and where those are for insurance purposes,” he noted. Recently the Nuclear Regulatory Commission has been using it to define the threats to coastal nuclear power plants. All of that work goes on outside the context of an actual event.

“It’s very HPC intensive. We may end up having to run many, many hundreds or thousands of storms to get a full sweep of the design or the hazard situation that exists. But time is not nearly as big a constraint. If a run takes one hour or five hours or ten hours, as long as you can stack up the hundreds or thousands of runs you need and get them done over a reasonable time – a few months or a year or whatever your study length – it’s [acceptable].”

That said, Luettich and his colleagues are actively pushing to advance ADCIRC on at least three fronts. Luettich notes the code, though old, is already highly parallelizable and scales well on existing architectures, but not on newer ones. Moreover, rigid code parallelization isn’t always the best approach. He singled out the following three areas of active effort:

  • Parallelization. “In these modeling applications we need very high resolution in the areas where the storm is impacting, but in other areas we can use very low resolution. Yet automating that process within the leading parallelization paradigm middleware that is out there is very challenging. So we have an NSF-funded project that is looking into new parallelization strategies that will allow us to optimize our calculations and consequently be much more efficient and faster.”
  • Modern Hardware. The ADCIRC team has started looking into manycore chips such as Intel’s newly released Knights Landing Phi. “It looks like it is going to take some code reengineering to optimize the code for use on that hardware, but that is something that we are starting to think about at RENCI. In the last month or so, [we have] gotten a [KNL-based system] that will give us at least the opportunity to test some of the software re-engineering we have to do, to see how extensive it is and to what extent we can get performance increases.”
  • More Computers. “The third direction is looking for other partners, and in fact our colleagues at RENCI have been extremely helpful. One of their fortes is the iRODS system and the ability to move data around between HPC centers – distributed HPC. We wouldn’t necessarily want to distribute a single run among centers at various locations, but again, thinking back to the ensemble approach, if we can farm out X number of runs to different machines at different locations and compile the information back efficiently, then that may help us considerably, and that may even include a cloud-type application.” A rough sketch of this farm-out pattern follows this list.
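None of the iRODS plumbing is described in detail here, so the sketch below only illustrates the scatter-gather pattern Luettich describes – distributing ensemble members across sites and collecting results – with a stand-in submit command in place of any real scheduler or iRODS call. Site names and the member count are assumptions drawn from the article.

    import concurrent.futures
    import subprocess

    # Hypothetical list of partner HPC sites; the submit command below is
    # a stand-in (echo), not a real batch scheduler or iRODS operation.
    SITES = ["renci", "lsu", "tacc"]

    def submit_member(member_id: int, site: str) -> str:
        """Pretend to submit one ADCIRC ensemble member to a remote site."""
        # In a real workflow this step would stage inputs (e.g., via iRODS),
        # submit a batch job, and register the outputs for retrieval.
        cmd = ["echo", f"submit member {member_id} to {site}"]
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    # Round-robin nine ensemble members across the available sites and
    # gather the results as each submission completes.
    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(submit_member, m, SITES[m % len(SITES)])
                   for m in range(9)]
        for f in concurrent.futures.as_completed(futures):
            print(f.result())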

Interestingly, the ADCIRC code has not performed well on GPUs. “It is predominantly because of the way the algorithms are written; they are not terribly compatible with GPU acceleration,” said Luettich.

Without doubt, a certain amount of inertia exists in the code, says Luettich, and a massive rewrite to take advantage of the next generation of hardware may be necessary. Funding is always an issue for projects such as ADCIRC. Luettich noted, “Think about how much damage is going to result from this Hurricane Matthew. Imagine if you took one percent of that and invested it in computer resources, whether hardware or software – what advances we could make, and what the returns in lessened damage in the future would be.”

Hatteras Supercomputer Profile (from the RENCI website)

Deployed in summer 2013 and expanded in early 2014, Hatteras is a 5,168-core cluster running CentOS Linux. Hatteras is not fully MPI-interconnected; instead, it is segmented into several independent sub-clusters with varying architectures. Hatteras is capable of concurrently running nine 512-way ensemble members (its eight MPI sub-clusters together provide 4,608 interconnected cores, exactly nine times 512). Hatteras uses Dell’s densest blade enclosure to allow for maximum core count within each chassis.

Hatteras’ sub-clusters have the following configurations:

  • Chassis 0-3 (512 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade server
      • Two Intel Xeon E5-2450 CPUs (2.1GHz, 8-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Chassis 4-7 (640 interconnected cores per chassis)
    • 32 x Dell M420 Quarter-Height Blade Server
      • Two Intel Xeon E5-2470v2 CPUs (2.4GHz, 10-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Hadoop (560 interconnected cores)
    • 30 x Dell R720xd 2U Rack Server
      • Two Intel Xeon E5-2670 processors (16 cores total @ 2.6GHz)
      • 256GB RDIMM RAM @ 1600MHz
      • 36 Terabytes (12 x 3TB) of raw local disk dedicated to the node
      • 146GB RAID-1 volume dedicated for OS
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 2 x Dell R820 2U Rack Server (LargeMem)
      • Four Intel Xeon E5-4640v2 processors (40 cores total @ 2.2GHz)
      • 1.5TB LRDIMM RAM @ 1600MHz
      • 9.6 Terabytes (8 x 1.2TB) of raw local disk dedicated to the node
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 56Gb/s Mellanox FDR Infiniband Interconnect
    • 40Gb/s Mellanox Ethernet Interconnect

Related Links
ADCIRC website
Coastal Resilience Center Website
Institute of Marine Sciences Website
