RENCI/Dell Supercomputer Charts Hurricane Matthew’s Storm Surge

By John Russell

October 6, 2016

Hurricane Matthew, now headed toward Florida after hammering Haiti and other parts of the Caribbean, is a stark reminder of the importance of computer modeling, not only in predicting a storm's strength and path but also in predicting and plotting the storm surge that is often its most destructive component. Right now, the Hatteras supercomputer (Dell) at the Renaissance Computing Institute (RENCI) in North Carolina is doing just that for Hurricane Matthew.

Named after North Carolina’s famous Outer Banks lighthouse, the Hatteras supercomputer is a 150-node M420 Dell cluster (full specs at the end of the article) that runs the ADCIRC storm surge model every six hours when a hurricane is active. Visualizations of the models appear on the Coastal Emergency Risks Assessment website. The outputs from these runs are incorporated into guidance information by the National Weather Service, the National Hurricane Center, and agencies such as the U.S. Coast Guard, the U.S. Army Corps of Engineers, FEMA, and local and regional emergency management divisions.

The models are a tool used to help make decisions about evacuations and about where to position supplies and response personnel. In Florida, Governor Rick Scott has urged about 1.5 million Floridians in the storm’s path to evacuate. Hurricane Matthew, whose winds have again reached 140 miles per hour as it nears the Florida coast, making it a Category 4 storm, has already killed more than 200 people.

The work to apply high-performance computing and data analysis to understanding dangerous storm surges is part of a long-term collaboration involving RENCI, the Coastal Resilience Center at UNC-Chapel Hill, and UNC’s Institute of Marine Sciences. Over the last 10 years, Brian Blanton, a coastal oceanographer and director of RENCI environmental initiatives, has worked closely with Rick Luettich, lead principal investigator of the Coastal Resilience Center and director of IMS, and others to enhance and improve the ADCIRC coastal circulation and storm surge model.

“We model the way the ocean moves, particularly in the ocean and coastal areas, and so we are always trying to predict that. It moves because of tides, because of rivers that flow into it, and it also moves because of the wind. So when we get these severe storms, whether they are winter Nor’easters or hurricanes like Matthew, they blow the wind around, if you will, particularly when they blow it up onto shore; then it causes flooding and we have what we typically refer to as storm surge,” said Luettich.

Every time the Dell system at RENCI computes another storm surge model for use by the emergency response community, Blanton is busy running a series of at least nine possible storm surge scenarios on the same HPC system. The process is much like ensemble weather forecasting, where meteorologists run a large number of weather models using slightly different initial conditions in order to account for the uncertainty in such a dynamic system.
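To make the ensemble idea concrete, here is a minimal sketch, assuming hypothetical names (run_adcirc, advisory) and made-up perturbation values, of how a set of roughly nine perturbed scenarios might be generated and launched. It illustrates the approach described above, not the team's actual workflow.

```python
# Minimal ensemble sketch (illustrative only): perturb the advisory track
# and intensity, then launch one storm surge run per scenario.
# run_adcirc() and the advisory dict are hypothetical stand-ins.
import itertools

advisory = {"track_offset_deg": 0.0, "intensity_mph": 140}   # central forecast

track_offsets = [-0.5, 0.0, +0.5]      # degrees left/right of the forecast track
intensity_deltas = [-15, 0, +15]       # mph weaker/stronger than forecast

def run_adcirc(scenario):
    """Placeholder for submitting one storm surge run to the cluster."""
    print(f"submitting run: {scenario}")

# Nine scenarios: every combination of track offset and intensity change.
for off, d_intensity in itertools.product(track_offsets, intensity_deltas):
    scenario = {
        "track_offset_deg": advisory["track_offset_deg"] + off,
        "intensity_mph": advisory["intensity_mph"] + d_intensity,
    }
    run_adcirc(scenario)
```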

The model output available on the web for Matthew can resolve the detail of coastal storm surge to a level of less than 200 meters. And the team’s current research could mean that storm surge models next year will provide even more detail and accuracy.  “We are working on doing storm surge predictions the same way that meteorologists develop predictions for rain and wind speeds,” said Blanton. “It will provide high-resolution storm surge probabilities that account for uncertainty in the track and intensity of hurricane forecasts.” Blanton said the research team plans to acquire enough test simulations this year to be able to produce ensemble models regularly for hurricane season 2017.
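The probabilistic product Blanton describes amounts to asking, at each point on the grid, what fraction of ensemble members push the water above a given level. The sketch below shows that calculation on synthetic data; the array shapes, threshold, and values are assumptions for illustration, not the team's code.

```python
# Illustrative sketch: turn an ensemble of simulated peak water levels
# into a probability that surge exceeds a threshold at each mesh node.
import numpy as np

# peak_surge[member, node]: peak water level (m) at each mesh node for each
# ensemble member; synthetic random data stands in for real model output.
n_members, n_nodes = 9, 1000
rng = np.random.default_rng(0)
peak_surge = rng.gamma(shape=2.0, scale=0.8, size=(n_members, n_nodes))

threshold_m = 2.0   # flooding threshold of interest, in meters

# Fraction of ensemble members exceeding the threshold at each node.
exceedance_prob = (peak_surge > threshold_m).mean(axis=0)

print(f"nodes with >50% chance of exceeding {threshold_m} m:",
      int((exceedance_prob > 0.5).sum()))
```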

ADCIRC – a system of computer programs for solving time-dependent, free-surface circulation and transport problems in two and three dimensions – was developed by Luettich and researchers at the University of Notre Dame. These programs use the finite element method in space, allowing the use of highly flexible, unstructured grids. The researchers and developers who maintain the software and develop the visual models represent universities on the East and Gulf coasts as well as agencies such as the National Oceanic and Atmospheric Administration, the National Weather Service, the National Science Foundation, and the Department of Homeland Security.
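The unstructured grids mentioned above are typically triangular meshes whose element size can vary freely, fine along the coast and coarse in the open ocean. The following is a minimal, made-up sketch of how such a mesh can be represented; the node coordinates, depths, and class name are invented for illustration and do not reflect ADCIRC's actual data structures or file formats.

```python
# Toy unstructured triangular mesh, the kind of flexible grid a finite
# element circulation model operates on. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Mesh:
    lon: list[float]                         # node longitudes
    lat: list[float]                         # node latitudes
    depth: list[float]                       # bathymetric depth at each node (m)
    triangles: list[tuple[int, int, int]]    # node indices per element

# Four nodes, two triangles; element size can vary across the domain,
# which is what lets the mesh be fine near the coast and coarse offshore.
mesh = Mesh(
    lon=[-80.10, -80.05, -80.05, -80.00],
    lat=[26.00, 26.00, 26.05, 26.05],
    depth=[5.0, 8.0, 6.0, 12.0],
    triangles=[(0, 1, 2), (1, 3, 2)],
)

for e, (a, b, c) in enumerate(mesh.triangles):
    mean_depth = (mesh.depth[a] + mesh.depth[b] + mesh.depth[c]) / 3.0
    print(f"element {e}: nodes {a},{b},{c}, mean depth {mean_depth:.1f} m")
```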

In one sense, storm surge forecasting is lower on the HPC totem pole than weather forecasting in terms of access to necessary resources. The major weather forecasting services often have access to bigger machines and modernized codes, and can sometimes be the dominant user of the resource. These agencies use ensemble modeling – sometimes looking at thousands of model runs as well as other data sources, such as observations from hurricane hunter aircraft – to “develop a hand-created forecast.” Even then, as the forecast extends out a couple of days, its uncertainty grows significantly.

During an event such as Hurricane Matthew, the National Weather Service uses its substantial resources to update its forecast every six hours. Keeping pace is a challenge for the storm surge forecasters. “If it takes us five and a half hours to do a run and process it and get everything displayed and out there for the public to see, then it is pretty much useless. Its relevancy window has left. I typically think two hours is the maximum amount of time we have to stay relevant, and I am much happier if we can get results done in an hour,” said Luettich.

Luettich’s team starts with the basic forecast provided by the National Hurricane Center and runs that through its model: “It’s the hurricane center forecast and it’s the first thing we want to go out because that’s our best estimate of what’s likely to occur. The next question is what’s the range of things that could occur. The only way we can address that issue of range [is] using ensembles. At that point we have to do multiple runs to try to bracket [the forecast], and depending on what we have for resources we can do this either heuristically, just picking a couple of storms or a few storms to give us kind of a sensitivity study, or ideally we can get into the dozens or hundreds of storms to give us a truly statistically valid population that we can then compute statistics from and whatnot. In a nutshell that’s the challenge,” he said.

A single run on several hundred to one thousand processors may take hours. “The challenge for us, as the ocean modelers, as storm surge modelers, is to properly account for that uncertainty in the way in which we deliver forecasts of the ocean’s response. So right now we do the forecast, which is right smack down the middle of that cone of uncertainty, and then we will do a few runs which kind of bracket either the possible track variations over time or changes to the predicted intensity of the storm.”

Hatteras Supercomputer by Dell at RENCI

Perhaps not surprisingly, access to sufficient compute horsepower is a bottleneck. “We are fortunate if we can get enough computer horsepower at RENCI, and RENCI is our go-to place for in-house HPC, but realistically we can[not] get enough processors there to do more than one or two runs each compute cycle. We collaborate with folks at LSU and TACC and other places so we can typically add in a few more runs, but we are still only at the phase of being able to do the primary forecast and a few sensitivity runs around it.”

The need for speed, emphasizes Luettich, is critical; however, it’s important to note that the ADCIRC tools are also used extensively in design and hazard assessment, which are generally not time-constrained projects.

“By far these models are used, [maybe] 100X more often than for active storms, for design purposes. For example, a model we developed was used by the Army Corps to design the hurricane protection system that is now around New Orleans. [It’s] also being used to design a major levee system (the so-called Ike Dike) that might protect the Houston-Galveston area in the future. So it is very much a design tool and gets used extensively for that purpose.”

Second, the models are used to define the hazards of storm surge in coastal regions. “FEMA uses it for 100-year flood levels and where those are for insurance purposes,” he noted. Recently the Nuclear Regulatory Commission has been using it to define the threats to coastal nuclear power plants. All of that work goes on outside the context of an actual event.

“It’s very HPC intensive. We may end up having to run many, many hundreds or thousands of storms to get a full sweep of the design or the hazard situation that exists. But time is not nearly as big a constraint. Whether it takes a run one hour or five hours or ten hours to do, as long as you can stack up the hundreds or thousands of runs you need and get them done over a reasonable time, a few months or a year or whatever your study length, it’s [acceptable].”

That said, Luettich and his colleagues are actively pushing to advance ADCIRC on at least three fronts. Luettich notes the code, though old, is already highly parallelizable and scales well on existing architectures, though not on newer ones. Moreover, rigid code parallelization isn’t always the best approach. He singled out the following three areas of active effort:

  • Parallelization. “In these modeling applications we need very high resolution in the areas where the storm is impacting, but in other areas we can use very low resolution. Yet automating that process in the parallelization, with the leading parallelization paradigms and middleware that are out there, is very challenging. So we have an NSF-funded project that is looking into new parallelization strategies that will allow us to optimize our calculations and consequently be much more efficient and faster.”
  • Modern Hardware. The ADCIRC team has started looking into manycore chips such as Intel’s newly released Knights Landing Xeon Phi. “It looks like it is going to take some code re-engineering to optimize the code for use on that hardware, but that is something that we are starting to think about at RENCI. In the last month or so, [we have] gotten a [KNL-based system] that will give us at least the opportunity to test some of the software re-engineering we have to do, to see how extensive it is and to what extent we can get performance increases.”
  • More Computers. “The third direction is looking for other partners, and in fact our colleagues at RENCI have been extremely helpful. One of their fortés is the iRODS system and the ability to move data around between HPC centers – distributed HPC. We wouldn’t want to necessarily distribute a single run among centers at various locations, but again, thinking back to the ensemble approach, if we can farm out X number of runs to different machines at different locations and compile the information back efficiently, then that may help us considerably, and that may even include a cloud-type application.” (A minimal sketch of this farm-out idea follows this list.)
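As referenced in the last item, here is a minimal sketch of farming ensemble members out to several centers and collecting the results. The site names, the submit_to_site() helper, and the scenario list are hypothetical stand-ins for the real schedulers and data movement (for example via iRODS) described above.

```python
# Illustrative sketch only: round-robin a set of ensemble runs across
# several HPC sites and gather results when they complete.
from concurrent.futures import ThreadPoolExecutor

sites = ["renci-hatteras", "lsu", "tacc"]          # hypothetical site labels
scenarios = [{"member": i} for i in range(12)]     # ensemble members to run

def submit_to_site(site, scenario):
    """Placeholder: submit one run to a remote site and wait for its output."""
    return {"site": site, "member": scenario["member"], "status": "done"}

# Assign members to sites round-robin and run submissions concurrently.
with ThreadPoolExecutor(max_workers=len(sites)) as pool:
    futures = [pool.submit(submit_to_site, sites[i % len(sites)], s)
               for i, s in enumerate(scenarios)]
    results = [f.result() for f in futures]

print(f"collected {len(results)} ensemble members from {len(sites)} sites")
```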

Interestingly, the ADCIRC code has not performed well on GPUs. “It is predominantly because of the way the algorithms are written; they are not terribly compatible with GPU acceleration,” said Luettich.

Without doubt, a certain amount of inertia exists in the code, says Luettich, and a massive rewrite to take advantage of the next generation of hardware may be necessary. Funding is always an issue for projects such as ADCIRC. Luettich noted, “Think about how much damage is going to result from this Hurricane Matthew. Imagine if you took one percent of that and invested it in computer resources, whether hardware or software, what advances we could make and what the returns in lessened damage in the future would be.”

Hatteras Supercomputer Profile (from RENCI web site)

Deployed in summer 2013 and expanded in early 2014, Hatteras is a 5168-core cluster running CentOS Linux. Hatteras is not fully MPI-interconnected; instead, it is segmented into several independent sub-clusters with varying architectures. Hatteras is capable of concurrently running nine 512-way ensemble members. Hatteras uses Dell’s densest blade enclosure to allow for maximum core count within each chassis.

Hatteras’ sub-clusters have the following configurations:

  • Chassis 0-3 (512 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade server
      • Two Intel Xeon E5-2450 CPUs (2.1GHz, 8-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Chassis 4-7 (640 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade server
      • Two Intel Xeon E5-2470v2 CPUs (2.4GHz, 10-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Hadoop (560 interconnected cores)
    • 30 x Dell R720xd 2U Rack Server
      • Two Intel Xeon E5-2670 processors (16 cores total @ 2.6GHz)
      • 256GB RDIMM RAM @ 1600MHz
      • 36 Terabytes (12 x 3TB) of raw local disk dedicated to the node
      • 146GB RAID-1 volume dedicated for OS
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 2 x Dell R820 2U Rack Server (LargeMem)
      • Four Intel Xeon E5-4640v2 processors (40 cores total @ 2.2GHz)
      • 1.5TB LRDIMM RAM @ 1600MHz
      • 9.6 Terabytes (8 x 1.2TB) of raw local disk dedicated to the node
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 56Gb/s Mellanox FDR Infiniband Interconnect
    • 40Gb/s Mellanox Ethernet Interconnect
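As a quick consistency check on the 5168-core figure quoted above, the sub-cluster counts add up as follows:

(4 chassis × 32 blades × 16 cores) + (4 chassis × 32 blades × 20 cores) + (30 servers × 16 cores) + (2 servers × 40 cores)
= 2048 + 2560 + 480 + 80
= 5168 cores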

Related Links
ADCIRC website
Coastal Resilience Center Website
Institute of Marine Sciences Website
