RENCI/Dell Supercomputer Charts Hurricane Matthew’s Storm Surge

By John Russell

October 6, 2016

Hurricane Matthew, now bearing down on Florida after hammering Haiti and other parts of the Caribbean, is a stark reminder of the importance of computer modeling, not only in predicting a storm’s strength and path but also in predicting and plotting the storm surge, which is often its most destructive component. Right now, the Hatteras supercomputer (Dell) at the Renaissance Computing Institute (RENCI) in North Carolina is doing just that for Hurricane Matthew.

Named after North Carolina’s famous Outer Banks lighthouse, the Hatteras supercomputer is a 150-node Dell M420 cluster (full specs at the end of the article) that runs the ADCIRC storm surge model every six hours when a hurricane is active. Visualizations of the model runs appear on the Coastal Emergency Risks Assessment website. The outputs from these runs are incorporated into guidance information by the National Weather Service, the National Hurricane Center, and agencies such as the U.S. Coast Guard, the U.S. Army Corps of Engineers, FEMA, and local and regional emergency management divisions.

The models are a tool used to help make decisions about evacuations and where to position supplies and response personnel. In Florida, Governor Rick Scott has urged about 1.5 million Floridians in the storm’s path to evacuate. Hurricane Matthew, whose winds have again reached 140 miles per hour as it nears the Florida coast, making it a Category 4 storm, has already killed more than 200 people.

The work to apply high-performance computing and data analysis to understanding dangerous storm surges is part of a long-term collaboration involving RENCI, the Coastal Resilience Center at UNC-Chapel Hill, and UNC’s Institute of Marine Sciences (IMS). Over the last 10 years, Brian Blanton, a coastal oceanographer and director of RENCI environmental initiatives, has worked closely with Rick Luettich, lead principal investigator of the Coastal Resilience Center and director of IMS, and others to enhance and improve the ADCIRC coastal circulation and storm surge model.

“We model the way the ocean moves, particularly in the ocean and coastal areas, and we are trying to always predict that. It moves because of tides, because of rivers that flow into it, and it also moves because of the wind. So when we get these severe storms, whether they are winter Nor’easters or hurricanes like Matthew, they blow the wind around, if you will, and particularly when they blow it up onto shore it causes flooding, and we have what we typically refer to as storm surge,” said Luettich.

Every time the Dell system at RENCI computes another storm surge model for use by the emergency response community, Blanton is busy running a series of at least nine possible storm surge scenarios on the same HPC system. The process is much like ensemble weather forecasting, where meteorologists run a large number of weather models using slightly different initial conditions in order to account for the uncertainty in such a dynamic system.
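
As a loose illustration of that ensemble idea – not RENCI’s actual scripts; the perturbation ranges, member count, and function names here are invented – each member gets a slightly nudged track and intensity, and the members run side by side:

    # Illustrative only: perturb an advisory track/intensity for each ensemble
    # member and run the members concurrently. All names are hypothetical.
    import random
    from concurrent.futures import ProcessPoolExecutor

    BASE_TRACK = {"lon": -80.0, "lat": 27.5, "max_wind_kt": 120}  # assumed advisory values

    def perturb(member_id: int) -> dict:
        """Create one slightly perturbed scenario (track offset and intensity tweak)."""
        rng = random.Random(member_id)
        return {
            "lon": BASE_TRACK["lon"] + rng.uniform(-0.5, 0.5),
            "lat": BASE_TRACK["lat"] + rng.uniform(-0.5, 0.5),
            "max_wind_kt": BASE_TRACK["max_wind_kt"] + rng.uniform(-15, 15),
        }

    def run_surge_model(scenario: dict) -> float:
        """Stand-in for a real surge run; returns a fake peak surge in meters."""
        return 0.02 * scenario["max_wind_kt"]  # placeholder, not real physics

    if __name__ == "__main__":
        scenarios = [perturb(i) for i in range(9)]  # at least nine members per cycle
        with ProcessPoolExecutor() as pool:
            peaks = list(pool.map(run_surge_model, scenarios))
        print(peaks)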

The model output available on the web for Matthew can resolve coastal storm surge at a scale of less than 200 meters, and the team’s current research could mean that next year’s storm surge models will provide even more detail and accuracy. “We are working on doing storm surge predictions the same way that meteorologists develop predictions for rain and wind speeds,” said Blanton. “It will provide high-resolution storm surge probabilities that account for uncertainty in the track and intensity of hurricane forecasts.” Blanton said the research team plans to run enough test simulations this year to be able to produce ensemble models regularly for the 2017 hurricane season.

ADCIRC – a system of computer programs for solving time-dependent, free-surface circulation and transport problems in two and three dimensions – was developed by Luettich and researchers at the University of Notre Dame. These programs use the finite element method in space, allowing the use of highly flexible, unstructured grids. The researchers and developers who maintain the software and develop the visual models represent universities on the East and Gulf coasts as well as agencies such as the National Oceanic and Atmospheric Administration, the National Weather Service, the National Science Foundation, and the Department of Homeland Security.
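
In practice, ADCIRC runs as a standalone parallel executable rather than as a library. As a rough, hedged sketch – the core count, paths, and MPI launcher are assumptions, and the unstructured mesh is assumed to have already been decomposed with ADCIRC’s adcprep utility – a single forecast run might be kicked off from a driver script like this:

    # Minimal sketch of launching a parallel ADCIRC run; inputs are assumed to be
    # staged and decomposed already (e.g., with adcprep). Paths and core counts
    # are placeholders, not RENCI's actual setup.
    import subprocess

    def launch_padcirc(run_dir: str, cores: int = 512) -> int:
        """Start padcirc (ADCIRC's parallel executable) under mpirun and wait for it."""
        cmd = ["mpirun", "-np", str(cores), "padcirc"]
        completed = subprocess.run(cmd, cwd=run_dir)
        return completed.returncode

    if __name__ == "__main__":
        rc = launch_padcirc("/scratch/matthew/advisory_42", cores=512)  # hypothetical path
        print("padcirc exited with", rc)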

In one sense, storm surge forecasting sits lower on the HPC totem pole than weather forecasting in terms of access to necessary resources. The major weather forecasting services often have access to bigger machines and modernized codes, and can sometimes be the dominant user of a resource. These agencies use ensemble modeling – sometimes looking at thousands of model runs, along with other data sources such as hurricane hunter aircraft – to develop a hand-created forecast. Even then, as the forecast extends out a couple of days, its uncertainty grows significantly.

During an event such as Hurricane Matthew, the National Weather Service uses its substantial resources to update its forecast every six hours. Keeping pace is a challenge for the storm surge forecasters. “If it takes us five and a half hours to do a run and process it and get everything displayed and out there for the public to see, then it is pretty much useless. Its relevancy window has passed. I typically think two hours is the maximum amount of time we have to stay relevant, and I am much happier if we can get results done in an hour,” said Luettich.

Luettich’s team starts with the basic forecast provided by the National Hurricane Center and runs that through its model: “It’s the hurricane center forecast and it’s the first thing we want to go out, because that’s our best estimate of what’s likely to occur. The next question is what’s the range of things that could occur. The only way we can address that issue of range [is] using ensembles. At that point we have to do multiple runs to try to bracket it, and depending on what we have for resources we can do this either heuristically, just picking a couple of storms or a few storms to give us kind of a sensitivity study, or ideally we can get into the dozens or hundreds of storms to give us a truly statistically valid population that we can then compute statistics from. In a nutshell that’s the challenge,” he said.

A single run on several hundred to one thousand processors may take hours. “The challenge for us, as the ocean modelers, as storm surge modelers, is to properly account for that uncertainty in the way in which we deliver forecasts of the ocean’s response. So right now we do the forecast, which is right smack down the middle of that cone of uncertainty, and then we will do a few runs which kind of bracket either the possible track variations over time or changes to the predicted intensity of the storm.”
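
To turn such bracketing runs into the probabilistic guidance Blanton describes, one common approach is to compute, node by node on the mesh, how often the ensemble exceeds a flooding threshold. A minimal sketch with made-up array sizes (not the team’s actual post-processing) follows:

    # Illustrative exceedance statistics from an ensemble of surge runs.
    # peak_surge is assumed to be (n_members, n_nodes): peak water level per mesh node.
    import numpy as np

    rng = np.random.default_rng(0)
    n_members, n_nodes = 30, 100_000          # made-up ensemble and mesh sizes
    peak_surge = rng.gamma(shape=2.0, scale=0.8, size=(n_members, n_nodes))

    threshold_m = 1.5                          # flood threshold of interest, in meters
    exceedance_prob = (peak_surge > threshold_m).mean(axis=0)   # per-node probability
    median_surge = np.median(peak_surge, axis=0)                # a central estimate

    print("nodes with >50% chance of exceeding 1.5 m:", int((exceedance_prob > 0.5).sum()))
    print("max median surge (m):", float(median_surge.max()))

The appeal of exceedance probabilities is that a single map can convey both the central forecast and the spread across track and intensity scenarios, rather than forcing emergency managers to interpret many separate runs.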

Hatteras Supercomputer by Dell at RENCI

Perhaps not surprisingly, access to sufficient compute horsepower is a bottleneck. “We are fortunate if we can get enough computer horsepower at RENCI – RENCI is our go-to place for in-house HPC – but realistically we can only get enough processors there to do one or two runs each compute cycle. We collaborate with folks at LSU and TACC and other places, so we can typically add in a few more runs, but we are still only at the phase of being able to do the primary forecast and a few sensitivity runs around it.”

The need for speed, emphasizes Luettich, is critical; however, it’s important to note that the ADCIRC tools are also used extensively in design and hazard assessment, which are generally not time-constrained projects.

“By far these models are used, [maybe] 100X more often than for active storms, for design purposes. For example, a model we developed was used by the Army Corps of Engineers to design the hurricane protection system that is now around New Orleans. [It’s] also being used to design a major levee system (the so-called Ike Dike) that might protect the Houston-Galveston area in the future. So it is very much a design tool and gets used extensively for that purpose.”

Second, the models are used to define the hazards of storm surge in coastal regions. “FEMA uses it for 100-year flood levels and where those are for insurance purposes,” he noted. Recently the Nuclear Regulatory Commission has been using it to define the threats to coastal nuclear power plants. All of that work goes on outside the context of an actual event.

“It’s very HPC intensive. We may end up having to run many, many hundreds or thousands of storms to get a full sweep of the design or the hazard situation that exists. But time is not nearly as big a constraint. If it takes a run one hour or five hours or ten hours to do – as long as you can stack up the hundreds or thousands of runs you need and get them done over a reasonable time, a few months or a year or whatever your study length – it’s [acceptable].”

That said, Luettich and his colleagues are actively pushing to advance ADCIRC on at least three fronts. Luettich notes the code, though old, is highly parallelizable and scales well on existing architectures, but not on newer ones. Moreover, rigid code parallelization isn’t always the best approach. He singled out the following three areas of active effort:

  • Parallelization. “In these modeling applications we need very high resolution in the areas where the storm is impacting, but in other areas we can use very low resolution. Yet automating that process within the leading parallelization paradigms and middleware that are out there is very challenging. So we have an NSF-funded project that is looking into new parallelization strategies that will allow us to optimize our calculations and consequently be much more efficient and faster.”
  • Modern Hardware. The ADCIRC team has started looking into manycore chips such as Intel’s newly released Knights Landing Xeon Phi. “It looks like it is going to take some code re-engineering to optimize the code for use on that hardware, but that is something that we are starting to think about at RENCI. In the last month or so, [we have] gotten a [KNL-based system] that will give us at least the opportunity to test some of the software re-engineering we have to do, to see how extensive it is and to what extent we can get performance increases.”
  • More Computers. “The third direction is looking for other partners, and in fact our colleagues at RENCI have been extremely helpful. One of their fortés is the iRODS system and the ability to move data around between HPC centers – distributed HPC. We wouldn’t want to necessarily distribute a single run among centers at various locations, but again, thinking back to the ensemble approach, if we can farm out X number of runs to different machines at different locations and compile the information back efficiently, then that may help us considerably, and that may even include a cloud-type application.” (A rough sketch of that farming-out idea appears after this list.)
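
The farming-out idea can be sketched loosely as assigning ensemble members to partner sites round-robin and tracking where each member’s output must be gathered from. The site names come from the article, but the submission and staging logic below is purely a placeholder, not the actual iRODS workflow:

    # Illustrative only: assign ensemble members to partner HPC sites round-robin
    # and record where each member's output should later be gathered from.
    # Site names are from the article; submit_member is a hypothetical stand-in.
    from itertools import cycle

    SITES = ["renci", "lsu", "tacc"]           # partner centers mentioned above

    def submit_member(site: str, member_id: int) -> str:
        """Pretend to submit a run at a remote site; return a job handle."""
        return f"{site}:job-{member_id}"

    def farm_out(n_members: int) -> dict:
        assignments = {}
        for member_id, site in zip(range(n_members), cycle(SITES)):
            assignments[member_id] = submit_member(site, member_id)
        return assignments

    if __name__ == "__main__":
        jobs = farm_out(12)
        print(jobs)   # outputs would then be staged back (e.g., via iRODS) and merged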

Interestingly, the ADCIRC code has not performed well on GPUs. “It is predominantly because of the way the algorithms are written; they are not terribly compatible with GPU acceleration,” said Luettich.

Without doubt, a certain amount of inertia exists in the code, says Luettich, and a massive rewrite to take advantage of the next generation of hardware may be necessary. Funding is always an issue for projects such as ADCIRC. Luettich noted, “Think about how much damage is going to result from this Hurricane Matthew. Imagine if you took one percent of that and invested it in computer resources, whether hardware or software, what advances we could make and what the returns in lessened damage in the future would be.”

Hatteras Supercomputer Profile (from RENCI web site)

Deployed in summer 2013 and expanded in early 2014, Hatteras is a 5168-core cluster running CentOS Linux. Hatteras is not fully MPI-interconnected; it is instead segmented into several independent sub-clusters with varying architectures. Hatteras is capable of concurrently running nine 512-way ensemble members. Hatteras uses Dell’s densest blade enclosure to allow for maximum core count within each chassis.

Hatteras’ sub-clusters have the following configurations:

  • Chassis 0-3 (512 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade server
      • Two Intel Xeon E5-2450 CPUs (2.1GHz, 8-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Chassis 4-7 (640 interconnected cores per chassis)
    • 32 x Dell M420 Quarter-Height Blade Server
      • Two Intel Xeon E5-2470v2 CPUs (2.4GHz, 10-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Hadoop (560 interconnected cores)
    • 30 x Dell R720xd 2U Rack Server
      • Two Intel Xeon E5-2670 processors (16 cores total @ 2.6GHz)
      • 256GB RDIMM RAM @ 1600MHz
      • 36 Terabytes (12 x 3TB) of raw local disk dedicated to the node
      • 146GB RAID-1 volume dedicated for OS
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 2 x Dell R820 2U Rack Server (LargeMem)
      • Four Intel Xeon E5-4640v2 processors (40 cores total @ 2.2GHz)
      • 1.5TB LRDIMM RAM @ 1600MHz
      • 9.6 Terabytes (8 x 1.2TB) of raw local disk dedicated to the node
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 56Gb/s Mellanox FDR Infiniband Interconnect
    • 40Gb/s Mellanox Ethernet Interconnect

Related Links
ADCIRC website
Coastal Resilience Center Website
Institute of Marine Sciences Website
