September 01, 2006
In 2006, the Department of Energy's Office of Science made two separate allocations of 400,000 processor hours of supercomputing time at the National Energy Research Scientific Computing Center (NERSC) to the U.S. Army Corps of Engineers for studying ways to improve hurricane defenses along the Gulf Coast. The research is being done in cooperation with the Federal Emergency Management Agency (FEMA).
As hurricanes move from the ocean toward land, the force of the storm causes seawater to rise and surge inland. The Corps of Engineers used its DOE supercomputer allocations to create revised models for predicting the effects of 100-year storm surges -- surge levels with a 1 percent chance of being equaled or exceeded in any given year -- along the Gulf Coast. In particular, simulations were generated for the critical five-parish area of Louisiana surrounding New Orleans and the Lower Mississippi River. These revised effects, known as "storm-surge elevations," are serving as the design basis for the levee repairs and improvements the Corps of Engineers is designing and constructing in the wake of Hurricane Katrina's destruction in the New Orleans metro area.
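To make the "1 percent per year" definition concrete, the chance that a 100-year surge is equaled or exceeded at least once over an N-year horizon is 1 - 0.99^N. A minimal sketch of that arithmetic (illustrative only, not part of the Corps' analysis):

    # Chance of at least one 100-year (1% annual probability) surge
    # over various planning horizons. Illustrative arithmetic only.
    for years in (1, 10, 30, 100):
        p = 1 - 0.99 ** years
        print(f"{years:3d} years: {p:.0%}")   # 1%, 10%, 26%, 63%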
Additionally, Gulf Coast Recovery Maps were generated for Southern Louisiana based on FEMA's revised analysis of the frequency of hurricanes and estimates of the resulting waves. While still preliminary, these maps are being used on an advisory basis by communities currently rebuilding from the 2005 storms. Final maps are expected to be completed later this year.
The Corps used its first NERSC allocation, announced in February, to conduct storm-surge simulations using the ADvanced CIRCulation (ADCIRC) coastal model and the Empirical Simulation Technique (EST), studying both how high the storm-surge waters would rise and how often such surges would occur.
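The EST itself is a bootstrap resampling method, but the basic stage-frequency idea can be sketched with a simple plotting-position calculation. The following is a minimal illustration with hypothetical peak surge values, not the Corps' actual EST code:

    # Empirical stage-frequency sketch: rank hypothetical peak surge
    # elevations (m) from simulated storms over a 50-year record and
    # assign each a return period via the Weibull plotting position.
    def stage_frequency(peaks, record_years):
        ranked = sorted(peaks, reverse=True)
        events_per_year = len(ranked) / record_years
        curve = []
        for rank, stage in enumerate(ranked, start=1):
            exceedances_per_year = events_per_year * rank / (len(ranked) + 1)
            curve.append((1.0 / exceedances_per_year, stage))
        return curve

    peaks = [4.1, 2.3, 3.0, 1.8, 2.7, 3.6, 2.1, 1.5, 2.9, 3.3]  # hypothetical
    for period, stage in stage_frequency(peaks, record_years=50):
        print(f"{period:6.1f}-yr surge: {stage:.1f} m")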
The Corps of Engineers plans to use the second NERSC allocation, announced in July, to finalize the revised stage-frequency relationships by the end of 2006. Access to the NERSC supercomputer will allow the Corps of Engineers to create more detailed models of the effects of Hurricane Rita and other storms along the Texas and Louisiana coasts. The increased detail will give the Corps of Engineers and FEMA more information about the local effects of such storms. For example, storm-surge elevations are greatly influenced by local features such as roads and elevated railroads. Representing these details in the model greatly improves the degree to which computed elevations match observed storm-surge high-water marks, allowing the Corps to make better recommendations for protecting against such surges.
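A toy, static "bathtub" comparison (nothing like ADCIRC's shallow-water dynamics, and with made-up numbers) shows why resolving an elevated railroad matters: a coarse grid that averages the embankment away lets the computed surge sweep hundreds of meters farther inland.

    def flood_extent(terrain, surge, dx=100.0):
        # March inland along a transect; flooding stops at the first
        # point whose ground elevation reaches the surge elevation.
        extent = 0.0
        for elevation in terrain:
            if elevation >= surge:
                break
            extent += dx
        return extent

    # Hypothetical ground elevations (m) every 100 m, with a 4 m railroad
    # embankment 1 km inland; a coarse grid smears the embankment to 2 m.
    fine   = [0.5, 1.0, 1.2, 1.5, 1.3, 1.6, 1.8, 1.5, 1.7, 1.9, 4.0, 1.0, 1.1, 1.2]
    coarse = fine[:10] + [2.0] + fine[11:]

    print(flood_extent(fine, surge=3.0))    # 1000.0 m: the embankment holds
    print(flood_extent(coarse, surge=3.0))  # 1400.0 m: water sweeps past it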
At NERSC, the Corps of Engineers team is running its simulations on an 888-processor IBM cluster called "Bassi." The cluster is powered by IBM's newest POWER5 processors and is specially tuned for scientific computation. The Corps' simulations typically use 128 to 256 processors and run for two and a half to four and a half hours per simulation batch.
The Corps of Engineers team is also running hurricane simulations on the DoD Major Shared Resource Center computers at the U.S. Army Engineer Research and Development Center (ERDC). Given the tremendous computational requirements of these hurricane protection projects and their urgent timelines, only by using both DOE and DoD resources can the Corps provide high-quality engineering solutions on schedule.
The runs also revealed that in some instances the applications produced incorrect results at topographic boundaries, and the codes were modified to improve their accuracy. For example, the runs at NERSC have improved the Corps' ability to model the effects of vegetation and land use on storm surges that propagate far inland, as Hurricane Rita's surge did on Sept. 24, 2005.
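One common way to represent vegetation and land use in surge models (the article does not say whether the Corps' code works exactly this way) is to assign Manning's n roughness coefficients by land-cover class; Manning's equation then shows how rougher cover slows overland flow. A sketch with typical textbook coefficients:

    # Manning's equation V = (1/n) * R^(2/3) * S^(1/2) for wide, shallow
    # overland flow (hydraulic radius R ~ flow depth). Coefficients are
    # typical textbook values, not the Corps' calibrated figures.
    MANNING_N = {"open water": 0.020, "pasture": 0.035, "developed": 0.050,
                 "marsh": 0.070, "forest": 0.120}

    def manning_velocity(n, depth_m, slope):
        return (1.0 / n) * depth_m ** (2.0 / 3.0) * slope ** 0.5

    for cover, n in MANNING_N.items():
        v = manning_velocity(n, depth_m=1.0, slope=1e-4)
        print(f"{cover:11s} n={n:.3f}  v={v:.2f} m/s")

Rougher cover (marsh, forest) cuts the computed overland flow speed by a factor of several relative to open water, which is why land-use representation matters for surges that travel far inland.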