May 30, 2012
BURLINGTON, MA, May 30 -- The Taiwan Typhoon and Flood Research Institute (TTFRI) conducted an ensemble quantitative precipitation forecast (QPF) experiment designed to provide skillful typhoon predictions to government agencies such as the Central Weather Bureau (CWB), the Water Resources Agency (WRA), the Soil and Water Conservation Bureau (SWCB), and the National Science and Technology Center for Disaster Reduction (NCDR). The ensemble QPF experiment comprised 20 members produced by TTFRI, CWB, NCDR, National Taiwan University (NTU), National Taiwan Normal University (NTNU), National Central University (NCU), and Chinese Culture University (CCU).
The challenges in developing ensemble prediction include selecting a good set of ensemble members, improving the model physics, and refining the initial and boundary conditions.
The experiment was designed to examine the sensitivity of the numerical models to uncertainties in initial conditions and in model physical parameterizations. The resulting ensemble forecasts reduced typhoon track forecast errors and improved the QPFs for the five typhoons for which CWB issued warnings in 2010.
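To make the ensemble idea concrete, here is a minimal sketch of how output from a multi-member QPF ensemble is commonly aggregated into consensus and uncertainty products. Only the 20-member count comes from the experiment description; the grid size, rainfall threshold, and placeholder data are assumptions for illustration, not TTFRI's actual processing code.

    import numpy as np

    # Hypothetical stack of accumulated-rainfall forecasts (mm) from the
    # 20 ensemble members, all on the same latitude/longitude grid.
    n_members, ny, nx = 20, 200, 200
    rng = np.random.default_rng(0)
    qpf = rng.gamma(shape=2.0, scale=15.0, size=(n_members, ny, nx))  # placeholder data

    # Ensemble mean: the consensus rainfall forecast at each grid point.
    ens_mean = qpf.mean(axis=0)

    # Ensemble spread: member-to-member standard deviation, a simple
    # measure of forecast uncertainty at each grid point.
    ens_spread = qpf.std(axis=0)

    # Probability of exceeding an assumed heavy-rain threshold (130 mm),
    # estimated as the fraction of members above it.
    prob_heavy = (qpf > 130.0).mean(axis=0)

Probabilistic products of this kind are what allow a rainfall or track forecast to be issued with an uncertainty estimate rather than as a single deterministic value.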
Although the ensemble forecast performed quite well for the rainfall distribution and typhoon tracks, the extremely heavy rainfall over very short durations, which was related to mesoscale convective systems (MCSs), still could not be well predicted by the models. TTFRI nevertheless continues to improve the ensemble forecast, as the results may contribute to disaster reduction.
To interactively explore the forecast results, TTFRI chose the Avizo Green software, as it generates eye-catching pictures and movies with 3D effects that attract the audience’s attention and greatly facilitate the presentation of results. “We generate movies and figures to show special or severe cases to the public, highlighting and explaining heavy precipitation, fierce winds, etc.,” says Chin-Cheng Tsai, researcher at TTFRI. “Beyond the presentation aspect, the Avizo software allows researchers at TTFRI to take cross sections through the typhoon center to compare the forecast typhoon structure with observations.”
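Avizo is operated interactively, but the cross-section comparison Tsai describes can be sketched in generic terms: sample a 3D model field along lines through the storm center and set the forecast slice next to the observed one. The following is an illustrative sketch only; the array shapes, the placeholder wind field, and the center indices are all assumptions, and none of this uses Avizo’s API.

    import numpy as np

    # Hypothetical 3D wind-speed field from one forecast:
    # (vertical levels, latitude points, longitude points), in m/s.
    nz, ny, nx = 40, 200, 200
    rng = np.random.default_rng(1)
    wind = rng.random((nz, ny, nx)) * 60.0  # placeholder values

    # Assumed typhoon-center grid indices; in practice the center is
    # diagnosed from minimum sea-level pressure or maximum vorticity.
    cy, cx = 95, 110

    # East-west vertical cross section through the center.
    cross_ew = wind[:, cy, :]   # shape (nz, nx)

    # North-south vertical cross section through the center.
    cross_ns = wind[:, :, cx]   # shape (nz, ny)

    # Applying the same slicing to an observed or analysis field lets the
    # forecast and observed storm structure be compared level by level.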
About Avizo imaging and analysis software
Avizo software is a powerful, multifaceted tool for visualizing, manipulating and understanding scientific and industrial data. Wherever 3D data sets need to be processed, in materials and physical sciences, engineering applications, non-destructive testing, and earth sciences, Avizo offers abundant state-of-the-art features within an intuitive workflow and easy-to-use graphical user interface.
More information: www.avizo3d.com
The mission of TTFRI is to advance typhoon and flood prediction techniques and to serve as a platform for bridging the gap between government authorities and academia. TTFRI combines theories, observations, and models, with particular emphasis on developing core technologies for typhoon forecasts and hydrological applications. TTFRI also provides assistance to the national meteorological and hydrological services and to the national disaster reduction operation.
More information: www.ttfri.narl.org.tw