April 13, 2011
BOULDER, Colo., April 12 -- Aircraft safety is getting a boost from a new computer-generated forecast that provides pilots with critical weather information on the likelihood of encountering dangerous in-flight icing conditions.
Each year in the United States, 20-40 aircraft accidents are linked to in-flight icing encounters. Icing conditions, created by water droplets from clouds that freeze on the surface of an aircraft, can affect air travel anywhere, especially during colder months. Hazardous icing conditions cost the U.S. aviation industry an estimated $20 million annually in injuries, aircraft damage, and fuel.
The Forecast Icing Product with Severity, or FIP-Severity, provides 12-hour icing forecasts that are updated hourly for pilots, air traffic controllers, and other users of aviation weather information who plan their flight paths over the continental United States. It was developed by researchers at the National Center for Atmospheric Research (NCAR), with funding from the Federal Aviation Administration.
"In-flight icing can create extremely dangerous conditions for pilots," says NCAR scientist Marcia Politovich, who leads in-flight icing research at NCAR. "Recognizing the potential for icing conditions to develop over time and the degree of severity are both crucial for safe flight planning."
FIP-Severity will most benefit commuter planes and small aircraft, says Politovich. Such aircraft are more vulnerable to icing hazards because they cruise at lower, ice-prone altitudes, below 24,000 feet. They also may lack mechanisms common on larger jets that prevent ice buildup, such as heated wing edges.
In January, the Aviation Digital Data Service (ADDS) began displaying icing products generated by the FIP-Severity program. ADDS, which operates out of the National Weather Service's Aviation Weather Center in Kansas City, provides digital and graphical weather forecasts, analysis, and observations to the aviation community.
FIP-Severity is a computer-based forecast of the probability of icing based on an analysis of temperature and humidity data associated with clouds, which are the source of in-flight icing. The automated algorithm gathers real-time information from satellites, radars, weather models, surface stations, and pilot reports, and determines the probability of encountering icing, its expected severity, and the likelihood of large droplet icing conditions.
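To make the data-fusion idea concrete, here is a minimal illustrative sketch of how multiple observation sources might be combined into a single icing forecast at one grid point. All names, thresholds, and weights are assumptions for illustration only; they are not NCAR's actual FIP-Severity code or parameters.

```python
from dataclasses import dataclass

@dataclass
class GridPointInputs:
    """Hypothetical real-time inputs at one forecast grid point."""
    cloud_top_temp_c: float     # from satellite
    radar_reflectivity_dbz: float  # from radar
    model_rh_percent: float     # from a numerical weather model
    pilot_reported_icing: bool  # from pilot reports (PIREPs)

@dataclass
class IcingForecast:
    probability: float          # 0-1 chance of encountering icing
    severity: str               # e.g. "trace", "light", "moderate"
    large_droplet_risk: float   # 0-1 likelihood of supercooled large drops

def forecast(inputs: GridPointInputs) -> IcingForecast:
    # Toy combination: cold cloud tops and moist air raise the probability;
    # a pilot report of icing raises it further (weights are assumed).
    prob = 0.0
    if inputs.cloud_top_temp_c < 0.0:
        prob += 0.4
    if inputs.model_rh_percent > 80.0:
        prob += 0.3
    if inputs.pilot_reported_icing:
        prob += 0.3
    severity = "moderate" if prob > 0.6 else "light" if prob > 0.3 else "trace"
    # Large supercooled drops are assumed more likely with precipitation-sized
    # radar echoes in relatively warm (but subfreezing) cloud tops.
    large = 0.5 if (inputs.radar_reflectivity_dbz > 20.0
                    and inputs.cloud_top_temp_c > -15.0) else 0.1
    return IcingForecast(min(prob, 1.0), severity, large)
```

A cold, moist grid point with a confirming pilot report would score high on all terms, while a dry, warm one would fall to "trace"; the real system performs this kind of blending hourly across the continental U.S.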
This capability is an update to NCAR's original Forecast Icing Product (FIP), which has been in operation for several years but only calculated an uncalibrated icing "potential." The Current Icing Product (CIP) depicts severity and probability of an encounter with icing, but only for current conditions. Requests from users for more detailed information led to the development of FIP-Severity.
"This tool improves users' abilities to map flight paths across the country with safety and efficiency in mind," says Politovich.
Mechanisms of icing
Icing conditions require supercooled liquid water drops -- cloud drops, drizzle, or rain that exist at temperatures lower than 32°F (0°C).
In-flight icing occurs when water droplets from clouds freeze on the surface of an aircraft. "Supercooled" large drops with diameters greater than 50 microns are particularly dangerous because they rapidly degrade an airplane's aerodynamic performance. Icing can increase drag and decrease lift, ultimately causing the pilot to lose control of the aircraft.
Both local- and regional-scale weather patterns, particularly air rising over frontal systems or mountain ranges, play a role in creating icing conditions. "We often see warm or cold fronts, low pressure systems, and rising convective air masses associated with icing encounters," Politovich says.
Commercial jets generally fly above 29,000 feet, much higher than typical icing altitudes, and are equipped with de-icing systems. Even with these systems, however, commercial pilots prefer to avoid icing risks. Adding severity information to FIP assists pilots in safe and efficient route planning around potential hazards.
Solving data conflicts
The FIP-Severity computer software incorporates measurements of temperature at the tops of clouds, humidity levels in vertical columns in the atmosphere, and other atmospheric variables. It then employs a "fuzzy logic" algorithm, somewhat similar to the human thought process, to identify cloud types and the likelihood of precipitation aloft.
"Fuzzy logic helps us analyze and discern among information that can sometimes conflict so we can more accurately identify those clouds that present an especially high risk of icing," Politovich says. "The program weighs each factor associated with icing, and once all the data have been examined, it provides a probability and severity forecast."
A study in 2004 by the U.S. National Transportation Safety Board found that in-flight icing was responsible for dozens of accidents a year, mostly among smaller, general aviation aircraft. An estimated 819 people died in accidents related to in-flight icing from 1982 to 2000, with most accidents occurring between the months of October and March, according to the study.
This research is in response to requirements and funding by the Federal Aviation Administration (FAA). The views expressed are those of the authors and do not necessarily represent the official policy or position of the FAA.
The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation.