SCIENCE & ENGINEERING NEWS
SAN FRANCISCO, Calif. — Keay Davidson reports for the San Francisco Examiner that a debate of atmospheric proportions has broken out over El Nino, raising a big question for Californians: How reliably can these quasi-cyclical weather traumas be predicted?
Until recently, sophisticated supercomputers were credited with one of the great triumphs of modern atmospheric science – the forecasting, many months in advance, of the El Nino that generated the winter storms of 1997-98 and flooded much of the Golden State.
But that triumph is looking less and less certain. True, El Nino arrived as scheduled, but with one catch: Forecasters who relied on supercomputers disagreed markedly on when it would arrive and how long it would last.
More embarrassing, the supercomputers’ forecasts were generally no better – and sometimes worse – than those made by a program small enough to run on a hand calculator, according to a new study by researchers from the U.S. National Oceanic and Atmospheric Administration and Colorado State University.
The finding by researchers Christopher Landsea and John Knaff has humbled some U.S. climatologists, who until recently thought they had figured out how to forecast El Nino.
“There was all the excitement in ’97 that ‘we’ve solved the El Nino problem!’ ” Landsea said. “But we haven’t.” Their downbeat rating of the once-praised El Nino-casts is so unexpected that Science magazine recently compared it to one’s high school teacher calling “you up at grad school to tell you that, on further thought, she was dropping your grade from an A to a C-plus.”
Their research has larger implications, too, Landsea and Knaff believe. Many climatologists have used the purported success of El Nino forecasting to champion the use of supercomputers to forecast the planetary impact of global warming. But such ambition is seriously premature, Landsea and Knaff agree, based on their glum assessment of El Nino forecasting.
Given the questionable state of El Nino forecasting, “it would be presumptuous to say we know what’s going to happen” during global warming, Landsea said.
U.S. climatologists have had mixed reactions to Landsea and Knaff’s claims. On the one hand, their work “makes one good point, but otherwise is much ado about nothing,” charges Robert E. Livezey, senior meteorologist at the National Weather Service’s Climate Prediction Center in Camp Springs, Md. Livezey and Knaff debated the Landsea-Knaff thesis at a meteorological conference in May.
On the other hand, noted hurricane forecaster William Gray of Colorado State says the new study proves climatologists “have grossly over-hyped their ability to predict this phenomena (El Nino).
“They have, in general, given public claim to their few successes but kept quiet about their many busts. And most science writers have fallen for their claims,” charged Gray, who in recent years has gained fame for the reliability of his forecasts of Atlantic hurricanes.
Gray’s forecast technique relies partly on an apparent correlation between El Nino and Atlantic hurricanes: When the former is strong, the latter are few. Both Landsea and Knaff are former students of Gray, who served as their doctoral advisor.
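The relationship Gray exploits is at bottom a simple statistical one, and a toy check conveys the idea. The yearly values below are invented for illustration (positive numbers standing for a stronger El Nino), and Python’s standard-library correlation function stands in for whatever analysis Gray actually performs:

    # A toy check of the El Nino-hurricane relationship described above.
    # All numbers are invented for illustration; this is not Gray's method.
    from statistics import correlation  # Python 3.10+

    enso_strength = [1.8, -0.5, 0.2, 2.3, -1.1, 0.9, -0.3, 1.5]
    hurricanes    = [4, 9, 7, 3, 11, 5, 8, 4]

    r = correlation(enso_strength, hurricanes)
    print(f"Pearson r = {r:+.2f}")  # markedly negative: strong El Nino, few storms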
An El Nino event involves a warming of Pacific waters that triggers meteorological upsets in California and the rest of the world. El Nino events typically occur every three to five years, with the most significant events every seven to 12 years. The last serious El Nino was in ’97-’98.
That El Nino unleashed mountainous rain clouds over the Golden State, submerging parts of it in lakes that stretched from horizon to horizon.
Landsea and Knaff analyzed 12 different computer models used to forecast the beginning, evolution and end of the ’97-’98 El Nino, they say in an article for the September issue of the Bulletin of the American Meteorological Society.
Some of those complex models contain hundreds of thousands or millions of lines of computer code, written laboriously over years.
They compared the 12 models’ forecasting performance to that of their own minuscule computer program, which they threw together in a few weeks. Their program contains a mere 729 lines of Fortran code and is dubbed ENSO-CLIPER, for El Nino-Southern Oscillation Climatology and Persistence.
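The article does not reproduce any of ENSO-CLIPER’s code, but a minimal sketch can show what a climatology-and-persistence scheme does. The forecast is a blend of the most recent observed sea-surface-temperature anomaly (persistence) and the long-term mean (climatology), with persistence fading as the forecast reaches further ahead. The decay weight and example anomaly below are invented:

    # A minimal sketch of a climatology-and-persistence forecast, in the
    # spirit of (but far simpler than) ENSO-CLIPER. The climatology of an
    # anomaly index is zero by definition, so the forecast relaxes the
    # last observation toward zero as lead time grows.
    # The decay weight is a made-up illustrative value.

    def cliper_like_forecast(last_anomaly, lead_months, decay=0.8):
        weight = decay ** lead_months      # persistence fades with lead time
        climatology = 0.0                  # anomaly climatology is zero
        return weight * last_anomaly + (1 - weight) * climatology

    # Example: a +2.0 C anomaly observed now, forecast 1 to 6 months ahead.
    for lead in range(1, 7):
        print(f"lead {lead} mo: {cliper_like_forecast(2.0, lead):+.2f} C")

A real scheme of this kind typically derives its blending coefficients by regression on decades of observed records, which is what makes it a “climatology and persistence” baseline rather than a guess.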
To their surprise, David held his own against Goliath: ENSO-CLIPER fared better at forecasting the beginning, evolution and end of the ’97-’98 El Nino than most of the more costly, sophisticated models.
Hence, Landsea and Knaff suggest, scientists who forecast El Nino should rely less on costly, complex supercomputer models and more on cheap, simple statistical methods that Science magazine has called, only partly tongue in cheek, “automated rules of thumb.”
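The head-to-head scoring behind such a recommendation can be illustrated with a small verification sketch. Root-mean-square error against the observed index is one common yardstick for this sort of comparison, lower being better; every number below is invented:

    # A toy version of the kind of head-to-head scoring described above:
    # RMSE of two sets of forecasts against the observed index.
    # All numbers are invented for illustration.
    from math import sqrt

    def rmse(forecast, observed):
        return sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                    / len(observed))

    observed  = [0.5, 1.2, 2.1, 2.4, 1.6, 0.4]   # hypothetical anomalies, C
    big_model = [0.1, 0.6, 1.1, 2.8, 2.5, 1.3]   # hypothetical model output
    baseline  = [0.4, 1.0, 1.8, 2.2, 1.8, 0.7]   # hypothetical simple scheme

    print(f"big model RMSE: {rmse(big_model, observed):.2f} C")
    print(f"baseline RMSE:  {rmse(baseline, observed):.2f} C")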
There’s plenty at stake besides whether coming winters will flood California coastlines. To those in the climatology business, a way of life might be on the line, or so charges the outspoken Gray.
Gray sees political motives behind climatologists’ reliance on big computers as opposed to his preference for simple statistical models. Forecasters, he claims, brag about great prediction accuracy to attract more federal funds – and uncritical press coverage of alleged forecasting triumphs makes their task easier.
“The real battle is over federal resource allocation,” Gray charges. “Favorable press articles help convince government officials – who are not down in the trenches and don’t understand the topic – to hand out grants to those appearing to have success and knowledge. It is a game well played by the (climate computer) modelers.”
But with equal passion, Livezey defends the big-computer approach to El Nino forecasting. He acknowledges that Landsea and Knaff correctly say “the very sophisticated dynamical/numerical models cannot yet outperform state-of-the-art statistical models for prediction of (El Nino).”
But their statement that “there were no models that provided both useful and skillful forecasts for the entirety of the 1997-98 El Nino (is) only true in the sense that different models might have been weak in representing one aspect (of El Nino) or another – like the maximum strength and the timing or rapidity of its demise,” Livezey says. Even so, the models “collectively . . . represented an unprecedented, powerful forecast tool.
“We knew early in the summer of 1997 that a strong El Nino would be in place by the early autumn and that it would maintain its strength at least through the winter,” Livezey continued. “This allowed (federal forecasters) to make confident, detailed forecasts of U.S. wintertime conditions which were largely correct, set all-time records for performance, and were the source of enormous savings by (emergency) planners and managers who heeded them.”
A middle-of-the-road view is taken by Huug van den Dool, who manages the long-term climate forecasting division at the Climate Prediction Center. “I mainly agree with Landsea and Knaff’s findings . . . (But) opinions in the scientific/technical community are guarded (regarding their work).
“It is undeniable that big computer models did not outperform several much cheaper empirical methods – neither in ’97-’98, nor in the cold event (La Nina) following,” van den Dool said. “But one could argue until the cows come home whether that means a complete absence of (usefulness) for big models.”