
Forecasting Into Chaos

Meteorologists seek to foresee unpredictability.

The National Weather Service celebrated a historic moment early this year when it purchased a Cray Y-MP, one of the newest, most powerful supercomputers on the market. But even with all those megabytes devoted to forecasting, don't expect meteorologists to know whether it will rain next Tuesday.

In the last 15 years, forecasters have come to rely heavily on complex computer models that simulate the swirling and shifting winds of the atmosphere. By watching this digitized weather evolve at high speed, they gain insight into how the real atmosphere might behave in the near future.

Yet the models can peer only a few days ahead before running into something that obscures their view: chaos. Earth's atmosphere, the source of all weather, has an inherently chaotic character that can wreak havoc on a forecaster's scorecard. Sometimes the winds turn so unpredictable that meteorologists can't reliably forecast the weather for two days hence. At other times the atmosphere remains abnormally stable, providing a chance to predict the weather with remarkable accuracy many days ahead.

In the game of forecasting, meteorologists can't hope to win every round. But new strategies may warn them when particularly tough rounds are on their way. Right now, several groups in the United States and Europe are trying to devise techniques for determining when the atmosphere is unpredictable and when conditions are auspicious for reliable forecasts.

"This area is going to become increasingly important for medium- and extended-range predictions," says Tim Palmer, who heads the predictability section of the world's top forecasting office, the European Center for Medium Range Weather Forecasts in Reading, England. "More and more people are wanting to know how reliable a particular forecast is."

These techniques could make forecasts more useful for the public. Take, for example, a family considering a weekend picnic. If the weather service determines on Wednesday that it cannot yet issue a reliable weekend forecast, the family may hold off on a decision; if there's a confident advance forecast for clear skies, they can make definite plans.

More important, an indication of forecasting reliability could mean substantial savings for businesses such as construction companies or the home heating industry, which must plan strategies around weather predictions. During winter, for instance, if meteorologists say they can confidently predict colder-than-normal temperatures for the following month, utility companies can begin to prepare for increased demand.

On Wall Street, economists and traders know it's much easier to predict tomorrow's Dow Jones averages than those for the following week. The same holds true for meteorologists working with computer models of the atmosphere, which lose accuracy quickly as they simulate farther into the future.

In normal day-to-day operations, the weather service's National Meteorological Center (NMC) in Camp Springs, Md., runs its long-range forecast model only as far as 10 days ahead. Since the center's Control Data Cyber 205 computer takes more than 20 minutes to simulate each day, the 10-day run eats up more than 3 hours of computer time. Other demands on the machine preclude longer simulations.

But when NMC received a second Cyber 205 in 1986, researchers had a rare chance to run some extended forecasts before normal activities took up the new computer's time. Each day for 108 days, they completed a 30-day simulation of their weather model to see how well it could project atmospheric conditions several weeks ahead.

The results didn't herald an amazing breakthrough in long-range forecasting. In general, the model failed to predict weather changes beyond seven or eight days.

Yet NMC meteorologists found some days when the model forecasts proved especially accurate in foreseeing general conditions, such as above- or below-normal temperatures. "We've seen situations where the model has had substantial skill out to 25 or 30 days. We've also seen situations where it fails to anticipate a change just three days down the road," says Robert Livezey of the center's Climate Analysis Branch. The key now, he says, is to learn how to distinguish the good forecasts from the bad.

In the last five years, some research groups have explored a strategy called ensemble forecasting to help gauge the reliability of model-based forecasts. The technique draws on a basic rule of chaos theory, which experts in the field describe with an oft-repeated phrase that has become their mantra: "sensitive dependence on initial conditions."

In all chaotic systems -- from the weather to the population of African locusts each year -- a slight disturbance can make a world of difference. "A very small perturbation, in due time, can make things happen quite differently from the way they would have happened if the small disturbance hadn't been there," explains Edward N. Lorenz, a meteorologist at Massachusetts Institute of Technology and one of the early workers in the field of chaos. Researchers call this phenomenon the butterfly effect -- a name coined in the early 1970s after Lorenz posed an intriguing question in a lecture: Could the flap of a butterfly's wings over Brazil spawn a tornado over Texas?

The butterfly effect places a real limit on the accuracy of weather forecasting models. Other factors, such as flaws inherent in the models, also sap their predictive power. But theoretically, even a perfect model could not escape the butterfly effect.

From a pessimist's perspective, this suggests model designers are heading down a dead-end street and will never create programs that can predict whether it will rain in Chicago 19 days in the future. That's true. But the current models can improve immensely before they hit the limiting wall of chaos. Lorenz thinks such improvements may someday enable forecasting models to predict weather a week ahead as well as they now do three days ahead.

The butterfly effect stirs up trouble in weather models because they can't start out with an exact picture of the current weather at every point on the globe, but instead must rely on patchwork information. Today's models incorporate 10,000 weather measurements made every 6 hours by weather stations, balloons, aircraft and satellites. But if this sounds like a flood of data, consider that Earth's surface area measures almost 200 million square miles. A set of 10,000 measurements can't possibly represent the weather at every level of the atmosphere over the entire planet. Coverage of remote regions of the Pacific Ocean is especially sketchy. What's more, the measurements themselves introduce many small errors, since the instruments have limited accuracy.

As a computer model begins projecting into the future, the butterfly effect magnifies those initial imperfections. At first the errors remain small, and the simulation provides a fairly reliable forecast. But with time, the errors compound, growing so large that the simulation loses all reliability. Eventually, the forecast bears no more than a chance resemblance to what actually will happen.
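This compounding of tiny errors is easy to demonstrate with a toy model. The sketch below uses Lorenz's own simplified 1963 convection equations with crude Euler integration -- an illustration only, not an operational weather model. Two runs whose starting points differ by one part in a million soon bear no resemblance to each other:

```python
# Toy demonstration of "sensitive dependence on initial conditions"
# using the Lorenz (1963) convection equations -- illustrative only,
# not an operational forecast model.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one time step with simple Euler integration."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, n_steps=3000):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0))
b = run((1.0 + 1e-6, 1.0, 1.0))   # a "butterfly flap": one part in a million
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print("final separation:", round(separation, 2))  # vastly larger than 1e-6
```

The initial difference of 0.000001 grows exponentially until the two simulated "weathers" are effectively unrelated, just as forecast errors do.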

With the strategy of ensemble forecasting, meteorologists hope to foretell whether chaos will quickly cause the model's accuracy to plummet or whether its forecasts will remain on target for many days. This technique relies on running a group of simulations. The important point is that each simulation starts off with a slightly different set of initial conditions -- a spread that reflects the errors in the weather measurements. Palmer calls this "introducing little flaps of butterflies' wings into the initial analysis."

According to the premises behind ensemble forecasting, if the forecasts diverge, offering radically different pictures of the future, that indicates the atmosphere has turned unruly and difficult to predict. If the differences grow very slowly and the forecasts resemble each other for many days, then the weather theoretically should remain predictable.
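Reduced to its bookkeeping, the premise amounts to a spread check. In this hypothetical sketch (all numbers invented), each ensemble member is a list of forecast temperatures, and a large standard deviation among members at the same points flags an unpredictable situation:

```python
# Hypothetical ensemble-spread check: members that diverge widely signal
# an unruly atmosphere; tight agreement signals a predictable one.

def ensemble_spread(members):
    """Average standard deviation across members, point by point."""
    n = len(members)
    spreads = []
    for values in zip(*members):          # same location in every member
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n
        spreads.append(var ** 0.5)
    return sum(spreads) / len(spreads)

# Day-5 temperature forecasts (degrees F) from three members -- invented data.
calm = [[41, 43, 40], [42, 43, 41], [41, 44, 40]]      # members agree
chaotic = [[41, 55, 30], [52, 39, 47], [33, 60, 38]]   # members diverge

print(ensemble_spread(calm) < ensemble_spread(chaotic))  # True
```

A real center would compare full three-dimensional model states, but the logic is the same: small spread, trust the forecast; large spread, distrust it.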

Meteorologists at NMC already use a highly simplified ensemble approach when they issue near-term national forecasts up to five days ahead. Each day, they compare the global projections of the NMC model with those made by models at the European Center and at the United Kingdom Meteorological Office in Bracknell. They weigh the known weaknesses of each model and check to see how well the simulations match each other.

If the three models agree, the forecasters can issue their predictions with confidence, as happened last December when extremely cold weather gripped most of the United States. Although much of the country loathed the deep-freeze, meteorologists relished this remarkably stable and predictable situation that made their jobs much easier. In one instance, agreement among the various models enabled the NMC to issue an accurate forecast for snowfall in Washington, D.C., five days before the storm arrived, says Louis Uccellini, head of the NMC's Meteorological Operations Division, which issues short- and medium-range forecasts.

During January's warm spell, however, the once-steady atmosphere turned unpredictable. Each of the three models offered a different version of the future weather, and one day's simulations on a particular model did not resemble the next day's. Faced with such disagreement, the forecasters lost confidence in the model projections. In one case, the variations among models prevented them from predicting a major Midwest snowstorm until a day or two before it hit.

In coming months, the NMC hopes to improve the way its meteorologists use model agreement in issuing their forecasts. "We're still more subjective than we'd like to be," says Uccellini. Researchers at the center have developed a computer-based analysis that puts a more objective measure on the agreement by showing where the models concur and how quickly they diverge. The analysis also considers other factors that can help predict forecast reliability. In the future, the NMC plans to expand this crude type of ensemble to include forecasts made by models used by the U.S. Navy and Japan's weather service.

Along the hallway of NMC's fourth floor, weather maps and screens displaying satellite images spell out the mission of the near-term forecasting team in a scene evoking TV weather coverage on the evening news. Here, Uccellini and his crew of meteorologists each day compose the forecasts that cover the next five days. But two floors down, the blackboard in Eugenia Kalnay's office bears symbols more reminiscent of a calculus lesson.

Kalnay and her colleagues are exploring chaos theory to devise much more sophisticated ensemble techniques. In the last several years, these researchers have focused on an approach called lagged-ensemble forecasting. This technique saves countless hours of computer time, providing a relatively inexpensive type of ensemble forecasting, explains Kalnay, who heads the NMC's Development Division.

A simple version of a lagged ensemble might include three forecasts -- say, the Wednesday, Thursday and Friday versions of the 10-day simulations run each day at the NMC. Because the three 10-day forecasts start on different days, with different weather information, each provides a unique projection for the next week's weather. By comparing the projections, meteorologists can get a measure of the weather's predictability: The less agreement among the forecasts, the less predictable the weather.
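The bookkeeping behind a lagged ensemble can be sketched with hypothetical numbers: runs started on successive days are aligned to the same valid date, and their disagreement there gauges predictability. All forecast values below are invented:

```python
# Hypothetical lagged ensemble: three 10-day runs started on successive
# days all cover next Monday; their disagreement at that shared valid
# date gauges how predictable next week's weather is.

# Invented daily temperature forecasts (degrees F), one list per run.
runs = {
    "wed": [50, 49, 47, 45, 44, 44, 43, 42, 41, 40],  # started Wednesday
    "thu": [48, 47, 46, 45, 43, 42, 41, 41, 40, 39],  # started Thursday
    "fri": [46, 45, 44, 44, 43, 42, 40, 40, 39, 38],  # started Friday
}
start_offset = {"wed": 0, "thu": 1, "fri": 2}  # days after Wednesday

def value_at(run_name, days_after_wed):
    """Forecast for a given calendar day, taken from the run that covers it."""
    index = days_after_wed - start_offset[run_name]
    return runs[run_name][index]

# Next Monday is 5 days after Wednesday; every run covers that date.
monday = [value_at(name, 5) for name in runs]
disagreement = max(monday) - min(monday)
print("spread for Monday:", disagreement)  # prints 1 -- tight agreement
```

The appeal Kalnay cites is visible here: the three runs already exist from routine daily operations, so the ensemble costs no extra computer time.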

Trials of this approach have yielded some moderately encouraging results, says M. Steven Tracton of NMC. Tracton, Kalnay and their co-workers tested the lagged-ensemble technique on special 30-day simulations they ran each day during the winter of 1986-87. They wanted to see if they could judge the reliability of a particular forecast based on its agreement with the four previous forecasts.

On several days, they found, the degree of agreement did provide an indication of forecast accuracy, especially when the researchers zeroed in on regions instead of analyzing the entire global forecast. Sometimes the simulations would agree about next week's weather for the eastern United States but would disagree about Europe, indicating lower reliability for the Europe forecasts.

Yet the researchers also found many times when the forecast proved wrong despite agreement among the ensemble, contradicting the premise behind ensemble forecasting. Tracton attributes this to flaws in the way the model represents the atmosphere, which make it difficult for the model to predict certain weather situations.

Though the lagged-ensemble approach showed limited success during the test, researchers have more than one trick up their sleeve for gauging forecast reliability. Another promising technique relies on recognizing certain harbingers in the air that might reveal when the atmosphere will turn stable and therefore predictable.

The 30-day winter simulations showed that forecasts grew much more reliable after a particular string of highs and lows developed over the Pacific Ocean and North America. When this pattern weakened, reliability dropped. It's important to learn which patterns signal good times ahead for forecasters, Tracton says. Used in conjunction with ensemble forecasting, this knowledge could offer a powerful tool for judging the predictability of the atmosphere.

At the European Center, which has the world's best track record in weather forecasting, Palmer and his colleagues have experimented with the lagged approach, but they are now concentrating on a more computer-intensive technique that takes its name from the casinos of Monte Carlo.

In the Monte Carlo approach, scientists run a forecasting model many times, beginning each simulation with weather data for the same starting period. The key is that meteorologists change the weather data slightly as they start each simulation, so no two model forecasts begin with the exact same picture of the current weather.
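As a rough sketch of the Monte Carlo idea -- again using the toy Lorenz 1963 convection equations rather than a real forecast model -- one can perturb a single "analysis" with random noise scaled to an assumed measurement error and watch the ensemble members fan out:

```python
import random

# Monte Carlo ensemble on a toy model (Lorenz 1963 convection equations,
# not a real forecast model): every member starts from the same analysis
# plus random noise scaled to an assumed measurement error.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one time step with simple Euler integration."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def forecast(state, n_steps):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

random.seed(1)
analysis = (1.0, 1.0, 1.0)   # the single best estimate of "current weather"
obs_error = 0.01             # assumed size of the measurement errors

members = []
for _ in range(10):
    perturbed = tuple(v + random.gauss(0.0, obs_error) for v in analysis)
    members.append(forecast(perturbed, 2000))

xs = [m[0] for m in members]
spread = max(xs) - min(xs)
print("ensemble spread in x:", round(spread, 2))
```

If the members stay bunched together, the period is predictable; if they scatter, as they eventually do in any chaotic system, confidence in the forecast should drop accordingly.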

Palmer thinks Monte Carlo ensembles may eventually yield the best predictions of forecast reliability because, unlike lagged ensembles, they give meteorologists the crucial ability to adjust the initial differences among the various simulations to reflect errors inherent in the measurements.

Palmer explains the importance of this ability in terms of the butterfly effect: "If the butterfly flaps its wings in a region of the atmosphere that is very unstable, then this disturbance will grow rapidly. But if it flaps in a stable region, then nothing will happen." This means meteorologists must take great care when adjusting the starting data. If they "place" the differences in the wrong geographic location -- i.e., in a stable region -- the Monte Carlo technique will fail because the forecasts will agree even when they aren't accurate.

The key to improving Monte Carlo simulations is learning how to introduce the initial differences into the ensemble, says Palmer. He anticipates several years may pass before he and his co-workers can develop an operational program using the Monte Carlo approach to judge the reliability of forecasts.

Computer limitations in the past have forced the NMC researchers to concentrate on lagged ensembles, but Tracton says the arrival of the new Cray supercomputer will allow them to work on Monte Carlo ensembles in the future. Other groups are exploring this technique at the National Center for Atmospheric Research (NCAR) in Boulder, Colo., and at the NASA Goddard Space Flight Center in Greenbelt, Md. U.S. researchers foresee several more years of work before forecasters can routinely use Monte Carlo, lagged ensembles or a combination of the two.

According to NCAR's Joseph J. Tribbia, scientists working on ensemble forecasting face several important questions. For one, it's unclear whether flaws in the computer models will significantly hamper the technique's success, although Tribbia says he remains optimistic about this approach.

Many researchers regard ensemble simulations and other reliability indicators as the wave of the forecasting future -- a boon to weather watchers with more at stake than whether to carry an umbrella the next day. The general public, accustomed to the confident tone of TV forecasters, may not wish to hear estimates of reliability, which force forecasters to admit when they have little faith in their own predictions. But others are sure to appreciate the advance. Says Uccellini, "I do believe the decision makers in commerce, agriculture, government and other areas will use that information."
COPYRIGHT 1990 Science Service, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 1990, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Article Details
Author: Monastersky, Richard
Publication: Science News
Date: May 5, 1990

