
Consensus forecasts in planning.

MACROECONOMISTS generally summarize the economic outlook by producing projections for a handful of very broad aggregate indicators. On their own, these projections represent only a general template for planners looking at the outlook for a (comparatively) narrowly defined sector of the economy. But as most corporate and strategic planners know, in many industries macro forecasts are regularly used as inputs to the planning process, often to establish a starting point or a broad framework of assumptions within which the more specific problems under consideration can be examined.

For many businesses, product demand in a given market that is sensitive to the strength of economic activity may be well correlated with the behaviour of one or more broad macroeconomic indicators. For example, demand for semiconductor chips in many markets has historically been relatively well correlated with growth in overall industrial production, which is therefore often considered by sector analysts the best indicator to use in predicting future chip demand. One major industrial company likewise focuses on expected industrial production growth in various (mainly European) markets as an indicator of future demand for ball bearings and other products widely used in industrial production processes.

Obviously, obtaining a reliable set of forecasts for a macroeconomic variable in various countries or markets is far from being the whole story: the relationship between industrial production and demand for computer chips may vary quite widely across markets, depending, for example, on the level of technology employed. Information or knowledge that is more specific to the industry, or to the past experience of the individual firm, also will be necessary. Thus, extrapolating historical relationships between demand for a product and a macroeconomic indicator is a widely used approach but is dependent upon the quality of both the interpretation of events and the macro benchmark forecasts used.
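The extrapolation approach described above can be sketched in a few lines: fit a simple least-squares line relating historical demand growth to industrial production growth, then apply a macro benchmark forecast to it. All figures and coefficients below are invented for illustration; a real application would use the firm's own market data.

```python
# Hypothetical illustration: extrapolating a historical relationship between
# product demand growth and industrial production (IP) growth.
ip_growth     = [2.1, 3.4, -0.5, 1.8, 4.0, 0.7]   # % change, past years (invented)
demand_growth = [5.0, 8.1, -2.3, 4.2, 9.5, 1.0]   # % change, same years (invented)

n = len(ip_growth)
mean_x = sum(ip_growth) / n
mean_y = sum(demand_growth) / n

# Least-squares slope and intercept of the line: demand = a + b * IP
b = sum((x - mean_x) * (y - mean_y)
        for x, y in zip(ip_growth, demand_growth)) \
    / sum((x - mean_x) ** 2 for x in ip_growth)
a = mean_y - b * mean_x

# Plug a macro benchmark forecast of IP growth into the fitted relationship
ip_forecast = 2.5
demand_forecast = a + b * ip_forecast
print(f"fitted slope {b:.2f}, projected demand growth {demand_forecast:.1f}%")
```

As the text stresses, the projection is only as good as the stability of the fitted relationship and the quality of the macro forecast driving it.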


In the short term, predictions of the timing of turning points in the economic cycle also can be invaluable in reaching decisions on production, inventory and manning levels, marketing strategies and pricing. In the trough of an economic cycle, weak demand is likely to mean that producers are facing strong competition for the few available orders, are running plant at well below full capacity and have cut inventory and manning levels. In spite of the rising unit labour costs that usually accompany a downturn in output, producers may be under considerable pressure either to cut prices or to offer significant discounts, and profit margins are inevitably squeezed. The question of whether to cut employment further in order to reduce costs, or possibly to close or scrap plant, will depend to a considerable extent on when and from what level the economy is expected to begin recovering. Producers will not wish to find themselves having cut capacity and employment as the economy is about to turn up, and also will wish to be well positioned from a marketing standpoint as demand begins to revive.

The economic cycle in different industrial sectors is frequently out of phase with that of the economy overall, however. In many countries, for example, construction sector activity turns down ahead of demand in the economy as a whole and often leads the revival. Producers of construction-related materials and equipment therefore also will feel the effects of a downturn and the subsequent revival relatively early. On the other hand, business investment often responds more slowly to a recovery in overall output, as producers first take up the excess capacity resulting from recession before investing in new plant. But even so, in examining either the short-term influence of economic cycles or the longer-term outlook, once a general relationship between demand for a particular product and a broad indicator of total output (such as gross domestic product [GDP] or industrial production) has been established, macroeconomic forecasts adjusted for leads or lags can be used to "drive" a more specific model of demand for the individual sector or product.
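The lead/lag adjustment described above amounts to shifting the macro forecast path before feeding it into the sector demand equation. A minimal sketch, assuming (hypothetically) that the sector leads GDP by one period and that the demand equation's coefficients have already been estimated:

```python
# Sketch: "driving" a sector demand model with a macro forecast adjusted
# for a lead.  The GDP path, the one-period lead, and the coefficients
# a and b are all invented for illustration.
gdp_forecast = [1.0, 1.5, 2.2, 2.8]  # % growth forecasts, periods t..t+3

def sector_demand(gdp_path, lead, a=0.5, b=1.8):
    """Project sector demand growth from GDP forecasts shifted by `lead`
    periods; a and b are illustrative fitted coefficients."""
    return [a + b * g for g in gdp_path[lead:]]

print(sector_demand(gdp_forecast, lead=1))
```

A lagging sector (such as business investment in the example above) would instead be driven by earlier periods of the macro path, i.e. a negative shift.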


Over a longer time horizon, the expected relative performance of various economic indicators in different countries can be a useful guide in reaching decisions about the location of production units, distribution networks and marketing investment. Equally, expected developments in relative wage costs and inflation rates may have a significant bearing on investment or other location decisions. One of the problems here is likely to lie in finding forecasts for all the individual countries under consideration that have been produced on as simultaneous and consistent a basis as possible.


Expectations regarding future trends in output, inflation or other macro variables can change quite rapidly over time, suggesting that forecasts for demand growth in different countries made even a few months apart might provide misleading comparisons. The outbreak of the Gulf crisis in August 1990, for example, marked the beginning of a nine-month period during which 1991 growth forecasts for most economies were revised sharply and continuously downwards. In the United Kingdom, where the gathering gloom was compounded by the realization that tight monetary policy was finally beginning to bite, the deterioration in the consensus outlook for GDP growth and manufacturing production was particularly severe.

Such rapid shifts in expectations can obviously pose problems for companies where the planning cycle involves relatively infrequent reviews of the forecasts underlying the plan. A company conducting an annual forecast review for the United States in August 1990, for example, would, by the beginning of 1991, have found itself with a plan based on assumed GNP growth for 1991 of 2 percent. In the meantime, however, the average independent growth forecast had deteriorated to the point where the economy was expected to contract by around 0.3 percent. Changes in expectations of this magnitude, and wars in the Gulf, are thankfully relatively rare occurrences, but even under more normal circumstances, expectations can shift quite rapidly over a few months. Since the beginning of 1992, for example, consensus forecasts for growth in Japanese industrial production have declined from an average of +1.3 percent to the -3.0 percent now being predicted (early June 1992).

Such developments highlight the need for a reliable stream of regularly updated forecasts and close monitoring of shifts in expectations. In such circumstances, a flexible approach to reviewing established plans outside the normal six-month or one-year cycle, and a willingness on the part of business economists to raise the red flag, are clearly important. It should at least be possible to draw the attention of others involved in later stages of the planning process to such developments, even if a full-scale review is impractical. In view of the difficulties that may be involved in disrupting the planning process in this way, however, it is important that the forecasts used to trigger such changes derive from a consistent and credible source. The choice of this source is therefore an important decision.


The choice of forecast source is complicated by the large number and wide diversity of economic forecasting operations. These may be large international consultancy-type firms specializing in economic forecasting and analysis, government or semigovernment institutions such as the OECD, university research units, divisions of major banks or securities firms, or the in-house economic units of large industrial companies. Our company surveys over 180 economic forecasters based in the G-7 countries and Australia every month (of which about 25 are in the United States), and this is by no means an exhaustive list of the available sources. Blue Chip Economic Indicators covers about 50 U.S. forecasters in its principal American panel.

Comparing forecasters' track records is made more complicated by the fact that forecast errors vary in type and can have different consequences for the forecast user. For example, forecasters may correctly predict the direction of change in a series, but get the magnitude wrong (under- or overpredicting investment growth, for example). This kind of forecasting error is, however, probably less damaging to the forecast user than a prediction that gets the direction of change wrong (forecasting a rise when the series in fact falls). From the users' point of view, a forecaster who accurately predicts trends but fails to spot turning points may well deserve a lower rating than another who correctly predicts turning points but has a poorer track record at other times. More generally, a good track record does not guarantee consistent success. The fact that a forecaster performed well in predicting economic developments for one or two years does not mean that he or she will continue to do so. Indeed, some of the more recent evidence from studies of forecasting accuracy (reviewed below) indicates that past success is no guarantee of future accuracy. The problem is compounded when forecasts for a range of different variables are considered. One forecaster may have a better track record on production growth, but a poor record on inflation. These results might be combined or weighted in some way, but how is a percentage error in forecasting inflation to be rated vis-à-vis an absolute error in volume terms in a forecast for housing starts, for example? The relative importance of the different variables will vary from user to user.
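The weighting problem raised above can be made concrete with a small sketch. The error figures below are invented: forecaster A has the smaller RMSE on inflation (percentage points), B on housing starts (thousands of units), and because the two error scales are incommensurable there is no obvious way to combine them into a single ranking.

```python
import math

# Hypothetical forecast errors (actual minus predicted) for two
# forecasters on two variables with very different units.
errors = {
    "A": {"inflation": [0.2, -0.3, 0.1], "housing_starts": [90, -120, 110]},
    "B": {"inflation": [0.6, -0.8, 0.5], "housing_starts": [30, -40, 25]},
}

def rmse(errs):
    """Root mean squared error, the criterion used in the studies cited."""
    return math.sqrt(sum(e * e for e in errs) / len(errs))

for name, by_var in errors.items():
    for var, errs in by_var.items():
        print(name, var, round(rmse(errs), 2))
```

Any single ranking of A against B here depends entirely on how the user weights inflation accuracy against housing-starts accuracy, which is precisely the point of the paragraph above.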


All of this suggests that successfully differentiating among the large number of different forecasts available is a complex and challenging task. One possible solution to this problem of "picking winners" is to use aggregated or consensus forecasts, combining the predictions of a number of different forecasters into a single, mean forecast. The idea of using consensus projections is fairly well established in a number of countries, notably in the United States, where surveys of forecasters have been running for some time. Aside from reducing some of the problems of choice and weighting discussed above, the use of a consensus projection also appeals to many users because it does not rest on one particular view of the way an economy functions, but attempts to capture the information implicit in a range of forecasts. The results of these surveys have also attracted a good deal of academic interest and analysis, and several studies of the merits of consensus forecasting as an approach have been conducted.

Much of this work has concentrated on forecasts produced by various time series methods of extrapolation for individual series, although there have also been other studies comparing econometric and/or judgmental forecasts with the consensus. Most of these studies are based on data for the United States, where a long run of consistent back data is available from the surveys published in Blue Chip Economic Indicators over the past fourteen years.

As regards the accuracy of the consensus, the verdict of most of the academic work in this area has generally been favourable. In his study covering forecasts for seven variables made by twenty-two forecasters over nine years (1978 through 1986), Stephen McNees(1) concluded that "only four of the twenty-two individual forecasters were more accurate than the consensus in more than half their forecasts." For all seven variables weighted equally, the consensus forecasts ranked sixth (out of twenty-three, including the consensus) on the basis of the RMSE (root mean squared error) criterion.

In addition, McNees noted that:

"For any particular variable, the Blue Chip consensus was more accurate than most individual forecasters but less accurate than a minority of varying size depending on the predicted variable . . . Every forecaster, [except one], was more accurate than the consensus for at least one variable but none of the forecasters outperformed the consensus for all seven variables."(2)

Another study(3) comparing seventy-nine individual forecasts of six macroeconomic variables with the group mean found that, on average, the consensus was more accurate than around three-quarters of the individual forecasts, although again this proportion varied depending on the variable considered. On the basis of this evidence, which is broadly consistent with our own experience, it seems reasonable to assume that for some variables some of the individual forecasts making up the consensus will prove to be more accurate than the group mean when the results become known. However, the problem for a user of external forecasts remains how to determine in advance which individual forecasters will be more accurate. This would be a relatively simple task if some forecasters were clearly superior to the others and consistently achieved better results.
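The comparison underlying these studies is straightforward to reproduce in miniature: compute the group-mean ("consensus") forecast, then count how many individual forecasters beat it against the eventual outcome. The forecast panel and outcome below are invented for the sketch.

```python
# Sketch of a consensus-vs-individuals accuracy comparison.
# Eight hypothetical forecasts of GDP growth (%) and an invented outcome.
forecasts = [2.5, 1.8, 3.1, 2.0, 2.9, 1.5, 2.7, 2.3]
outcome = 2.3

consensus = sum(forecasts) / len(forecasts)       # group-mean forecast
consensus_err = abs(consensus - outcome)
beat = sum(1 for f in forecasts if abs(f - outcome) < consensus_err)

print(f"consensus {consensus:.2f}, beaten by {beat} of {len(forecasts)}")
```

In this invented panel only one forecaster out of eight beats the consensus, which is in the same spirit as the roughly three-quarters result cited above; of course, identifying that forecaster in advance is the hard part.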

In fact, the evidence on this question is rather mixed. Victor Zarnowitz(4) examined forecasts submitted to the survey conducted by the American Statistical Association (ASA) and the National Bureau of Economic Research (NBER) from 1968 to 1979, and concluded (by comparing rank correlations of relative RMSEs across variables and forecast horizons) that "a small number of the more regular participants in the ASA-NBER surveys did perform better in most respects than the composite forecasts from the same surveys."

On the other hand a later analysis conducted by Roy Batchelor of the City University Business School(5) in London concluded that there were "no significant differences in the accuracy rankings of individual forecasters." This conclusion supports the argument that, without the benefit of hindsight, it is extremely difficult to pick out an individual forecaster who is likely to outperform the consensus across a range of variables and time horizons. As noted above, however, for certain variables considered in isolation the evidence does suggest that selected forecasters can perform consistently well.


There are a number of problems involved with the use of consensus forecasts. One is the choice of which forecasters to include in the consensus. However, given the competitive nature of the forecasting business (large numbers of suppliers, fairly standardized products, very low or nonexistent barriers to entry, etc.) inaccurate forecasters, or those lacking professional credentials, might be expected to be driven out of business, leaving a group of forecasters producing work of a similar quality. This is supported by the Batchelor study, which finds no evidence of significant differences in forecasters' track records. In a separate study,(6) Batchelor also finds that, perhaps because of this high level of competition in the forecasting business, some forecasters may attempt to differentiate their work by deliberately adopting a stance that is either pessimistic or optimistic in relation to their peers. Far from moving towards the consensus, some forecasters display "variety seeking" behaviour and attempt to distance themselves from the middle ground to some extent. Those that are determinedly optimistic year after year will almost certainly, at some stage, be proved correct when the outcome is better than the consensus predicted. Intuitively, this also ties in with the results showing that few forecasters beat the consensus consistently; neither the optimists nor the pessimists can always be right. This kind of behaviour probably reflects the fact that forecasts, like other types of information, are themselves a marketable commodity. From some perspectives, the middle ground may appear less valuable or interesting and thus more difficult to sell commercially. Thus accuracy may not always be the only consideration for the forecast producer, given that he is operating in a competitive market.

This leads to another caveat regarding the interpretation of consensus projections. The range or spread of different forecasts, which is often measured by the standard deviation of the sample, is frequently used as a measure of the "risk" or uncertainty attached to a consensus forecast. Clustering around the mean might, however, produce a range of forecasts that considerably understates the wide dispersion of likely outcomes, with the result that the deviation in the sample is considerably lower than the "risk" inherent in the forecast. This is reflected in the fact that the actual outcome for a particular variable is frequently outside the range of forecasts. In our experience, we have noted that the dispersion of forecasts may also vary widely from country to country. For example, the forecasts for the French economy produced (on a monthly basis) by a group of around sixteen French-based forecasters over the past two years have typically been much more closely grouped around the mean than those produced by a similar group of United States forecasters looking at the American economy. This may reflect structural differences between the two economies (the French economy may be more predictable, for example) or it may reflect more widespread attempts at product differentiation in the U.S. forecasting industry. So caution should be exercised when using forecast ranges to assess the uncertainty attached to the consensus. As always with a table of comparative forecasts, moreover, the astute analyst will endeavour to look past the numbers at the reasoning that lies behind them.
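The caveat about dispersion can be illustrated directly: a tightly clustered panel produces a small standard deviation even when the eventual outcome lies entirely outside the forecast range. The numbers are invented.

```python
import statistics

# Tightly clustered hypothetical forecasts (% growth) and an invented
# outcome that falls outside the whole range -- the sample standard
# deviation badly understates the true uncertainty.
forecasts = [2.0, 2.1, 2.2, 2.2, 2.3, 2.4]
outcome = 0.8

spread = statistics.stdev(forecasts)
outside = outcome < min(forecasts) or outcome > max(forecasts)
print(f"stdev {spread:.2f}, outcome outside forecast range: {outside}")
```

Here the standard deviation suggests an uncertainty of barely 0.15 percentage points, while the actual miss is well over a full point, which is why the text cautions against reading the spread of forecasts as a measure of risk.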

Michael R. Sykes is a Director of Consensus Economics, Inc., London.


1 Stephen McNees, "The Tyranny of the Majority," New England Economic Review, Federal Reserve Bank of Boston, Nov/Dec 1987.

2 Ibid.

3 Victor Zarnowitz, "The Accuracy of Individual and Group Forecasts from Business Outlook Surveys," Journal of Forecasting, Vol. 3 (Jan-March 1984).

4 Ibid., pp. 23-24.

5 Roy A. Batchelor, "All Forecasters Are Equal," Journal of Business and Economic Statistics, 1990.

6 Roy Batchelor and Pami Dua, "Conservatism and Consensus-Seeking Among Economic Forecasters," Paper presented to the Ninth International Symposium on Forecasting, Vancouver, June 1989.
COPYRIGHT 1993 The National Association for Business Economists
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Author: Sykes, Michael R.
Publication: Business Economics
Date: Jan 1, 1993