
The effect of underforecasting on the accuracy of revenue forecasts by state governments.

In the budget process of any unit of government, the revenue forecast sets the parameters for the allocation of dollars among competing priorities. Because revenues are typically forecast 18 to 24 months prior to the beginning of each fiscal year, there is the potential for substantial error. If revenues are overestimated, disruptive midcourse corrections must be made. The recent recession in FY91 forced many states to increase revenues or cut spending in midyear because actual revenues (and spending) were out of line with earlier forecasts. The most notorious examples were the states of California, Connecticut, and New Jersey, where well-publicized disputes between governors and legislatures ensued after the discovery of huge budget shortfalls.

In this article, we will examine the proposition that state governments have consistently underforecast revenues every budget period in order to provide a cushion in the event of an unanticipated downturn in economic conditions. We will evaluate the extent and degree of underforecasting in all states during periods of economic expansion as well as periods of recession over an 18-year period.

First, we will explain how past research has treated forecasting error, and how our theory of underforecasting fits within the much wider context of previous research on revenue forecasting. Second, the hypothesis that states cushion their forecasts by underforecasting revenues will be tested by comparing data on forecast errors that were provided by state governments from FY87 through FY92, in addition to considerable evidence from states that provided us with forecast data for earlier years as well.

Explanations of Revenue Forecast Errors from Past and Current Research

The general focus of recent work on revenue forecasting has been on improving forecast accuracy, such that the smallest possible difference (or error) results between the revenues that are forecast and the revenues that are collected. Most of the existing research consists of improving forecasting models so that all of the factors that influence revenue collections are taken into account. This is a reasonable pursuit; indeed, such efforts have made an important contribution to improving the reliability of forecasts. Despite the dedicated efforts of researchers and forecasting professionals to incorporate into their models all factors that have the potential to influence revenue streams, any single forecast may be wrong (Vasche and Williams, 1987, 66), sometimes significantly (Roberds, 1988). When revenues are overestimated, program cuts or revenue increases may be necessary (Schroeder, 1982, 122). In the case of underestimates, revenues exceed expectations and the door is open to criticisms of excessive taxation (Vasche and Williams, 1987). In either case, inaccurate estimates have the potential to cause nightmares for both government officials and political leaders.

Recent advances in forecasting technology, while significant in their own right, have focused on evaluating the impact of a wide variety of economic, political, and institutional factors on forecast accuracy (e.g., Bahl, 1980; Bretschneider & Gorr, 1987). For example, the relationship between unemployment and revenues is fairly clearly established (Belongia, 1988; Kamlet, Mowery, and Su, 1987). If the timing of an economic downturn is misforecast, the effect of unemployment on revenues in any single year may be substantially misforecast as well. This type of error evens out over the long run, since it tends to be equally likely that the economy will perform better than the forecast or worse than the forecast in any particular year.

The presence of uncertainty in revenue forecasts means that it is in the best interests of revenue forecasters to provide themselves with a cushion to guard against a drop in revenues that is unanticipated (Rubin, 1987). If forecasts of revenues incorporate a safety valve in the form of a cushion, then revenues will be consistently underestimated every forecast period, whether the forecast reflects the expectation of expansion or one of decline. This is likely to occur because the costs associated with underestimating revenues are considerably less than the costs associated with overestimating revenues. If the forecaster overestimates revenues, the consequences are serious: programs must be cut, implying that services will be curtailed, or taxes must be raised, in which case political and administrative costs are incurred. If the forecaster underestimates revenue, however, no such consequences must be confronted.

Consider the consequences of two choices that offer very different risks to a state government: underforecasting revenues or using the "best-estimate" as the official forecast. If the best-estimate choice is adopted, unanticipated additional revenues are about as likely as a revenue forecast that does not materialize. If, on the other hand, the forecaster adopts an underforecast of revenues as the official forecast, the chances of needing to make a midcourse correction decline as a function of the size of the underestimate. Which choice do state governments make? Our thesis is that, over the long run, state governments will make the less risky choice of underestimating revenues.
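The asymmetry between these two choices can be illustrated with a short simulation. The Python sketch below is purely illustrative (the 2-percent standard deviation for revenue surprises is an assumption of ours, not a figure from our data); it estimates how the probability of a midyear shortfall falls as the cushion grows.

```python
import random

def shortfall_prob(cushion_pct, n_trials=100_000, sd=2.0, seed=1):
    """Estimate the probability that actual revenues fall below the
    official forecast when the best estimate is shaded down by
    cushion_pct percent. Revenue surprises around the best estimate
    are modeled as normal with the given (assumed) standard deviation."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n_trials):
        surprise = rng.gauss(0.0, sd)   # actual minus best estimate, in percent
        if surprise < -cushion_pct:     # actual came in below the official forecast
            shortfalls += 1
    return shortfalls / n_trials

best_estimate_risk = shortfall_prob(0.0)   # roughly one year in two
cushioned_risk = shortfall_prob(2.5)       # far less often
```

Under these assumptions, a best-estimate forecast comes up short roughly half the time, while a 2.5-percent cushion cuts the shortfall risk to roughly one year in ten.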

A variety of forces contribute to the decisions of state governments to underestimate revenues. First, if a state always uses a best-estimate forecast of revenues during sustained periods of economic growth, no surplus will be created to cover the revenue gap that is encountered during a recession. Second, the presumptive norm of the balanced budget predominates at the state level. Even more important than the constitutional/legal requirements are the constraints imposed by financial markets; overforecasting revenues has the potential to jeopardize bond ratings, resulting in significant political and financial costs. Third, a few states have placed legal constraints on spending that have had the same effect as revenue underforecasts. Some states, for example, limit appropriations by the legislature to 95 percent of the revenues that were officially forecast.

Many state officials that we interviewed said that the low-end forecast was almost always accepted as the official forecast. Using the high-end forecast is seen as foolish, accepting the best-estimate forecast as risky, and endorsing the low-end forecast as fiscally responsible. One budget official summarized the challenge that confronted him during each forecast period as follows: "I am a hero when there is more money than I predicted and a villain when there is less. Let me tell you, it is much better to be a hero than a villain."

The hypothesis presented in this article that state governments have consistently underforecast revenues is certainly not new. Studies of forecasting practices in a wide variety of states (Frank and Gianakis, 1990; Klay, 1983; Albritton and Dran, 1987) have found that revenues of the states are consistently underestimated. Among seven studies that have evaluated the percent errors in revenue forecasts for one or several states, six studies found revenues were underestimated (Cargill & Walker, 1981; Cassidy, Kamlet, & Nagin, 1989; Gentry, 1989; Heins, 1975; Shkurti & Winefordner, 1989; Vasche & Williams, 1987), while only one found revenues had been overestimated (Bretschneider & Gorr, 1992). The research evidence thus lends strong support to the proposition that state governments consistently underestimate revenues.

Prior studies have done a thorough job of isolating factors--such as unemployment--that affect the predictability of revenues, and this has added substantially to the ability of jurisdictions to forecast with precision. But even when factors that explain the variation in forecast errors have been accounted for, our hypothesis is that there will still be a difference between forecasted revenues and actual revenue collections that can only be explained by the very rational choice to underestimate revenues in order to provide a cushion against a recession that is unanticipated.

During recessions, the average of the unexpected reduction in revenue streams should theoretically be offset by the average size of the revenue underestimate. In contrast, the average forecast error for states should be theoretically equal to the size of the underforecast during periods of economic expansion. For these reasons, we would expect forecasts to be significantly more accurate during recessions than during periods of economic expansion.

These expectations will not necessarily hold for the experience of any single state during any particular time period. For example, a state that is cyclically sensitive to shifts in economic conditions may, as a matter of rule and past experience, underforecast revenues by a larger magnitude than states that are cyclically insensitive. Thus, some states will have more accurate forecasts during recessions than other states, for a variety of local circumstances and conditions. Our expectation of large error during expansions and little error during recessions, however, should be soundly supported in a cumulative analysis for all states to the extent that states have systematically underforecasted revenues during every budget cycle.

Our study of revenue forecast errors for the 50 states builds on the findings of prior studies in several ways. First, prior studies have examined forecast errors during recessions or during periods of sustained growth, but not both. Because the effects of underforecasting will differ depending on the economic conditions that exist, it is critical to evaluate the accuracy of revenue forecasting during both periods of economic growth and recession. Second, while prior studies have analyzed the forecast errors for only half the states (or less), we will present an analysis of forecast errors for all 50 states. Third, prior studies have focused on explaining the variation in forecast errors (Bretschneider, Gorr, Grizzle, & Klay, 1989). Our interest is in identifying the size of forecast error that is due to systematic underforecasting.

The hypothesis of underforecasting advanced here differs from the hypotheses of "bias" that are central to many of these other studies. Regardless of the forecasting procedures, political ideologies, geographical location, or economic situation in any particular state during any particular time period, we predict that revenues will be systematically and routinely underforecasted. The magnitude of the underestimate will obviously not be the same for every state. As noted, the aforementioned limits on expenditures relative to the forecast that exist in states such as Florida and Oklahoma provide an additional cushion that ameliorates the need for a large underforecast of revenues. Conversations with state budget officials have suggested to us the corollary proposition that the size of the revenue underestimate is likely to be much larger than has been previously acknowledged in the forecasting literature.

Analysis of Mean Error in Revenue Forecasts

Next we will evaluate the extent to which state governments underestimate revenues when formulating their budgets. The expectation that states systematically underestimate revenues is entirely consistent with the reasons state budget officials give for why revenue cushions represent sound fiscal practice. Previous studies have reported evidence that revenues are systematically underestimated, and we expect to find the same result. We also expect that forecasts during the recent recession will prove more accurate than those during the prior period of sustained growth. To test our expectations, we will analyze data on the accuracy of revenue forecasts for state governments from FY87 to FY92. Data on the overall accuracy of revenue forecasts by state governments for the FY75 through FY86 time period will also be analyzed and discussed.

Accuracy of State Revenue Forecasts from FY75 to FY92

The hypothesis that states cushion their forecasts of revenues was tested using the cumulative estimates of forecast errors in three different ways. First, the mean forecast error for all states and all time periods was derived. Second, the mean forecast error was presented for fiscal years 1975 through 1992. Support for the hypothesis of underforecasting would be shown if the sign of the forecast error was more likely to be positive than negative, and if the mean forecast error was significantly different from zero. Third, the mean forecast error for each state was derived, with the expectation that the number of states with a mean forecast error that was positive (indicating an underforecast) would be significantly greater than the number of states with a mean forecast error that was negative (indicating an overforecast). Forecast errors were also evaluated in light of the prediction that forecasts during the two recent recessions would be considerably more accurate than the forecasts during periods of sustained economic growth.

Overall Results: Direct contact with the 50 states resulted in the collection of 336 forecast observations that covered the period from FY75 through FY92. Most of the forecast data pertained to the FY87-FY92 period, but a few states provided data on forecast errors from as early as FY75. Of the 336 forecasts, 225 (or 67 percent) were positive, a difference which was highly significant according to the sign test. The mean forecast error was 2.1 percent (number of forecasts = 336) for all states and all time periods. The size of the underforecast was positive, as predicted, and significantly different from zero according to the T test.
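The sign test reported above can be reproduced from the counts alone. The Python sketch below computes the exact two-sided binomial p-value, under the null hypothesis that under- and overforecasts are equally likely:

```python
from math import comb

def sign_test_two_sided(n_positive, n_total):
    """Exact two-sided sign-test p-value under the null hypothesis
    that positive and negative forecast errors are equally likely
    (binomial with p = 0.5)."""
    k = max(n_positive, n_total - n_positive)
    upper_tail = sum(comb(n_total, i) for i in range(k, n_total + 1))
    return min(1.0, 2 * upper_tail / 2 ** n_total)

# 225 of the 336 forecast errors in our data were positive
p_value = sign_test_two_sided(225, 336)   # far below any conventional threshold
```

With 225 positives against an expectation of 168, the result is significant at well beyond the 0.001 level.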

Accuracy of Revenue Forecasts by Time Period: Recall that one of the justifications for underforecasting revenues every budget period is to prevent revenue shortfalls during recessions. Since the timing of recessions is difficult to predict, revenue underforecasts should occur every budget period. The decision to underforecast was thus predicted to have a different impact on the average of forecast error depending on the economic conditions that prevail: we predicted large errors during expansions and small errors during recessions.

Mean forecast errors over the 18-year period from FY75 through FY92 are reported in Table 1. As expected, a vast majority of forecast errors were positive: 14 out of 18. Although the forecasts included several spells of economic recession, there was still more evidence of revenue underestimating.
Table 1
Mean Revenue Forecast Error by Fiscal Year, 1975-1992


Fiscal       Forecast     Number of     Growth in      Months in
Year(a)      Error       Forecasts    Real GDP(b)     Recession(c)
FY92          1.3          40           2.1                0
FY91         -0.6          48          -1.3                8
FY90          1.4          45           1.6                0
FY89          4.7          43           2.9                0
FY88          4.4          33           4.2                0
FY87          1.0          30           2.9                0
FY86          0.5          25           3.1                0
FY85          1.8          14           2.7                0
FY84         -0.6           8           6.6                0
FY83         -7.3           8           3.1                4
FY82         -2.9           6           1.9               12
FY81          0.1           6           3.0                0
FY80          2.9           5          -1.4                6
FY79          9.3           5           2.1                0
FY78          9.0           5           5.2                0
FY77          3.9           5           4.6                0
FY76          5.5           5           5.6                0
FY75          7.7           5          -2.4                8


(a) Assumes that states share a common fiscal year, beginning July 1.
Four states have fiscal years that begin on other dates: Alabama and
Michigan (October 1), Texas (September 1), and New York (April 1).
(b) Based on data provided by the Congressional Budget Office. In order
to make the data for real GDP growth match precisely to the timing of
fiscal years for most states, the data in this column represent the
growth in real GDP from the second quarter of one calendar year to the
second quarter of the next. For example, the 2.1 percent figure entered
for FY92 represents the real growth in GDP between the second quarter
of 1991 and the second quarter of 1992.
(c) The number of months in recession were taken from the information
provided by the National Bureau of Economic Research as follows:


November 1973-March 1975         Recession
March 1975-January 1980          Expansion
January 1980-July 1980           Recession
July 1980-July 1981              Expansion
July 1981-November 1982          Recession
November 1982-July 1990          Expansion
July 1990-March 1991             Recession
March 1991-December 1992         Expansion


Various methods exist for identifying the timing of a national recession. In our case, the important thing is to be able to compare the timing of recessions with the performance of state revenue forecasts. We will use two generally accepted (and related) methods that are in widespread use for determining the timing of recessions: (1) the rate of real growth in gross domestic product (GDP) during each fiscal year in our study and (2) the timing of peaks and troughs in the business cycle used by the National Bureau of Economic Research (NBER).

Generally speaking, if no growth (or negative growth) in real gross domestic product occurs during an entire year, that year can clearly be identified as a year of recession. For purposes of our study, GDP growth was negative during 4 of the 18 fiscal years: FY91, FY82, FY80, and FY75 (Table 1). The mean forecast error, weighted by the number of forecasts during each of these four fiscal periods, was 0.1 percent.

In contrast, there were 14 years when gross domestic product grew. The mean forecast error, weighted by the number of forecasts during each of these fiscal years, was 2.2 percent. According to a T test, the difference in forecast errors during periods of recession and expansion was significant.

Taking the NBER identification of peaks and troughs in the business cycle yields almost identical results. According to the NBER, there were 38 months of recession reflected in our data; these occurred during FY75, FY80, FY82, FY83, and FY91. The mean forecast error, weighted by the number of recession months in each of these fiscal years, was 0.3 percent.

During this same period, there were 178 months of economic expansion (some expansion occurred in every fiscal year except FY82). The mean forecast error, weighted by the number of months when the economy was expanding during each fiscal year, was 2.8 percent. As expected, the mean forecast error during periods of expansion was positive and quite large, and near zero when the economy was in a recession.

In summary, the two methods yielded essentially the same result. Using the first method, we estimated a forecast error of 0.1 percent for recessions and 2.2 percent for expansions. Using the second method, we derived an estimate of 0.3 percent for recessions and 2.8 percent for expansions. Our best estimate of forecast error is the average of the two methods: 0.2 percent during recessions [(0.3 + 0.1)/2] and 2.5 percent during expansions [(2.8 + 2.2)/2]. The best estimate of forecast error during expansions was thus positive (reflecting an underforecast) and relatively large, while forecasts during recessions were nearly perfect.
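The GDP-based figures can be recomputed directly from Table 1. The Python sketch below weights each fiscal year's mean error by its number of forecasts, splitting years by the sign of real GDP growth:

```python
# (forecast error, number of forecasts) for each fiscal year, from Table 1.
# Recession years are the four with negative real GDP growth.
recession_years = {"FY91": (-0.6, 48), "FY82": (-2.9, 6),
                   "FY80": (2.9, 5),   "FY75": (7.7, 5)}
expansion_years = {"FY92": (1.3, 40), "FY90": (1.4, 45), "FY89": (4.7, 43),
                   "FY88": (4.4, 33), "FY87": (1.0, 30), "FY86": (0.5, 25),
                   "FY85": (1.8, 14), "FY84": (-0.6, 8), "FY83": (-7.3, 8),
                   "FY81": (0.1, 6),  "FY79": (9.3, 5),  "FY78": (9.0, 5),
                   "FY77": (3.9, 5),  "FY76": (5.5, 5)}

def weighted_mean_error(years):
    """Mean forecast error weighted by the number of forecasts per year."""
    total_error = sum(error * n for error, n in years.values())
    total_forecasts = sum(n for _, n in years.values())
    return total_error / total_forecasts

recession_error = weighted_mean_error(recession_years)   # about 0.1 percent
expansion_error = weighted_mean_error(expansion_years)   # about 2.2 percent
```

Run against the Table 1 figures, the weighted means come out at roughly 0.1 percent for the recession years and 2.2 percent for the expansion years.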

This result was derived despite the unique character of forecasting during periods of extended inflation, as occurred during the 1970s. The magnitude of underforecasts during the 1970s was, on average, larger than during the expansionary period of the mid to late 1980s. This can be attributed, in part, to a rate of inflation which was consistently higher than the forecasts had anticipated during the 1970s and early 1980s. Higher prices during this period immediately translated into increased revenues from sales taxes.

Error of Revenue Forecasts by States: Evidence that there is a consistent tendency for states to underforecast revenues should also be seen in the average forecast error by states over multiple forecast periods. The mean forecast error for each of the 50 states is shown in Table 2. Forecast errors ranged from an underforecast of 12.2 percent in Alaska to an overforecast of -3.5 percent in Rhode Island.
Table 2
Mean Error of Revenue Forecasts by State, FY 1975 to FY 1992


            Percent Error   Forecast Periods    Standard Deviation
Alabama           7.3              8                5.2
Alaska           12.2              7               13.9
Arizona          -3.0              7                 3.5
Arkansas          4.5              5                 3.0
California        0.9             14                 6.4
Colorado          3.2              4                 1.4
Connecticut      -3.0              4                 6.9
Delaware          3.6              7                 4.7
Florida          -0.7             18                 5.7
Georgia           0.4              7                 2.7
Hawaii           11.0              4                 2.8
Idaho             5.9              4                 6.0
Illinois          0.2              8                 2.4
Indiana           1.9             12                 5.5
Iowa              0.4              4                 6.5
Kansas            3.2              6                 3.5
Kentucky          0.3             18                 7.8
Louisiana         3.2              3                 3.2
Maine             0.4              5                11.8
Maryland          0.8              5                 2.7
Massachusetts    -2.5              3                 3.8
Michigan          1.7              6                 5.0
Minnesota         1.0              3                 1.5
Mississippi       1.1              5                 2.3
Missouri          2.3              6                 3.2
Montana           0.7              7                 8.6
Nebraska          8.4              6                 4.2
New Hampshire     4.1              7                 6.9
New Jersey        1.6              7                 5.8
New Mexico        0.0             12                 7.6
Nevada            6.9              9                 7.1
New York         -0.1              8                 1.6
North Carolina    1.7              2                 2.0
North Dakota      5.5              3                 4.7
Ohio              1.1              9                 4.0
Oklahoma          3.5              4                 3.2
Oregon            1.8              8                 5.3
Pennsylvania      3.1              7                 5.7
Rhode Island     -3.5              4                 6.2
South Carolina   -2.8              6                 3.4
South Dakota      4.2              7                 5.2
Tennessee         0.8              7                 3.1
Texas             3.2              9                 3.6
Utah              3.9              7                 4.6
Vermont           3.0              7                 7.8
Virginia          3.6              5                 2.2
Washington        2.1              9                 3.6
West Virginia     0.3              7                 2.7
Wisconsin         1.5              3                 0.6
Wyoming           2.7              3                 2.1


Mean forecast error %                                2.1
Number of forecasts                                  336
Standard deviation of means                          3.3
Percent forecast error = 100 x (Actual revenues - Forecasted revenues) / Actual revenues


Notes:
Positive signs represent underforecasts, and negative signs represent
overforecasts.

The number of forecast periods varies as a function of the availability
of revenue data for the FY75-FY92 period. Calculation of the mean percent
forecast errors was based on all of the data that were available for each
of the states during the 18-year period.


Average forecast errors were positive in 80 percent of the states (40 out of 50). Strong evidence of overforecasting was found in the northeastern states during the recent recession, with large overforecasts in Rhode Island (-3.5 percent), Connecticut (-3.0 percent), and Massachusetts (-2.5 percent). Northeast states suffered a more severe recession than most of the rest of the country, so revenues were considerably less than econometric forecasts had anticipated.

New York had a tiny overforecast, -0.1 percent, and thus showed excellent long-run accuracy. Its performance as a revenue forecaster was the second best in the country. New York, however, routinely engages in short-term borrowing to cover operating expenses. With this added flexibility, its revenue collections can be boosted by borrowing. Florida had a mean overestimate of -0.7 percent, but this is less relevant when we consider that Florida was one of two states with a statute that permits only 95 percent of the revenues forecast to be appropriated by the legislature.

The effects of the recent recession on the direction of errors are illustrated in Table 3. Sixteen out of 45 states (or 36 percent) had revenue overforecasts during the partial recession year in FY90. By FY91, the recession was in full force and the proportion of states that had overforecasted revenues increased to 56 percent (27 out of 48 states).
Table 3
Percent Error in Revenue Forecasts by State, FY 1987 to FY 1992


State                 FY92    FY91    FY90    FY89     FY88    FY87
Alabama                         0.9    3.8    14.8     13.9     5.6
Alaska                 13.6    18.7   10.7    18.9     33.5   -15.5
Arizona                3.7     -6.7   -0.7    -3.9     -7.1    -4.4
Arkansas               0.9      4.7    4.6     8.2      5.9
California            -7.5     -8.4    0.9
Colorado               5.5      1.8    2.4     3.1
Connecticut            5.5     13.5   -3.5    -0.7
Delaware               5.4     -6.9    2.2     8.8      4.6     6.4
Florida               -5.9     -9.3   -4.6     1.0      0.7     0.1
Georgia               -2.2     -2.3   -3.3     3.3      2.0     1.9
Hawaii                          7.6    9.6    15.1     11.8
Idaho                 -1.7      2.5   14.0     8.8
Illinois              -3.5     -1.6   -1.3     1.4      1.1     0.6
Indiana               -1.9     -7.2    1.7     4.0      1.0    -2.0
Iowa                  -6.1     -4.7    2.4    10.2
Kansas                 0.5      1.9   -1.4     7.8      7.9     2.5
Kentucky              -3.7     -1.3    0.1     0.4     -5.5    -3.9
Louisiana                       5.3    5.6    -1.3
Maine                  3.8    -14.9   -7.5    15.6     12.7
Maryland              -2.0     -2.2    0.0     3.8      4.1
Massachusetts                   1.1   -7.7     1.0
Michigan              -1.5     -6.7    5.6              3.5     5.1
Minnesota              3.2     -0.4    0.3
Mississippi            2.2     -2.2   -0.7     4.5      1.8
Missouri              -5.7     -5.5   -2.5     0.2     -2.3     2.1
Montana                2.1      6.9   11.1     7.1      0.8   -15.3
Nebraska               3.1     12.9    9.7    11.0     11.6     2.1
New Hampshire          8.5      0.4   -7.0     3.2      1.0    16.7
New Jersey             9.0     -5.5   -7.1    -1.5      5.7     4.9
New Mexico             1.8     -1.3    3.6     5.9      4.5    -7.4
Nevada                -5.6     -0.2    6.4
New York               0.1      1.1   -0.6    -3.5      1.5     0.2
North Carolina         3.7     -0.2
North Dakota(a)        2.7     12.1            1.6
Ohio                   0.9      1.6    1.7             -4.7
Oklahoma               1.0      2.1    6.2     6.8
Oregon                          3.5            5.6              6.6
Pennsylvania          15.5     -4.8    1.5     2.7      1.1     3.4
Rhode Island                   -6.7  -11.1    -1.5      5.5
South Carolina                 -8.7   -3.8     1.3      1.2    -3.2
South Dakota           7.9      4.1    0.5    15.2      1.6     0.0
Tennessee             -0.3     -5.6   -0.3     1.6      2.1     3.2
Texas                           2.9            2.9              0.0
Utah                            4.5    8.7     8.9      1.1     8.3
Vermont                       -12.1    1.7     6.3      12.8   12.0
Virginia               3.9     -0.6            4.6      4.8     5.5
Washington            -0.8      7.5    8.6     0.5      3.8     1.9
West Virginia         -2.0      6.3    1.3     2.0      -3.4   -4.0
Wisconsin              2.2             0.8     1.7
Wyoming                5.5             2.0
Mean forecast error    1.3     -0.6    1.4     4.7      4.4     1.0
Number of forecasts  40         48     45      43       33      30


Percent forecast error = 100 x (Actual revenues - Forecasted revenues) / Actual revenues

Notes: Positive signs represent underforecasts, and negative signs represent overforecasts.

(a) FY92 estimate for North Dakota is for the 91-93 biennium.

Sources: Forecasts of revenues and revenue collections were extracted from budget documents and information provided by each state. For selected states, some of the revenue data were taken from published studies.
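The error measure used throughout Tables 2 and 3 is straightforward to compute; a short Python sketch (the dollar figures in the example are hypothetical):

```python
def percent_forecast_error(actual, forecast):
    """Percent forecast error as defined in Tables 2 and 3:
    100 x (actual - forecasted) / actual. Positive values indicate
    an underforecast; negative values indicate an overforecast."""
    return 100.0 * (actual - forecast) / actual

# A state that forecast $97 billion and collected $100 billion
# underforecast revenues by 3 percent (hypothetical figures).
example_error = percent_forecast_error(100.0, 97.0)
```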

The effects of the recession hit some states much harder than others. Consider, for example, California's forecasts. The mean revenue forecast error in California over a full 14-year period was 0.9 percent (Table 2). The impact of the recession and other economic ills (such as defense cutbacks) on California's economy is clearly evident in Table 3. Even with California's long-run tendency to underestimate revenues, the revenue overforecast was -8.4 percent in FY91 and -7.5 percent in FY92. The recession hit California particularly hard, and the size of these revenue overestimates is a stark reflection of its toll.

Or consider Oregon's experience with forecasting. Oregon has had a "kicker law" since 1979. Under this statute, if actual revenues collected from the personal income tax or corporate income tax exceed the forecast by 2 percent or more, an automatic credit in the amount of the overage is applied against the tax liability of corporate or individual taxpayers in the next tax year. This was Oregon's version (at the state level) of the "tax revolt" that coincided with the more publicized Proposition 13 revolt in California. The law has apparently served to ameliorate the tendency to underforecast. Officials from the Budget and Management Division estimated that over the past 10 years, Oregon has returned approximately 5 percent as a tax credit resulting from revenue overages. (The state legislature suspended the practice for the last two years, even though the forecasting record would have dictated that the tax credits be made.) Even so, the mean revenue underforecast in Oregon over the past three biennia (FY87, FY89, and FY91) averaged 5.2 percent (Table 3).

Discussion

Studies that have evaluated the influence of economic factors on forecast errors have consistently suggested that forecast error has a strong, persistent relationship with economic conditions. The association between forecast errors and economic conditions found here was also strong, and in a direction which, in light of previous research, was entirely expected. When the economy was strong, the average revenue forecast error was large; when the economy was weak, the average error was tiny.

The opposite result would have been found if states had consistently used their best estimate of revenues as their official forecast. Overestimates would have been just as likely to occur as underestimates during periods of sustained growth, when there were no abrupt and unanticipated shifts in economic conditions. This means that if forecasts had been based on best estimates of future revenue streams, the average forecast error across the 50 states during periods of prosperity would have been quite small.

We derived a quite different result, however. Forecast errors across periods of economic growth were consistently positive and occasionally quite large. Forecasting officials consistently told us that it was no secret that underforecasting was standard operating procedure. The finding that revenues were underforecast by an average of 2.1 percent is thus entirely consistent with the reports from the state officials we interviewed, who said revenue underestimates in their states ranged from a low of 2 percent to a high of 5 percent.

We found, in contrast, that the average forecast error during the recession was nearly zero. If the official forecast had been based on a best estimate of future revenue streams (that is, if states had not made a conscious choice to underforecast), the average error during recessions would have been negative and quite large--reflecting the expected result for states that had grossly overestimated revenues. The small spread between the revenues that were collected and the revenues that were forecast during the recent recession, only 0.2 percent, can in large part be explained by the revenue cushion that was built into the forecasts in anticipation of just such an event. Similarly, the sizeable revenue underforecast during periods of sustained economic growth (2.5 percent) can be explained by the decision to underforecast revenues that were otherwise accurately anticipated.

Correcting the Observed Forecast Error

The particular experience of any one state during any particular year will naturally reflect the influence of many different factors. Table 3 clearly shows considerable variation in the accuracy of forecasts for most states. Because underestimating is a widespread and nearly constant tendency, however, researchers who identified and evaluated the factors explaining the variation in forecast errors were in no position to recognize the considerable influence that an underestimate as large as 2.1 percent has on both the direction and the size of forecast errors; a constant does not contribute to variation. This problem can be addressed by correcting for the effects of underestimating before analyzing the many different local factors that affect revenue forecasts. The correction consists of a three-step procedure.

1. Estimate the size of the underforecast by calculating the mean forecast error over as many forecast periods as possible. This estimate can be derived using data for all states (as done here) or any logical grouping of governmental units. Alternatively, the estimate can be derived for a single jurisdiction over many forecast periods. Given that there are differences from state to state in the size of the mean underforecast, it is recommended that if the data are available, the size of the underforecast be estimated for each jurisdiction. The mean underforecast for a state for many different forecast periods can then be used to correct the forecast error for that state.

2. Correct the observed forecast error by subtracting the best estimate of the underforecast (as derived in step 1) from the observed forecast error.

3. Using the corrected forecast errors, evaluate the influence of a variety of other factors (institutional, economic, managerial, etc.) on forecast errors.
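Steps 1 and 2 of the procedure amount to simple arithmetic and can be sketched as follows (the function names are ours, and forecast errors are treated as plain percentage-point floats):

```python
from statistics import mean

def estimate_underforecast(errors):
    """Step 1: estimate the cushion as the mean forecast error over
    as many forecast periods (and jurisdictions) as possible."""
    return mean(errors)

def correct_error(observed_error, mean_underforecast):
    """Step 2: remove the systematic cushion from an observed error."""
    return observed_error - mean_underforecast

# Illustration with the article's figures: a 2.1 percent mean cushion
# turns the observed recession error (0.2) into about -1.9 and the
# observed expansion error (2.5) into about 0.4.
cushion = 2.1
print(correct_error(0.2, cushion))
print(correct_error(2.5, cushion))
```

Step 3 then proceeds as ordinary analysis of the corrected errors against whatever institutional, economic, or managerial factors are of interest.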

To illustrate how this three-step correction is done, we will use our results for state governments to correct the observed forecast error for periods of recession and expansion.

Step 1

The base against which the forecast error should be evaluated is 2.1 percent. Recall this was the estimate that we derived by averaging 336 forecast errors for all 50 states across 18 fiscal years.

Step 2

Adjustment for Recessions: The forecast error observed during recessions was 0.2 percent. To adjust for the influence of systematic underestimating, the corrected error is equal to the observed error (0.2 percent) less the mean underforecast (2.1 percent), or -1.9 percent.

Adjustment for Economic Expansions: Adjusting for the persistent influence of the 2.1 percent underforecast, the average error of revenue forecasts during economic expansions was only 0.4 percent and thus quite small (2.5% - 2.1% = 0.4%).

Ensuring that revenue forecasts incorporate a cushion is a fundamental forecasting strategy that had exactly the effect good common sense would have predicted during recessions. Once the cushion is removed, the corrected error of -1.9 percent shows that the best estimates underlying the forecasts anticipated more revenue than was actually collected. The flip side of the story is that the accuracy of revenue forecasts was excellent during periods of sustained growth. The corrected estimate for economic expansions of 0.4 percent was also entirely consistent with good common sense. The corrected revenue forecasts were thus precisely the findings that would have been predicted from simple logic: smaller errors during a strong (i.e., predictable) economy and larger errors during periods with unpredictable downturns.

Step 3

Evaluate the influence of other factors which might explain the variation in forecast errors. This is the focus of the existing literature on forecasting, which has examined a variety of political and organizational factors that influence forecast error, including factors such as political party and the requirement for a consensus forecast to be developed between the legislative and executive branches.

Comparison and Contrast with the Interpretation of Prior Studies

Because underestimating is a persistent and dominant influence on the revenue forecasts for all states, studies that evaluated factors that have accounted for differences in forecast errors during recessions were in no position to identify the true magnitude of the underforecasts.

Take the excellent study by Cassidy et al. (1989; 322), for example. They found that revenues were underestimated by 0.5 percent, a difference described as statistically "just short of being significantly less than zero." Although they also collected forecast error data for periods of high economic growth during FY78-80 and FY84-86, their findings gave much more weight to forecast errors during the recession of FY81-FY83. By considering forecasts made primarily during the recession, they were in no position to disentangle the effects of the recession on forecast error from the systematic effects of underestimating forecasts during all budget cycles.

Or take the excellent study by Bretschneider and Gorr (1992), who found that the average forecast error of sales tax revenue was -1.2 percent during the recession of FY81 and the subsequent recovery period. After accounting for political and policy influences, they concluded, as we did, that there was evidence that sales tax forecasts had been underestimated.

Our data included several periods of economic growth and recession. Unlike prior studies, we were in a unique position to recognize that the size of the underestimate during recessions (as well as expansions) was much larger than had been previously recognized in studies of forecasting accuracy.

Conclusion

While the presence of a revenue cushion requires recognition of its influence on forecast errors during all economic conditions, there remains considerable variation in forecast errors across periods and across states. Our results should thus be viewed as complementing the findings of prior studies that have evaluated the influence of various factors that have also been found to affect forecast errors. We leave to future work the evaluation of factors that explain the variation in forecast errors, including the participation of multiple revenue estimating entities, the influence of political party or composition, and the existence of a biennial budget or an annual budget. Our results, however, suggest that the single most important factor that contributes to forecast errors that are relatively large in expansions and small in recessions is the persistent and quite logical tendency of states to underforecast revenues.

This suggests that a change in terminology may be in order. The term typically used to describe underforecasting in the literature is "bias." This term has pejorative connotations that suggest either willful distortion or incompetence. Our results, however, suggest the term "cushion" is a more appropriate descriptor of state forecasting than the term "bias." After all, revenue forecasts were impressively accurate during the period when accuracy was most critical--the FY91 recession.

In a survey of eight companies, Bart (1988) found that private firms also cushion their revenue forecasts, though the practice is discouraged in some companies. Words used to describe the tendency also had negative connotations--words like "slush fund" and "war chest." Bart also found, however, that the firms with senior managers who "tolerated" the practice had better returns on sales and assets. Using different forecasting processes and procedures, state governments have, like private businesses, recognized long-run fiscal stability as more important than short-run gimmicks. Most states were able to maintain reasonable service levels during a treacherous recession.

The main implication of our findings is to recognize that the real challenge with forecasting is not technical. It is political. Building up surpluses during periods of expansion in order to save for economic "rainy days," while prudent, can be treacherous. The temptation is strong to either hide the existence of the cushion or spend it. Further, taxpayers may resent the existence of a surplus of any size, however warranted. The accumulation of large, general fund surpluses during the 1970s is widely acknowledged as a factor in the eventual success of California's Proposition 13 and other tax and expenditure limitations enacted in that state (Vasche & Williams, 1987; 72).

How a surplus resulting from underforecasting is to be managed is thus a question of considerable political consequence. There are a number of plausible and entirely workable solutions, which range from giving the surplus back to the taxpayers (as in Oregon and Alaska), to setting the surplus aside into a rainy day fund. But regardless of how a surplus is managed, the practice of underforecasting needs to be recognized for what it is: a perfectly rational and fiscally responsible (if slightly duplicitous) practice by public officials in the face of enormous uncertainty concerning future revenue streams.

RELATED ARTICLE: Methodology

Data Sources

In January 1990, we contacted the budget offices of all 50 states and began collecting copies of budget documents that contained information on the official projections and subsequent collections of revenues. Contact was made by phone or through the mail, with requests for copies of budget summaries since FY85 (or earlier if available) that contained summary data on total revenue projections for the general fund. Several states contributed historical summaries of forecast errors that were maintained as a matter of public record. Summaries of the official revenue forecasts for upcoming budget periods were collected during the fall of 1991, 1992, and 1993.

For most states, the revenue summary table contained data on general fund revenue collections for the fiscal year just completed, estimates of revenue collections for the current fiscal period, and projections of revenues for the next budget period. In most states, the revenue forecast is periodically updated, though the frequency and timing of updating vary considerably across states. Our interest was in using the revenue forecast that is used by the legislature to make decisions on program allocations and expenditures. The forecast used in our analysis was the forecast that was submitted to the legislature by the executive branch, or by a consensus revenue estimating panel. State budget documents included three revenue figures.

1. The forecast of revenues for the budget under consideration by the legislature. For example, in January of 1991, the legislature was deciding the budget for FY92 beginning July 1, 1991. This forecast constituted the revenue baseline that was used by the legislature to appropriate funds to state programs and was thus recognized as the official forecast in our analysis.

2. An estimate of revenues collected in the current fiscal year. For example, in the January 1991 session of the legislature, the estimate of FY91 revenues would be for July 1, 1990, through June 30, 1991. This estimate has the advantage of having information about actual revenue collections for the first six months of the "current" fiscal year, so the estimate for FY91 involves uncertainty over actual revenue collections for only the last six months of the year (January 1, 1991, through June 30, 1991). This estimate is almost always much more accurate than the forecast for the new fiscal period and, more important for our purposes, was not the revenue base used by the legislature in making expenditure allocations for the next year's budget. These data were not used in our analysis.

3. The actual revenues collected in the prior fiscal period. Data on actual general fund revenue collections were used to calculate percentage forecast errors.

For example, the typical report of revenues on a state budget document for FY90 contains the revenue forecast for FY90, actual revenue collections for FY88, and estimated revenues for the current FY89 fiscal period. Data on the forecast for FY90 and revenue collections for FY88 were entered in the formula to calculate percent forecast errors. (Revenue collections for FY90 were subsequently reported in the FY92 budget summary.) Summary data taken from budget summaries for two fiscal years were thus always required to calculate one forecast error for a single fiscal year. We were instructed by a few states to use, for their particular state, the revenue data published by the National Governors' Association in the Survey of the States.

We also asked state budget officials to update the numbers for their states to account for whether tax increases had been enacted after the forecast had been announced, or whether the forecast had assumed a tax increase that was not enacted by the legislature. Using the information provided to us by the states, we corrected the revenue forecast considered by the legislature as a basis for making its budget decisions for (1) tax changes considered in the forecast that were not subsequently enacted by the legislative branch and (2) changes enacted by the legislature that had not been incorporated into the initial forecast. The forecast was thus adjusted to reflect changes in the final tax rate and tax base that were not anticipated in the original forecast.

Forecast error was calculated as 100 times (actual revenues minus forecast revenues, adjusted for any tax changes that were not anticipated), divided by actual revenue collections. A positive sign means more revenues were collected than forecast, reflecting an underforecast. A negative sign means fewer revenues were collected than forecast, reflecting an overforecast.
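The error formula can be written out directly (the function and variable names are ours):

```python
def percent_forecast_error(actual, adjusted_forecast):
    """Percent forecast error: 100 * (actual collections minus the
    forecast adjusted for unanticipated tax changes) / actual collections.
    Positive values indicate an underforecast; negative, an overforecast."""
    return 100.0 * (actual - adjusted_forecast) / actual

# Collections of 100 against an adjusted forecast of 98:
# a 2.0 percent underforecast.
print(percent_forecast_error(100.0, 98.0))
```

Note that the denominator is actual collections, not the forecast, so the error is expressed relative to the revenue base the state actually realized.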

Systems for revenue forecasting differ across the states, so we obtained clarifications of the budget processes from most states through written letters or telephone calls. We thus relied on the information provided by the various state forecasting offices to insure that the revenue forecasts were corrected for any tax changes (in either direction) that were unanticipated in the original forecast.

To ensure the forecast data reported were as accurate as possible, we forwarded summary tables of the forecast data in January 1994 to our contact person in each state's budget or finance office. In a cover letter to the budget officials and forecasters, we requested that our data be reviewed for accuracy, and that they let us know of any corrections that were required. We received corrections from 11 states and corrected the forecasts, as instructed. Several states provided us with additional data on their forecast errors, which were included in the analysis reported here.

Data on the projections of revenues and revenue collections for California from FY74 through FY85 were taken from Vasche and Williams (1987). California's data for the most recent three fiscal periods were obtained directly from the state Department of Finance. After consulting with the Nevada budget office, historical revenue data for FY72 through FY80 were taken from Cargill and Walker (1981).

Qualitative Reports from State Budget Offices.

Over the three-year period of collecting data on revenues, we had telephone conversations with forecasters and budget officials in almost all of the states. This gave us the opportunity to learn about the many different methods used to forecast revenues by state governments. At the conclusion of each phone call, the discussion was summarized in writing and placed in a file for that particular state. The comments, insights, and information obtained from these discussions have been incorporated into our discussion.

References

The authors wish to thank the many officials from state budget offices across the country who provided the data used in this study and who described to us the different approaches that they used to forecast revenues. We also wish to thank Brian Mullins and Louis King-Manley who assisted with collecting data for the study during 1990 and 1991.

Albritton, Robert B. and Ellen M. Dran, 1987. "Balanced Budgets and State Surpluses: The Politics of Budgeting in Illinois." Public Administration Review, vol. 47, pp. 143-152.

Bahl, Roy W., 1980. "Revenue and Expenditure Forecasting by State and Local Governments." In J. E. Peterson, C. Spain, and M. Laffey, eds., State and Local Government Finance and Financial Management. Washington, D.C.: Finance Research Center.

Bart, Christopher K., 1988. "Budgeting Gamesmanship." The Academy of Management Executive, vol. 2, pp. 285-294.

Belongia, M. T., 1988. "Are Economic Forecasts by Government Agencies Biased? Accurate?" Federal Reserve Bank of St. Louis, November-December, pp. 15-23.

Bretschneider, Stuart I. and Wilpen L. Gorr, 1987. "State and Local Government Revenue Forecasting." In S. Makridakis and S. C. Wheelwright, eds., The Handbook of Forecasting. New York: John Wiley, pp. 118-134.

---, 1992. "Economic, Organizational, and Political Influences on Biases in Forecasting State Sates Tax Receipts." International Journal of Forecasting, vol 7, pp. 457-466.

Bretschneider, Stuart I., Wilpen Gorr, Gloria Grizzle, and Earle Klay, 1989. "Political and Organizational Influences on the Accuracy of Forecasting State Government Revenues." International Journal of Forecasting, vol. 5, pp. 307-319.

Cargill, Thomas F. and James L. Walker, 1981. "Forecasting Nevada State Governmental Revenues." Nevada Review of Business & Economics, (Spring/Summer), pp. 2-6.

Cassidy, Glenn, Mark S. Kamlet, and Daniel S. Nagin, 1989. "An Empirical Examination of Bias in Revenue Forecasts by State Governments." International Journal of Forecasting, vol. 5, pp. 321-331.

Frank, Howard A. and Gerasimos A. Gianakis, 1990. "Raising the Bridge Using Time Series Forecasting Models." Public Productivity & Management Review, vol. 14(2), pp. 171-188.

Gentry, William M., 1989. "Do State Revenue Forecasters Utilize Available Information?" National Tax Journal, vol. 42, pp. 429-439.

Heins, A. James, 1975. "The Politics of Revenue Forecasting." Proceedings of the National Tax Association, November, pp. 29-33.

Kamlet, Mark S., David C. Mowery, and Tsai-Tsi Su, 1987. "Whom Do You Trust? An Analysis of Executive and Congressional Economic Forecasts." Journal of Policy Analysis and Management, vol. 6(3), pp. 365-384.

Klay, William Earle, 1983. "Revenue Forecasting: An Administrative Perspective." In Jack Rabin and Thomas D. Lynch, eds., Handbook on Public Budgeting and Financial Management. New York: Marcel Dekker, pp. 287-315.

Roberds, William, 1988. "Forecast Accuracy and the Performance of Economic Policy: Is There a Connection?" Economic Review, (October), pp. 20-32.

Rubin, Irene S., 1987. "Estimated and Actual Urban Revenues: Exploring the Gap." Public Budgeting and Finance, (Winter), pp. 83-95.

Schroeder, Larry, 1982. "Local Government Multi-year Budgetary Forecasting: Some Administrative and Political Issues." Public Administration Review, vol 42, (March/April), pp. 121-127.

Shkurti, William J. and Darrell Winefordner, 1989. "The Politics of State Revenue Forecasting in Ohio, 1984-1987: A Case Study and Research Implications." International Journal of Forecasting, vol. 5, pp. 361-371.

Vasche, Jon David and Brad Williams, 1987. "Optimal Governmental Budgeting Contingency Reserve Funds." Public Budgeting & Finance, (Spring), pp. 66-82.

Robert Rogers is a professor at the Martin School of Public Policy and Administration at the University of Kentucky. His research interests include the application of meta-analysis to identify policies that can enhance the efficiency and productivity of public organizations.

Philip Joyce is an assistant professor of public administration at the Maxwell School of Citizenship and Public Affairs, Syracuse University. From 1991 to 1995, he was on the staff of the Congressional Budget Office. His research focuses on government budget processes, performance measurement, and intergovernmental fiscal relations.
COPYRIGHT 1996 Wiley Subscription Services, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 1996 Gale, Cengage Learning. All rights reserved.

Author: Rodgers, Robert; Joyce, Philip
Publication: Public Administration Review
Date: Jan 1, 1996