
Can nominal GDP targeting rules stabilize the economy?

The Federal Reserve has shown that it would support making price stability the explicit goal of monetary policy.(1) How to accomplish this, however, is a matter of considerable discussion. Some economists have suggested that the best way to ensure that price stability is the foremost goal of monetary policy is to adopt a monetary policy rule. Such a rule would be a verifiable program of action designed to maintain price stability without constricting long-term economic growth. As long as the Federal Reserve faithfully implemented the rule's prescriptions, the public would have cause to believe that prices, once stabilized, would remain stable.

One way to achieve price stability in a growing economy is to have nominal gross domestic product (GDP) grow at the same rate as potential output.(2) One monetary policy rule, proposed by McCallum (1987), provides a systematic way for the Federal Reserve to adjust the monetary base as nominal GDP deviates from desired levels.(3) Simulations of this rule, presented in McCallum (1987, 1988) and Judd and Motley (1991), appear to suggest that the monetary base can be manipulated to keep nominal GDP close to a path consistent with price stability. In these simulations McCallum's rule proves to be robust to a variety of empirical models that relate changes in the monetary base to resulting changes in nominal GDP: Keynesian, Real Business Cycle and atheoretical vector autoregression models. Each empirical specification, however, confronts McCallum's rule with a world in which the structure of the economy is stable: the model's coefficients are held constant.

This article broadens the set of empirical models used to evaluate McCallum's rule to include one in which the relationship between base growth and nominal GDP growth is subject to structural change that takes the form of stochastic changes in the model's coefficients. Such a time-varying parameter (TVP) model presents a new environment in which the properties of McCallum's rule have not yet been examined. Simulation results from the TVP model indicate that McCallum's rule is more prone to the problem of instrument instability than simulations from constant-coefficient models have suggested. The instrument instability can be remedied, however, by targeting nominal GDP less stringently than McCallum's original rule had specified.(4)



Simulations present an opportunity to learn how closely nominal GDP can be expected to adhere to its target level and how variable the monetary base will have to be under McCallum's rule. As we will see, McCallum's rule specifies a rate of growth for the monetary base, given the level of nominal GDP relative to its target. Simulations of McCallum's rule require a model of how the monetary base is related to nominal GDP, which can be summarized by the income velocity of the monetary base. McCallum (1987) provides a simple model relating changes in the base to nominal income, where MB is the monetary base and e is a mean-zero random disturbance with variance σ²_e:

(1) Δln GDP_t = α + ρ Δln GDP_{t−1} + b Δln MB_t + e_t,

or, restating the model in terms of velocity growth,

(2) Δln GDP_t − Δln MB_t = α + ρ Δln GDP_{t−1} + (b − 1) Δln MB_t + e_t.

This model illustrates the way in which velocity is generally modeled in simulations of McCallum's rule: the percentage change in the velocity of the monetary base is modeled as a function of time t−1 variables, base growth at time t and a random disturbance. The model also raises questions about the constancy of its parameters: (α, ρ, b, σ²_e). Simulations using a calibrated version of a constant-coefficient model will represent the economy's behavior under the rule only to the extent that the coefficients do not change over the long time span the rule is to be in effect. As an alternative, this article posits a simple short-run forecasting model of velocity with time-varying parameters and tests the restriction that the coefficients are constant over the sample period. Simulations of McCallum's rule are then run using a calibrated time-varying parameter model of velocity growth. The article next discusses the role of velocity forecasts in formulating McCallum's rule, in contrast to the foregoing paragraphs, which discussed their role in simulating the rule.



McCallum's Rule

McCallum (1987) proposes a monetary policy rule that uses the monetary base to target nominal GDP. The rule employs a four-year moving average of past growth in base velocity to forecast its growth in the coming quarter. Based on this forecast, the rule then specifies the percentage of the gap between target and actual levels of nominal GDP that policymakers should try to close in the coming quarter.

Specifically McCallum's rule takes the following form:

(3) Δln MI_t = λ_0 − (1/16) Σ_{i=1}^{16} Δln V_{t−i} + λ_1 (ln GDP*_{t−1} − ln GDP_{t−1}),

with the target path for nominal GDP given by

(4) ln GDP*_t = ln GDP*_{t−1} + λ_0,

where MI is the monetary instrument, V is the income velocity of the monetary instrument, GDP*_t is the target level of nominal GDP at time t, and GDP_t is the actual level of GDP at time t. Also, λ_0 > 0 and λ_1 > 0. The second term on the right-hand side of equation (3) is the average velocity growth in the previous 16 quarters. The rule calls for the monetary authority to adjust the growth in the monetary instrument according to this velocity forecast. The third term represents the percentage gap between target and actual nominal GDP and thereby provides the feedback. When the gap is positive, the rule seeks, but does not guarantee (because of surprise changes in velocity), GDP growth greater than the growth rate of target GDP (λ_0).
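To make the rule concrete, one quarter of McCallum's rule can be sketched in a few lines. The function name and interface below are my own; the default values of λ_0 and λ_1 are the ones used later in the article's simulations.

```python
import numpy as np

def mccallum_base_growth(target_log_gdp, actual_log_gdp, log_velocity,
                         lam0=0.00985, lam1=0.25):
    """One quarter of McCallum's rule (equation (3)), as a sketch.

    target_log_gdp, actual_log_gdp: log levels of nominal GDP at t-1.
    log_velocity: log velocity levels covering at least the last 17
        quarters, most recent last.
    Returns the prescribed quarterly growth rate of the monetary instrument.
    """
    # Average velocity growth over the previous 16 quarters:
    # (1/16) * (ln V_{t-1} - ln V_{t-17})
    avg_velocity_growth = (log_velocity[-1] - log_velocity[-17]) / 16.0
    # Feedback on the percentage gap between target and actual nominal GDP
    gap = target_log_gdp - actual_log_gdp
    return lam0 - avg_velocity_growth + lam1 * gap
```

With flat velocity and GDP on target, the rule simply prescribes base growth equal to λ_0; a positive gap raises prescribed base growth by λ_1 times the gap.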

McCallum uses average velocity growth because trends in velocity growth can shift over time, but not every change in base velocity represents a long-lasting shift in the trend. McCallum's velocity forecast, however, uses only the past 16 values of velocity. In the next section an alternative monetary rule is described. This rule differs from McCallum's in that it uses explanatory variables to help forecast velocity; it also uses a time-varying parameter model. By allowing for time-varying coefficients, the forecasting model will be less prone than fixed-coefficient models to breaking down as time passes.

A Forecasting-Based Monetary Rule


A short-run velocity forecasting model with time-varying parameters offers a possible source of one-step-ahead velocity forecasts required by a monetary rule such as McCallum's. This type of model would adapt in a systematic way to structural changes, that is, to changes in the relationships between velocity and the variables used to forecast velocity, such as interest rates.

The forecast-based feedback rule considered in this paper takes the form

(5) Δln MI_t = λ_0 − E_{t−1}[Δln V_t] + λ_1 (ln GDP*_{t−1} − ln GDP_{t−1}),

where the variables are as defined in equations (3) and (4), and the second term on the right-hand side of equation (5) is the forecast of velocity growth for period t based on information available through period t-1. This rule differs from McCallum's rule in that it uses an explicitly derived forecast of velocity growth, rather than an average of past velocity growth. The next section details the velocity forecasting model.

A Forecasting Equation

This article reports results on one of many possible velocity forecasting equations. The velocity forecasting model employed here allows for time-varying coefficients on the explanatory variables, which are the lagged change in the three-month Treasury bill rate and lagged growth in the monetary instrument. Velocity growth should be positively related to the lagged change in the Treasury bill rate, because this short-term interest rate indicates the opportunity cost of money; velocity growth should be negatively related to lagged growth in the monetary instrument, because if nominal GDP is somewhat sluggish, part of additional money growth will lead to decreased velocity in the short run. The velocity forecasting equation employed here uses the Kalman filter and generalizes Bomhoff's (1991) velocity forecasting equation in three ways: it includes lagged money growth, lets the interest elasticity vary over time, and allows the variance of the error term to change.

Figure 1 shows squared deviations from the mean in the quarterly percentage change in the velocity of the St. Louis monetary base. The figure suggests that the volatility of velocity is not constant. This is not too surprising: economists believe that velocity is related to interest rates and expected inflation. Research has found that interest rates and inflation do not have constant volatilities, so we might expect velocity to share this property.(5)

The particular specification used to generate short-run forecasts is

(7) Δln V_t = β_{0t} + β_{1t} ΔTB3_{t−1} + β_{2t} Δln MI_{t−1} + e_t,

where V stands for the velocity of the monetary instrument, MI, and TB3 is the three-month Treasury bill rate.(6) The errors in equation (7), e_t, have time-varying volatilities in that their variance is assumed to switch between a low and a high level according to a first-order Markov process.(7)

With time-varying coefficients, equation (7) will be estimated using the Kalman filter under the assumption that the state variables, β_t, follow random walks:(8)

(8) β_t = β_{t−1} + v_t,  v_t ~ N(0, Q),  β_t = (β_{0t}, β_{1t}, β_{2t})′.

In a short-run forecasting context, the assumption that the coefficients follow random walks suggests that people need new information before changing their views about the relationships among variables. This is essentially why Engle and Watson (1985) advocate the view that time-varying coefficients should have unit roots. The innovations to the coefficients, v_t, are assumed to be uncorrelated, so the covariance matrix Q is diagonal. Kim (forthcoming, b) discusses the specific form the Kalman filtering takes for this model and the evaluation of the likelihood function, which is maximized with respect to (σ²_0, σ²_1, p, q, Q), where Q_{ii} = σ²_{v_i}, i = 1, 2, 3. The appendix also includes a summary of the filtering algorithm used in simulations.
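Equations (7) and (8), together with the Markov-switching error variance, describe a data-generating process that is easy to simulate. The sketch below is illustrative only: the coefficients, innovation variances and transition probabilities are made-up values, not the article's estimates.

```python
import numpy as np

def simulate_tvp_velocity(T=200, p=0.9, q=0.9, sig=(0.005, 0.02), seed=0):
    """Simulate velocity growth from equations (7)-(8) with a two-state
    Markov-switching error variance.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    beta = np.array([0.002, 0.01, -0.1])  # intercept, TB3 coeff, money-growth coeff
    state = 0                             # 0 = low-variance state, 1 = high
    dv = np.empty(T)
    for t in range(T):
        # Stay in the current variance state with probability p (low) or q (high)
        stay = p if state == 0 else q
        if rng.random() > stay:
            state = 1 - state
        # Lagged explanatory variables, drawn at random for illustration
        x = np.array([1.0, rng.normal(0, 0.5), rng.normal(0.01, 0.01)])
        dv[t] = x @ beta + rng.normal(0, sig[state])   # equation (7)
        beta = beta + rng.normal(0, 1e-3, size=3)      # equation (8): random walks
    return dv
```

The key feature is that both the coefficients and the error variance move over time, so a constant-coefficient forecaster would be systematically wrong-footed.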

By construction, this model allows for two sources of forecast error: error in predicting the value of the coefficients and the heteroscedastic random disturbance. In general, in a model with time-varying coefficients

(9) Y_t = X_{t−1} β_t + e_t,

the one-step-ahead forecasts are

(10) Ŷ_{t|t−1} = X_{t−1} β_{t|t−1}.

Thus the forecast error has two components and equals X_{t−1}(β_t − β_{t|t−1}) + e_t. If var(β_t − β_{t|t−1}) = R_{t|t−1} and var(e_t) = σ²_{e,t}, the one-step-ahead forecast error variance is

(11) H_t = H_{1t} + H_{2t} = X_{t−1} R_{t|t−1} X′_{t−1} + σ²_{e,t}.

The first component (H_{1t}) is called the variance due to time-varying parameters (TVP); the second (H_{2t}) is simply the variance of the random disturbance e. Inferences about the relative sizes of the two sources of forecast error variance play an important role in updating the coefficients. Using the Kalman filtering equations in the appendix, one can write the forecast Ŷ_{t+1|t} as

(12) Ŷ_{t+1|t} = X_t β_{t|t−1} + Z_t η_{t|t−1},

where X_t are the explanatory variables, η_{t|t−1} is last period's forecast error (and is thus the new information available), and Z_t is proportional to

(13) H_{1t} / (H_{1t} + H_{2t}).

If H_{2t} is large relative to H_{1t}, observers would attribute less of a forecast error to a change in coefficients; instead, they would believe that it was probably an outlier. A large value of H_{2t} then implies that last period's forecast error would play a relatively small role in determining next period's forecast.


The forecasting model was estimated for quarterly data from III/1959 to II/1992 on the velocities of the following monetary aggregates: the St. Louis measure of the monetary base, the Board of Governors monetary base, M1 and M2. The latter three measures are included to provide some context for the St. Louis base results. Tables 1 through 4 contain parameter estimates of the forecasting model of equations (7) and (8) for each monetary aggregate.

For the two measures of the monetary base and M1, the coefficient with the most significant variation is the interest rate elasticity. Because McCallum's rule is written for the St. Louis base, specification tests are done for the St. Louis base. The log-likelihood for the TVP model with Markov switching is −167.8; with Markov switching and constant coefficients it is −175.1. This implies a likelihood-ratio statistic of 14.6, which exceeds the 99 percent critical value of a χ²(3) variable, so constant coefficients are rejected. Thus, while the variance due to time-varying parameters in figure 2 appears to account for a relatively small portion of the overall forecast error variance for St. Louis base velocity, the model's parameters exhibit statistically significant variation. The log-likelihood for OLS is −184.4, so we can similarly reject homoscedasticity of the error term in an OLS regression; that is, σ²_e does not remain constant throughout the sample period.
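The likelihood-ratio arithmetic can be checked directly from the reported log-likelihoods. The χ²(3) critical value at the 1 percent level (11.34) is hard-coded below rather than computed:

```python
# Likelihood-ratio test of constant coefficients, using the log-likelihoods
# reported in the text for St. Louis base velocity.
ll_tvp = -167.8     # TVP model with Markov switching
ll_const = -175.1   # constant coefficients with Markov switching
lr = 2 * (ll_tvp - ll_const)          # likelihood-ratio statistic = 14.6
chi2_3_crit_1pct = 11.34              # 1 percent critical value, chi-square(3)
reject_constant_coeffs = lr > chi2_3_crit_1pct
```

Three coefficient-innovation variances are restricted to zero under the null, which is why the statistic is compared with a chi-square variable with three degrees of freedom.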

The Q-statistics test for serial correlation, and all are insignificant, as are the Q²-statistics, which test for serial correlation in the squared forecast errors. (The distribution of the Q- and Q²-statistics is χ²(24) under the null hypothesis of no serial correlation; the 5 percent critical value is 36.4.) The lack of serial correlation indicates that the model avoids making persistent errors in its forecasts. Significant Q²-statistics would indicate that the Markov model of heteroscedasticity is an inadequate model of the persistence in the variance of the error terms. The sum p + q indicates the persistence of the volatility of the error term; if p + q > 1, the Markov process is called persistent. Interestingly, M2 has the most persistent volatility states, with p + q = 1.85, not far from the upper bound of 2. This finding suggests that when policymakers are finding relatively large forecast errors in M2 velocity, they will likely continue to be plagued with large forecast errors (in either direction) in the near term.
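The Q- and Q²-statistics are Ljung-Box statistics. A minimal sketch follows (the function name is my own); applying it to the squared forecast errors gives the Q² test:

```python
import numpy as np

def ljung_box_q(resid, max_lag=24):
    """Ljung-Box Q-statistic for serial correlation in a residual series.

    Distributed chi-square(max_lag) under the null of no serial correlation;
    applying it to resid**2 gives the Q^2 test used in the text.
    """
    e = np.asarray(resid, dtype=float)
    e = e - e.mean()
    n = e.size
    denom = e @ e
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = (e[k:] @ e[:-k]) / denom      # lag-k sample autocorrelation
        q += r_k**2 / (n - k)
    return n * (n + 2) * q
```

The resulting statistic would be compared with the χ²(24) 5 percent critical value of 36.4 cited in the text.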

Table 5 compares the relative importance of the two sources of forecast uncertainty: the variance due to coefficient variation and the variance of the disturbance term, e_t. (Because of the great similarity between the results for the two measures of the monetary base in tables 1 and 2, only results for the St. Louis monetary base will be presented hereafter.)


Even though the numbers in table 5 cannot be directly compared across monetary instruments, they do illustrate that M2 has the most stable coefficients among the three monetary aggregates, measured as a percentage of total forecast error variance. By this measure, M1 has less stable coefficients than the monetary base, so the narrowness of a monetary aggregate is not necessarily inversely related to its coefficient stability.

Figures 2 through 5 divide the conditional forecast error variance into its two components, H_{1t} and H_{2t}, for the four monetary aggregates examined in this paper. As the figures show, the relative sizes of H_{1t} and H_{2t} are not constant over time. Note that even if the variance of the random disturbances, H_{2t}, is generally large relative to the variance caused by time-varying coefficients, H_{1t}, it does not follow that H_{1t} is too close to zero to matter: the likelihood-ratio test reported previously rejects the hypothesis that the forecast error variance due to time-varying parameters is zero for the velocity of the St. Louis base. The velocities of all four aggregates show heightened forecast error variance due to time-varying coefficients from 1979 to 1982, the period of nonborrowed-reserves targeting and financial deregulation. For reference, the time-varying coefficients for St. Louis base velocity are shown in figures 6 through 8. The estimated coefficients generally have their expected signs: a positive interest rate elasticity and a negative money growth elasticity. Dickey-Fuller unit root tests do not reject the hypothesis that each of these three coefficients follows a random walk; thus the inferred coefficient values do not contradict the assumed random walk specification.

Given that two monetary rules, which differ only in their velocity forecasts, will be simulated, it is useful to compare the forecast errors from the forecasting equation with time-varying parameters with those from McCallum's 16-quarter moving average. As table 6 shows, the 16-quarter moving average comes close to the TVP model in mean squared forecast error only for the velocity of the St. Louis base. For the broader aggregates, the mean squared errors are at least 33 percent higher for the moving-average forecast than for the TVP model. If the forecast errors are persistent, they can compound errors in targeting nominal GDP. Thus, we also report Q-statistics, which test for serial correlation in the forecast errors. With a χ²(24) critical value of 36.4 at the 5 percent significance level, the 16-quarter moving average forecast errors are significantly serially correlated for all three monetary aggregates.


Estimating a velocity forecasting equation with time-varying coefficients (equations (7) and (8)) not only provides a way to modify McCallum's rule (equation (5)), it also provides estimates of the variances of the coefficients that can be used to calibrate a data-generating process for velocity to be used in simulations of McCallum's rule. We also run simulations on the forecast-based rule to learn about its properties. The object here is to learn something about the feasibility of nominal GDP targeting when velocity's relationship with other variables is subject to structural change.


All of the velocity models employed in simulations of McCallum's rule in McCallum (1987, 1988), Judd and Motley (1991, 1992), Rasche (1993) and Thornton (forthcoming) have assumed constant coefficients. This paper takes a different tack by estimating time-varying parameter models of velocity growth. A data-generating process with stochastic coefficients is then used to generate data in simulations. In this way, we attempt to study how a monetary rule would perform when the velocity relationship is subject to unpredictable structural change.

Simulations were run for a data-generating process calibrated to the velocity growth of the St. Louis base. The modifications to the forecasting model of equations (7) and (8) are the following:

1. Short-term interest rates are dropped as an explanatory variable and the model is then re-estimated. This approach is adopted because we have no good way to determine interest rates using any of the equations we have estimated. In effect, we are forecasting with a smaller information set, which will make the forecast error variance larger.

Without interest rates in the forecasting equation, the actual increase in the forecast error variance is less than 7 percent, so the quantitative effect of this change should be small.

2. The error term [e.sub.t] is assumed to be homoscedastic for simplicity. This allows us to drop Markov switching from the simulations.

3. The coefficient on lagged base growth, β_{2t}, is no longer assumed to have a unit root; instead it is modeled as an autoregressive process with a near-unit root: β_{2t} = .95 β_{2,t−1} + v_{3t}. When running the simulation for 400 quarters, it is not realistic to allow β_{2t} to remain below negative one indefinitely, though it is allowed to do so for lengthy periods.(9)

4. The starting values for the coefficients, β_{t=0}, are randomized around their calibrated values to reduce dependence on a particular choice of starting values.

Details on this simulation are in the appendix. The other choices to be made in the simulation are the parameters of the monetary rules in equations (3) and (5). The target for quarterly nominal GDP growth was set to λ_0 = .00985, which corresponds to 4 percent annual growth. The value of λ_1 determines how much of the gap between the target and actual levels of nominal GDP policymakers should try to eliminate in the coming quarter. For λ_1, we follow McCallum's (1987) suggestion by setting it equal to 0.25.

The exercise consists of simulating particular monetary rules 200 times for periods of 400 quarters each. To reiterate, the important point of this exercise is to study the performance of a monetary policy rule under a data-generating process for velocity that includes unpredictable structural change. The desired information is how closely nominal GDP might be kept to its target path and how variable the growth rate of the monetary instrument would have to be. The numbers in table 7 represent averages across the 200 simulated 400-quarter periods for the forecast-based rule.
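A stripped-down version of this exercise can be sketched as a closed loop in which velocity growth follows a time-varying parameter process and base growth is set each quarter by McCallum's rule. The parameter values below are illustrative rather than the calibrated values described in the appendix, so the output only mimics the flavor of the exercise, not the numbers in table 7.

```python
import numpy as np

def simulate_rule(T=400, lam0=0.00985, lam1=0.25, seed=42):
    """Closed-loop sketch: McCallum's rule against a TVP velocity process.

    Logs throughout, so log GDP = log base + log velocity.  Returns the
    path of the log gap between target and actual nominal GDP.
    """
    rng = np.random.default_rng(seed)
    log_mb, log_v, log_gdp_target = 0.0, 0.0, 0.0
    v_hist = np.zeros(17)                # log velocity history for the rule
    beta = np.array([0.002, -0.1])       # intercept, lagged-base-growth coeff
    dmb_lag = lam0
    gaps = np.empty(T)
    for t in range(T):
        log_gdp_target += lam0
        # McCallum's rule, equation (3)
        avg_dv = (v_hist[-1] - v_hist[0]) / 16.0
        gap = log_gdp_target - (log_mb + log_v)
        dmb = lam0 - avg_dv + lam1 * gap
        log_mb += dmb
        # TVP velocity growth: random-walk intercept, AR(.95) money coefficient
        dv = beta[0] + beta[1] * dmb_lag + rng.normal(0, 0.005)
        log_v += dv
        v_hist = np.append(v_hist[1:], log_v)
        beta = np.array([beta[0] + rng.normal(0, 1e-4),
                         0.95 * beta[1] + rng.normal(0, 0.01)])
        dmb_lag = dmb
        gaps[t] = log_gdp_target - (log_mb + log_v)
    return gaps
```

Repeating such a run 200 times and averaging the gap and the base-growth path would reproduce the structure, if not the magnitudes, of the tables that follow.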


The results in table 7 show that simulated nominal GDP in levels is on average 1.7 percent below its target, with extreme deviations of 2.5 percent above and 6 percent below the target. Considering that the simulations ran for 400 quarters, the differences between target and actual GDP are small. The simulated rate of base growth averages 4.7 percent per year across the 200 replications, with extremes of 15.7 percent and −2.2 percent annual growth. It matters that the latter figure is small in absolute value: a monetary rule that could call for substantial decreases in the monetary base for as long as a year would be politically difficult to sell. The former figure suggests that double-digit base growth would occasionally occur under a policy of nominal GDP targeting.

In contrast, McCallum's rule, which uses moving-average forecasts of velocity growth, proved to be unstable with λ_1 equal to 0.25. (Average base growth was negative 6 percent per year.) The results for McCallum's rule presented in table 8 are for simulations run with λ_1 equal to 0.10, so the rule attempts to close gaps between target and actual nominal GDP more slowly to prevent instrument instability.


McCallum's rule no longer displays instrument instability once the feedback parameter, λ_1, is reduced: the average gap in levels between actual and target nominal GDP is 3.8 percent. The mean squared error of the gap between actual and target nominal GDP is higher than that of the forecast-based rule, however. Nevertheless, McCallum's rule appears to be robust to a world in which the growth rate of base velocity is subject to structural change, albeit with a lower value of the feedback parameter, λ_1, which means that nominal GDP cannot be targeted as stringently period by period as it can with the forecast-based rule.


This paper confronts McCallum's nominal GDP targeting rule in simulations with a world in which coefficients in the velocity equation for the monetary instrument are subject to unpredictable stochastic change. Hypothesis tests on the estimated model of the velocity of the St. Louis base reject coefficient stability. To account for unstable coefficients, a time-varying parameter model of velocity is estimated and used to calibrate the data-generating process used in simulations. These simulations suggest that McCallum's rule can stabilize nominal GDP growth in a time-varying parameter framework. Nominal GDP cannot be targeted as closely as when an alternative forecast-based monetary rule is used, however. In addition, nominal GDP cannot be targeted as closely as previous studies that simulated McCallum's rule using constant-coefficient models of velocity have suggested.

Overall, McCallum's approach to nominal GDP targeting proves to be simple yet robust to velocity behavior that is quite complex. The alternative forecast-based rule performed somewhat better in simulations in which velocity was generated by a time-varying parameter model, but it has the disadvantage of being more difficult for the public to verify.(10) Given that it would be easier for the public to verify that the Fed is following McCallum's rule, relative to the forecast-based rule, the former may garner the Fed more credibility, even though it is technically less able to stabilize nominal GDP growth.


Kalman Filtering

The Kalman filter is a set of recursive equations that determine how the inferred regression coefficients are updated as new observations are added. The Kalman filtering without Markov switching used in the simulations consists of the following equations:

(14) β_{t|t−1} = G β_{t−1|t−1}
(15) R_{t|t−1} = G R_{t−1|t−1} G′ + Q
(16) η_{t|t−1} = Y_t − X_{t−1} β_{t|t−1}
(17) K_t = R_{t|t−1} X′_{t−1} (X_{t−1} R_{t|t−1} X′_{t−1} + σ²_e)^{−1}
(18) β_{t|t} = β_{t|t−1} + K_t η_{t|t−1}
(19) R_{t|t} = (I − K_t X_{t−1}) R_{t|t−1}

The term K_t, called the Kalman gain, determines how much new information, summarized by the latest forecast error η_{t|t−1}, is allowed to affect the inferred β coefficients. Equation (18) shows that the inferred coefficients are updated using the product of the Kalman gain and the latest forecast error. Thus the inferred coefficients themselves are functions of past values of the explanatory variables and the dependent variable. In this way the current forecasts in a time-varying parameter model that uses the Kalman filter are based on a larger information set than just last period's values of the explanatory variables.
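The Kalman filter recursions described here can be implemented compactly. The sketch below is a generic textbook version of one filtering step for the time-varying-coefficient regression (function and variable names are my own), not the article's code.

```python
import numpy as np

def kalman_step(beta, R, x, y, G, Q, sigma_e2):
    """One Kalman filter step for y_t = x_{t-1}' beta_t + e_t,
    with state equation beta_t = G beta_{t-1} + v_t, var(v_t) = Q.

    beta: (k,) filtered coefficients from t-1;  R: (k,k) their covariance;
    x: (k,) regressors dated t-1;  y: scalar observation at t.
    Returns updated (beta, R), the forecast error and its variance.
    """
    # Prediction step
    beta_pred = G @ beta
    R_pred = G @ R @ G.T + Q
    # Forecast error and its variance H_t = H_1t + H_2t
    eta = y - x @ beta_pred
    H1 = x @ R_pred @ x          # variance due to time-varying parameters
    H2 = sigma_e2                # variance of the random disturbance
    H = H1 + H2
    # Kalman gain and updating step
    K = R_pred @ x / H
    beta_new = beta_pred + K * eta
    R_new = R_pred - np.outer(K, x) @ R_pred
    return beta_new, R_new, eta, H
```

Iterating this step over the sample, with a random walk (G equal to the identity) during estimation, produces the filtered coefficient paths plotted in figures 6 through 8.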

Combining the equations for K_t and β_{t|t} and multiplying through by X_t shows how new information, η_{t|t−1}, is used in updating forecasts of the dependent variable:

Ŷ_{t+1|t} = X_t β_{t|t} = X_t β_{t|t−1} + X_t K_t η_{t|t−1}.

This relation demonstrates the assertion that the relative sizes of H_{1t} and H_{2t} determine the weight put on new information when updating the inferred coefficient values.

Calibrating the Simulations

As discussed in the text, the forecasting equations were estimated for base growth without interest rates as an explanatory variable. The only explanatory variables with time-varying coefficients were the intercept and lagged base growth. In the simulations we need to specify starting values for the true parameter values, the inferred parameter values and the variances of v_t, where β_t = G β_{t−1} + v_t. G is a (2 × 2) diagonal matrix with G_{11} = 1 and G_{22} = .95. The coefficient variances were set to 1E-05 for the intercept and .05 for lagged base growth. The variance of e_t, the disturbance term, was set to 1.08. These values come from the estimated forecasting model, where the value of σ²_e is placed near the estimated unconditional value between σ²_0 and σ²_1. Finally, the starting values for the inferred coefficients were randomized by adding noise to the true starting values. This was done to reduce dependence on particular initial values in the Kalman filter and also to mimic the uncertainty that would accompany the initiation of a new monetary policy regime, the rule. Thus the simulations should roughly resemble the data-generating process governing the growth of base velocity, including changes in the structural coefficients.

(1) See Chairman Alan Greenspan's statement to Congress [Greenspan (1989)].

(2) Because of difficulties in allowing for quality changes and other imperfections in currently available price indices, many economists believe that 1 or 2 percent annual inflation in a measure like the consumer price index is actually consistent with price stability. In this case nominal GDP should grow slightly faster than potential output.

(3) Bradley and Jansen (1989) discuss possible rationale for nominal GDP targeting.

(4) See McCallum (1987).

(5) References are Bollerslev (1986) for inflation and Engle, Lilien and Robins (1987) for interest rates.

(6) Only one lag of each explanatory variable appears in equation (7), but, unlike a constant-coefficient model, the time-varying parameter model uses past values of the explanatory variables and forecast errors in generating its forecast. The appendix describes how the inferred coefficients embody past information.

(7) The combination of time-varying parameters and this type of heteroscedasticity was introduced by Kim (forthcoming, b). Kim (forthcoming, b) also illustrates that this model of heteroscedasticity is quite similar in practice to the well-known autoregressive conditional heteroscedastic (ARCH) model of Engle (1982). Basically, the Markov model tries to match the persistence of periods of high and low volatility in the data, where persistence of high and low volatility states is increasing in p and q, respectively.

(8) Bomhoff (1991) and Hein and Veugelers (1983) also use the Kalman filter to forecast velocity. Bomhoff (1991) holds the interest elasticity (β_{1t}) constant and restricts β_{2t} to equal zero, so past money growth is not included in the set of information used in his forecasts; Hein and Veugelers (1983) restrict both β_{1t} and β_{2t} to equal zero, further restricting the information set used for forecasting.

(9) This is somewhat analogous to models of nominal interest rates that assume unit roots. Random walk behavior might provide a very close approximation to interest rate behavior in the short run, but long-run simulations cannot plausibly assume a unit root, or negative nominal interest rates would eventually result.

(10) Until the public was able to observe low inflation and relatively stable nominal GDP growth for a considerable length of time, the credibility of a rule-based policy would likely depend on the public's ability to verify that the monetary authority was actually following the rule when setting targets for money growth.


Barro, Robert J., and David B. Gordon. "A Positive Theory of Monetary Policy in a Natural Rate Model," Journal of Political Economy (August 1983), pp. 589-610.

Bean, Charles R. "Targeting Nominal Income: An Appraisal," The Economic Journal (December 1983), pp. 806-19.

Bollerslev, Tim. "Generalized Autoregressive Conditional Heteroskedasticity," Journal of Econometrics (April 1986), pp. 307-27.

Bomhoff, Eduard J. "Predicting the Income Velocity of Money: A Kalman Filter Approach," unpublished manuscript, Erasmus University, Netherlands (June 1991).

Bradley, Michael D., and Dennis W. Jansen. "Understanding Nominal GNP Targeting," this Review (November/December 1989), pp. 31-40.

Dickey, David A., Dennis W. Jansen and Daniel L. Thornton. "A Primer on Cointegration with an Application to Money and Income," this Review (March/April 1991), pp. 58-78.

Engle, Robert F. "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation," Econometrica (July 1982), pp. 987-1007.

_____, David M. Lilien and Russell P. Robins. "Estimating Time-Varying Risk Premia in the Term Structure: The ARCH-M Model," Econometrica (March 1987), pp. 391-407.

_____, and Mark Watson. "Kalman Filter: Applications to Forecasting and Rational Expectations Models," Advances in Econometrics, Fifth World Congress (Volume 1, 1985), pp. 245-81.

Garbade, Kenneth. "Two Methods for Examining the Stability of Regression Coefficients," Journal of the American Statistical Association (March 1977), pp. 54-63.

Greenspan, Alan. "A Statement before the Subcommittee on Domestic Monetary Policy of the Committee on Banking, Finance and Urban Affairs, U.S. House of Representatives, October 25, 1989," Federal Reserve Bulletin (December 1989), pp. 795-803.

Hein, Scott E., and Paul T.W.M. Veugelers. "Predicting Velocity Growth: A Time Series Perspective," this Review (October 1983), pp. 34-43.

Judd, John P., and Brian Motley. "Controlling Inflation with an Interest Rate Instrument," Federal Reserve Bank of San Francisco Economic Review (Number 3, 1992), pp. 3-22.

_____. "Nominal Feedback Rules for Monetary Policy," Federal Reserve Bank of San Francisco Economic Review (Summer 1991), pp. 3-17.

Kim, Chang-Jin. "Dynamic Linear Models with Markov Switching," Journal of Econometrics (forthcoming, a).

_____. "Sources of Monetary Growth Uncertainty and Economic Activity: The Time-Varying Parameter Model with Heteroscedastic Disturbances," The Review of Economics and Statistics (forthcoming, b).

Koenig, Evan F. "Nominal Feedback Rules for Monetary Policy: Some Comments," Federal Reserve Bank of Dallas, Working Paper No. 9211 (July 1992).

Kydland, Finn E., and Edward C. Prescott. "Rules Rather than Discretion: The Inconsistency of Optimal Plans," Journal of Political Economy (June 1977), pp. 473-91.

McCallum, Bennett T. "The Case for Rules in the Conduct of Monetary Policy: A Concrete Example," Federal Reserve Bank of Richmond Economic Review (September/October 1987), pp. 10-18.

____. "Robustness Properties of a Rule for Monetary Policy," Carnegie-Rochester Conference Series on Public Policy (Autumn 1988), pp. 173-203.

Mehra, Yash P. "In Search of a Stable, Short-Run M1 Demand Function," Federal Reserve Bank of Richmond Economic Review (May/June 1992), pp. 9-23.

Poole, William. "Monetary Policy Lessons of Recent Inflation and Disinflation," The Journal of Economic Perspectives (Summer 1988), pp. 73-100.

Rasche, Robert H. "Monetary Aggregates, Monetary Policy and Economic Activity," this Review (March/April 1993), pp. 1-35.

Stone, Courtenay C., and Daniel L. Thornton. "Solving the 1980s' Velocity Puzzle: A Progress Report," this Review (August/September 1987), pp. 5-23.

Taylor, John B. "What Would Nominal GNP Targeting Do to the Business Cycle?" Carnegie-Rochester Conference Series on Public Policy (Spring 1985), pp. 61-84.

Thornton, Saranna R. "Can Forecast-Based Monetary Policy Be More Successful than a Rule?" Journal of Economics and Business (forthcoming).
COPYRIGHT 1993 Federal Reserve Bank of St. Louis

Author: Dueker, Michael J.
Publication: Federal Reserve Bank of St. Louis Review
Date: May 1, 1993