

William Poole has made a number of fundamental contributions to the theory and practice of monetary policy during his long and productive career. Among other things, Poole has long emphasized the importance of uncertainty in shaping monetary policy. Uncertainty takes many forms. The central bank must act in anticipation of future conditions, which are currently unknown. Because economists have not formed a consensus about the best way to model the monetary transmission mechanism, policymakers must also contemplate alternative theories with distinctive operating characteristics. Finally, even economists who agree on a modeling strategy sometimes disagree about the values of key parameters. Thus, central bankers must also confront parameter uncertainty within macroeconomic models.

Addressing all these sources of uncertainty is a tall order, but economists have made considerable progress. Lars Svensson and Noah Williams are in the vanguard. In a series of important papers, they adapt and extend Markov jump-linear-quadratic (MJLQ) control algorithms so that they are suitable for application to monetary policy. (1) Among other things, they extend MJLQ algorithms to handle forward-looking models and show how to design optimal policies under commitment. Their contribution to this volume (Svensson and Williams, 2008) provides a concise technical introduction to their work and also describes a pair of thoughtful and well-designed examples that illustrate how uncertainty about the monetary transmission mechanism influences optimal policy. One lesson that emerges from their examples is that the benefits of learning are often substantial but that the gains from deliberate experimentation are slight. In their parlance, an "adaptive optimal policy" is almost as good as the fully optimal Bayesian policy.


My comment focuses on the role of experimentation. A natural way to address parameter and/or model uncertainty is to cast an optimal policy problem as a Bayesian decision problem. The decisionmaker's posterior distribution over unknown parameters and/or model probabilities becomes part of the state vector, and Bayes's law becomes part of the transition equation. Because Bayes's law is nonlinear, this breaks certainty equivalence, (2) making the decision rule nonlinear. A Bellman equation instructs the decisionmaker to vary the policy instrument in order to generate information about unknown parameters and model probabilities. Hence, policymakers have an incentive to experiment to tighten that posterior in the future. Although experimentation causes near-term outcomes to deteriorate, it speeds learning and improves outcomes in the longer run. Whether the decisionmaker should experiment a little or a lot is unclear, but it is clear that a Bayesian policy should include some deliberate experimentation.

Yet there is much aversion to deliberate experimentation among macroeconomists and policymakers. For instance, Robert Lucas (1981, p. 288) writes,
 Social experiments on the grand scale may be instructive and admirable, but they are best admired at a distance. The idea ... must be to gain some confidence that the component parts of the program are in some sense reliable prior to running it at the expense of our neighbors.

Alan Blinder (1998, p. 11) concurs, asserting that
 while there are some fairly sophisticated techniques for dealing with parameter uncertainty in optimal control models with learning, those methods have not attracted the attention of ... policymakers. There is a good reason for this inattention, I think: You don't conduct policy experiments on a real economy solely to sharpen your econometric estimates.

One way to make sense of these conflicting attitudes is to invoke Milton Friedman's precept that the best should not be an enemy of the good. According to Svensson and Williams, a good baseline policy involves learning but not deliberate experimentation. In principle, optimal experiments can improve on this baseline policy, but optimal experiments are hard to design because the policymaker's Bellman equation is difficult to solve, the chief obstacle being the curse of dimensionality. Because Bellman equations for policy-relevant models are hard to solve, actual policy experiments are unlikely to be optimal. And although optimal experiments are guaranteed to be no worse than the "learn but don't experiment" benchmark, suboptimal experiments are not. Indeed, they might be much worse. Perhaps this is what Lucas had in mind when he deprecated "grand" policy experiments. (3)

Svensson and Williams have made substantial progress improving algorithms for solving Bayesian optimal policy problems. Without disparaging this contribution, my sense is that the curse of dimensionality will continue to be a significant barrier in practice. In view of this, their finding that the maximum benefit of experimentation is slight takes on greater importance, for it strengthens the case in favor of adaptive optimal policies. Their findings are example specific, but they are consistent with other examples in the literature. More examples would help clinch the argument.


Cogley, Colacito, and Sargent (2007; CCS) examine a central bank's incentive to experiment in the context of two models of the Phillips curve. One model follows Samuelson and Solow (1960) and assumes an exploitable inflation-unemployment tradeoff. The other is inspired by Lucas (1972 and 1973) and Sargent (1973) and represents a rational expectations version of the natural rate hypothesis. Based on data through the mid-1960s, CCS estimate the following two specifications:

Samuelson and Solow:

$U_t = 0.0023 + 0.7971\,U_{t-1} - 0.2761\,\pi_t + 0.0054\,\eta_{1,t}$

$\pi_t = v_{t-1} + 0.0055\,\eta_{3,t}$

Lucas and Sargent:

$U_t = 0.0007 + 0.8468\,U_{t-1} - 0.2489(\pi_t - v_{t-1}) + 0.0055\,\eta_{2,t}$

$\pi_t = v_{t-1} + 0.0055\,\eta_{4,t}$

The variable $U_t$ represents the unemployment gap (i.e., the difference between actual unemployment and the natural rate), $\pi_t$ is inflation, $v_{t-1}$ is programmed or expected inflation for period $t$ conditioned on period $t-1$ information, and $\eta_{i,t}$, $i = 1, \ldots, 4$, are standard normal innovations.
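The two laws of motion are easy to simulate directly. The sketch below (function names are my own) encodes the estimated coefficients above and makes the key structural difference explicit: in the Lucas-Sargent specification only the inflation surprise $\pi_t - v_{t-1}$ moves unemployment, so anticipated inflation is neutral.

```python
import numpy as np

# Estimated coefficients from CCS (data through the mid-1960s).
def unemployment_ss(u_prev, pi, eta):
    """Samuelson-Solow: an exploitable inflation-unemployment tradeoff."""
    return 0.0023 + 0.7971 * u_prev - 0.2761 * pi + 0.0054 * eta

def unemployment_ls(u_prev, pi, v_prev, eta):
    """Lucas-Sargent: only the inflation surprise (pi - v_prev) matters."""
    return 0.0007 + 0.8468 * u_prev - 0.2489 * (pi - v_prev) + 0.0055 * eta

rng = np.random.default_rng(0)
u, v = 0.01, 0.02                        # unemployment gap, programmed inflation
pi = v + 0.0055 * rng.standard_normal()  # realized inflation: pi_t = v_{t-1} + noise
print(unemployment_ss(u, pi, rng.standard_normal()))
print(unemployment_ls(u, pi, v, rng.standard_normal()))
```

Raising programmed inflation lowers expected unemployment in the first equation, whereas raising $v_{t-1}$ and $\pi_t$ together leaves the second equation unchanged.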


CCS assume that one of these specifications is true but that the central bank does not know which one. As in Svensson and Williams (2008), the central bank formulates policy by solving Bayesian and adaptive optimal control problems. The key unknown parameter is the posterior probability, $\alpha$, on the Samuelson and Solow model. This probability is updated every period in accordance with Bayes's law. The central bank minimizes a discounted quadratic loss function subject to the "natural" transition equations for the two models and also a transition equation for $\alpha$. The state vector consists of lagged unemployment and the posterior model probability, $\alpha$, and the control variable is programmed inflation.

For the adaptive policy, the central bank updates $\alpha$ every period but then treats the current estimate as if it would remain constant forever. Thus, for the adaptive control problem, the $\alpha$-transition equation is

$\alpha_{t+j} = \alpha_t \quad \forall\, j \geq 0.$

Because the other transition equations are also linear and the loss function is quadratic, it follows that certainty equivalence holds and that the policy rule is linear. The thin gray lines in Figure 1 illustrate how programmed inflation is set as a function of $\alpha$ and lagged unemployment. Each panel refers to a different value of $\alpha$, model uncertainty being most pronounced for $\alpha \approx 0.4$. Lagged unemployment is shown on the x-axis in each panel, and programmed inflation is on the y-axis. Except when $\alpha$ is close to zero (the central bank puts high probability on the Lucas and Sargent model), programmed inflation is countercyclical.
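The countercyclical shape of the adaptive rule has a simple one-period intuition. In a myopic simplification (an illustration of mine, not the CCS dynamic program), the bank chooses programmed inflation $v$ to minimize $\alpha E[U_{ss}^2] + (1-\alpha)E[U_{ls}^2] + \lambda v^2$; because expected unemployment under the Lucas-Sargent model does not depend on $v$, the first-order condition yields a closed-form rule that is zero at $\alpha = 0$ and increasing in lagged unemployment otherwise:

```python
# Myopic one-period version of the adaptive policy problem (an illustrative
# simplification, not the CCS dynamic program). The bank minimizes
#   alpha*E[U_ss(v)^2] + (1 - alpha)*E[U_ls^2] + lam*v^2,
# where E[U_ss] = C0 + C1*u_prev - C2*v and E[U_ls] does not depend on v
# (anticipated inflation is neutral under Lucas-Sargent).
C0, C1, C2 = 0.0023, 0.7971, 0.2761   # Samuelson-Solow coefficients from the text

def v_star(alpha, u_prev, lam=1.0):
    """Closed-form solution of the myopic first-order condition."""
    return alpha * C2 * (C0 + C1 * u_prev) / (lam + alpha * C2 ** 2)

print(v_star(0.0, 0.02))   # all weight on Lucas-Sargent: no reason to inflate
print(v_star(0.4, 0.02))   # positive, and increasing in lagged unemployment
```

Consistent with the thin gray lines in Figure 1, the rule is flat at zero when $\alpha = 0$ and countercyclical otherwise; the full dynamic problem adds intertemporal terms but preserves this qualitative shape.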


For the Bayesian optimal policy, the central bank recognizes that actions taken today influence future beliefs about the two models. Hence, the $\alpha$-transition equation is governed by Bayes's law,

$\alpha_t = B(\alpha_{t-1}, s_t),$

where $s_t$ represents the "natural" state variables for the two models. The thick blue lines in Figure 1 depict the Bayesian decision rule. For the most part, they differ only slightly from the adaptive optimal policy. The chief difference is that the Bayesian policy is cyclically opportunistic when there is a lot of model uncertainty. When $\alpha \approx 0.4$, the Bayesian policy calls for higher (lower) programmed inflation relative to the adaptive optimal policy when unemployment is high (low). In other words, a recession is the best time to experiment with Keynesian stimulus and a boom is the best time to experiment with disinflation.
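A minimal sketch of the update $B(\cdot)$ in this two-model setting (my own coding of standard Bayes's rule, using the coefficients and innovation standard deviations reported above): the posterior weight on the Samuelson-Solow model rises when observed unemployment lands closer to that model's one-step-ahead prediction.

```python
import math

def bayes_update(alpha, u, u_prev, pi, v_prev):
    """B(alpha_{t-1}, s_t): posterior probability of the Samuelson-Solow
    model after observing the period-t unemployment gap u."""
    def normal_pdf(x, mean, sd):
        return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
    mean_ss = 0.0023 + 0.7971 * u_prev - 0.2761 * pi             # SS prediction
    mean_ls = 0.0007 + 0.8468 * u_prev - 0.2489 * (pi - v_prev)  # LS prediction
    num = alpha * normal_pdf(u, mean_ss, 0.0054)
    den = num + (1.0 - alpha) * normal_pdf(u, mean_ls, 0.0055)
    return num / den

# An observation at the SS prediction pushes alpha up from 0.4:
print(bayes_update(0.4, -0.0035, 0.01, 0.05, 0.02))
```

Because the instrument $v_{t-1}$ shifts the two predicted means by different amounts, the policymaker's choice affects how informative each observation is; this is the channel through which experimentation operates.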

Because the two policy functions are so alike, it is not surprising that the benefits of deliberate experimentation are small. Figure 2 portrays the value functions associated with the adaptive and Bayesian policy rules. Because the adaptive policy is not optimal, it follows that $V_B(s, \alpha) \geq V_A(s, \alpha)$, where $V_B$ and $V_A$ denote the Bayesian and adaptive value functions, respectively, with the discrepancy measuring the benefits of deliberate experimentation. However, the differences are so slight that they cannot be detected in the figure. Thus, the results of CCS agree with those of Svensson and Williams. (4)


Deliberate experiments are substitutes for natural experiments. Hence, the incidence of natural experiments arising from exogenous shocks influences the value of intentional experiments. In the CCS example, one reason why the adaptive policy well approximates the Bayesian policy is that enough natural experimentation occurs for the central bank eventually to learn the true model under the adaptive policy. (5) Deliberate experimentation would speed learning, but not alter the limit point. In other models, such as Kasa (1999), there isn't enough natural experimentation to learn the truth in the absence of intentional experimentation. In those environments, deliberate experimentation would alter not only the transition path but also the limit point of the learning process. Presumably that would enhance the value of deliberate experimentation, for in that case the central bank would collect dividends on experimentation forever.

Another reason why the benefits of experimentation are small is that Bayesian updating makes posterior model probabilities a martingale (Doob, 1948), implying $E_t(\alpha_{t+j}) = \alpha_t$. Thus, the adaptive transition equation well approximates the center of the Bayesian predictive density for $\alpha_{t+j}$. The adaptive model poorly approximates its tails, however, because it disregards uncertainty about future model probabilities. Nevertheless, when precautionary motives are weak, decisions depend mostly on the mean, and errors in approximating the tails do not matter much. In these examples, the central bank's loss function is quadratic, so precautionary motives do not enter through preferences. Precautionary behavior comes in only because of nonlinearity in the transition equation. Accordingly, motives for experimentation might be strengthened by altering the central bank's objective function.
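The martingale property is easy to verify numerically. The sketch below uses a stylized two-model Gaussian example of my own (the means and standard deviation are illustrative, chosen to be of the same magnitude as the estimates above): integrating the Bayes-updated posterior against the predictive density of the next observation recovers the prior exactly.

```python
import numpy as np

# Stylized two-model Gaussian example (illustrative, not the CCS code):
# with prior alpha on model 1, average the Bayes-updated posterior over the
# Bayesian predictive density of the next observation y.
alpha = 0.4
m1, m2, sd = -0.0035, 0.0017, 0.0055   # one-step-ahead means, innovation sd

y = np.linspace(-0.06, 0.06, 40001)    # grid covering the predictive support
dy = y[1] - y[0]
pdf1 = np.exp(-0.5 * ((y - m1) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
pdf2 = np.exp(-0.5 * ((y - m2) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

predictive = alpha * pdf1 + (1 - alpha) * pdf2   # mixture predictive density
posterior = alpha * pdf1 / predictive            # Bayes's law at each possible y

# posterior * predictive collapses to alpha * pdf1, whose integral is alpha:
# the expected updated probability equals the prior (the martingale property).
print(np.sum(posterior * predictive) * dy)       # ~ 0.4
```

The cancellation in the last step is the point: whatever the decisionmaker expects to believe tomorrow, on average, is what she believes today, so a transition equation that holds $\alpha$ fixed gets the mean exactly right and errs only in the spread.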

In principle, one way to reinforce precautionary motives is by introducing a concern for robustness. Building on work by Hansen and Sargent (2007), Cogley et al. (2008) replace the expectations operators that appear in a Bayesian value function with a pair of risk-sensitivity operators. One risk-sensitivity operator guards against misspecification of each of the submodels, and the other guards against misspecification of the central bank's prior. The two risk-sensitivity operators can be interpreted as ways of seeking robustness with respect to forward- and backward-looking features of the model, respectively. Applying these operators to the Phillips curve models examined in CCS, Cogley et al. find that the forward-looking risk-sensitivity operator strengthens experimental motives, whereas the backward-looking operator mutes them. The combined effect is ambiguous and depends on the relative weight placed on the two operators.


Designing an optimal policy is substantially more complex when experimental motives are active. That easy-to-compute, nonexperimental policies well approximate hard-to-compute, fully-optimal policies is an important result. If this conclusion holds up to further scrutiny, the analysis of monetary policy under model uncertainty will be greatly simplified. In this instance, it seems that "the good" is an excellent substitute for "the best."


Beck, Gunter and Wieland, Volker. "Learning and Control in a Changing Environment." Journal of Economic Dynamics and Control, November 2002, 26(9/10), pp. 1359-77.

Blinder, Alan S. Central Banking in Theory and Practice. Cambridge, MA: MIT Press, 1998.

Cogley, Timothy; Colacito, Riccardo and Sargent, Thomas J. "Benefits from U.S. Monetary Policy Experimentation in the Days of Samuelson and Solow and Lucas." Journal of Money, Credit, and Banking, February 2007(Suppl.), 39, pp. 67-99.

Cogley, Timothy; Colacito, Riccardo; Hansen, Lars P. and Sargent, Thomas J. "Robustness and U.S. Monetary Policy Experimentation." Journal of Money, Credit, and Banking, 2008 (forthcoming).

Doob, Joseph L. "Application of the Theory of Martingales." Colloques Internationaux du Centre National de la Recherche Scientifique, 1948, 36, pp. 23-27.

El-Gamal, Mahmoud A. and Sundaram, Rangarajan K. "Bayesian Economists ... Bayesian Agents: An Alternative Approach to Optimal Learning." Journal of Economic Dynamics and Control, May 1993, 17(3), pp. 355-83.

Hansen, Lars P. and Sargent, Thomas J. "Robust Estimation and Control without Commitment." Journal of Economic Theory, September 2007, 136(1), pp. 1-27.

Kasa, Kenneth. "Will the Fed Ever Learn?" Journal of Macroeconomics, Spring 1999, 21(2), pp. 279-92.

Lucas, Robert E. Jr. "Expectations and the Neutrality of Money." Journal of Economic Theory, April 1972, 4(2), pp. 103-24.

Lucas, Robert E. Jr. "Some International Evidence on Output-Inflation Trade-Offs." American Economic Review, June 1973, 63(3), pp. 326-34.

Lucas, Robert E. Jr. "Methods and Problems in Business Cycle Theory," in Robert E. Lucas Jr., ed., Studies in Business-Cycle Theory. Cambridge, MA: MIT Press, 1981.

Samuelson, Paul A. and Solow, Robert M. "Analytical Aspects of Anti-Inflation Policy." American Economic Review, May 1960, 50(2), pp. 177-84.

Sargent, Thomas J. "Rational Expectations, the Real Rate of Interest, and the Natural Rate of Unemployment." Brookings Papers on Economic Activity, 1973, Issue 2, pp. 429-72.

Svensson, Lars E.O. and Williams, Noah. "Monetary Policy with Model Uncertainty: Distribution Forecast Targeting." Working paper, Princeton University, 2007a; www.princeton.edu/~svensson.

Svensson, Lars E.O. and Williams, Noah. "Bayesian and Adaptive Optimal Policy Under Model Uncertainty." Working paper, Princeton University, 2007b.

Svensson, Lars E.O. and Williams, Noah. "Optimal Monetary Policy Under Uncertainty: A Markov Jump-Linear-Quadratic Approach." Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 275-93.

Wieland, Volker. "Monetary Policy, Parameter Uncertainty, and Optimal Learning." Journal of Monetary Economics, August 2000a, 46(1), pp. 199-228.

Wieland, Volker. "Learning by Doing and the Value of Optimal Experimentation." Journal of Economic Dynamics and Control, April 2000b, 24(4), pp. 501-34.

(1) See Svensson and Williams (2007a,b and 2008).

(2) Certainty equivalence would hold if the central bank's objective function were quadratic and the transition equation were linear. The presence of Bayes's law as a component of the transition equation makes it nonlinear and hence breaks certainty equivalence.

(3) One of the initial objectives of Cogley, Colacito, and Sargent (2007) was to assess whether the Great Inflation could be interpreted as an optimal experiment. We found that it could not. At least in our model, optimal experiments did not generate a decade-long surge in inflation. On the contrary, they generated small, cyclically opportunistic perturbations of inflation relative to an adaptive, non-experimental policy. Whether the Great Inflation was initiated by a suboptimal policy experiment remains an open question.

(4) Other aspects of monetary policy experimentation are examined by Wieland (2000a,b) and Beck and Wieland (2002).

(5) El-Gamal and Sundaram (1993) highlight the importance of natural experiments.

Timothy W. Cogley is a professor of economics at the University of California, Davis.
COPYRIGHT 2008 Federal Reserve Bank of St. Louis