
Poor hand or poor play? The rise and fall of inflation in the U.S.

Introduction and summary

Figure 1 shows the level of inflation in the U.S. economy (measured as the percentage growth in the gross domestic product or GDP deflator over the previous four quarters) from 1951 to 2003. The general pattern is familiar to many of us: The level of inflation was successively low and not very variable (in the 1950s and 1960s), high and variable (in the 1970s), and low and not variable again (since the 1980s).

[FIGURE 1 OMITTED]

The graph is divided into five sections, by tenure of the chairmanship of the Board of Governors of the Federal Reserve System: William McC. Martin (1951-70), Arthur Burns (1970-78), G. William Miller (1978-79), Paul Volcker (1979-87), and Alan Greenspan (since 1987). While the exact degree of control of a central bank over the level of prices is a matter of debate, the conventional wisdom assigns a major role to these individuals in the rise and fall in inflation. In particular, the fact that inflation has been low since the 1980s has been credited to the efforts of Paul Volcker and Alan Greenspan.

This article surveys the recent literature motivated by the following question: To what extent does the pattern of figure 1 reflect the actions of these individuals? In particular, what caused the high inflation in the 1970s, and is the low inflation of the 1990s due to a change in policy, as the conventional wisdom suggests?

I analyze the competing stories in the literature along one particular dimension. One story, which I call the bad-policy story, blames the policymakers for the inflation of the 1970s, and sees a decisive (and permanent) break around 1980. It is an optimistic story that relies on errors made and lessons well learned, and it reflects the conventional wisdom. This would be the "poor play" in the title above. Against this story of successful learning, I place an array of alternatives under the label of bad-luck stories: They share less emphasis on learning, ranging from imperfect learning by policymakers to no learning at all. Correspondingly, they place more emphasis on the role of (bad) luck in shaping the pattern of inflation in the past 50 years. This is the "poor hand" scenario.

So far, the analysis of the evidence for the bad-policy stories has taken place along one dimension, namely, the time-series properties of inflation and other macroeconomic series. Furthermore, the debate has revolved around the effort to detect a change in policy. The competing hypothesis (emphasizing the role of luck) is that the nature of the randomness affecting the economy, and not the behavior of the central bank, is what has changed over time.

The empirical debate is not settled, but some common ground appears to be emerging, allowing for a measure of both changes in policy and changes in the luck faced by policymakers. We still have some way to go in understanding the quantitative importance, and the sources, of both types of changes.

The bad-policy story: Narrative and a subtext

I first present an exaggerated version of the narrative in DeLong (1997). The force of the bad-policy story, as it accounts for the rise and fall of inflation, is that policy was poor, then improved. Thus, the narrative relies strongly on learning over time and on the power of ideas, to which I turn later.

A narrative

DeLong notes that the U.S. has known inflation at various times in its history, but that the episode of the 1970s was its only peacetime inflation. Wars and inflation have been associated for centuries, because printing money is a cheap way for governments to raise revenues during a major fiscal emergency without raising taxes explicitly. No such emergency seems to explain the "Great Inflation," as DeLong calls the inflation of the 1970s. In other words, it cannot be excused as part of a time-honored tradition in public finance. What, then, explains it?

DeLong's narrative unfolds in three acts. In Act I (the 1950s), the Fed, newly liberated from its wartime obligation to support the Treasury's debt-management policy by the Treasury Accord of 1951, follows a prudent policy and maintains relatively low inflation after the lifting of wartime price controls and the pressures of financing the Korean War. A sense of foreboding haunts the scene, because of the shadow cast by the great macroeconomic event that dominated the twentieth century (and indeed, gave birth to macroeconomics as a field of economics), the Great Depression. Unemployment reached such unprecedented levels that it ceased to be tolerated as an inevitable side effect of business cycles. Unemployment, except perhaps for 1 percent or so of frictional unemployment, came to be viewed as both cyclical and, perhaps, curable. At the close of Act I, enter the villains, carrying with them the promise of a cure for unemployment, namely inflation. The villains, in this narrative, are Samuelson and Solow, whose 1960 article held out the tantalizing possibility of achieving lower unemployment at the cost of apparently modest permanent increases in inflation.

In Act II, Fed Chairman Martin (2) ceded to the temptation to use inflation, and over the course of the 1960s unemployment fell and inflation rose. However, by 1969 unemployment had only fallen to 4 percent, while inflation was reaching 6 percent, somewhat worse terms than those promised by Samuelson and Solow. The next year, Martin left Burns with a set of unpleasant choices, among which Burns couldn't or wouldn't make the harder one. Burns appears like a figure from a Greek tragedy, aware of his situation but unable to resolve it. Christiano and Gust (2000) cite Burns recognizing in 1974 that "policies that create excess aggregate demand, and thereby drive up wage rates and prices, will not result in any lasting reduction in unemployment." Thus, Burns was arguing against the Samuelson-Solow remedy--yet he did not take action to prevent the 1970s inflation.

DeLong cites a number of extenuating circumstances in favor of Burns: political pressures from the White House, difficulties in appreciating the inflation problem due to the price controls of the early 1970s, and pervasive failures to forecast inflation, including on the part of the private sector. But Burns and other policymakers simply were not willing to accept the costs of disinflation. Christiano and Gust (2000) again cite Burns fearing "the outcry of an enraged citizenry" in response to attempts at stabilizing inflation. Taylor (1997) adds that, by the late 1970s, the costs of disinflation appeared too high to policymakers. He cites Perry (1978) showing that a 1 percent fall in inflation would require a 10 percent fall in GDP and concluding, "whatever view is held on the urgency of slowing inflation today, it is unrealistic to believe that the public or its representatives would permit the extended period of high unemployment required to slow inflation in this manner."

Act III brings redemption: Inflation reaches such heights in 1979-80 that the newly appointed Volcker has what none of his predecessors did, a political mandate to stop inflation. The rest is well known: A first attempt at raising rates was reversed with the onset of the 1980 recession, but the second attempt, initially met with incredulity, succeeded in purging the economy of its inflationary expectations. This coincided with the 1982 recession, which, costly as it was, did not reach the depths Perry might have expected on the basis of his estimates. In the event, a committed central banker whose anti-inflationary stance became trusted could permanently alter inflationary expectations, without having to purchase low inflation at the cost of high unemployment.

The crisis of Act II, America's peacetime inflation, carries an air of inevitability, but this was only because of the intellectual climate shared by economists and politicians. Policymakers worked with an incorrect model of the economy, and the consequences of their actions led them to recognize the error of their ways. Having acquired a correct model of the economy, policymakers proceeded to implement an optimal policy. In this view, the Great Inflation of the 1970s is simply a result of poor policy.

Subtext: The power of learning

The bad-policy narrative is all about learning from past mistakes. It relies on policymakers' beliefs, but the driving force is ultimately located in academia, perhaps not surprisingly for a story told by academics. (3) I review the concomitant intellectual developments, using progress in the discipline of economics as a gauge of social learning.

Policymakers, in this view, had been searching for an appropriate monetary policy in the absence of the strict constraints that the gold standard had imposed until 1934. In the aftermath of World War II, the Bretton Woods system had been created, and it was still meant to impose some constraints, albeit weaker than those of the pure gold standard. In the 1950s, policymakers still viewed price stability as their main objective, even if they were conscious of the possible stimulus that they could deliver via inflation.

Then the academic climate changed. A. W. Phillips discovered his famous curve (the Phillips curve) by plotting a century's worth of wage growth data against unemployment in the UK. Samuelson and Solow (1960) reproduced the plot with U.S. data. They stylized the rather nebulous scatter-plot into a neat downward-sloping graph of inflation against unemployment (by subtracting average productivity growth from wage growth) and suggested a "menu of choice between different degrees of unemployment and price stability." One could pick price stability with 5.5 percent unemployment, or one could go for the "nonperfectionist's goal" of 3 percent unemployment at the cost of 4 percent to 5 percent inflation. The lesson that policymakers took from this work is that permanent increases in inflation of a moderate magnitude could purchase significant reductions in unemployment.

Samuelson and Solow have become the villains of the story. It is true that they propose this menu, but they are also insistent that it is only for the short term, and recognize that the terms of the trade-off could shift over time. (4) Conversely, in DeLong's narrative, the Great Depression made people think of all unemployment as curable. That is, monetary policy's failure to act in the 1930s convinced a later generation of monetary policy's power to act. This somewhat paradoxical view may have something to do with the considerable influence of Friedman and Schwartz (1963), who made a strong case for the Fed's responsibility in worsening, if not causing, the Great Depression. When it comes to looking for accomplices in the Great Inflation, these authors are not rounded up with the usual suspects.

Be that as it may, the late 1960s and early 1970s saw an acceleration in inflation with no permanent reduction in unemployment, and ultimately the attempt to exploit the trade-off embodied in the Phillips curve resulted in stagflation, an incomprehensible combination of high unemployment and high inflation. Adverse supply shocks such as the oil price shocks of 1973 and 1979 compounded but did not create the problem. For DeLong, the exact timing is rather immaterial: The Great Inflation was an "accident waiting to happen," once the Great Depression had revealed the disease and Samuelson and Solow had revealed the cure.

Meanwhile, in academia, the foundations were laid for the next stage. Once again, academics led the way, first with the rebuttal of the traditional Phillips curve by Phelps (1968) and Friedman (1968), who insisted that, in the long run, there could be no trade-off, only varying levels of inflation with the same "natural" unemployment rate. The argument was that only unanticipated inflation could have real effects: Perfectly anticipated inflation would simply be built into nominal wage growth, the way it would be built into nominal interest rates. Attempts at exploiting the illusory tradeoff would only achieve the natural rate, but with high levels of inflation. The argument was formalized by Lucas (1972).

Empirical tests of the natural-rate hypothesis took the form of the expectations-augmented Phillips curve. The Phillips curve equation now related the growth of wages to expected inflation, in addition to unemployment. The test was as follows: If the coefficient on expected inflation was found to be less than one, then the Friedman theory would be rejected. Since expected inflation would not feed one-for-one into wage growth and neutralize the Phillips curve, there would still be room for inflation to reduce unemployment.

For the purpose of empirically testing the natural-rate hypothesis, expectations (which are not measured directly) were represented as a distributed lag (or weighted average of past values) of inflation. This presented an identification problem, however: Without further assumptions, there is no way to disentangle the weights on past values of inflation from the coefficient on expected inflation that multiplies them all. In other words, if regressing wage growth on past inflation gave a low value, one could not determine if this reflected a low impact of inflation expectations on wage growth, or a low impact of past inflation on inflation expectations.

An additional assumption was justified by the following reasoning. Suppose that the government set a permanent level of inflation. Over time, one would expect agents to adjust their expectations to that permanent level. This meant that the sum of weights on past inflation should equal one. With this assumption, researchers such as Solow and Tobin empirically found a coefficient on expected inflation less than one and rejected the natural-rate hypothesis.
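To fix ideas, the regression and the identification problem can be written out in stylized form; the notation here is mine, not that of the original studies. With $w_t$ denoting nominal wage growth and $U_t$ unemployment,

$w_t = \alpha + \gamma \pi^e_t - \delta U_t + \varepsilon_t$, with $\pi^e_t = \sum_{i=1}^{k} \lambda_i \pi_{t-i}$,

so that the equation actually estimated is

$w_t = \alpha + \sum_{i=1}^{k} (\gamma \lambda_i) \pi_{t-i} - \delta U_t + \varepsilon_t$.

Only the products $\gamma \lambda_i$ are identified: a small estimated coefficient on lagged inflation could reflect either a small $\gamma$ or small weights $\lambda_i$. The assumption that the weights sum to one lets one read $\gamma$ off the sum of the estimated coefficients, and the natural-rate hypothesis is rejected if that sum falls significantly short of one.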

Then, in the early 1970s, two things happened. First, Sargent (1971) pointed out how the identifying assumption was valid only for certain inflation processes and not others. If the inflation process is highly persistent (for example, when inflation is constant), then expected inflation, under rational expectations, can indeed be approximated by a distributed lag with coefficients summing to one. With one lag, for example, the coefficient is one: If inflation is extremely persistent, agents expect inflation tomorrow to be very much like inflation today, and lagged inflation represents expected inflation adequately. If, however, the government tends to fight inflation when it arises, then higher inflation today signals lower inflation tomorrow. With one lag, the coefficient would be less than one, and might even be negative. Put simply, how agents form expectations about inflation depends on how inflation behaves, and if the behavior of inflation changes, so will their expectations.
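Sargent's observation can be illustrated with a one-lag example (again, my notation). If inflation follows $\pi_t = \rho \pi_{t-1} + \epsilon_t$, then under rational expectations $\pi^e_t = \rho \pi_{t-1}$: the distributed-lag weight on past inflation is simply $\rho$. It is close to one only when inflation is very persistent; if the authorities lean against inflation, so that high inflation today signals lower inflation tomorrow, $\rho$ is small or even negative, and the identifying assumption that the weights sum to one is wrong.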

Second, as if on cue, the data began to change. As shown in figure 1, inflation became more persistent. This led to different results, and the coefficient on inflation came closer to one, making the natural-rate hypothesis more plausible even to Solow and Tobin. Ironically, inflation expectations now appeared to be persistent or "inertial." This led to the notion that they could only be reduced by a very prolonged bout of disinflation, which the expectations-augmented Phillips curve predicted would be very costly in terms of output. The "sacrifice ratio" (cost of disinflation in terms of lost output) was by the late 1970s estimated to be prohibitively high. This, argues Taylor (1997), contributed to policymakers' willingness to tolerate increasing levels of inflation.

Sargent's point relates to a key step in the history of ideas, namely the Lucas (1976) critique of econometric policy evaluation as it was then practiced. One cannot evaluate alternative policies on the basis of outcomes achieved under a particular policy, unless one explicitly takes into account how that policy was incorporated in private agents' decisions. A change in policy will lead to changes in agents' behavior that may well invalidate the econometric model that recommended the change in the first place. The Lucas critique taught policymakers that their actions could alter the terms of a trade-off they imagined were fixed.

The Lucas critique, among other things, forcefully directed attention to the role of expectations, especially private agents' expectations of future government policy. The recognition that expectations can't be systematically fooled or manipulated places great discipline on economic theory. The consequences were drawn in many different settings, and one of those was government policy as a control problem in the face of rational expectations (Kydland and Prescott, 1977, and Calvo, 1978). The expectations-augmented Phillips curve still left central bankers with the possibility of stimulating the economy with surprise inflation. But these models warned central bankers that the public was well aware of this temptation, and that, unless they could find a credible way to resist it, they would always be expected to cede to it. Well-meaning central bankers could find themselves with high inflation but nothing better than the natural rate of unemployment.

Alternative stories: Bad luck, traps, imperfect learning

I now present alternative stories that have been provided, or could be provided, for the events under discussion. These alternatives draw from some of the work that I reviewed above.

Expectations and the trap of time-inconsistency

One line of thought stems from the Kydland and Prescott (1977) analysis. In the bad-policy analysis, the goal has been to alert central bankers to a temptation they face--the rationale being that by becoming conscious of the temptation, they are somehow better placed to resist it. Yet the analysis itself is essentially time-invariant: It describes a temptation that was always there, always will be there, and cannot be resisted.

The ingredients of the model are a government and a private sector. The private sector makes forecasts of the government's inflation policy and has rational expectations. The government has a correct model of the economy (an expectations-augmented Phillips curve) and tries its best to minimize both inflation and unemployment. (5)

The expectations-augmented Phillips curve says that the central bank can lower unemployment by engineering a surprise inflation, that is, choosing a level of inflation higher than the one the private sector anticipated. But, in the analysis, the private sector is well aware of that temptation--hence its expectations of inflation will be higher. In an equilibrium where the private agents have rational expectations (a property of equilibrium that is seen as requisite since Lucas's 1976 critique), the central bank's attempt to set inflation high will be forecasted, and there will be no surprise--hence an unemployment rate no lower than the natural rate, but a higher level of inflation. How is that level of inflation determined? It must be such that the central bank has no incentive to deviate from what the public expects: In other words, the inflation rate must be high enough to make the benefits of even higher inflation, in terms of unemployment, not worthwhile. (6) This level of inflation will depend on the natural rate of unemployment: The higher the natural rate of unemployment, the higher inflation must be to dissuade the central banker from trying to reduce unemployment.
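A minimal version of this argument can be written down with functional forms that are illustrative assumptions of mine, in the spirit of Kydland and Prescott (1977) and Barro and Gordon (1983), rather than anything taken from those papers. Suppose the central bank minimizes the loss $L = \pi_t^2 + \lambda (U_t - k U^n)^2$, with $k < 1$ so that it aims below the natural rate $U^n$, subject to the expectations-augmented Phillips curve $U_t = U^n - \theta(\pi_t - \pi^e_t)$. Minimizing over $\pi_t$ while taking expectations $\pi^e_t$ as given, and then imposing rational expectations ($\pi^e_t = \pi_t$, so that $U_t = U^n$), yields the equilibrium inflation rate

$\pi_t = \lambda \theta (1 - k) U^n$,

which is increasing in the natural rate of unemployment, as described above.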

This prediction of the model has been used to explain the rise and fall of inflation as merely mirroring the rise and fall of the natural rate of unemployment, a phenomenon itself purely driven by factors outside the Fed's control, such as changes in the structure of the economy (for example, demographic changes or changes to the labor markets). The idea was proposed by Parkin (1993) and tested empirically by Ireland (1999). Ireland draws the implications of the model for the comovements of inflation and the natural rate and finds that, at least in terms of their long-run relationship, they are supported by the data. The short-run implications fare less well, a failure that can plausibly be assigned to the extreme simplicity of the model. (7)

In this story, then, nothing has been learned: Inflation is lower not because central bankers do a better job of resisting the temptation to inflate, but rather because the equilibrium level of inflation that results from their yielding to the temptation is lower, for reasons outside their control. The Kydland-Prescott model implicitly points to institutional changes, in a rather vague way, as the only solution to the dilemma. If central bankers have the ability to commit, or tie their hands, they can deprive themselves of the option to yield to the temptation, much as Ulysses, bound to the mast of his ship (at his own request), was unable to jump overboard and follow the call of the Sirens. As DeLong (1997) has noted, while talk of central bank independence has gained importance because of the Kydland-Prescott arguments, no institutional change can be identified that has given the Fed a better ability to commit since 1979.

There is a variant of the Kydland-Prescott story, starting with Barro and Gordon (1983), that uses reputation as an ersatz commitment mechanism. What a commitment mechanism achieves is to narrow the expectations of the private sector down to a unique, and desirable, action by the central bank. Game theory suggests that, in repeated situations, there are other (noncooperative) ways to support a narrow set of expectations. The private sector's behavior now takes the following form: As long as the central bank conforms to its expectations and behaves well (by not inflating), those expectations will be continued. But if the central bank deviates and allows itself to cede to the temptation of a surprise inflation only once, then the private sector will expect it henceforth always to cede. And, given such expectations, the central bank has no incentive to refute them, because doing so would be costly in terms of the Phillips curve. Economists call "reputation" a set of expectations, consistent with past behavior, that creates incentives for future behavior. Should the reputation be lost, the private sector's expectations would coerce the central bank into the high-inflation outcome forever. The very threat of such a dire punishment can be sufficient to keep the central bank in the desirable outcome.
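In stylized form (again, notation and payoffs of my own choosing), let $V^{L}$ be the central bank's per-period payoff in the low-inflation outcome, $V^{D}$ the one-period payoff from a surprise inflation, $V^{H}$ the per-period payoff in the high-inflation Kydland-Prescott outcome, and $\beta$ the discount factor. The low-inflation outcome is sustained by reputation as long as

$\dfrac{V^{L}}{1 - \beta} \ge V^{D} + \dfrac{\beta V^{H}}{1 - \beta}$,

that is, as long as the one-time gain from cheating is outweighed by the discounted cost of being expected, forever after, to inflate.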

This story alone, focusing as it does on sustaining the good outcome, will not explain bad outcomes such as America's peacetime inflation. But it can be modified to do so, because as it turns out, the threat of losing one's reputation can maintain all sorts of behavior, not just the best. In this spirit, Chari, Christiano, and Eichenbaum (1998) have proposed a model (extended in Christiano and Gust 2000; see also Leduc 2003) where the behavior supported by the fear of losing one's reputation can be quite arbitrary. They illustrate this with a "sunspot" equilibrium, which they think can be used to explain the rise of inflation in the 1970s. The private sector's behavior now has two components: One is that the central bank's deviations from the private sector's expectations toward high money growth are "punished" by a loss of reputation as before; the other is that these expectations are now assumed to be driven by "sunspots," that is, random events that have no direct relevance for the economy. Thus, for random reasons, the private sector suddenly believes that the central bank will increase inflation this period, and sets prices in advance accordingly. Once those expectations are in place, the central bank has no choice but to validate them: Producing lower inflation would be costly in terms of output, producing higher inflation would be costly in terms of reputation. (8) Chari, Christiano, and Eichenbaum call such equilibria "expectation traps."

Such models suffer from some limitations. Precisely because the threat of losing one's reputation is a powerful incentive, the model has weak predictions. A given strategy of the central bank will be an equilibrium of the model as long as the pay-off to the central bank of sticking with the strategy is greater than the pay-off of deviating once and being punished thereafter. Since the latter pay-off is just a number, many strategies will be equilibria for the central bank, and a whole range of behavior is potentially predicted by the model. Moreover, the model makes statements about outcomes, not about specific strategies or beliefs. In particular, it shows that if any deviation from the private sector's expectations on the part of the central bank is punished by a loss of reputation, then those expectations will be fulfilled. But it does not say where those expectations come from in the first place. Finally, the rise and fall of inflation is explained, in such a model, by rising and falling expectations of what the central bank will do. Many other patterns of inflation could have been explained in just the same way.

The Lucas critique taken seriously

Just as Kydland and Prescott's paper suggested one alternative story, another key development in macroeconomics suggests the second, namely the Lucas critique, taken to its logical conclusion. Lucas critiqued the then-current practice of using past data to estimate the response of the economy to past policies and then using these numbers to evaluate its response to alternative, future policies. He argued that one ought to take expectations into account explicitly: The past behavior of the economy was premised on the belief that particular policies were being followed. If new policies were substituted, the beliefs would change, and the response of the economy would be different. Only more careful modeling of the economy, based on "deep" parameters invariant to policy changes and on correct modeling of expectations, can be logically coherent. Once the deep parameters are estimated, an alternative policy can be evaluated, with a new set of expectations on the part of the private sector governing the new response of the economy.

But, as Sargent (1984) and Sims (1988) pointed out, there is an inconsistency in this procedure. In the estimation phase, it assumes that agents took past policies as fixed forever, and in the evaluation phase, it assumes that they will take the new policies as fixed forever. The mooted change in policy is thus totally unanticipated ex ante, but entirely credible ex post. Shouldn't the logic of the Lucas critique be carried to its conclusion? If so, the change in policy itself should be modeled, and agents assumed to assign some probability to the change taking place. How do agents assign a probability to various policy changes? Knowing their policymaker, they should be figuring out what he intends to do: Thus, agents should have a model of the policymaker's choice of policy. But this has the effect of sucking the policymaker, the economic adviser, the econometrician, and ultimately, the modeler into the model.

Sims (1988) argues for a route out of this conundrum, essentially by modeling the policymaker as an optimizing agent with somewhat less information than the private sector at each point in time and, therefore, committing slight policy errors each time. This leaves room for econometric estimation of the economy's response, and policy advice predicated on this estimation, that is logically consistent. The government acts, making slight mistakes: This generates outcomes that the government observes and uses to refine its estimate of the parameters of the economy. But this view does not allow major changes in policy, and ultimately any reasonable econometric procedure will rapidly lead to a good estimate of the parameters, with no further learning taking place. Using this view to look at the inflation of the 1970s leads one to a slightly Panglossian (9) but coherent view. The Fed, whatever it did, was doing the best it could. Inflation in the 1970s and the early 1980s might seem high to us, but it could have been worse. (10)

Another potential escape from the conundrum is to model the policymaker as randomly switching between regimes, with the probabilities of switching between regimes fixed and known to the private sector (Cooley, LeRoy, and Raymon, 1984). Leeper and Zha (2001) develop this idea to model "modest" policy interventions as small but significant actions that do not lead agents to revise their beliefs as to which regime is currently in place. In the context of our question, this variant does leave room for major changes in policy, but now only as unexplainable random changes.

None of these theories are really proposed as explanations for the behavior of U.S. inflation. I present them because they play an important role in the debate on the empirical evidence described below. In particular, they underlie several researchers' thinking about monetary policy. Anyone who tends to subscribe to these views will be inclined to look for empirical evidence that sustains the bad-luck view, since policy is never bad.

Imperfect learning

Both the Kydland-Prescott view and the Sims-Leeper-Zha view leave no room for learning--either because the central bank is trapped in its dilemma, or else because the central bank has always been doing the best it could, or because it is just randomly changing policy. The two views, of course, are mutually compatible: It may well be that the best the central bank can do is the outcome dictated by the Kydland-Prescott analysis. Both views ultimately will rely on some external variation in the economy (like a rising, then falling, natural rate, or a sequence of bad shocks in the 1970s and 1980s) to account for the rise and fall of inflation. To generate the rise and fall within the model itself, without overly counting on external factors, Sargent (1999) reintroduces some amount of learning, but not a lot; not enough, at any rate, to restore the bright optimism of the bad-policy story. This is our third alternative story.

Sargent starts from the Kydland-Prescott model, but rather than viewing the government as having a correct model but less information than the private sector as Sims does, he views it as observing all relevant information, but having an incorrect model. The policymaker tries to do his best, based on his beliefs about the parameters of his (incorrect) model. His actions then generate outcomes, which he observes and uses to update his estimates of the parameters. Furthermore, the policymaker tends to pay more attention to more recent observations. (11) Sargent studies the dynamics of the economy and finds that the policymaker's beliefs about an exploitable Phillips curve oscillate over time. There are periods where the parameters that he estimates make him think that there is no trade-off to exploit, followed by periods where normal random fluctuations in the data suddenly open up the possibility of a trade-off: The policymaker attempts to exploit it, raising inflation without lowering unemployment. This creates new data, which again dissuades the policymaker from the idea of a trade-off. Because the policymaker tends to discount more distant observations, this can occur again and again.
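Sargent's model is considerably richer than this, but the flavor of the updating mechanism can be conveyed by a constant-gain least-squares sketch; the one-regressor setup, the variable names, and the numbers below are mine, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Perceived law of motion: unemployment_t = a + b * inflation_t + noise.
# The policymaker re-estimates (a, b) each period by constant-gain least squares,
# which discounts old observations geometrically (the gain g plays that role).
beta = np.zeros(2)            # current estimates of (a, b)
R = np.eye(2)                 # second-moment matrix of the regressors
g = 0.05                      # constant gain: higher g = faster discounting of the past

def update(beta, R, x, y, g):
    """One step of constant-gain recursive least squares."""
    R = R + g * (np.outer(x, x) - R)
    beta = beta + g * np.linalg.solve(R, x * (y - x @ beta))
    return beta, R

# Feed in artificial data in which there is, in truth, no exploitable trade-off:
for t in range(500):
    inflation = rng.normal(3.0, 1.0)
    unemployment = 6.0 + rng.normal(0.0, 0.5)      # independent of inflation
    x = np.array([1.0, inflation])
    beta, R = update(beta, R, x, unemployment, g)

print(beta)   # the estimated slope b wanders around zero as recent draws
              # dominate the estimate

The constant gain is what keeps the policymaker from ever settling down: because old observations are discounted, a run of lucky or unlucky draws can temporarily revive the appearance of an exploitable trade-off.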

The degree of persistence of inflation plays a particular role in Sargent's story, one that connects to its role in the 1970s tests of the Phillips curve. Beliefs about the natural-rate hypothesis derive from the degree of persistence--the more persistence in inflation, the more readily policymakers will accept the natural-rate hypothesis and give up on their attempts to exploit the trade-off. That is why measuring the variation of persistence over time is a key element in making Sargent's learning story plausible.

The evidence

What does the evidence say? In recent years, a body of literature has emerged that attempts to address a preliminary question. To assert that either policy or luck explains the rise and fall in inflation, a necessary condition would be to determine that either policy or luck has changed over the relevant period. Two important concepts underlie almost all of the empirical work that has been carried out to make that determination. I first review the tools and then present what has been found. Did policy change, or luck, or both?

The tools

The first important concept is the "Taylor rule." This is a formulation of an interest-rate setting policy, introduced by Taylor (1993a, 1993b), who argued that it was both desirable for a central bank to follow and a good approximate description of policies followed in practice. The basic Taylor rule describes the interest rate as a linear function of deviations of output and inflation from some prescribed target. Although Taylor (1993b) showed that it was a good first-order approximation of the Fed's actual behavior, it has since been recognized that, in practice, the Fed engages in more "interest-rate smoothing" than can be accounted for with a simple Taylor rule, which would predict a more variable level of interest rates. Consequently, current formulations are as follows. First, it is assumed that the Fed has at all times a target for the fed funds rate but does not act to reach that target immediately; rather, it adjusts at some speed toward that target. The actual rate is somewhere between that target rate and an average of its recent values. This assumption captures the Fed's interest rate smoothing behavior. How is this target determined? The target rate is a linear function of the deviations of expected inflation and the output gap from their own targets. In the literature I present, monetary policy is viewed in terms of a Taylor rule.
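As a concrete illustration, a smoothed, forward-looking Taylor rule of the kind just described might be coded as follows; the coefficient values are placeholders of my own choosing, not estimates from the literature.

def taylor_rule_rate(prev_rate, expected_inflation, output_gap,
                     r_star=2.0, pi_star=2.0,
                     beta=1.5, gamma=0.5, rho=0.8):
    """Interest-rate-smoothing Taylor rule (illustrative parameter values).

    The target rate responds to deviations of expected inflation and the
    output gap from their targets; the actual rate moves only part of the
    way (1 - rho) toward that target each period.
    """
    target = r_star + pi_star + beta * (expected_inflation - pi_star) + gamma * output_gap
    return rho * prev_rate + (1.0 - rho) * target

# Example: inflation expected at 4 percent, output 1 percent above potential,
# last period's fed funds rate at 5 percent.
print(taylor_rule_rate(prev_rate=5.0, expected_inflation=4.0, output_gap=1.0))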

The second concept is vector autoregression (VAR), which is used to analyze the dynamic relationships between stochastic, or random, variables (Sims, 1980). To motivate its use, consider the problem of estimating the statistical relationship between two variables, say inflation $\pi$ and output $Y$. We suppose that inflation in the current period is related to the output gap in the same period, but we imagine (say, because of inertia) that it also depends on inflation and output last period (this dependence over time is what the term "dynamic" refers to). So we have in mind a linear relation of the form:

1) $\pi_t = a Y_t + b \pi_{t-1} + c Y_{t-1} + u_t$,

where we assume that $u_t$ is unrelated to anything in the past. The problem with any attempt at measuring a and b is that the variable $Y_t$ itself may be related to the error term $u_t$, either because inflation also affects output or because of some common factor affecting both. This endogeneity induces a simultaneous equation problem. It means that a variable appearing on the right-hand side should also appear on the left-hand side of another equation:

2) $Y_t = d \pi_t + e Y_{t-1} + f \pi_{t-1} + v_t$.

We still have variables appearing on both sides of the equal sign, but a little algebra fixes the problem: use equation 2 to substitute for $Y_t$ in equation 1, collect the terms in $\pi_t$, and divide by $1 - ad$ to get

3) $\pi_t = \dfrac{b + af}{1 - ad}\,\pi_{t-1} + \dfrac{c + ae}{1 - ad}\,Y_{t-1} + \dfrac{u_t + a v_t}{1 - ad}$.

A similar manipulation will give

4) $Y_t = \dfrac{f + bd}{1 - ad}\,\pi_{t-1} + \dfrac{e + cd}{1 - ad}\,Y_{t-1} + \dfrac{d u_t + v_t}{1 - ad}$.

We now have inflation and output expressed as linear functions of past inflation and output, which are independent of $u_t$ and $v_t$. The combined system (equations 3 and 4) is called a vector autoregression, because it expresses the dependence of the vector $X_t = (\pi_t, Y_t)$ on its past value $X_{t-1}$:

5) $X_t = A X_{t-1} + w_t$,

where A is a 2-by-2 matrix of coefficients. The error term is also called the VAR's innovation: It has a certain variance-covariance structure represented by a matrix Q. The system in equation 5, which is called the reduced form of the structural system in equations 1 and 2, can be used to represent the relation between the variables in the vector X.

Almost all of the literature focuses on the matrices A and Q of an autoregression of variables such as inflation, output, and the interest rate. The matrix A is the systematic component, while the matrix Q corresponds to the disturbances affecting the system. One representation of the information contained in A is the "impulse response function," which traces out the response of a variable in the vector X to a (by definition unexpected) movement in the corresponding element of the vector w.
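For the two-variable example above, the estimation of the reduced form and the computation of an impulse response can be sketched as follows; the data are simulated, so the numbers themselves mean nothing.

import numpy as np

rng = np.random.default_rng(1)

# Simulate T observations from a two-variable VAR(1): X_t = A X_{t-1} + w_t
A_true = np.array([[0.7, 0.1],
                   [0.2, 0.5]])
T = 400
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + rng.normal(0.0, 1.0, size=2)

# Estimate A by regressing X_t on X_{t-1} (ordinary least squares)
Y, Z = X[1:], X[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
W = Y - Z @ A_hat.T                      # reduced-form innovations
Q_hat = np.cov(W, rowvar=False)          # their variance-covariance matrix

# Impulse response of both variables to a unit innovation in the first
# element of w: simply the powers of A applied to that impulse.
impulse = np.array([1.0, 0.0])
irf = [np.linalg.matrix_power(A_hat, h) @ impulse for h in range(8)]
print(np.round(irf, 3))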

Thinking about monetary policy as set by a Taylor rule fits well with the VAR framework, because monetary policy is simply one of the equations in the VAR, as long as interest rates, output, and inflation are among the variables in the vector. Questions about monetary policy are framed as questions about the parameters of a Taylor rule, or the corresponding equation of a VAR, without trying to model the motives or behavior of the central bank. (12)

A major difficulty in using the VAR framework is the interpretation of the innovation term. Suppose that $u_t$ and $v_t$ are "true" exogenous disturbances, say, the former shocks to policy (shifts in policymakers' preferences, mistakes in execution) and the latter shocks affecting the economy's structure. We are really interested in the properties of u and v, not those of w. Unfortunately, we cannot recover u and v from w. The reason is that the structural model, equations 1 and 2, has more parameters (the coefficients a, b, c, d, e, and f, the variances of u and v, and the covariance of u with v; a total of nine) than the reduced form model in equation 5 (the four coefficients of the matrix A and the three coefficients of the matrix Q).

To identify the VAR, one can make assumptions about the structural relations between the variables, that is, about the parameters of the structural model. There are a variety of possible identification schemes, but they all tend to equate the number of estimated parameters with the number of structural parameters. In this instance, we might decide that inflation does not react contemporaneously to output (a = 0) and that u is not correlated with v. This reduces the number of unknowns to seven, for which we have seven estimated parameters. Having solved for the parameters, we can recover the history of disturbances u and v.
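Continuing the sketch above (and reusing its objects), the restriction $a = 0$ together with uncorrelated structural shocks amounts to a recursive, Cholesky-type identification:

# With a = 0, the inflation innovation is the structural shock u itself,
# and the output innovation equals d*u + v, with u and v uncorrelated.
# d is then recovered from the covariance of the reduced-form innovations,
# and v as the residual.
d_hat = Q_hat[1, 0] / Q_hat[0, 0]
u_hat = W[:, 0]
v_hat = W[:, 1] - d_hat * W[:, 0]
print(np.round(np.corrcoef(u_hat, v_hat)[0, 1], 3))   # close to zero by construction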

With an identified VAR, it becomes possible to do more than summarize the dynamic relations between the variables. If one is confident of having correctly identified the fundamental disturbances, one can speak of causation and of policy responses to exogenous shocks. One can also evaluate the importance of changes in policy, by carrying out counterfactual exercises: go back in time, replace the estimated matrix A with an alternative one, and compute the resulting response of the variables to the known history of disturbances. Such a procedure runs afoul of the Lucas critique, but is nevertheless used in the literature to give a quantitative idea of how much a change in policy can explain a change in a variable's behavior.
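A counterfactual of this kind, again in the notation of the earlier sketch, might look like the following; the alternative coefficient matrix is of course purely illustrative.

def counterfactual_path(A_alt, W, x0):
    """Re-run history: feed the recovered innovations W through an
    alternative coefficient matrix A_alt, starting from x0."""
    path = [x0]
    for w in W:
        path.append(A_alt @ path[-1] + w)
    return np.array(path)

# Example: how would the variables have evolved under a hypothetical
# alternative policy matrix, given the same history of disturbances?
A_alt = A_hat.copy()
A_alt[0, 1] = 0.0          # purely illustrative change to one coefficient
cf = counterfactual_path(A_alt, W, X[0])
print(np.round(cf.var(axis=0), 3))   # compare variances with those of the actual data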

Has policy changed?

Taylor rules

The most straightforward way to approach the question of whether policy has changed is to estimate a Taylor rule and examine if the coefficients have changed. This is what Taylor (1999) did when he analyzed a century of U.S. monetary policy. For the postwar period, Taylor's ordinary least squares (OLS) regressions of the fed funds rate on output gap and inflation showed that there was a substantial difference in policy before and after 1980, with coefficients on output and inflation being 0.25 and 0.81, respectively, before 1980, and 0.76 and 1.53 after 1980. The coefficients on each deviation represent how sensitive the Fed is to variations in inflation and output.
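The exercise itself is a simple split-sample regression. A sketch of how one might replicate it follows; the series names and the placeholder data are mine (real data would come from FRED or a similar source), so the output is meaningless except as an illustration of the mechanics.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative only: random placeholder data standing in for quarterly series on
# the fed funds rate, inflation, and the output gap.
rng = np.random.default_rng(2)
idx = pd.period_range('1954Q3', '1998Q4', freq='Q').to_timestamp()
df = pd.DataFrame({'inflation': rng.normal(4, 2, len(idx)),
                   'output_gap': rng.normal(0, 2, len(idx))}, index=idx)
df['fedfunds'] = 1.0 + 1.2 * df['inflation'] + 0.5 * df['output_gap'] + rng.normal(0, 1, len(idx))

def estimate_rule(sample):
    """OLS regression of the fed funds rate on inflation and the output gap."""
    X = sm.add_constant(sample[['inflation', 'output_gap']])
    return sm.OLS(sample['fedfunds'], X).fit()

pre  = estimate_rule(df[df.index <  '1980-01-01'])
post = estimate_rule(df[df.index >= '1980-01-01'])
print(pre.params, '\n', post.params)   # Taylor compared these coefficients across subsamples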

A variant appears in Favero and Rovelli (2003), who estimate a model in which the central bank has a quadratic loss function in inflation, the output gap, and interest rates (a form of preferences that is known to simply generate a Taylor rule policy function), which it minimizes subject to the constraints posed by two reduced-form equations embodying the economy's behavior. They estimate the inflation target to have fallen by half after 1980 and also find an increase in preference for smooth interest rates; but they do not find the relative weight of the output gap to have changed significantly.

It is not enough to find that policy has changed: Has it changed in a way that explains the rise and fall of inflation? Here, the sensitivity to inflation is key, because the instrument (the fed funds rate) is a nominal rate. If the Fed's reaction to expected inflation is more than one-for-one, then the real short-term rate (the fed funds rate less inflation) will rise when inflation rises, thereby curbing real activity and pushing inflation down. Such a reaction function is stabilizing. But if the reaction is less than one-for-one, as Taylor found to be the case before 1980, the Fed ends up stimulating the economy even as inflation is expected to rise, leading to further rises in inflation and potential instability. This mechanism provides a way for a change in the Taylor rule to explain the movements in inflation.

Clarida, Gali, and Gertler (2000) pursue this lead, in two ways. They assess the change in monetary policy in a more satisfactory formulation and rigorously formalize the intuition that certain policies can lead to instability. The Taylor rule they estimate is an interest rate smoothing, forward-looking Taylor rule. The rule is forward-looking because expected inflation rather than current inflation is being monitored. This, however, makes it depend on something that we do not observe, namely inflation expectations. How can one estimate a rule that depends on an unobservable? We do observe what the rule prescribed each time, and we also observe what inflation turned out to be each time. So, for any choice of parameters (targets, sensitivity to deviations), we can compute what rate the rule would have prescribed had actual inflation been known in advance. Assuming that the Fed makes the best possible forecast of inflation, its inflation forecast errors should be unpredictable (otherwise it is not making the best available forecast); so the deviations of the rule's prescription from what it should have been, had the Fed known actual inflation in advance, should also be unpredictable. We can then look for the parameter values that make the rule's prescription (had inflation been known) deviate in the least predictable way from what the Fed actually did.
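Formally, the idea amounts to a set of orthogonality (moment) conditions of roughly the following form, in my notation; Clarida, Gali, and Gertler estimate the rule by the generalized method of moments. With a target rate $r^*_t = \bar r + \beta(E_t \pi_{t+1} - \pi^*) + \gamma x_t$ and smoothing $r_t = \rho r_{t-1} + (1 - \rho) r^*_t + \epsilon_t$, replacing expected inflation with realized inflation introduces a forecast error that, if the Fed forecasts efficiently, is uncorrelated with anything in its information set. Hence, for any variable $z_t$ known at time $t$,

$E\left[\left(r_t - \rho r_{t-1} - (1 - \rho)\left(\bar r + \beta(\pi_{t+1} - \pi^*) + \gamma x_t\right)\right) z_t\right] = 0$,

and the parameters are chosen to make the sample analogues of these conditions as close to zero as possible.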

Clarida, Gali, and Gertler proceed to estimate the Fed's policy rule separately over two samples, before and after Volcker's appointment in 1979. Their results are striking--the sensitivity to inflation nearly triples after Volcker's appointment. Furthermore, that coefficient was slightly less than one before, and about two after. This difference, which they find to be robust to various modeling changes, is significant in the context of an economic model that they develop to formalize the intuition presented above. Sufficiently reactive monetary policy stabilizes the economy, while a passive policy (a coefficient less than one) leaves the economy open to self-fulfilling prophecies. Suppose that the public's inflation expectations can be a "sunspot;" then, with a passive policy, higher expected inflation leads the Fed to stimulate the economy, leading to a confirmation of the expectation. (13)

The argument is still not conclusive, however. The Taylor rule has changed from passive to active, and there is a plausible model in which such a change would eliminate instabilities. But surely such instabilities would have observable consequences? There is much information in the data that is not used by Clarida, Gali, and Gertler's univariate estimation. Lubik and Schorfheide (2003) use it by explicitly taking into account the possibility of instabilities with Bayesian methods. (14) They also emphasize that a passive monetary policy has two sorts of consequences relative to an active one: It opens up the possibility of self-fulfilling prophecies, but it can also modify the way in which shocks are propagated. They estimate the parameters of Clarida, Gali, and Gertler's economic model, allowing a priori for determinacy or indeterminacy. They broadly confirm Clarida, Gali, and Gertler's findings, although they are unable to resolve the question of whether the post-Volcker stability resulted from a change in the response of the economy or in the elimination of sunspots, both potential consequences of the change to an activist policy.

The Clarida, Gali, and Gertler (2000) result has prompted a number of responses. One, by Orphanides (2003), was to repeat the exercise, but with a different dataset, namely the data available at the time to policymakers, as found in the Federal Reserve Board staff analyses (the "green books"). He finds broad similarities in policy over the two periods. In particular, the coefficient on inflation in the Taylor rule is not much changed. Instead, he finds that policy was too activist in response to an output gap that was itself systematically mismeasured. The mismeasurement is apparent when we compare the green book output gaps with an output gap computed from the whole sample now available to us, as a deviation from trend (obviously information that policymakers did not have in real time). As it turns out, the (mismeasured) output gap was negatively correlated with the inflation forecast (-0.5), but the (true) output gap is not. The Fed was actively trying to stimulate an economy that it saw as under-performing, but doing so typically when inflation was already high. As a result, in the Clarida-Gali-Gertler exercise with actual data, reactions to the output gap are misattributed as wrongheaded reactions to inflation, and the Fed ends up looking passive with respect to inflation even though it was not. This, argues Orphanides, also explains the stop-go cycles of the 1970s: pursuit of an overoptimistic output target fueled inflation, which then prompted sudden tightening in response.

There are two difficulties with these findings, both related to the output gap. One is Taylor's (2002) claim that policymakers in practice did not pay much attention to this measure of the output gap. The other, which may be related, is that such mismeasurements of the output gap (by up to 10 percent) persisting for years throughout the 1970s are difficult to believe. True, the U.S. economy underwent a productivity slowdown in the late 1960s and early 1970s, which no one anticipated, and anyone computing output gaps based on the earlier, higher productivity trend would have overestimated them. But it would not take ten years or more to realize the mistake, and the slowdown was already being debated in the late 1960s and early 1970s (see Nordhaus, 1972).

VARs

The other series of findings for changes in policy comes from the VAR literature. One broad approach is to estimate coefficients in a VAR and look for changes or breaks between periods of time. Another approach is to build a statistical model that explicitly allows for changes in the coefficients and see how much change transpires in the data.

Ahmed, Levin, and Wilson (2002) study the question of a change in parameters for both output (15) and the Consumer Price Index (inflation). They perform a battery of tests. First, they use procedures to detect multiple breakpoints in time-series, that is, points in time where the mean of the series could have changed. For inflation, they find breaks in 1973, 1978, and 1981. (16) They analyze several vector autoregressions, varying by the list of variables included as well as the frequency of the data. In unrestricted VARs, they test for coefficient stability and for constancy of error variances. In identified VARs, (17) they once again test for changes in coefficients as well as changes in the variance of the innovations. They find strong evidence of structural breaks in all three models and reduced volatility of the monetary policy shock and the fundamental output shock. The volatility of the inflation shock remains the same in the quarterly model but decreases in the monthly model.

Cogley and Sargent (2001) represent the next stage. Rather than estimating a model with constant coefficients and determining whether it is rejected by the data, they try to model the extent to which there has been variation in the parameters. Their inspiration is Lucas's (1976) discussion of the drift, or repeated changes, in the supposedly stable parameters of the large-scale macroeconomic models in use at the time, and Sargent's (1999) interpretation of that drift. Consequently, they consider a Bayesian VAR of output, unemployment, and the short-term nominal interest rate, in which some parameters are explicitly allowed to vary over time. The coefficients of the VAR are allowed to vary over time as random walks, and the stochastic structure of the innovations is kept invariant. They find significant changes in the coefficients. In particular, they examine the coefficients of the policy equation, and, in the spirit of Clarida, Gali, and Gertler, they compute a measure of activism, reflecting whether the stance of monetary policy is accommodative or not. They find that it is accommodative from 1973 to 1980, but not in the earlier and later periods. However, the posterior beliefs about their measure of activism are dispersed over those periods, indicating a fair amount of uncertainty as to the degree of activism.
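Schematically, and in my notation, the specification is

$X_t = B_t X_{t-1} + \epsilon_t$, with $\theta_t = \mathrm{vec}(B_t)$ and $\theta_t = \theta_{t-1} + \eta_t$,

so that the coefficients drift as random walks while, in this first paper, the variance-covariance matrix of $\epsilon_t$ is held fixed.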

Has luck changed?

Early proponents of the bad-luck (or Princeton (18)) view have proposed specific candidates for the sources of the bad luck. Blinder (1982) and Hamilton (1983) point to the importance of oil shocks to the economy in the 1970s. But DeLong (1997) and Clarida, Gali, and Gertler (2000) have cast doubt on their importance for inflation itself, pointing to the timing discrepancies. Inflation took off well before the 1973 oil shock and again before the 1979 shock; conversely, it fell drastically while oil prices remained very high until 1985. They also doubt that the oil shock of 1973 on its own could have caused sustained inflation for a decade without major help from an accommodative monetary policy; if anything, the oil-induced recessions dampened the inflationary effect of the oil shocks themselves. Also, as DeLong notes, unlike the GDP deflator, wage growth is not affected by oil price shocks. Barsky and Kilian (2001) have noted the dramatic surge in the price of other industrial commodities that preceded the 1973 oil shock and have shown that a model can explain the bulk of stagflation by monetary expansions and contractions without reference to supply shocks.

The more recent work does not try to identify what exact piece of bad luck is to blame. Instead, it tries to identify changes over time in the exogenous sources of fluctuations that affect the economy. One way to do this is to estimate a statistical model that posits no change and test for changes in the parameters. Another way is to explicitly model the process of change in the shocks.

The first type of analysis is exemplified by Bernanke and Mihov (1998a, 1998b), who offered indirect evidence by arguing that parameters of the Fed's reaction function did not change. The general aim of their paper is to provide a useful statistical model of the Fed that spans the whole period, and in particular to determine which choice of policy variables (fed funds rate, nonborrowed reserves, total reserves) and which model of the relations between these variables best represent the Fed's behavior. They test for breaks, or abrupt changes in the coefficients of the VAR, and don't find any, although they do find changes in the variance-covariance matrix of the innovations to the policy variables. (19)

The Cogley-Sargent results suggest strongly that changes in policy were substantial. In his discussion of their paper, Sims (2001b) argued that estimated changes in the coefficients may in fact result from a misspecification of the innovations as having a constant variance. What happens if one explicitly models changes in luck, that is, in the stochastic nature of the shocks affecting the economy? This approach has been taken in a series of papers by Sims (1999, 2001a) and Sims and Zha (2002). In the context of a VAR, the papers ask how well a model of monetary policy (and eventually the economy) will fit the data when its parameters are explicitly allowed to vary over time in a stochastic way. All three papers share a common modeling of this stochastic dependence. (20)

In Sims (1999), the model posits the short-term interest rate as a function of six of its lags and those of a commodity price index. The parameters of the model (coefficients, intercept, and variance of error term) are allowed to change over time in the following way. Sims posits three possible regimes, to which three sets of parameters correspond. Transition from one regime to the next is random and follows a Markov process, in which the probabilities of being in any regime next month are only a function of the current regime. The process is restricted in that state 1 can only be followed by state 1 or state 2, and state 3 by state 2 or state 3. He then estimates the three sets of parameters and the probabilities of switching from one regime to the other. (21) Of course, the statistical procedure is free to produce no differences between the three regimes or differences only in the coefficients of the linear model.
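The restriction on the regime process can be written as zeros in the monthly transition matrix (ordering the regimes 1, 2, 3; the remaining entries are free parameters to be estimated, and each row sums to one):

$P = \begin{pmatrix} p_{11} & p_{12} & 0 \\ p_{21} & p_{22} & p_{23} \\ 0 & p_{32} & p_{33} \end{pmatrix}$.

From regime 1 the process can only stay put or move to regime 2, and from regime 3 it can only stay put or move back to regime 2.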

As it turns out, Sims finds significant differences among the three regimes. One regime has a high average level of interest rates, but policy is not very responsive to inflation and the shocks have a low variance. The next regime has a lower average level of rates but more responsive rates and greater variance in the shocks; likewise the third regime. The third regime, with the coefficient on commodity prices eight times higher than in the first regime, is found to occur rarely and not to last very long. Policy doesn't react much to temporary movements in prices in normal times, but when such movements appear more likely to be persistent, it reacts more strongly.

In Sims (2001a), the model is extended to include a measure of output (industrial production) alongside inflation in the estimated reaction function. The short-term interest rate depends on six lags of itself and on the three-month change in prices and output. The exogenous variation in coefficients and variances of innovations is restricted: It takes the form of two independent Markov chains, one for the coefficients and one for the variances. The model with the best fit shows monetary policy alternating randomly between two states, a "smoothing" regime and an "activist" regime. The activist regime occurs throughout the sample period, not more often before or after 1980; and it lasts only a few months.

Sims and Zha (2002) considerably generalize the statistical model. They use 12 lags and more variables: In addition to the fed funds rate and the commodity price index, they include the Consumer Price Index, GDP, unemployment, and M2 (a broad measure of money), a broader set than most of the other work in this literature. In terms of the time variation of the coefficients, they consider a variety of models (still driven by a Markov process), whose fit they compare. The best reported fit is for a model where all variances change, but only the coefficients in the monetary policy equation change. They find, in line with Sims (1999, 2001a), that one state corresponds to a highly active Fed, but occurs only in the middle of the period (particularly during 1979 to 1982 when the Fed was targeting reserves). They find little difference between the other two regimes, whether by looking at the impulse response functions or running counterfactuals.

A related line of work has analyzed the statistical properties of inflation alone, in particular its degree of persistence. Sargent (1999) attaches a good deal of importance to persistence of inflation, where it plays a key role in the learning story, although it is not central to the contention that policy (as a response to other variables) has changed over time. Nevertheless, Pivetta and Reis (2003) try to revisit Cogley and Sargent (2001) with a univariate model of inflation, which is a VAR with only one variable. In such a framework, "persistence" (or long-run predictability) comes down to some function of the impulse response function, which describes the impact of a shock on all subsequent values of a variable. They find that their different measures of persistence do not vary much over the period. They also compute the Bayesian analogue of confidence intervals, which they find to be very wide. Classical (non-Bayesian) methods yield similar results. Their results are difficult to compare because of the univariate framework they adopt, but have been taken as evidence for the bad-luck view.
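Pivetta and Reis consider several measures of persistence; two common summaries are the sum of the autoregressive coefficients and the largest autoregressive root. A minimal sketch, with an artificial series and variable names of my own, is below.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Illustrative only: an artificial inflation series standing in for the data.
rng = np.random.default_rng(3)
infl = np.zeros(200)
for t in range(1, 200):
    infl[t] = 0.8 * infl[t - 1] + rng.normal(0, 0.5)

res = AutoReg(infl, lags=4, trend='c').fit()
ar_coeffs = res.params[1:]                 # drop the intercept
print('sum of AR coefficients:', ar_coeffs.sum())
print('largest AR root (modulus):', max(abs(np.roots(np.r_[1, -ar_coeffs]))))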

Have both policy and luck changed?

It is possible that both policy and luck changed to some extent. Some of the most recent research seems to tend in that direction.

In response to criticisms leveled by Stock (2001) and Sims (2001b) at their earlier work, Cogley and Sargent (2003) allow for both forms of variation, in the coefficients and in the stochastic structure. Specifically, the variances of the VAR innovations follow a geometric random walk. (22) As for the first question, they find changes over time both in the stochastic disturbances affecting the system and in the coefficients of the system: The variance of the VAR innovations rises until 1981 and falls thereafter, but the coefficients change as well, and their earlier results are qualitatively unchanged.
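In my notation (the exact parameterization in Cogley and Sargent, 2003, differs in its details), a geometric random walk for the innovation variances means that the log of each variance drifts without direction, so that volatility can rise and fall persistently over the sample:

```latex
\ln \sigma^{2}_{i,t} \;=\; \ln \sigma^{2}_{i,t-1} + \eta_{i,t},
\qquad
\eta_{i,t} \sim \text{i.i.d. } N\!\bigl(0, \gamma_i^{2}\bigr).
```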

Cogley and Sargent also attempt to respond to the evidence of the opposite camp and ask: How can the Bernanke-Mihov finding of no change in coefficients be reconciled with theirs? Bernanke and Mihov test a null hypothesis of no change in policy against an alternative of a sudden break. As in all statistical tests, a failure to reject the null hypothesis is strong evidence against an alternative only if the test statistic would behave markedly differently under that alternative. The Cogley-Sargent alternative is not a one-time break, but rather a drift in coefficients. Using simulated data, they show that the test used by Bernanke and Mihov has low power against this alternative; that is, the test cannot distinguish their alternative of slow change from the null hypothesis of no change. They examine several other tests, and the one that does have power against the drift alternative in simulations rejects, in the actual data, the null hypothesis of no change.
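The logic of such a power exercise can be illustrated with a toy example. The sketch below is mine, not the Bernanke-Mihov or Cogley-Sargent procedure: It simulates an AR(1) whose coefficient drifts slowly, applies a simple Chow-type split-sample test for a one-time break, and records how often the "no change" null is rejected. That rejection frequency is the test's power against the drift alternative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_drifting_ar(T=200, phi0=0.7, drift_sd=0.01, shock_sd=1.0):
    """AR(1) whose coefficient drifts as a bounded random walk:
    the 'slow change' alternative rather than a one-time break."""
    y = np.zeros(T)
    phi = phi0
    for t in range(1, T):
        phi = np.clip(phi + rng.normal(0.0, drift_sd), -0.98, 0.98)
        y[t] = phi * y[t - 1] + rng.normal(0.0, shock_sd)
    return y

def break_test_stat(y):
    """Wald-type statistic for equality of the AR(1) coefficient across the
    two halves of the sample (a stand-in for a one-time-break test)."""
    def ols_ar1(z):
        x, target = z[:-1], z[1:]
        phi_hat = (x @ target) / (x @ x)
        resid = target - phi_hat * x
        return phi_hat, (resid @ resid) / (len(target) - 1) / (x @ x)
    half = len(y) // 2
    (p1, v1), (p2, v2) = ols_ar1(y[:half]), ols_ar1(y[half:])
    return (p1 - p2) ** 2 / (v1 + v2)

# Power against the drift alternative: share of drifting-coefficient samples in
# which the no-change null is rejected at the 5 percent level (chi-square(1)
# critical value of roughly 3.84).
stats = np.array([break_test_stat(simulate_drifting_ar()) for _ in range(500)])
print("rejection frequency under drift:", (stats > 3.84).mean())
```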

In broad outline, then, the literature is beginning to find common ground, in that both luck and policy have changed. There remain a number of unresolved questions. Even if both changed, which change is more significant? And, if policy did change, why did it do so?

How much?

The quantitative question of which change is more significant can be addressed in the VAR framework with counterfactuals. Suppose one has estimated an identified VAR, that is, one where "Nature's hid causes" are known. (23) A counterfactual exercise can help assess what would have happened if the policy of one period had been confronted with the luck of another period. This is done by applying the estimated coefficients of one period to the shocks of the other period and seeing how the unconditional variance changes. Ahmed, Levin, and Wilson (2002) carry out such counterfactuals and find that 85 percent to 90 percent of the decline in the volatility of inflation comes from the change in coefficients.
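A one-variable caricature of this exercise (my own sketch, not the Ahmed-Levin-Wilson implementation, which works with a full identified VAR) estimates a simple autoregression on two subsamples, recovers the fitted shocks, and then feeds one subsample's shocks through the other subsample's coefficients to see what the volatility would have been.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar1(y):
    """OLS estimates of c and phi in y_t = c + phi*y_{t-1} + e_t,
    together with the implied residuals ('shocks')."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi, y[1:] - (c + phi * y[:-1])

def counterfactual_var(c, phi, shocks, y0=0.0):
    """Variance of the path generated by one period's coefficients
    driven by another period's shocks."""
    y, path = y0, []
    for e in shocks:
        y = c + phi * y + e
        path.append(y)
    return np.var(path)

def simulate_ar1(T, c, phi, sd):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = c + phi * y[t - 1] + rng.normal(0.0, sd)
    return y

# Two artificial 'periods': the early one has both a more persistent process
# (higher phi, standing in for more accommodative policy) and larger shocks.
early = simulate_ar1(200, c=0.5, phi=0.9, sd=1.5)
late = simulate_ar1(200, c=0.2, phi=0.5, sd=0.7)
c_e, phi_e, shocks_e = fit_ar1(early)
c_l, phi_l, shocks_l = fit_ar1(late)

print("early period variance:", np.var(early))
print("late period variance: ", np.var(late))
# Late-period 'policy' facing early-period 'luck', and vice versa:
print("late coefficients, early shocks:", counterfactual_var(c_l, phi_l, shocks_e))
print("early coefficients, late shocks:", counterfactual_var(c_e, phi_e, shocks_l))
```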

Boivin and Giannoni (2003) use a VAR methodology as well, but their VAR representation of the variables derives from an economic model, so that the coefficients and innovations of the VAR can be related to the parameters of the economic model. Furthermore, they distinguish parameters of the central bank's policy function (which they posit to be a forward-looking Taylor rule) from policy-invariant parameters (preferences and technology) and try to find out which changed. The estimation method is one of indirect inference, in which the parameters are selected so as to make the behavior of the model economy's variables mimic as closely as possible that of the data (as represented by the impulse response functions). With this different approach, Boivin and Giannoni find that the responsiveness of monetary policy to inflation increased by 60 percent after 1980, but they also find that the non-policy parameters have changed. Using their economic model, they perform counterfactual exercises and find that the fall in the economy's responsiveness to monetary disturbances is due to changes in policy rather than to changes in the economy itself.
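Schematically, and in my notation, the estimator picks the parameter vector $\theta$ (policy-rule and structural parameters together) to minimize the distance between the impulse responses estimated from the data and those implied by the model:

```latex
\hat{\theta} \;=\; \arg\min_{\theta}\;
\bigl[\widehat{\psi}_{\text{data}} - \psi_{\text{model}}(\theta)\bigr]'\,
W\,
\bigl[\widehat{\psi}_{\text{data}} - \psi_{\text{model}}(\theta)\bigr],
```

where $\psi$ stacks the impulse response functions and $W$ is a weighting matrix.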

Consensus has not been achieved, however. Primiceri (2003) uses a structural VAR approach that extends Cogley and Sargent by allowing for time-varying correlations between the innovations in the VAR. He finds (as Cogley and Sargent did) that the variances of the disturbances changed considerably over time, rising in the 1970s and early 1980s and then falling. The long-run cumulative response of interest rates to inflation shocks, while showing some variation over time, does not confirm the Clarida, Gali, and Gertler result of an unstable, pre-Volcker Taylor rule. Finally, he conducts a counterfactual and finds that using the policy parameters of the Greenspan era would have made virtually no difference to inflation in the 1970s.

The work of Clarida, Gali, and Gertler (2000) points to an interesting interaction between luck and policy. As they show, bad (that is, passive monetary) policy can lead to instability in the economic system, partly because the reaction of the economy to shocks will be weaker, partly because of the possibility of sunspots. The switch from passive to active policy may change the behavior of the economy; it may also prevent extraneous randomness from affecting it. In other words, it may reduce the part that bad luck can play. This poses some challenges for any attempt at quantifying the relative contributions of bad policy and bad luck; Lubik and Schorfheide (2003) make significant progress in addressing those challenges, but are unable to determine unambiguously how much of the reduction in volatility comes from the elimination of sunspots. (24)
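To see the passive/active distinction in the simplest possible form (a static rule, unlike the forward-looking one Clarida, Gali, and Gertler actually estimate), suppose the central bank sets

```latex
i_t \;=\; r^{*} \;+\; \phi_{\pi}\,\pi_t \;+\; \phi_{x}\,x_t ,
```

where $\pi_t$ is inflation and $x_t$ a measure of real activity. In standard sticky-price models, a response $\phi_{\pi} > 1$ ("active" policy) pins down a unique equilibrium, while $\phi_{\pi} < 1$ ("passive" policy) leaves room for self-fulfilling, sunspot-driven fluctuations in inflation and output.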

Why did policy change?

The deeper, and in some sense more qualitative, question is: Why did policy change? Here again, consensus remains elusive, because there is no agreed empirical model of government behavior that would provide an explanation. Although essentially statistical in nature, the Cogley-Sargent and Sims-Zha papers reveal very different theories of government behavior.

Recall that Cogley and Sargent (2003) estimate a statistical model of change in policy and luck, which allows them to describe the pattern of change over the course of time. Although their model is purely statistical (and in particular does not articulate a theory of government behavior), it allows them to measure the evolution of policy over time and characterize it in ways that make contact with the theories of Sargent (1999). They do so by presenting the behavior over time of several statistical objects that a government in Sargent's (1999) model would care about. These include the long-run forecasts of inflation and unemployment at each point in time, which the authors interpret as "core" measures of these variables. They also measure how much of the variation in inflation comes from short-run versus long-run variations.
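For instance, using the VAR coefficients in force at date $t$, the "core" measure of inflation can be read as the long-horizon forecast

```latex
\bar{\pi}_t \;=\; \lim_{h \to \infty} E_t\,[\pi_{t+h}],
```

and similarly for unemployment; movements in $\bar{\pi}_t$ then track the slow-moving, persistent component of inflation rather than its short-run wiggles. (The notation is mine, a shorthand for the objects Cogley and Sargent report.)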

These objects display a similar pattern, roughly timed with the three acts of DeLong's drama. Inflation and unemployment are low in the 1960s, rise through the late 1970s, and fall again. Inflation persistence rises and falls in the same way. This lends general support to the bad-policy view. Cogley and Sargent also perform Solow-Tobin tests of the natural rate hypothesis at various points in time. They find that it is rejected until 1972 and accepted after that, though with a margin of acceptance that has slowly declined since 1980, a trend that underlines the risk of "recidivism," that is, of yielding once more to the temptation of high inflation. This may appear broadly consistent with the partial learning story of Sargent, although Sims (2001b) notes that the early date at which the natural rate hypothesis ceases to be rejected poses a difficulty: Bad policy should not have lasted until 1980.
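Roughly speaking (this is a sketch of the idea, not the exact specification Cogley and Sargent use), a Solow-Tobin test regresses inflation on its own lags and a measure of real activity and asks whether the lagged-inflation coefficients sum to one, as the natural rate hypothesis requires if there is to be no exploitable long-run trade-off:

```latex
\pi_t = \alpha + \sum_{j=1}^{k} \beta_j\, \pi_{t-j} + \gamma\, u_t + \epsilon_t,
\qquad
H_0:\ \sum_{j=1}^{k} \beta_j = 1 .
```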

The Sims-Zha line of work appears to agree that there was a change in policy. The type of change, however, is of a peculiar nature. When regimes are governed by a Markov chain of the kind they use, fundamental or permanent change such as the one suggested by the conventional story is, in effect, ruled out. If the model fits (and, naturally, the data is always free to reject it), it will represent changes in policy as back-and-forth fluctuations between regimes, with any regime likely to return at some future point. This explains how Sims (1999) can find that there has "by and large been continuity in American monetary policy, albeit with alternation between periods of erratic, aggressive reaction to the state of the economy and periods of more predictable and less aggressive response" (see a similar conclusion in Sims and Zha, 2002, where the aggressive policy is concentrated in the 1979-82 period).

Sims and Zha do not justify their Markov structure in terms of a model of government behavior, although one is tempted to think back to the "Lucas critique taken seriously" line of reasoning. Primiceri (2003) has argued that restricting changes in parameter values to take the form of sudden jumps may not be suitable where aggregation takes place over large numbers of agents, and where expectations and learning may play a role in agents' behavior. Such factors tend to smooth the observable responses, even if the underlying changes are abrupt.

Conclusion

The conventional view of the rise and fall of inflation in the U.S. is based on central bankers making mistakes and learning from them. This view has been supported with considerable narrative and anecdotal evidence, but providing an empirical confirmation has proven difficult. Considerable statistical expertise has been brought to bear on the question. It now seems well established that policy has changed significantly over time, but also that the shocks buffeting the U.S. economy were of a different nature in the 1970s.

How much of the inflation of that decade is attributable to changes in policy versus changes in luck is not settled, although the evidence so far leans toward the former. The Taylor rule approach has emphasized the potential for destabilizing monetary policy and found evidence that policy was indeed too passive in the 1970s. As to the reasons for the changes in policy, there are intriguing theories, although none has reached the point where it can confront the data. Yet if we are to apportion blame (or praise) among the central bank administrations in figure 1, we need a method for doing so. But, whereas macroeconomics has developed standard ways to model the private sector, we lack an agreed framework in which to model how policy is made.

NOTES

(1) The Vietnam War and various social programs launched at the time may have contributed some fiscal pressure to the monetary loosening that followed.

(2) I use Federal Reserve Board chairmen as eponyms of the successive monetary and fiscal policies. This is rather unfair, and a reading of Romer and Romer (2002) leaves one less than convinced that Martin actually believed in the Samuelson-Solow menu.

(3) Romer and Romer (2002) also search for clues in official pronouncements as well as in the deliberations of the Federal Open Market Committee.

(4) See a more detailed discussion of these points by Professors X and Y, cited in Sargent (2002).

(5) The Humphrey-Hawkins Act, passed at about that time, directed the Fed to "promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates."

(6) Remember that the trade-off between surprise inflation and unemployment is constant.

(7) Cogley, Morozov, and Sargent (2003), however, find an inverse correlation between their measures of core inflation and the natural rate in the UK.

(8) Why doesn't the central bank keep disinflating to acquire a reputation as an inflation fighter? In this model, in any given equilibrium, there is no room for the central bank to change the private sector's expectations. They are what they are. If the central bank keeps disinflating, it will keep being punished for violating expectations; since that is too costly, the central bank doesn't disinflate, and the expectations are validated.

(9) Pangloss was Voltaire's satirical incarnation of Leibniz, who argued that our world is the outcome of a maximization problem under constraints we don't know, and however bad the outcome seems to us at times, we should trust the Great Maximizer in the Sky.

(10) See, for example, Velde and Veracierto (2000) to see how much worse.

(11) This is modeled by having the policymaker use something other than least-squares estimation. The reason for this departure is that least-squares estimation will bring the policymaker right back to an analogue of the Kydland-Prescott story, systematically trying to exploit the Phillips curve and systematically expected to do so by the private sector.

(12) An exception is Favero and Rovelli (2003).

(13) The simulations with the calibrated model show a positive correlation between output and inflation. That is, the stimulus provided by the passive Fed generates inflation, but it also generates higher output. To generate something that looks like the stagflation of the 1970s, the authors change tack. They first note that, for some specifications, the confidence intervals around their estimates of the coefficient on expected inflation in the pre-Volcker years do not rule out values above but close to one. Then, they show that, in a version of their model without sunspots, adverse supply shocks lead to lower output but relatively high inflation. Christiano and Gust (2000) present a different model of the economy, which does generate stagflation as a result of self-fulfilling prophecies, relying on a different mechanism for generating real effects of monetary policy.

(14) Bayesian methods allow the econometrician to express beliefs about a range of possible parameter values (including passive and active policy) and to formulate to what extent the data confirm or revise these beliefs.

(15) A related issue is the fact that the business cycle seems to have changed since the 1980s: Expansions appear to be longer and recessions shallower. The same question, bad luck or bad policy, is being examined by a growing literature, for example, McConnell and Perez-Quiros (2000) and Blanchard and Simon (2001).

(16) Levin and Piger (2002) perform similar tests for a sample of 12 industrial countries and find breaks in inflation in the late 1980s and early 1990s.

(17) They use the VAR identification scheme of Christiano, Eichenbaum, and Evans (1999).

(18) Sargent (2002) labeled the bad-policy view the "Berkeley story." Symmetry requires, and the affiliation of Bernanke, Blinder, Sims, and Watson justifies, a counterpart for the poor-luck view.

(19) Hanson (2001) reports similar findings.

(20) The models also have in common that they use monthly data reaching back to 1948 (in contrast to most of the literature, which uses quarterly data beginning in the 1950s).

(21) The coefficients on lagged values are the same across regimes; only the scale of the lags of the price index changes.

(22) They also allow for the possibility of unit roots in inflation.

(23) Virgil (Georgics, Book 2, line 490).

(24) The problem of extraneous sources of uncertainty, for which Primiceri does not allow, may explain the apparent conflict between his results and the rest of the literature.

REFERENCES

Ahmed, Shaghil, Andrew Levin, and Beth Anne Wilson, 2002, "Recent U.S. macroeconomic stability: Good policies, good practices, or good luck?," Board of Governors of the Federal Reserve System, International Finance Discussion Paper, No. 730.

Barro, Robert, and David B. Gordon, 1983, "Rules, discretion, and reputation in a model of monetary policy," Journal of Monetary Economics, Vol. 12, July, pp. 101-121.

Barsky, Robert B., and Lutz Kilian, 2001, "Do we really know that oil caused the great stagflation?" NBER Macroeconomics Annual, Vol. 16, pp. 137-83.

Bernanke, Ben S., and Ilian Mihov, 1998a, "The liquidity effect and long-run neutrality," in Carnegie-Rochester Conference Series on Public Policy, Vol. 49, Bennett T. McCallum and Charles I. Plosser (eds.), Amsterdam: North-Holland, pp. 149-194.

--, 1998b, "Measuring monetary policy," Quarterly Journal of Economics, Vol. 113, August, pp. 869-902.

Blanchard, Olivier, and John Simon, 2001, "The long and large decline in U.S. output volatility," Brookings Papers on Economic Activity, Vol. 1, pp. 135-164.

Blinder, Alan, 1982, "The anatomy of double-digit inflation in the 1970s," in Inflation: Causes and Effects, Robert E. Hall (ed.), Chicago: University of Chicago Press.

Boivin, Jean, and Marc Giannoni, 2003, "Has monetary policy become more effective?," National Bureau of Economic Research, working paper, No. 9459.

Calvo, Guillermo A., 1978, "On the time consistency of optimal policy in a monetary economy," Econometrica, Vol. 46, No. 6, November, pp. 1411-1428.

Chari, V. V., Lawrence Christiano, and Martin Eichenbaum, 1998, "Expectation traps and discretion," Journal of Economic Theory, Vol. 81, No. 2, August, pp. 462-492.

Christiano, Lawrence, and Christopher Gust, 2000, "The expectations trap hypothesis," Economic Perspectives, Federal Reserve Bank of Chicago, Second Quarter, pp. 21-39.

Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans, 1999, "Monetary policy shocks: What have we learned and to what end?," in Handbook of Macroeconomics, Vol. 1A, Handbooks in Economics, Vol. 15, Amsterdam, New York, and Oxford: Elsevier Science, North-Holland, pp. 65-148.

Clarida, Richard, Jordi Gali, and Mark Gertler, 2000, "Monetary policy rules and macroeconomic stability: Evidence and some theory," Quarterly Journal of Economics, Vol. 115, No. 1, pp. 147-180.

Cogley, Timothy, and Thomas J. Sargent, 2003, "Drifts and volatilities: Monetary policies and outcomes in the post-WWII U.S.," New York University, mimeo.

--, 2001, "Evolving post-World War II U.S. inflation dynamics," NBER Macroeconomics Annual, Vol. 16, pp. 331-373.

Cogley, Timothy, Sergei Morozov, and Thomas J. Sargent, 2003, "Bayesian fan charts for UK inflation: Forecasting and sources of uncertainty in an evolving monetary system," Stanford University, mimeo.

Cooley, Thomas F., Stephen F. LeRoy, and Neil Raymon, 1984, "Econometric policy evaluation: Note," American Economic Review, Vol. 74, No. 3, June, pp. 467-470.

DeLong, J. Bradford, 1997, "America's peacetime inflation: The 1970s," in Reducing Inflation: Motivation and Strategy, C. Romer and D. Romer (eds.), NBER Studies in Business Cycles, Vol. 30, Chicago: University of Chicago Press.

Favero, Carlo A., and Riccardo Rovelli, 2003, "Macroeconomic stability and the preferences of the Fed. A formal analysis, 1961-98," Journal of Money, Credit and Banking, Vol. 35, No. 4, August, pp. 545-556.

Friedman, Milton, 1968, "The role of monetary policy," American Economic Review, Vol. 58, No. 1, March, pp. 1-17.

Friedman, Milton, and Anna Schwartz, 1963, A Monetary History of the United States, 1867-1960, Princeton, NJ: Princeton University Press.

Hamilton, James D., 1983, "Oil and the macroeconomy since World War II," Journal of Political Economy, April, Vol. 91, No. 2, pp. 228-248.

Hanson, Michael S., 2001, "Varying monetary policy regimes: A vector autoregressive investigation," Wesleyan University, mimeo.

Ireland, Peter, 1999, "Does the time-consistency problem explain the behavior of inflation in the United States?," Journal of Monetary Economics, Vol. 44, No. 2, pp. 279-292.

Kydland, Finn E., and Edward C. Prescott, 1977, "Rules rather than discretion: The inconsistency of optimal plans," Journal of Political Economy, Vol. 85, No. 3, June, pp. 473-492.

Leduc, Sylvain, 2003, "How inflation hawks escape expectations traps," Business Review, Federal Reserve Bank of Philadelphia, First Quarter, pp. 13-20.

Leeper, Eric M., and Tao Zha, 2001, "Toward a theory of modest policy interventions," Indiana University, Department of Economics, and Federal Reserve Bank of Atlanta, Research Department, mimeo.

Levin, Andrew T., and Jeremy M. Piger, 2002, "Is inflation persistence intrinsic in industrial economies?," Federal Reserve Bank of St. Louis, working paper, No. 2002-023C.

Lubik, Thomas A., and Frank Schorfheide, 2003, "Testing for indeterminacy: An application to U.S. monetary policy," Johns Hopkins University, mimeo.

Lucas, Robert E., Jr., 1976, "Econometric policy evaluation: A critique," in The Phillips Curve and Labor Markets, Karl Brunner and Alan Meltzer (eds.), Carnegie-Rochester Series on Public Policy, Vol. 1, pp. 19-46.

--, 1972, "Expectations and the neutrality of money," Journal of Economic Theory, Vol. 4, No. 2, April, pp. 103-124.

McConnell, Margaret M., and Gabriel Perez-Quiros, 2000, "Output fluctuations in the United States: What has changed since the early 1980s?," American Economic Review, Vol. 90, No. 5, December, pp. 1464-1476.

Nordhaus, William D., 1972, "The recent productivity slowdown," Brookings Papers on Economic Activity, Vol. 3, pp. 493-536.

Orphanides, Athanasios, 2003, "The quest for prosperity without inflation," Journal of Monetary Economics, Vol. 50, April, pp. 633-663.

Parkin, Michael, 1993, "Inflation in North America," in Price Stabilization in the 1990s, K. Shigehara (ed.), London: Macmillan.

Perry, George L., 1978, "Slowing the wage-price spiral: The macroeconomic view," Brookings Papers on Economic Activity, Vol. 2, pp. 259-291.

Phelps, Edmund, 1968, "Money-wage dynamics and labor-market equilibrium," Journal of Political Economy, Vol. 76, No. 4, part 2, July-August, pp. 678-711.

Phillips, A. W., 1958, "The relation between unemployment and the rate of change of money wage rates in the United Kingdom, 1861-1957," Economica, Vol. 25, November, pp. 283-299.

Pivetta, Frederic, and Ricardo Reis, 2001, "The persistence of inflation in the United States," Harvard University, mimeo.

Primiceri, Giorgio E., 2003, "Time varying structural vector autoregressions and monetary policy," Princeton University, mimeo.

Romer, Christina D., and David Romer, 2002, "The evolution of economic understanding and postwar stabilization policy," in Rethinking Stabilization Policy, Federal Reserve Bank of Kansas City, pp. 11-78.

Samuelson, Paul A., and Robert M. Solow, 1960, "Analytical aspects of anti-inflation policy," American Economic Review, Vol. 50, No. 2, May, pp. 177-184.

Sargent, Thomas J., 2002, "Commentary: The evolution of economic understanding and postwar stabilization policy," in Rethinking Stabilization Policy, Federal Reserve Bank of Kansas City, pp. 79-94.

--, 1999, The Conquest of American Inflation, Princeton, NJ: Princeton University Press.

--, 1984, "Autoregressions, expectations, and advice," American Economic Review, Vol. 74, No. 2, May, pp. 408-415.

--, 1971, "A note on the 'accelerationist' controversy," Journal of Money, Credit, and Banking, Vol. 3, No. 3, August, pp. 721-725.

Sims, Christopher A., 2001a, "Stability and instability in U.S. monetary policy behavior," Princeton University, mimeo.

--, 2001b, "Comment on Sargent and Cogley's 'Evolving post World War II U.S. inflation dynamics'," NBER Macroeconomics Annual, Vol. 16, pp. 373-379.

--, 1999, "Drifts and breaks in monetary policy," Princeton University, mimeo.

--, 1988, "Projecting policy effects with statistical models," Revista de Analysis Economico, Vol. 3, pp. 3-20.

--, 1980, "Comparison of interwar and postwar business cycles: Monetarism reconsidered," American Economic Review, Vol. 70, No. 2, May, pp. 250-257.

Sims, Christopher A., and Tao Zha, 2002, "Macroeconomic switching," Princeton University, mimeo.

Stock, James H., 2001, "Discussion of Cogley and Sargent, 'Evolving post-World War II U.S. inflation dynamics'," NBER Macroeconomics Annual, Vol. 16, pp. 379-387.

Taylor, John B., 2002, "A half-century of changes in monetary policy," remarks delivered at the Conference in Honor of Milton Friedman, University of Chicago, November 8.

--, 1999, "An historical analysis of monetary policy rules," in Monetary Policy Rules, John B. Taylor (ed.), Chicago: University of Chicago Press, pp. 319-347.

--, 1997, "Comment on America's only peacetime inflation: The 1970's," in Reducing Inflation, Christina Romer and David Romer (eds.), NBER Studies in Business Cycles, Vol. 30.

--, 1993a, Macroeconomic Policy in a World Economy: From Econometric Design to Practical Operation, New York: W. W. Norton.

--, 1993b, "Discretion versus policy rules in practice," Carnegie-Rochester Conference Series on Public Policy, Vol. 39, December, pp. 195-224.

--, 1979, "Estimation and control of a macroeconomic model with rational expectations," Econometrica, Vol. 47, No. 5, September, pp. 1267-1286.

Velde, Francois R., and Marcelo Veracierto, 2000, "Dollarization in Argentina," Economic Perspectives, Federal Reserve Bank of Chicago, First Quarter, pp. 24-35.

Francois R. Velde is a senior economist at the Federal Reserve Bank of Chicago. The author thanks Tim Cogley, Tom Sargent, Mark Watson, and his colleagues at the Chicago Fed for their comments.