
Calibrated models.

Calibrated models have become an important tool for economic research and policy analysis. This paper discusses and illustrates the methodology of calibration. It also describes the range of questions that have been addressed with calibrated models and considers the problem of evaluation of calibrated models.


Economists, in common with natural scientists, use theory in two ways: we use it to understand and explain observed features of the world and we use it to infer features of the world that we ought to observe. Indeed, one of the meanings of the word theory is that it is a set of idealized or hypothetical facts. In the early days of the development of economics, theories were established by verbal argument. Eventually it became clear that there are diminishing returns to verbal theorizing because it is difficult to be sufficiently precise about important questions in economics using only verbal expression. Economists began to find it more productive to describe theories mathematically, because mathematics offered greater precision and clarity. That transition took place more than 50 years ago. Increasingly, it has become apparent that there are limitations to mathematical argument as well. Economists have responded to this by turning to computation as a way of theorizing. Developments in economic theory and technological advances in computing have made it feasible to obtain precise quantitative answers to important questions in economics by studying the behaviour of calibrated model economies. Calibration and quantitative theory are simply the natural evolution and extension of an economist's ability to theorize about the nature of the economic world.

Any change in the methods that economists use to address questions of importance inevitably leads to confusion and resistance. The increasing use of calibrated models and quantitative theory has been greeted with scepticism by many economists who are used to dealing with model economies constructed using traditional econometric techniques and who view calibration with suspicion. In this paper I address three important questions concerning calibrated models: (i) What is calibration and how should it be done? (ii) What is the relation between calibrating a model and the more familiar approach of estimation? (iii) How are calibrated models and quantitative theory used and how should the results be judged? There is much ongoing research on the evaluation of calibrated models and there is a huge and growing body of literature using calibrated dynamic general equilibrium models to address questions in every substantive area of economics. There is still only a small literature on the methodology of calibration. Accordingly, this paper is devoted mostly to a discussion of what calibration is and how one goes about it. Most of the exposition proceeds by example. I also discuss briefly the relation between calibration and estimation and the use and evaluation of calibrated models.

The practice of calibrating models is controversial, and it is regarded by many as sloppy empirical practice. A standard view of calibration is that the researcher chooses a set of parameters for preferences and technology simply by taking them from the papers of others. For many, calibration connotes a cavalier approach to data or, worse, the blatant misuse of data. I want to counter this view by first discussing calibration in general terms and then describing in some detail a simple example. This is followed by a section which treats some extensions of the simple example involving more complicated calibration issues.


The process called calibration has a long tradition in economics. It has found widespread use in computable general equilibrium models of public finance and international trade as described in Shoven and Whalley (1984) and Auerbach and Kotlikoff (1987). Calibration is a strategy for finding numerical values for the parameters of artificial economic worlds. The use of calibrated models and quantitative theory has grown rapidly in the past decade and practitioners are struggling to define and refine the methods just as the followers of the Cowles Commission programme did with estimation and inference in the early days of their programme. Calibration, at its current stage of development, seems not to be well understood. Kevin Hoover (1995), for example, describes it thus: 'A model is calibrated when its parameters are quantified from casual empiricism or unrelated econometric studies or are chosen to guarantee that the model mimics some particular feature of the historical data.'

He goes on to describe Kydland and Prescott's (1982) choice of parameters as 'casual' and their checks on robustness as 'perfunctory'.(2) Hoover's description is off the mark; for example, there should be nothing casual about the empiricism. But his description is probably representative of what many people believe to be the practice. Accordingly, it seems useful to try to set down more precisely what calibrators are doing.

Calibration uses economic theory extensively as the basis for restricting a general framework and mapping that framework into the measured data. How does it use the theory and what is the role of the data? Tjalling Koopmans (1947) and the Cowles Commission economists made it clear that measurement without theory was a very limited enterprise. The principle of identification is to use economic theory to be able to extract more information from the data. Calibration certainly encompasses that idea, but there is more to it. It also incorporates the idea that the relationship between theory and measurement is not unidirectional. In the calibration approach, measurements are used to give content to theory; in addition, the theory helps us to focus on what to measure and how to measure it. This symbiotic relationship between theory and measurement, more than anything else, distinguishes the quantitative theory approach from the standard econometric approach.

One way to think about calibration is the following. There is a famous theorem in economics referred to as the Sonnenschein-Mantel-Debreu-Mas-Colell result. This result says (to a rough approximation): given any set of prices and allocations (excess demand functions in the usual endowment economy) there will exist an economy (a set of preferences and technology) with some number of consumers for whom these prices are equilibrium prices and the allocations are equilibrium allocations at those prices. The implication of this is that the notion of a Walrasian competitive equilibrium is not restrictive. For economists this is a powerful reminder of the limitations of economic theory. Loosely speaking, it says that there will be some equilibrium theory that explains any set of observed outcomes. Being able to explain anything is tantamount to having a theory of nothing. How do we get beyond this? The answer is that we need to impose restrictions on the kinds of theories we will entertain. This means imposing some restrictions on the preferences and technologies that we will consider. Calibration is a procedure that restricts the mapping between competitive equilibria and data (prices and allocations), such that the equilibria display certain properties. For most applications of calibration in business cycle theory, for example, the properties that we impose are those associated with balanced growth. The reason for this is that we know most developed economies display the characteristics of balanced growth. Since both growth and fluctuations are features of the data for all economies, we would like any theory of the latter to be consistent with the former. This strongly suggests that we do not want to have separate models for growth and fluctuations.

Restricting the mapping from competitive equilibria to data enables us then to compute the equilibria of model economies that display the desired properties. In this context, computing the equilibria means solving the optimization problems faced by households and firms, and determining the equilibrium path of consumption, investment, output, and hours for this model economy. The model economy will display as many prescribed properties as there are calibrated parameters and choices of functional form. Accordingly, a model economy cannot help us to learn about those features of an economy that are used to calibrate it. The output of a calibration exercise is the answer to the question the model economy was designed to answer; e.g. what is the welfare cost of inflation? (Cooley and Hansen, 1989) or how important are fluctuations in capacity utilization for business cycle fluctuations? (Greenwood et al., 1988; Cooley et al., 1995a). Finally, these model economies will also typically display other properties. These are a bonus. They are somewhat like over-identifying restrictions in that they can help us to discriminate between competing models designed to answer the same question.

Once it is possible to compute the equilibria of these economies and study them, we have created a laboratory within which we can ask well-defined questions. The hope is that the answers convey information about actual economies that are characterized by the prescribed properties. Advances in mathematics and numerical methods, combined with the dramatic decline in the costs of computing, have made it possible to construct model economies that are ever richer in terms of their characteristics, and use them as laboratories to address a broad range of important questions.

(i) Some Rules for Calibration

The description just given is an idealization. I will try to make it more concrete by setting down some rules that one might follow to achieve this mapping between theory and measurements -- rules for 'good' calibration. I illustrate the rules by describing, in detail, a simple example.

Do not justify parameter choices by referring to prior studies

Choosing parameters simply because they have been used by others is a very common practice. For example, Prescott (1986) gave a good, but not a complete, account of the set of parameter values he used to calibrate a very elementary one-sector neoclassical model to be used to study the business cycle. That environment was designed to answer a very specific question: how much of the fluctuations associated with the business cycle could be associated with technology shocks? Many researchers use the same set of parameters Prescott used. Is that good practice? Economic environments designed to answer the same question and display the same balanced growth properties as Prescott's might use the same parameters. Nevertheless, it is possible that, if the economy being studied differs even slightly from the environment he considered, it would require a different calibration. Every economic environment has some unique features that are motivated by the questions to be addressed. Those features will usually alter the appropriate calibration.

Honour the theory

An economic environment, together with a definition of equilibrium, defines a framework that one can use to address questions about the behaviour of economies. Different questions may require alterations in the framework, which can take the form of changes in the environment and/or changes in the equilibrium concept. Appendix 1 provides a detailed example that describes a framework for business cycle analysis.

This is a theoretical framework. The framework is consistent with many different equilibrium processes for the variables of interest, which are output, consumption, employment, investment, and so on. To go from a general framework to a calibrated model that moves beyond the Sonnenschein-Mantel-Debreu-Mas-Colell 'anything goes' proposition is a three-step process.

Respect the measurements

The first step in actually calibrating any framework is to restrict these equilibrium processes to a parametric class. The framework described in the example in Appendix 1 is a neoclassical growth model. Accordingly, we would want to restrict it further to make it consistent with observations about long-term growth. This requires the use of more economic theory and some observations. This process is described in Appendix 2.

Match the measurements to the model

An important part of the calibration process, and one that is little discussed, is to align the theoretical framework with observations on real economies. With enough theory and observations to define a parametric class of models, we can establish the correspondence between this class and the observed data for the US (or any other) economy. Establishing this correspondence may well require that one reorganize the measured data for the economy in ways that make it consistent with the class of model economies. The example, continued in Appendix 3, illustrates this by describing the match between the measurements for the US economy and the simple neoclassical growth framework.

The process described in the example uses economic theory to help define a consistent set of measurements. The measured data are rearranged and realigned to correspond to the structure of the model economy. The parameters we used to do this, however, depend only on information in the National Income and Product Accounts (NIPA), and are not specific to the model economy being studied.

Match the model to the measurements

Aligning a theoretical framework with measurements is a two-way street. There are some features of the data from which we cannot abstract. The general procedure is to set parameter values so that the behaviour of the model economy matches features of the measured data in as many dimensions as there are unknown parameters. We observe over time that certain ratios in actual economies are more or less constant. This suggests that we choose parameters for the model economy so that it mimics the actual economy on these dimensions associated with long-term growth. Then it can be used to address other well-posed questions. The example, which continues in Appendix 4, describes how the model economy can be matched to the features of population growth and real growth. In addition, the example describes how microeconomic observations are brought to bear in completing the calibration (see Appendix 4).
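A minimal sketch can make this procedure concrete. In the one-sector growth model, two long-run ratios pin down two parameters: the steady-state investment-output ratio determines the depreciation rate, and, given a capital share, the capital-output ratio then determines the discount factor from the steady-state Euler equation. The numerical values below are illustrative, not official NIPA figures:

```python
# Backing out two parameters from two long-run ratios in the one-sector
# growth model (abstracting from growth).  Steady-state relations:
#   x/y = delta * (k/y)                  -> depreciation rate
#   1/beta = theta * (y/k) + 1 - delta   -> discount factor (Euler equation)
# The ratios below are illustrative, not official NIPA values.

theta = 0.36  # capital share of income (assumed)
k_y   = 3.32  # capital-output ratio (assumed, annual)
x_y   = 0.25  # investment-output ratio (assumed)

delta = x_y / k_y                          # depreciation replaces steady-state investment
beta  = 1.0 / (theta / k_y + 1.0 - delta)  # discount factor from the Euler equation

print(f"delta = {delta:.4f}, beta = {beta:.4f}")
```

Two moments, two parameters: by construction the calibrated economy reproduces these ratios exactly, so they can carry no evidence for or against the model.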

One of the standard meanings of the word calibration is 'to standardize as a measuring instrument'. This definition applies to the calibration of the stochastic growth model. Since the underlying structure is the neoclassical growth framework, the choice of parameters and functional forms ensures that this economy will display balanced growth.

(ii) Extensions

The example that accompanied the previous section underlies most studies of 'real' business cycles. It has been exhaustively explored and extended. Extensions of the basic framework are so numerous that to survey them would be a major undertaking, but many applications and issues are described in Cooley (1995). There are a few more rules that become apparent when we think about extending the basic framework to answer specific questions.

Do not proliferate free parameters

There are many ways in which more detail has been added to the basic neoclassical growth framework described above: richer descriptions of the labour market, the introduction of money, a role for the government and government policy, and so on. Usually one adds these details either to answer new questions or to provide improved answers to questions addressed with simpler models. Some researchers add details simply to have a model economy that more closely resembles the actual economy in some dimension. This amounts to adding free parameters: more parameters ensure a better fit to observations. A principle of good calibration is that one should not add parameters without having a particular question in mind and without having some way to pin down values for the parameters.

One example of this is in the labour market. The basic neoclassical growth framework considered above views all variations in aggregate hours of work as being on the 'intensive' margin; that is, variations in average hours of work. An important and well-known improvement on this was to view all variations as taking place on the 'extensive' margin -- that is, being movements into and out of employment, with average hours of employment held fixed. Hansen (1985) suggested this because, for the US economy, about two-thirds of the variation in total hours of work is of the latter type. A model economy with that feature displays fluctuations of employment in response to technology shocks that closely match the fluctuations in output, as we observe in US data. An obvious improvement that makes the model resemble the US economy more directly is to incorporate both margins of adjustment. Two approaches have been suggested to do this. One of them introduces additional parameters that are difficult to pin down, the other introduces parameters that can be estimated from microeconomic observations. But neither adds parameters simply to fit the data better. Indeed, adding the parameters does not lead to a better fit but does lead to a better answer to the question, 'How much of output variation may be caused by technology shocks?'

Kydland and Prescott (1991) consider an economy where both the number of workers, n_t, and the hours per worker, h_t, can vary. The technology has the form

y_t = z_t h_t f(n_t, k_t).

If the population is normalized to one, then n_t is the fraction of people who work h_t hours. Resources are used up in moving people from the non-market sector to the market sector. This leads to an aggregate resource constraint of the following sort:(3)

c_t + x_t + m_t ≤ z_t h_t n_t^(1−θ) k_t^θ,

where m_t is the cost of moving people to the market sector. Kydland and Prescott further assume that the costs are quadratic: m_t = μ(n_t − n_{t−1})². The structure Kydland and Prescott proposed is theoretically elegant but it has a serious drawback: it is very difficult to think how one might calibrate m_t or, correspondingly, μ. Adjustment costs are an example of a parameter that is difficult to calibrate. It so happens that the answers to some kinds of questions are largely invariant to this parameter, but for some it is crucial. Kydland and Prescott fix the parameter to match the variation in hours per worker and employment. They also perform a clever sensitivity analysis by asking how big adjustment costs would have to be to reproduce the relative volatility of hours and employment. If the answer were that adjustment costs would have to be absurdly high, they would be led to question the whole framework. The adjustment cost parameter is not quite a 'free' parameter for them because it is included to yield a more precise estimate of the contribution of technology shocks to output fluctuations.

An alternative approach to the same problem is offered by Cho and Cooley (1994). They imbed the two margins of adjustment in preferences and justify the set-up on the basis of an implicit household production economy. Cho and Cooley assume the representative household has preferences of the form:

U(c, h, e) = u(c) − v(h)e − Ψ(e)e,

where h represents the hours of work per day, e is the fraction of days worked in a period (in equilibrium this will be the employment rate), and Ψ is a function that also measures the 'fixed' cost of giving up household production and engaging in market work. Cho and Cooley show that these preferences can be viewed as a stand-in for a household production economy in which Ψ captures the costs of replacing household production. If we assume a parametric form for v and Ψ, then the first-order conditions for the household's problem yield an expression that relates employment to hours.

This expression has the prospect of being estimated from microeconomic observations on households. In particular, it can be estimated from data in the Panel Study of Income Dynamics (PSID). The two estimated parameters, together with the employment/population ratio and the steady-state fraction of total hours of work, can be solved for all four preference parameters. This is a good illustration of another rule.

Calibration and estimation are complements not substitutes

We have emphasized parameters being chosen so that a model economy displays the long-run or growth features of an actual economy. Nevertheless, some of what is embodied in preference specifications reflects individual choices over time. As the above example illustrates, estimation based on microeconomic observations on individual behaviour can help to determine such parameters.


There is a view (I do not know how widely it is held) that quantitative theory, as represented by the study of calibrated models, and traditional statistical estimation and inference are competing methodologies. A recent discussion of calibrated models by Hansen and Heckman (1996), Sims (1996), and Kydland and Prescott (1996) would reinforce this view. The example of the previous section shows how estimation can serve the needs of calibration by uncovering certain parameters of preferences from cross-sectional data on individuals. That does not resolve the issue, however, because the estimated parameters are certainly not used in the conventional way. Some people view statistical econometrics as less casual and more precise because it relies on well-known parametric forms and produces not only point estimates of parameters, but measures of uncertainty about these statistics. Nevertheless, the use of point estimates and the neglect or informal treatment of uncertainty is as common in statistical econometrics as it is in calibration.

The econometric approach that dominated research from the 1940s until the 1980s (and perhaps still dominates) takes the observed data as given and uses them to determine the structure of economic models. For this reason, econometrics is often described as a search for 'the data generation process'. In this approach, the researcher conditions on the data and searches for the economic world most likely to have generated them. The calibration approach, in contrast, views the appropriate data or measurements as something to be determined in part by the features of the theory. Some of the parameter values are chosen based on observed features of actual economies, as in the traditional methods, but the determination of others may be based heavily on the theory. In this process, calibration and estimation are complements, not substitutes. The following two examples illustrate ways in which they complement each other.

Household production

Model economies that incorporate household production have been used to study business cycles and fiscal policies. Explicitly accounting for the fact that a lot of economic activity takes place in the home, and that the allocation of effort between home and market is affected by the business cycle and by fiscal policies, has been shown to be important for answering certain questions in business cycle theory and public finance. In economies with this feature, households have preferences defined over consumption and leisure, where leisure is the time endowment net of h_M, labour time spent in market work, and h_H, labour time spent in household work. Consumption is an aggregate of home-produced goods, c_H, and market-produced goods, c_M:

c = [a c_M^e + (1 − a) c_H^e]^(1/e).

Production of the home good uses household capital, k_H, and labour, and is subject to technology shocks:

c_H = z_H g(h_H, k_H).

The market good and new units of capital are produced using the market technology:

c_M + x ≤ Y = z_M f(h_M, k_M),

where Y represents the aggregate output of the market sector and z_M represents the technology shock. A key feature of this environment is the extent to which agents substitute between household and market goods. The elasticity of substitution between household and market goods is 1/(1 − e). Benhabib et al. (1991) and Greenwood et al. (1995) proceed by examining the performance of a household production economy under various assumptions about the parameter e. If e = 1, home and market goods are perfect substitutes; as e falls, they become harder to substitute (perfect complements in the limit). This is important because, if they turn out to be perfect substitutes, household production has no important implications for the market variables.
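The role of e can be made concrete with a short sketch. Assume a CES aggregator c = [a·c_M^e + (1 − a)·c_H^e]^(1/e) (this functional form is implied by the stated elasticity of substitution 1/(1 − e); the weight a and the numbers below are illustrative):

```python
def ces(c_m, c_h, e, a=0.5):
    """CES aggregate of market and home consumption.

    The elasticity of substitution between the two goods is 1/(1 - e):
    e = 1 gives perfect substitutes, e -> -inf gives perfect complements,
    and e -> 0 is the Cobb-Douglas limit.
    """
    if e == 0.0:  # Cobb-Douglas limit of the CES aggregator
        return c_m ** a * c_h ** (1.0 - a)
    return (a * c_m ** e + (1.0 - a) * c_h ** e) ** (1.0 / e)

# With e = 1, one good can fully replace the other:
print(ces(2.0, 0.0, 1.0))  # 0.5*2.0 + 0.5*0.0 = 1.0
# As e falls, an unbalanced bundle is valued less and less:
for e in (1.0, 0.385, -5.0):
    print(e, ces(2.0, 0.5, e))
```

If household and market goods were perfect substitutes (e = 1), the household margin would simply absorb market shocks one-for-one, which is why the value of e matters so much for the market variables.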

There have been two attempts to establish values for the parameter e based on different kinds of information. McGrattan et al. (1993) follow a traditional econometric approach based on maximum likelihood estimation and inference. Implementing the maximum likelihood procedure requires assumptions about the presence of measurement error as well as specific parametric forms and distributional assumptions. The estimated value of e is 0.385. The procedure also produces estimates of the other parameters of preferences and technology that appear reasonable when compared to the calibrated values of such parameters.(4) A major drawback of the maximum likelihood approach appears in the estimation of the parameters of the technology process. To capture the movements of labour between the home sector and the market sector, the maximum likelihood procedure has to force (estimate) the covariance between home and market technology shocks, z_H and z_M, to be strongly negative. Greenwood and Hercowitz (1991) discuss in some detail why this is inconsistent with getting the correct comovement between household and market capital. Advocates of the traditional estimation and inference approach cite the importance of having something other than point estimates of parameters and of having a metric for assessing the distance between a model and the data. But McGrattan et al. do not find it worthwhile to present such measures, nor do they make any use of the estimated standard errors of their estimated parameters in the simulations they report.

Rupert et al. (1994) take a different approach to determining the substitution elasticity, one more in the spirit of calibration. They use microeconomic evidence from households to estimate the elasticity of substitution between market goods and household goods. They begin by assuming a very general form for preferences and then derive the first-order conditions that relate household hours to market hours, consumption of the market good, and the market wage for both single-person and two-person households. The resulting equations can be estimated on the basis of microeconomic data.

The PSID asks people how much time they spend on market and home work each period. The parameters of this equation (with two-person households it is a pair of equations) can be estimated from the PSID and the key substitution elasticity can be inferred from the estimates. Rupert et al. report estimates of the elasticity that are different from zero for females and married couples. These results are broadly consistent with the estimates found by maximum likelihood estimation of the whole economy. If the evidence obtained from these very different approaches were wildly inconsistent, it would be a signal that something is amiss. That the estimates are similar is somewhat reassuring and the finding that household production decisions are important for the behaviour of market variables represents progress in understanding the behaviour of the economy.
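A hedged sketch of what this kind of micro-level estimation looks like: assume a log-linear first-order condition linking household hours to the market wage (this functional form and the simulated data are hypothetical stand-ins, not the actual equations estimated by Rupert et al.), and recover its parameters by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical household-level data in the spirit of the PSID.  The
# log-linear relation below is an illustrative stand-in, NOT the actual
# first-order conditions estimated by Rupert et al.
n = 500
log_w = rng.normal(2.5, 0.3, n)   # log market wage across households
a_true, b_true = 2.0, -0.6        # illustrative parameters
log_hH = a_true + b_true * log_w + rng.normal(0.0, 0.1, n)  # log home hours

# OLS recovers the intercept and the wage elasticity of home hours,
# from which a substitution elasticity could then be inferred.
X = np.column_stack([np.ones(n), log_w])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, log_hH, rcond=None)
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```

The point of the exercise is the mapping, not the regression itself: the estimated coefficients feed back into the calibration as values for preference parameters, rather than being used for hypothesis tests.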

Capacity utilization

Robert Hall (1990) offered important evidence against the theory that underlies the basic real business cycle model. He showed that estimated Solow residuals are not orthogonal to inputs in the production process, which undermines their interpretation as productivity shocks. Moreover, he found evidence of increasing returns to scale. There have been two responses to Hall's findings that illustrate well the complementarity between quantitative theory and empirical research.
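Hall's exercise can be made concrete with a small sketch. Under a Cobb-Douglas technology y_t = z_t k_t^θ h_t^(1−θ), the log Solow residual is log z_t = log y_t − θ log k_t − (1 − θ) log h_t, and the test is whether this residual is correlated with the inputs. The series and the value of θ below are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, T = 0.36, 200  # capital share (assumed) and sample length

# Synthetic series: technology follows a random walk, and inputs are
# generated independently of it, so the recovered residual should be
# approximately uncorrelated with input growth.
log_z = np.cumsum(rng.normal(0.0, 0.01, T))    # technology
log_k = np.cumsum(rng.normal(0.005, 0.01, T))  # capital
log_h = rng.normal(0.0, 0.02, T)               # hours

# Cobb-Douglas technology: y = z * k**theta * h**(1 - theta), in logs.
log_y = log_z + theta * log_k + (1.0 - theta) * log_h

# The Solow residual recovers log z from measured output and inputs.
residual = log_y - theta * log_k - (1.0 - theta) * log_h

# Hall's test: is residual growth orthogonal to input growth?
corr = np.corrcoef(np.diff(residual), np.diff(log_h))[0, 1]
print(f"corr(residual growth, hours growth) = {corr:.3f}")
```

In US data this correlation is not close to zero, which is precisely Hall's evidence; the two responses below show how theory and estimation each accounted for it.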

Mary Finn (1994) addressed the issue using a calibrated general equilibrium model in which the utilization of capital is endogenous. In her model, which is related to one described in the appendices, capital utilization and depreciation are related to energy consumption. When she incorporates shocks to energy prices and government spending shocks, along with productivity shocks, she finds that the failure of orthogonality of the Solow residuals found by Hall is accounted for, and that all three shocks together account for as much as 89 per cent of output variability.

Burnside et al. (1995) apply this insight to the econometric estimation of production functions. They estimate production functions and replicate Hall's analysis using a measure of the flow of capital services based on electricity consumption rather than capital stocks. With procyclical capacity utilization they find no evidence against constant returns to scale, and they find that the residuals of their estimated production functions can be consistently interpreted as technology shocks. Again, if the quantitative theory and the estimation had produced different findings, that would be a puzzle. The fact that they agree increases our confidence in both approaches to answering the question.


Most discussions of the use and evaluation of calibrated models centre on their application to business cycle research. This is understandable since that is where most of the action has been for the past dozen years. Gradually, however, the focus has shifted and calibrated models have been directed to asking questions in a number of areas that are well removed from the study of business cycles:

* Public finance questions: What are the welfare consequences of various fiscal policy choices?

* Optimal policy questions: What do optimal fiscal and monetary policies look like?

* Growth questions: What are the factors that distinguish fast- from slow-growing economies?

* Industry dynamics: How do new firms decide to enter an industry? How does the size distribution of firms evolve over time?

* Firm and plant dynamics: How do firms decide to adopt new technologies? What is the time path of employment at the plant level?

* Political economy: Why do we observe the policy choices we see in actual economies?

These are a few examples of questions that have been addressed with calibrated models. As I noted earlier, the tremendous growth in the use of calibrated models is due in large part to dramatic decreases in the costs of computing, combined with technological progress in mathematics and numerical methods. Together, these advances make it possible to compute the equilibria of increasingly sophisticated artificial economies. These economies provide laboratories in which economists can carry out their work. An important aspect of the answers to these questions is that they are quantitative. We can now say something about the magnitude of the welfare cost of capital income taxation or the welfare loss due to a pay-as-you-go social security system.

How should we judge the answers that we obtain from applying quantitative theory? There is a strong feeling that the methods currently used are too informal compared to standard statistical econometrics. The latter provides measures of goodness of fit and a framework for testing models. The recommendation is that current practice should be improved by including sensitivity analyses and measures of goodness of fit, and by testing models to see if they are rejected by the data.

It is clear that we need to improve the methods for appraising the results of calibration exercises. What is questionable is whether we would improve much on current practice by trying to force it into the standard estimation and hypothesis-testing paradigm. This risks losing one of the important benefits of quantitative theory: that it helps to tell us what to measure and sometimes how to measure it.

(i) Sensitivity Analysis

One issue that is frequently raised about the use of calibrated models is that practitioners do not acknowledge the uncertainty in their calibrated parameters and do not report sensitivity analysis. Canova (1990), Canova et al. (1994), Kim and Pagan (1995), and Gregory and Smith (1990, 1991, 1993) discuss this issue. Sensitivity analysis is important, particularly when one is comparing moments of data generated by calibrated economies to moments of data for an actual economy. Nevertheless, given the broad range of questions addressed with calibrated economies, it is not clear how important this issue is in practice. Nor is it clear that the traditional statistical approach is to be preferred to the more eclectic approach to sensitivity analysis that is employed in many studies.

Consider the attempts to assess the welfare costs of inflation using calibrated artificial economies. Cooley and Hansen (1989) calibrate a cash-in-advance version of a real business cycle model and use it to measure the welfare cost of inflation. They find that the welfare cost of 10 per cent inflation is about 0.4 per cent of GNP. Econometric estimates of the welfare cost by Lucas (1981) and Fischer (1981), based on computing the triangle under an estimated money demand curve, find similar magnitudes. Imrohoroglu (1992) studies a calibrated model economy where money is held purely for precautionary motives to smooth consumption (and where there is no insurance for income variability) and finds the welfare cost to be 1.07 per cent of GNP. This is nearly three times the deadweight loss from the pure transaction motives for holding money. These seem like more meaningful bounds on an estimate of the true deadweight loss of inflation than a sensitivity analysis based on sampling uncertainty in the calibrated parameters. Moreover, in keeping with the theme that experiments with calibrated models provide guidance about measurement, these results tell us precisely how we could improve our measurements of the actual economy. We have pretty good measurements of the transactions demand for money but we need to measure the extent of the precautionary motive for holding cash balances in order to get a good fix on the welfare cost of inflation.

The search is on for more informative measures of fit for calibrated models. What makes this a difficult problem is that it is multidimensional; we usually evaluate models on a number of dimensions. This is an area where progress is being made and more useful measures of fit are appearing. Watson (1993) and Diebold et al. (1995) have proposed measures that appear extremely promising.

The idea that we want to measure somehow the `distance' between artificial economies and actual economies along a number of dimensions is not too controversial. But the notion that we should test models in the spirit of classical inference, with the idea of rejecting false models, is very controversial. Calibrated models are often subjected to some form of testing. Failures of models in important dimensions do not lead to their rejection. The distribution theory for standard statistical tests of models is derived under the null hypothesis and is unlikely to offer much guidance about how to proceed if a model is rejected. It is more likely in studies using calibrated models that failures lead to refinements of the models or to improvements in the measurements.

One of the widely noted `failures' of the standard real business cycle model described earlier in the paper is that it does not contain a business cycle, in the sense that it produces the higher moments of output only as a property of the technology shocks that are fed into it. See, for example, Canova et al. (1994) and Cogley and Nason (1995). This comes as no surprise to users of real business cycle models, since the models do not incorporate any features which would produce those dynamics. For exactly that reason this finding does not lead to their rejection. Models which incorporate richer endogenous dynamic structure are beginning to emerge. Examples are vintage capital models of the sort considered by Cooley et al. (1995b). The implied dynamics of such models are very different from those of the neoclassical growth model. What we learn from studying the quantitative behaviour of such models is not whether they are true or false but that we do not have the right kinds of measurements to assess whether they are useful abstractions. The reason is that vintage capital models of this type imply a pattern of employment at plants of a particular vintage that is increasing when a plant is new (at a rate governed by the extent of learning-by-doing), reaches a peak, and then steadily decreases until the plant is scrapped or renovated. Available data cannot easily show this because of the selectivity caused by business failures. Taking the data as given and trying to test such a model in the classical sense would involve buying into a huge set of assumptions and evaluating the model by applying it to data that are contaminated. Quantitative theory provides us with a laboratory to study the role of vintage capital in a perfect environment.

In the 1980s the computational approach to studying business cycle fluctuations gathered many adherents. It did so because this approach offered a new, constructive method for studying fluctuations that was logically consistent and easier to implement than the complicated econometric approaches that evolved in answer to the rational expectations revolution and Robert Lucas's critique of conventional econometric models. It is also the case that real business cycle models succeeded where purely monetary explanations of business cycles had failed. Since that time, the methods of quantitative general equilibrium have spread into many other areas of economics. Models developed on these principles are contributing to discussions of economic policy that go well beyond the business cycle issues discussed above. The use of calibrated applied general equilibrium models is only going to grow in the future. With that growth will come improved methods for calibrating and evaluating these models.

An example of a framework that has been used to study business cycles, and is most often associated with the idea of calibration, is the neoclassical growth model with labour-leisure choice. This is an artificial world populated by infinitely many identical households that will live forever. Each household has an endowment of time each period, which it must divide between leisure, l_t, and work, h_t, and owns an initial stock of capital, k_0, which it rents to firms and may augment through investment.

Households have preferences defined over stochastic sequences of consumption and leisure,

E_0 Σ_{t=0}^∞ β^t u(c_t, 1 - h_t),   0 < β < 1,   (E1)

where c(.), h(.) represent the sequences of Arrow-Debreu event-contingent consumptions and labour supplies. The households in this economy supply capital and labour to firms, which have access to a technology described by the function F(K_t, H_t): R_+^2 → R_+.

Aggregate output is determined by the production function,

Y_t = z_t F(K_t, H_t),   (E2)

where z_t is a random productivity parameter. This productivity shock is the source of uncertainty in the economy. We assume that the capital stock depreciates exponentially at the rate δ, and that consumers add to the stock of capital by investing some amount of the real output each period. Investment in period t produces productive capital in period t + 1, so that the law of motion for the aggregate capital stock is:

K_{t+1} = (1 - δ)K_t + I_t.
The firms of this economy rent capital and hire labour each period. We can treat this as a single firm that solves a period-by-period profit maximization problem. All relative prices are in terms of output, and we can write the firm's period-t problem as:

max_{K_t, H_t} z_t F(K_t, H_t) - w_t H_t - r_t K_t.

This optimization problem yields factor prices (stated in terms of the price of output):

w_t = z_t F_H(K_t, H_t),   r_t = z_t F_K(K_t, H_t).
Given constant returns-to-scale, profits will be zero in equilibrium.
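To illustrate, the zero-profit property can be checked numerically. The sketch below assumes a Cobb-Douglas technology and a share parameter of 0.4 (choices made later in the calibration), with made-up values for z, K, and H:

```python
# Sketch: competitive factor prices and zero profits under constant returns.
# The Cobb-Douglas form and theta = 0.4 are assumptions drawn from the
# calibration discussed later; z, K, H are arbitrary illustrative values.

def output(z, K, H, theta=0.4):
    """Cobb-Douglas production: Y = z * K**theta * H**(1 - theta)."""
    return z * K**theta * H**(1 - theta)

def factor_prices(z, K, H, theta=0.4):
    """Marginal products: w = dY/dH (real wage), r = dY/dK (rental rate)."""
    Y = output(z, K, H, theta)
    w = (1 - theta) * Y / H
    r = theta * Y / K
    return w, r

z, K, H = 1.0, 10.0, 0.31
Y = output(z, K, H)
w, r = factor_prices(z, K, H)
profit = Y - w * H - r * K
print(abs(profit) < 1e-9)  # → True: factor payments exhaust output
```

With constant returns, Euler's theorem guarantees that paying each factor its marginal product leaves zero profit, whatever values of z, K, and H are chosen.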

Households choose consumption, investment, and hours of work at each date to maximize the expected discounted value of utility, given their expectations over future prices:

max E_0 Σ_{t=0}^∞ β^t u(c_t, 1 - h_t)

subject to sequences of budget constraints and the law of motion for the household's capital stock:

c_t + i_t ≤ w_t h_t + r_t k_t,
k_{t+1} = (1 - δ)k_t + i_t.
The prices, w and r, depend on the economy-wide state variables (z, K). Moreover, the decisions on quantities depend on the individual-level state variables (z, k, K).

A definition of equilibrium

To complete the framework, we need a definition of equilibrium. This says that the prices and quantities for this economy clear the labour market and the goods market, and constitute solutions to the household's problem and the firm's problem. The notion of equilibrium commonly employed is a recursive competitive equilibrium.


Suppose, in keeping with the Solow tradition, we restrict our attention to economies that display balanced growth. In balanced growth, consumption, investment, and capital all grow at a constant rate while hours stay constant. The basic observations about economic growth suggest that capital and labour shares of output have been approximately constant over time, even while the relative prices of these inputs have changed. This suggests a Cobb-Douglas production function of the form:

Y_t = z_t K_t^θ H_t^(1-θ),   0 < θ < 1.
The Cobb-Douglas assumption defines a parametric class of technologies for this economy.

As with the technology, certain features of the specification of preferences are tied to basic growth observations. We restrict our attention to the US economy, where there is evidence that per-capita leisure increased steadily until the 1930s. Since that time, and certainly for the post-Second-World-War period, it has been approximately constant. We also know that real wages have increased steadily in the post-war period. Taken together, these two observations imply that the elasticity of substitution between consumption and leisure should be near unity. We consider the general parametric class of preferences of the form:

u(c_t, 1 - h_t) = [(c_t^(1-α)(1 - h_t)^α)^(1-σ) - 1]/(1 - σ),

where 1/σ is the intertemporal elasticity of substitution and α is the share parameter for leisure in the composite commodity. The parameter σ is an example of one that is difficult to calibrate. Variations in the intertemporal elasticity of substitution affect transitions to balanced growth paths but not the paths themselves.(5) Often, these preferences are restricted further to the limiting case where σ = 1, which is u(c_t, 1 - h_t) = (1 - α)log c_t + α log(1 - h_t). Many researchers have also tried to estimate this parameter.
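As a numerical check on the limiting case, the sketch below evaluates this preference class near σ = 1; the value α = 0.64 is an assumption, close to the value implied by the calibration below:

```python
import math

# Sketch: the balanced-growth preference class and its sigma -> 1 limit.
# alpha = 0.64 and the values of c and leisure are assumptions for illustration.

def u(c, leisure, alpha=0.64, sigma=2.0):
    """CRRA utility over the composite commodity c**(1-alpha) * leisure**alpha."""
    if sigma == 1.0:
        # the limiting log specification
        return (1 - alpha) * math.log(c) + alpha * math.log(leisure)
    composite = c**(1 - alpha) * leisure**alpha
    return (composite**(1 - sigma) - 1) / (1 - sigma)

c, leisure = 0.8, 0.69
limit = u(c, leisure, sigma=1.0)
near = u(c, leisure, sigma=1.0001)
print(abs(near - limit) < 1e-3)  # → True: sigma = 1 is the log form
```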

How has this parameter been treated in the literature? Researchers have considered a wide range of values consistent with independent attempts to estimate it from data. The general conclusion is that business cycle features are not very sensitive to σ. Nevertheless, it is sloppy practice simply to refer to previous studies to justify this parameter choice because, as Stokey and Rebelo (1995) point out, the answers to some kinds of questions are very sensitive to it.


The neoclassical growth framework emphasizes the central role of capital in determining long-term growth in output. Consequently, the first thing to consider is the match between capital as it is conceived in this parametric class of economies and capital as it is measured and conceptualized in the NIPA.(6)

The model economy is very abstract: it contains no government sector, no household production sector, no foreign sector, and no explicit treatment of inventories. Accordingly, the capital stock for the model economy, K, includes capital used in all of these sectors plus the stock of inventories. Similarly, output, Y, includes the output produced by all of this capital. The NIPA for the United States are somewhat inconsistent in their treatment of these issues, in that the output of some important parts of the capital stock is not included in measured output. For example, the NIPA do not provide a consistent treatment of the household sector. The accounts do include the imputed flow of services from owner-occupied housing as part of GNP. But they do not attempt to impute the flow of services from the stock of consumer durables. The NIPA lump additions to the stock of consumer durables with consumption rather than treating them as investment. Because the model economy does not treat the household sector separately, when we deal with the measured data we will add the household's capital stock -- the stock of residential structures and the stock of consumer durables -- to producers' equipment and structures. To be consistent, we also have to impute the flow of services from durables and add that to measured output.

In a similar vein, although there are estimates of the stock of government capital and estimates of the portion of government consumption that represents additions to that stock of capital, the US NIPA make no attempt to impute the flow of services from the government's capital stock and include it as part of output. Nor do the US NIPA include government investment as part of measured investment. Because our model economy does not have a government sector, we will add the government capital stock to the private capital stock and the capital stock in the household sector. We will also impute the flow of services from this capital stock and add it to the measured output.

Finally, the technology described above makes no distinction between the roles of reproducible capital, land, and inventories. There are many examples in the literature that assign a different role to inventories, but here they are treated identically with other forms of capital. When we consider the mapping between the model economy and measured data, it will be important to include the value of land and the value of the stock of inventories as part of the capital stock. The Balance Sheets for the US Economy, part of the flow-of-funds accounts, are the source for estimates of the value of land. The stock of inventories is reported in the NIPA.

The measurement issues discussed above are central to the task of calibrating a model economy because a consistent set of measurements is necessary to align the model economy with the data. For example, in order to estimate the crucial share parameter in the production function, θ, it is important to measure all forms of capital and to augment measured GNP to include measures of all forms of output. Similarly, when we treat aggregate investment, it will be necessary to include in investment additions to all forms of capital stock. For this model economy, the concept of investment that corresponds to the aggregate capital stock includes government investment, 'consumption' of consumer durables, changes in inventories, gross fixed investment, and net exports. Since there is no foreign sector in this economy, net exports are viewed as representing additions to or claims on the domestic capital stock, depending on whether they are positive or negative. Making sure that the conceptual framework of the model economy and the conceptual framework of the measured data are consistent, in the manner just described, is a crucial step in the process of calibration. A detailed description of this process for the US economy can be found in Cooley and Prescott (1995).
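The accounting just described can be summarized in a short sketch. All of the numbers below are hypothetical placeholders; the point is only which components are added to capital, output, and investment to match the model's broad concepts:

```python
# Sketch of the measurement alignment described in the text.
# Every figure here is a hypothetical placeholder, not actual US data.

capital_components = {
    'producers_equipment_and_structures': 6.0,  # private fixed capital
    'residential_structures': 4.0,              # household capital
    'consumer_durables': 2.0,                   # household capital
    'government_capital': 3.0,
    'land': 1.5,             # from the Balance Sheets for the US Economy
    'inventories': 0.5,      # from the NIPA
}
broad_capital = sum(capital_components.values())

measured_gnp = 5.0                 # hypothetical
imputed_durables_services = 0.3    # imputed flow from consumer durables
imputed_government_services = 0.2  # imputed flow from government capital
broad_output = measured_gnp + imputed_durables_services + imputed_government_services

print(broad_capital, round(broad_output, 2))  # → 17.0 5.5
```

The same principle applies to investment: government investment, purchases of consumer durables, changes in inventories, gross fixed investment, and net exports would all be summed into the broad investment measure.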


I calibrate the remaining parameters by choosing them so that the balanced growth path of the model economy matches certain long-term features of the measured economy. Substituting the constraints into the objective and deriving the first-order condition for capital yields:

u_1(c_t, 1 - h_t) = β E_t[u_1(c_{t+1}, 1 - h_{t+1})(z_{t+1} F_K(K_{t+1}, H_{t+1}) + 1 - δ)].

In balanced growth, with the Cobb-Douglas technology (so that the rental rate is r = θ(y/k)), this implies:

(1 + γ)/β = 1 - δ + θ(y/k),   (E9)

where γ is the growth rate of output per capita along the balanced growth path. The first-order condition for hours, h, on a balanced growth path implies:

α/(1 - α) = (1 - θ)(y/c)((1 - h)/h).   (E10)

Finally, the law of motion of the capital stock in steady state implies

i/k = (1 + γ)(1 + η) - (1 - δ),

where η is the growth rate of the population.
The aggregate depreciation rate for this economy, δ, can be calibrated from this last equation. It is seen to depend on the aggregate investment/capital ratio. The steady-state investment/capital ratio is 0.076. Given the values for γ and η, the parameter δ is calibrated to match this ratio. This implies an annual depreciation rate of 0.048 or a quarterly rate of 0.012. This number depends on the real growth rate, γ, and the population growth rate for the economy. In an economy that abstracts from growth, this number must be larger to match investment.
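The arithmetic of this step can be sketched as follows. The investment/capital ratio is from the text; the growth rates γ and η are assumed values typical of this literature, not taken from the text:

```python
# Sketch of the depreciation calibration. The steady-state law of motion
# implies i/k = (1 + gamma)(1 + eta) - (1 - delta), so delta can be backed
# out from the measured investment/capital ratio.

i_over_k = 0.076   # steady-state investment/capital ratio (from the text)
gamma = 0.0156     # assumed annual growth rate of output per capita
eta = 0.012        # assumed annual population growth rate

delta_annual = i_over_k - ((1 + gamma) * (1 + eta) - 1)
print(round(delta_annual, 3))  # → 0.048, i.e. roughly 0.012 per quarter
```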

Once δ is calibrated, equation (E9) provides the basis for determining β. Given values for γ, δ, and η, β is chosen to match the steady-state output/capital ratio. Under the broad definitions of output and capital that are consistent with our model economy, the capital/output ratio is 3.32. This implies an annual value of β = 0.947, which implies a quarterly value of about 0.987.
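A sketch of this computation, using the steady-state condition (E9); the capital share θ = 0.4 and the growth rate γ are assumed values, while the capital/output ratio and δ come from the text:

```python
# Sketch of the discount-factor calibration from (E9):
# (1 + gamma)/beta = 1 - delta + theta*(y/k).

theta = 0.4        # assumed capital share
gamma = 0.0156     # assumed annual growth rate of output per capita
delta = 0.048      # annual depreciation rate, calibrated above
k_over_y = 3.32    # steady-state capital/output ratio (broad definitions)

beta_annual = (1 + gamma) / (1 - delta + theta / k_over_y)
beta_quarterly = beta_annual ** 0.25
print(round(beta_annual, 3))  # → 0.947, as in the text
```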

Given an estimate of h, the fraction of time devoted to market activities, equation (E10) provides the basis for calibrating the preference parameter α from the steady-state output/consumption ratio.

The determination of h is one of the examples where we turn to microeconomic evidence on the allocation of time to choose a parameter. Ghez and Becker (1975) and Juster and Stafford (1991) find that households allocate about one-third of their discretionary time -- i.e. time not spent sleeping or in personal maintenance -- to market activities. The specific value we use for h is 0.31. Given the broad definitions of consumption and output appropriate for this model economy, the steady-state ratio of output to consumption is 1.33. This implies a value of α/(1 - α) = 1.78.
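The implied share parameter follows directly from condition (E10); θ = 0.4 is an assumed capital share, while h and the output/consumption ratio are the values given in the text:

```python
# Sketch of the leisure-share calibration from (E10):
# alpha/(1 - alpha) = (1 - theta)*(y/c)*((1 - h)/h).

theta = 0.4       # assumed capital share
h = 0.31          # fraction of discretionary time in market work (from the text)
y_over_c = 1.33   # steady-state output/consumption ratio (from the text)

ratio = (1 - theta) * y_over_c * ((1 - h) / h)  # this is alpha/(1 - alpha)
alpha = ratio / (1 + ratio)
print(round(ratio, 2))  # → 1.78, as in the text
```

Solving the ratio for the level gives α of roughly 0.64.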

Finally, a complete calibration of this model economy requires parameters of the process that generates the shocks to technology. One approach to calibrating this process would be to do as Robert Solow did and calculate technological change as the difference between changes in output and the changes in measured inputs (labour and capital) multiplied by their shares. Using equation (E2) we have:

log z_t = log Y_t - θ log K_t - (1 - θ) log H_t.
These are the Solow residuals for this economy. Using our estimate of θ = 0.4 and observations on measured output, given a measure of the labour input, one can generate a series for the z_t and their differences. We use a quarterly hours series based on the Establishment Survey for the labour input. An alternative would be to use the hours series based on the Household Survey. The other decision one faces is whether to use the broad definition of capital stock consistent with our model economy, and whether to use the broad definition of output, including the imputed service flows from consumer durables and government capital. We elect to use simply measured output (real GNP) and the measured labour input, taking quarterly variations in the capital stock to be approximately zero. We choose this alternative because the capital stock is only reported annually. Consequently, the imputed service flows that we described above are also annual. One can interpolate quarterly versions of these, but any procedure for doing so is essentially arbitrary and may add to the variability of both output and the residuals. The residuals computed using measured real GNP are highly persistent, and their autocorrelations are consistent with a technology process that is close to a random walk. We assume a value of ρ = 0.95 in the law of motion for the technology, and use this to define a set of innovations to technology. These innovations have a standard deviation of about 0.007, which is similar to the value calibrated in Prescott (1986).
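The residual computation and the calibrated shock process can be sketched as follows; the output and hours figures are made-up numbers used only to illustrate the formula:

```python
import math
import random

# Sketch: Solow residuals from (E2) and the calibrated shock process.
# theta = 0.4 and (rho, std of innovations) = (0.95, 0.007) are from the text;
# the output, capital, and hours values are made-up illustrative numbers.

def log_residual(y, k, h, theta=0.4):
    """log z_t = log Y_t - theta*log K_t - (1 - theta)*log H_t."""
    return math.log(y) - theta * math.log(k) - (1 - theta) * math.log(h)

# Treating quarterly variation in capital as approximately zero, as in the
# text, amounts to holding k fixed across observations:
z0 = log_residual(1.00, 3.32, 0.31)
z1 = log_residual(1.01, 3.32, 0.31)  # only output varies here
print(round(z1 - z0, 5))  # → 0.00995, i.e. log(1.01): the k and h terms cancel

# Calibrated law of motion: log z' = rho * log z + eps, eps ~ N(0, 0.007^2).
random.seed(0)
rho, sigma_eps = 0.95, 0.007
z, path = 0.0, []
for _ in range(200):
    z = rho * z + random.gauss(0.0, sigma_eps)
    path.append(z)  # a highly persistent simulated technology series
```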


References

Auerbach, A., and Kotlikoff, L. (1987), Dynamic Fiscal Policy, Cambridge, Cambridge University Press.

Benhabib, J., Rogerson, R., and Wright, R. (1991), 'Homework in Macroeconomics: Household Production and Aggregate Fluctuations', Journal of Political Economy, 99(6), 1166-87.

Burnside, C., Eichenbaum, M. S., and Rebelo, S. T. (1995), 'Capital Utilization and Returns to Scale', NBER Macroeconomics Annual 1995.

Caballe, J., and Santos, M. (1993), 'On Endogenous Growth with Physical and Human Capital', Journal of Political Economy, 101, 1042-67.

Canova, F. (1990), 'Simulating General Equilibrium Dynamic Models Using Bayesian Techniques', reproduced.

_____ Finn, M., and Pagan, A. R. (1994), 'Evaluating a Real Business Cycle Model', in C. P. Hargreaves (ed.), Nonstationary Time Series Analysis and Cointegration, Oxford, Oxford University Press, 225-56.

Cho, J.-O., and Cooley, T. F. (1994), 'Employment and Hours over the Business Cycle', Journal of Economic Dynamics and Control.

Cogley, T., and Nason, J. M. (1995), 'Output Dynamics in Real-Business-Cycle Models', American Economic Review, 85(3), 492-511.

Cooley, T. F. (ed.) (1995), Frontiers of Business Cycle Research, Princeton, NJ, Princeton University Press.

_____ Hansen, G. D. (1989), 'The Inflation Tax in a Real Business Cycle Model', American Economic Review, 79(4), 733-48.

_____ Prescott, E. C. (1995), 'Economic Growth and Business Cycles', in T. F. Cooley (ed.), Frontiers of Business Cycle Research, Princeton, NJ, Princeton University Press.

_____ Hansen, G., and Prescott, E. C. (1995a), 'Equilibrium Business Cycles with Idle Resources and Variable Capacity Utilization', Economic Theory, 6,35-50.

_____ Greenwood, J., and Yorukoglu, M. (1995b), 'The Replacement Problem', Journal of Monetary Economics, forthcoming.

Diebold, F., Ohanian, L., and Berkowitz, J. (1995), 'Dynamic Equilibrium Economies: A Framework for Comparing Models and Data', NBER Technical Working Paper No. 174.

Finn, M. (1994), 'Variance Properties of Solow's Productivity Residual and Their Cyclical Implications', Journal of Economic Dynamics and Control, 19, 1249-82.

Fischer, S. (1981), 'Toward an Understanding of the Cost of Inflation', Carnegie-Rochester Conference on Public Policy, 15, 5-42.

Ghez, G., and Becker, G. S. (1975), The Allocation of Time and Goods over the Life Cycle, New York, NY, Columbia University Press.

Greenwood, J., and Hercowitz, Z. (1991), 'The Allocation of Capital and Time over the Business Cycle', Journal of Political Economy, 99, 1188-214.

_____ _____ Huffman, G. (1988), 'Investment, Capacity Utilization and the Real Business Cycle', American Economic Review, 78, 402-17.

_____ Rogerson, R., and Wright, R. (1995), 'Household Production in Real Business Cycle Theory', in T. F. Cooley (ed.), Frontiers of Business Cycle Research, Princeton, NJ, Princeton University Press.

Gregory, A. W., and Smith, G. W. (1990), 'Calibration as Estimation', Econometric Reviews, 9(1), 57-89.

_____ _____ (1991), 'Calibration as Testing: Inference in Simulated Macroeconomic Models', Journal of Business and Economic Statistics, 9(3), 297-303.

_____ _____ (1993), 'Statistical Aspects of Calibration in Macroeconomics', ch. 25 in G. S. Maddala, C. R. Rao, and H. D. Vinod (eds), Handbook of Statistics, Vol. 11, Amsterdam, North Holland.

Hall, R. E. (1990), 'Invariance Properties of Solow's Productivity Residual', in P. Diamond (ed.), Growth/Productivity/Unemployment: Essays to Celebrate Bob Solow's Birthday, Cambridge, MA, MIT Press, 71-112.

Hansen, G. D. (1985), 'Indivisible Labor and the Business Cycle', Journal of Monetary Economics, 16(3), 309-27.

Hansen, L. P., and Heckman, J. (1996), 'The Empirical Foundations of Calibration', Journal of Economic Perspectives, 10, 87-104.

Hoover, K. D. (1995), 'Facts and Artifacts: Calibration and the Empirical Assessment of Real-Business-Cycle Models', Oxford Economic Papers.

Imrohoroglu, A. (1992), 'The Welfare Cost of Inflation under Imperfect Insurance', Journal of Economic Dynamics and Control, 16, 79-91.

Juster, F. T., and Stafford, F. P. (1991), 'The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement', Journal of Economic Literature, 29, 471-522.

Kim, K., and Pagan, A. R. (1995), 'The Economic Analysis of Calibrated Macroeconomic Models', in M. H. Pesaran and M. R. Wickens (eds), Handbook of Applied Econometrics, Oxford, Blackwell, 356-90.

King, R., Plosser, C., and Rebelo, S. (1988a), 'Production, Growth and Business Cycles: I. The Basic Neoclassical Model', Journal of Monetary Economics, 21, 195-232.

_____ _____ _____ (1988b), 'Production, Growth and Business Cycles: II. New Directions', Journal of Monetary Economics, 21, 309-41.

Koopmans, T. (1947), 'Measurement without Theory', Review of Economics and Statistics, 29, 186-203.

Kydland, F.E., and Prescott, E.C. (1982), 'Time to Build and Aggregate Fluctuations', Econometrica, 50, 1345-70.

_____ _____ (1991), 'Hours and Employment Variation in Business Cycle Theory', Economic Theory, 63-81.

_____ _____ (1996), 'The Computational Experiment: An Econometric Tool', Journal of Economic Perspectives, 10, 69-85.

Lucas, R. E., Jr (1981), 'Discussion of "Toward an Understanding of the Cost of Inflation"', Carnegie-Rochester Conference on Public Policy, 15, 43-52.

McGrattan, E., Rogerson, R., and Wright, R. (1993), 'Household Production and Growth in the Stochastic Growth Model', Staff Report 166, Federal Reserve Bank of Minneapolis.

Musgrave, J. C. (1992), 'Fixed Reproducible Tangible Wealth in the United States: Revised Estimates', Survey of Current Business, 72, 106-7.

Prescott, E. (1986), 'Theory Ahead of Business Cycle Measurement', Carnegie-Rochester Conference Series on Public Policy, 25, 11-44.

Rupert, P., Rogerson, R., and Wright, R. (1994), 'Estimating Substitution Elasticities in Household Production Models', Staff Report 186, Federal Reserve Bank of Minneapolis.

Shoven, J., and Whalley, J. (1984), 'Applied General Equilibrium Models of Taxation and International Trade', Journal of Economic Literature, 22, 1007-51.

Sims, C. (1996), 'Macroeconomics and Methodology', Journal of Economic Perspectives, 10, 105-20.

Stokey, N. L., and Rebelo, S. (1995), 'Growth Effects of Flat-Rate Taxes', Journal of Political Economy, 103, 519-50.

Watson, M. W. (1993), 'Measures of Fit for Calibrated Models', Journal of Political Economy, 101, 1011-41.

(1) This is a substantially revised version of a paper, `The Nature and Use of Calibrated Models', originally written for an invited session at the Seventh World Congress of the Econometric Society in Tokyo, August 1995. I thank (without implicating) David Andalfatto, Mark Dwyer, Jeremy Greenwood, Gary Hansen, Adrian Pagan, Edward Prescott, Victor Rios-Rull, and David Winter for helpful discussions and comments.

(2) Kydland and Prescott would probably recant some of what they were doing in their 1982 calibration. That is only natural since that was the first important application of the method to business cycle models based on the neoclassical growth framework.

(3) In Kydland and Prescott (1991) inventories are included. They are not essential for the point being made here.

(4) The market technology is found to be approximately Cobb-Douglas, the depreciation rate is about 2.2 per cent, and β is 0.991. The share of capital is only 0.234, which is smaller than is usual.

(5) The general class of preferences that are consistent with balanced growth is described by King et al. (1988a, b) and Caballe and Santos (1993).

(6) The measurement of the US fixed reproducible capital stock is reported by Musgrave (1992) in The Survey of Current Business. Musgrave distinguishes three major components of the reproducible capital stock: fixed private capital, which includes producers' durables, producers' structures, and residential structures; government capital, which includes both equipment and structures at both the federal level and the state and local government level; and consumer durables. Another component of the capital stock that we will incorporate is land. A measure of the value of land is reported in the flow-of-funds accounts, in the Balance Sheets for the US Economy.
COPYRIGHT 1997 Oxford University Press

Article Details
Title Annotation: methodology, usage and evaluation of calibrated models
Author: Cooley, Thomas F.
Publication: Oxford Review of Economic Policy
Date: Sep 22, 1997
