# Instrumental Variables: Response to Angrist and Imbens

I. Introduction

My 1997 JHR paper neither endorsed nor condemned the use of instrumental variables (IV) to estimate the parameters of economic models. Its main goal was to remind readers of some basic points developed in a series of papers with Richard Robb (1985, 1986). We establish that when responses to "treatment" or a policy intervention vary across persons even after conditioning on observable variables (X), a variety of mean treatment effects can be defined. When responses are homogeneous (conditional on X), all mean treatment effects are the same.

The "common effect," homogenous response, assumption underlies most currently used econometric methods used to evaluate social programs. When responses to treatment are homogenous, the case for social experiments and instrumental variables is the most persuasive. (Heckman, 1992; Heckman and Smith, 1992, 1993, 1998).

When responses to treatment vary across persons, different mean parameters can be defined that answer different economic questions. A variety of interesting policy counterfactuals can be constructed. Social experiments are better suited for recovering some of these parameters than others. In general, different parameters require different assumptions to justify the application of the method of instrumental variables, or of any other econometric method, in any instance. That different parameters in general require different assumptions should not be surprising. There is no universal justification of the method of instrumental variables for identifying all economic parameters.

The comment by Angrist and Imbens confuses two ideas that are fruitfully distinguished. The first idea is that every econometric estimator estimates something, provided it converges. Conditions for convergence of the IV estimator are quite weak, and involve invocation of laws of large numbers (see, for example, White, 1984).

The second, and more interesting, idea is an economic one. Does the object defined by application of the estimator answer an interesting economic question? In particular, does the method of IV recover a parameter that answers a well-posed economic question?

Their discussion of my condition (C-1-b) demonstrates their confusion. That condition is sufficient for identification of the parameter Treatment on the Treated. In their notation, and mine, this parameter is $E(Y_1 - Y_0 \mid D = 1)$. Careful readers of my JHR paper will note that I invoke condition (C-1-b) only for that parameter. Another condition, (C-1-a), is required for the Average Treatment Effect (ATE), $E(Y_1 - Y_0)$. Yet other conditions are required to identify LATE. I present those conditions in my paper.

Angrist and Imbens confuse the discussion by changing the question from the one I pose and answer in my paper--what conditions are sufficient to identify Treatment on the Treated, that is, condition (C-1-b)--to the question of whether IV converges to anything, a distinctly different question.

Their "Result 1" restates a point already made in Heckman and Robb (1985, 1986) and repeated and amplified in my 1997 paper that IV does not estimate the Treatment on the Treated parameter in the general heterogenous response case unless agents do not select into the program on the basis of the unobserved heterogeneity in their response. My footnote 11 shows that IV converges to something, however, and it demonstrates that in a context of a Roy model, the parameter is exactly the LATE parameter, not the Treatment on the Treated parameter. I go on to discuss what economic question LATE answers. Under its identifying conditions, which are satisfied in the Roy example, it determines the gross gain to switching from the no treated state ("0") to the treated state ("1") when the policy is changed from "0" to "1" --for those induced into the military by a switch to the draft in the lottery example. It does not identify the social cost of the draft or the effect of the policy on those who do not switch, that is, th e effect of military service on those who would go into the Army even if there were no draft. In general, it does not estimate Treatment on the Treated E([Y.sub.1] - [Y.sub.0]D = 1).

My JHR paper presents conditions under which LATE answers an interesting economic question. Within the context of a utility maximizing version of the Roy model, I present conditions under which a limit form of LATE, defined when the policy variable Z is continuous, identifies the effect of a marginal change in Z on the gross gain to participants who are just indifferent or at the margin of choice between sector "0" and sector "1" or between taking treatment ("1") or not taking it ("0"). In a series of papers, Heckman and Vytlacil (1999a, b, c) and Aakvik, Heckman, and Vytlacil (1998) define the limit LATE parameter as LIV (local instrumental variable) and show how LIV can be used to generate LATE, Treatment on The Treated, and the Average Treatment Effect, using an index model structure of the sort produced from a Roy model and more general economic models of discrete choice. We present bounds for these parameters when they are not identified.

Our analysis shows that a limit form of LATE is the fundamental building block for constructing ATE, Treatment on the Treated, and LATE. Using the mean value theorem for integrals, it is possible to build up Treatment on the Treated and the Average Treatment Effect (ATE) using different weighted combinations of LATE parameters. If Z contains one continuous regressor, one can use LIV to build up ATE, Treatment on the Treated, and LATE. In a sense made precise by Heckman and Vytlacil (1999a, b, c), LIV (for a model with one continuous Z) or LATE (for a model with only a discrete Z) are the building blocks from which Treatment on the Treated and ATE can be built up or bounded, if the support of Z is limited.
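The arithmetic of building ATE from LATE components can be checked numerically. The sketch below is mine, not from any of the cited papers: it simulates a latent-index selection model with a hypothetical marginal treatment effect $m(u) = 2 - 2u$ and an instrument whose propensity scores {0, 0.5, 1} span the unit interval, so that ATE equals the propensity-weighted combination of the two adjacent LATEs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical marginal treatment effect, declining in the latent index u:
# agents with low u (strong unobserved inclination to participate) gain most.
m = lambda u: 2.0 - 2.0 * u

# Instrument Z in {0, 1, 2}; propensity scores p(Z) span [0, 1].
p = np.array([0.0, 0.5, 1.0])
Z = rng.integers(0, 3, n)
U = rng.uniform(0.0, 1.0, n)          # latent index: D = 1 iff U <= p(Z)
D = (U <= p[Z]).astype(float)

Y0 = rng.normal(0.0, 1.0, n)
Y1 = Y0 + m(U)                         # heterogeneous gain, tied to U
Y = np.where(D == 1, Y1, Y0)

def wald(z1, z0):
    """IV (Wald) estimator between two instrument values; recovers LATE."""
    num = Y[Z == z1].mean() - Y[Z == z0].mean()
    den = D[Z == z1].mean() - D[Z == z0].mean()
    return num / den

late_01 = wald(1, 0)                   # averages m(u) over u in (0, 0.5]
late_12 = wald(2, 1)                   # averages m(u) over u in (0.5, 1]
ate = (Y1 - Y0).mean()
ate_from_lates = 0.5 * late_01 + 0.5 * late_12   # weights = propensity gaps
```

With full support of the propensity score, the weighted LATEs reproduce ATE; Treatment on the Treated would require different weights, along the lines Heckman and Vytlacil describe.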

The Angrist-Imbens "Result 2" in no way casts any doubt on the validity or applicability of condition (C-1-b) to identify E([Y.sub.1] - [Y.sub.0]D = 1) although a quick read of their comment might suggest otherwise. In fact, their result nicely illustrates the generality of (C-1-b) as an identifying condition. They present a model that satisfies that condition and IV identifies Treatment on the Treated as I claim it does. To show this, note that under their assumptions if we define Q = Pr(Z = 1) we obtain the table of probabilities for (D, Z):

|         | $D = 0$                          | $D = 1$                    |
|---------|----------------------------------|----------------------------|
| $Z = 0$ | $\Pr(D = 0, Z = 0) = 1 - Q$      | $\Pr(D = 1, Z = 0) = 0$    |
| $Z = 1$ | $\Pr(D = 0, Z = 1) = (1 - P)Q$   | $\Pr(D = 1, Z = 1) = PQ$   |

where Z = 0 or Z = 1. If D = 1, then Z = 1. For this model, Treatment on the Treated given Z is

$$E(Y_1 - Y_0 \mid D = 1, Z) = \mu_1 - \mu_0 + \alpha Z$$

while the parameter Treatment on the Treated identified by (C-1-b) is

$$E(Y_1 - Y_0 \mid D = 1) = \mu_1 - \mu_0 + \alpha E(Z \mid D = 1).$$

But from the table we see that $E(Z \mid D = 1) = 1$, so Treatment on the Treated is

$$E(Y_1 - Y_0 \mid D = 1) = \mu_1 - \mu_0 + \alpha.$$

Apply the instrumental variable condition developed in my paper. First compute the expectation of Y given Z:

$$E(Y \mid Z) = E(Y_0(1 - D) + Y_1 D \mid Z)$$

$$= E(Y_0 \mid D = 0, Z)\Pr(D = 0 \mid Z) + E(Y_1 \mid D = 1, Z)\Pr(D = 1 \mid Z).$$

For their model

$$E(Y \mid Z) = \mu_0 + (\mu_1 + \alpha Z - \mu_0)PZ.$$

Then for two values of the instrument, $Z$ and $Z'$, $Z \neq Z'$, we obtain, under standard conditions on sampling distributions, the population parameter defined by the IV estimator:

$$\text{Population IV} = \frac{E(Y \mid Z) - E(Y \mid Z')}{\Pr(D = 1 \mid Z) - \Pr(D = 1 \mid Z')}$$

$$= \frac{[(\mu_1 - \mu_0) + \alpha Z]PZ - [(\mu_1 - \mu_0) + \alpha Z']PZ'}{PZ - PZ'}$$

$$= (\mu_1 - \mu_0) + \frac{(\alpha Z)PZ - (\alpha Z')PZ'}{PZ - PZ'}.$$

Now in their example, if $Z = 1$, $Z' \neq Z$ implies $Z'$ must equal zero. Thus Population IV $= (\mu_1 - \mu_0) + \alpha = E(Y_1 - Y_0 \mid D = 1)$, as I claimed.
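This algebra is easy to verify by simulation. The sketch below uses illustrative parameter values of my own choosing ($\mu_0 = 1$, $\mu_1 = 2$, $\alpha = 0.5$, $P = 0.6$, $Q = 0.5$, with added mean-zero noise). In the Angrist-Imbens design, $D = 1$ can occur only when $Z = 1$, and the Wald/IV ratio reproduces Treatment on the Treated, $\mu_1 - \mu_0 + \alpha$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative values, not taken from the paper.
mu0, mu1, alpha, P, Q = 1.0, 2.0, 0.5, 0.6, 0.5

Z = (rng.uniform(size=n) < Q).astype(float)
D = Z * (rng.uniform(size=n) < P)      # D = 1 only when Z = 1, w.p. P

Y0 = mu0 + rng.normal(0.0, 1.0, n)
Y1 = mu1 + alpha * Z + rng.normal(0.0, 1.0, n)   # U1 depends on Z
Y = np.where(D == 1, Y1, Y0)

# Population IV (Wald) ratio computed from the two instrument values.
wald = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (D[Z == 1].mean() - D[Z == 0].mean())
tt = (Y1 - Y0)[D == 1].mean()          # Treatment on the Treated
```

Both quantities converge to $\mu_1 - \mu_0 + \alpha = 1.5$ even though $U_1$ is not independent of $Z$: condition (C-1-b) holds, and IV identifies Treatment on the Treated.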

The confusion in their discussion of "Result 2" arises because in all of their work, including their coauthored papers with Rubin, they assume full independence of $(U_0, U_1)$ and $Z$ in defining instrumental variables. Such conditions are overly strong. For estimating mean parameters, only mean-independence of the appropriate error term is required. Condition (C-1-b) does not require that either $U_1$ or $U_0$ be independent, or even mean independent, of $Z$. It only requires that a particular combination of the $U_0$, $U_1$ that constitute the error term have zero conditional mean:

$$\text{(C-1-b)} \qquad E\big(U_0 + D(U_1 - U_0 - E(U_1 - U_0 \mid D = 1)) \mid Z\big) = 0,$$

so the error term does not depend on Z.

Their objection to my condition is semantic and not substantive. In their example, the $Z$ dependence coming through $DU_1$ is cancelled by the $Z$ dependence coming through $D\,E(U_1 \mid D = 1)$, so $E(DU_1 \mid Z) = E\big(D\,E(U_1 \mid D = 1) \mid Z\big)$. It is an advantage of my more general approach that overly strong conditions like $(U_0, U_1)$ being independent of $Z$ are not required to identify parameters like Treatment on the Treated.

Parenthetically, in footnote 4 of my 1997 paper I note that if X variables are introduced as direct arguments of outcome equations, the method of IV does not require that $E(U_0 \mid X, Z) = 0$ or $E(U_1 \mid X, Z) = 0$. It only requires that, for Treatment on the Treated,

$$E\big(U_0 + D(U_1 - U_0 - E(U_1 - U_0 \mid D = 1, X)) \mid X, Z\big) = M(X).$$

For each fixed X, we can difference $E(Y \mid Z, X)$ for distinct Z values, and the $M(X)$ differences out. Heckman and Smith (1998, p. 264) provide an extensive discussion of this condition. See also Heckman, Lalonde, and Smith (1999).
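The differencing argument can be illustrated with a toy numerical check. All functional forms below are hypothetical, chosen only to exhibit the cancellation: an X-dependent term M(X) enters the conditional mean of Y, but it drops out of the IV ratio formed from two Z values at fixed X.

```python
# All functions below are hypothetical; only their structure matters.
def M(x):                      # X-dependent error mean permitted by the condition
    return x ** 2 - 3.0 * x

def p(x, z):                   # propensity Pr(D = 1 | X = x, Z = z)
    return 0.2 + 0.3 * z + 0.1 * x

def tt(x):                     # Treatment on the Treated at X = x
    return 1.0 + 0.5 * x

def E_Y(x, z):
    # E(Y | X = x, Z = z) = intercept + TT(x) * Pr(D = 1 | x, z) + M(x)
    return 0.5 + tt(x) * p(x, z) + M(x)

def iv_at(x, z1, z0):
    """IV ratio at fixed X: M(x) cancels in the numerator difference."""
    return (E_Y(x, z1) - E_Y(x, z0)) / (p(x, z1) - p(x, z0))
```

For every x, `iv_at(x, 1, 0)` returns exactly `tt(x)`, whatever M(x) happens to be.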

These points illustrate a central theme in all of my work on program evaluation since my early papers with Robb. When responses to treatment are heterogeneous, standard econometric methods have to be modified, and parameters have to be clearly defined. The case when responses are heterogeneous but agents do not participate in the program on the basis of the heterogeneity is much simpler and justifies the application of familiar methods. The method of instrumental variables is especially sensitive to the assumption of lack of information by agents. Robb and I use this sensitivity to propose an econometric test that can be used to determine if agents act on unobserved components of gains in participating in programs (1985, pp. 195-97). Robinson (1989) implements the test.

II. The Meaning of A Causal Parameter

A major source of confusion that arises in the work of Angrist and Imbens is the definition of a causal parameter. They ignore an entire body of work in econometrics and economics that defines a causal parameter in the ceteris paribus--all other things equal (or set at specified values)--sense that economists have used since the time of Marshall (1920). Thus a ceteris paribus effect of price on demand varies price and holds constant all other factors, both observed and unobserved.

A causal parameter is critically dependent for its definition on what is held constant. Structural economic models are causal models that recognize the constraints arising from behavioral relationships and feasibility requirements, including the possibility of interdependence among causes. It may not be possible to vary one variable without changing another one. For example, if we hold the income of the consumer constant, an increase in the consumption of one good can only come at the cost of reducing the consumption of at least one other good. Angrist and Imbens's statement that structural models are not necessarily causal models (1995, p. 433) and that the Rubin (1978) model is the only valid causal model has created confusion in the literature. Economists don't need the Rubin (1978) model. Our structural models already improve on it by defining causal parameters that account for behavioral constraints and feasibility constraints. Structural models clarify what can and cannot be held constant when some variable or parameter is varied. [1]

A structural economic model is a model of potential outcomes where the economic assumptions of the model are made explicit. Thus, for example, a structural model of labor supply writes hours of work, h, as a function of wages, w, unearned income, e, and unobservables u:

$$h = h(w, e, u).$$

Holding e and u fixed and varying w defines the causal and structural effect of w. This is a model of potential outcomes for h as a function of w,e,u. Ceteris paribus variations in w define different potential outcomes. Elsewhere (Heckman 2000), I discuss causal parameters in economics and the critical role of economic theory in defining economic causal parameters.
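As a sketch, the ceteris paribus effect can be read directly off such a function. The functional form below is invented purely for illustration:

```python
# Hypothetical labor-supply function; coefficients are illustrative only.
def h(w, e, u):
    """Hours of work given wage w, unearned income e, unobservable u."""
    return 20.0 + 5.0 * w - 0.1 * e + u

# Causal (ceteris paribus) effect of a unit wage increase:
# vary w while holding e and u fixed at specified values.
e0, u0 = 100.0, 2.0
wage_effect = h(11.0, e0, u0) - h(10.0, e0, u0)
```

Here the structural wage effect is the coefficient 5.0; an uncontrolled comparison in which e or u co-moved with w would not recover it.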

III. Understanding The Limitations of The Data

Angrist and Imbens combine their defense of instrumental variables with an entirely different issue: understanding what can be identified from the available data. It is a great virtue of the LATE parameter that it makes the investigator stick to the data at hand, and separate out the aspects of an estimation that require out of sample extrapolation or theorizing from aspects of an estimation that are based on observable data.

However, this very valid and useful principle is completely distinct from the question of which particular estimator to use to impose the available information on the unrestricted data. In the standard simultaneous equations model, the reduced form contains the sample information and we obtain the system (or structural) parameters by imposing restrictions on it. A variety of different estimators can be used to impose the out-of-sample information that an analyst finds comfort in using.

More recent prototypes for this separation of conjectural from factual information are studies by Smith and Welch (1986), Glynn, Laird, and Rubin (1986), Holland (1986), and Rosenbaum (1988, 1991, 1995), who analyze a standard selection problem. Smith and Welch consider identification of means. Glynn, Laird, and Rubin, as well as Holland and Rosenbaum, consider identification of entire distributions.

Let $f(X \mid D = 1)$ be the density of outcomes (for example, wages) for persons who work (for example, $D = 1$ corresponds to work). Suppose that we know $\Pr(D = 1 \mid Z)$ and hence $\Pr(D = 0 \mid Z)$. Missing is $f(X \mid D = 0)$ (for example, wages of nonworkers). In order to estimate $E(X \mid Z)$, Smith and Welch (1986) use the law of iterated expectations to obtain

$$E(X \mid Z) = E(X \mid D = 1, Z)\Pr(D = 1 \mid Z) + E(X \mid D = 0, Z)\Pr(D = 0 \mid Z).$$

To estimate the left hand side of this expression, it is necessary to obtain information on the missing component $E(X \mid D = 0, Z)$. Smith and Welch propose and implement bounds on $E(X \mid D = 0, Z)$, for example,

$$X_L \leq E(X \mid D = 0, Z) \leq X^U$$

where $X^U$ is an upper bound and $X_L$ is a lower bound. Using this information they construct the bounds

$$E(X \mid D = 1, Z)\Pr(D = 1 \mid Z) + X_L \Pr(D = 0 \mid Z) \leq E(X \mid Z) \leq E(X \mid D = 1, Z)\Pr(D = 1 \mid Z) + X^U \Pr(D = 0 \mid Z).$$

By doing a sensitivity analysis, they produce a range of values for $E(X \mid Z)$ that are explicitly dependent on the range of values assumed for $E(X \mid D = 0, Z)$. Glynn, Laird, and Rubin (1986) present a sensitivity analysis for distributions using Bayesian methods under a variety of different assumptions. Holland (1986) and Rosenbaum (1988, 1991, and 1995) consider more classical sensitivity analyses varying the ranges of parameters of models. The key idea in LATE, the Smith-Welch bounds, and the Bayesian and classical sensitivity analyses is to clearly separate what is known from what is conjectured about the data. That is a very valuable principle that is conceptually distinct from the issue of whether or not to use IV, although LATE implements this principle.
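The Smith-Welch bounds are simple enough to express directly. The sketch below, with hypothetical numbers, computes the interval for $E(X \mid Z)$ implied by an assumed range $[X_L, X^U]$ for the unobserved mean:

```python
def bounds_on_mean(e_x_d1, pr_d1, x_lo, x_hi):
    """Bounds on E(X | Z) when E(X | D = 0, Z) is unobserved.

    e_x_d1 -- observed mean E(X | D = 1, Z), e.g. wages of workers
    pr_d1  -- observed Pr(D = 1 | Z)
    x_lo, x_hi -- assumed range X_L <= E(X | D = 0, Z) <= X^U
    """
    pr_d0 = 1.0 - pr_d1
    lower = e_x_d1 * pr_d1 + x_lo * pr_d0
    upper = e_x_d1 * pr_d1 + x_hi * pr_d0
    return lower, upper

# Hypothetical example: 80 percent work, with mean wage 10; nonworkers'
# mean wage is only known to lie in [0, 10].
lo, hi = bounds_on_mean(10.0, 0.8, 0.0, 10.0)
```

The resulting interval [8, 10] tightens as $\Pr(D = 1 \mid Z)$ rises, which is the sense in which the bounds stick to the data at hand.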

What we can get from the available data may not always answer interesting questions. What is remarkable is that LATE sometimes answers a well-posed economic question, as is shown in my 1997 article and my papers with Smith (1998) and Vytlacil (1999a).

IV. Matters of Attribution

Angrist and Imbens hint that many of the ideas in my paper are taken from other sources. It is certainly true that many of these ideas appear in Heckman and Robb (1985, 1986) and that some of these ideas have been "rediscovered" in later work. [2] Let me comment on their principal insinuations.

(1) I welcome their references to the work of Rubin, Peters, and Belson, all of whom use the concept of Treatment on the Treated in their analyses. None of these authors presents the comprehensive discussion of alternative parameters that is featured in my papers with Robb (1985, p. 161; 1986, pp. 72-77). We were unaware of these rather obscure papers in the psychology literature when we wrote our own paper and defined a variety of parameters of interest.

A more glaring omission from their list is the research of H. G. Lewis (1963) who investigated the impact of unionism on the unionized and clearly defined the parameters Treatment on the Treated and the Average Treatment Effect in both partial and general equilibrium settings. [3] Recent joint work with Lochner and Taber (1998) builds on Lewis' research and defines and estimates both partial and general equilibrium versions of Treatment on the Treated, ATE, and LATE. Parenthetically, we find that in a dynamic general equilibrium model of skill formation, the monotonicity assumptions of LATE are violated even when they are valid in a partial equilibrium model. We define three LATE-like effects: Direct and Reverse "LATE" and Total LATE as well as the marginal treatment effect for persons indifferent between two choices, what Vytlacil and I call the LIV parameter (Local Instrumental Variable).

(2) The structural model of schooling that I discuss in Section VII of my JHR paper owes much more to Sherwin Rosen (1977) and Becker and Chiswick (1966) than it does to David Card (1995). Rosen introduced the Wicksell (1934, pp. 178-83) schooling or "tree cutting" model into the analysis of schooling decisions. The relevance of this model to schooling choices was briefly discussed by Becker (1975, p. 99, footnote 82). I assumed that Rosen's work and that of Becker and Chiswick were well known. I use it to define the various treatment effect parameters. Card uses a somewhat similar model to remind readers of the point, already made in Heckman and Robb (1985, pp. 195-97), that instrumental variables applied to a random coefficient model of schooling, or some other variable, where the coefficient is correlated with schooling, are inconsistent in general, except under the conditions that we specify in our work and I restate in my JHR paper. Different instruments define different parameters in the general case.

Card presents a very interesting summary of the empirical evidence on the effect of different instruments on the estimated return to schooling. However, he is not clear about what parameter he is estimating. As I read his paper, he is estimating the Average Treatment Effect. Heckman and Honore (1990) present identification conditions for a Roy version of the schooling model when schooling is discrete valued and the coefficient on schooling is correlated with schooling. Those conditions are extended to models with more general decision structures in Heckman (1990) and Heckman and Smith (1997, 1998).

Heckman and Vytlacil (1998) discuss the random coefficient model of schooling in a systematic fashion, and analyze the various treatment effects for the Card model and the Rosen model. A major point that we make is that in Card's model, unlike the Wicksell-Rosen model, foregone earnings are implicitly assumed not to be a cost of schooling. They are the main cost in the Rosen model. We analyze a model with both direct costs and foregone earnings costs. Ignoring the major empirical component of the costs of schooling simplifies the econometrics but guts the economics from the model and makes it a much less interesting empirical framework for analyzing the returns to schooling.

(3) In focusing on Angrist's military draft paper in my 1997 paper, I may have personalized the discussion too much, something I regret. I should have also discussed the first paper that used the draft lottery as an instrument. Hearst, Newman, and Hulley (1986) use military draft eligibility as determined by a lottery as an instrument for the effect of military service on the post-military mortality of Vietnam veterans. Their estimator (equation 3 in their paper) can be viewed as an IV equation and identifies "treatment" only under the assumption that treatment effects are homogeneous (at least ex ante) among people who were exempt from the draft but volunteered anyway, and among those who were draft-eligible and were drafted. (Similarly, there must be (ex ante) homogeneity in the "treatment" administered to draft-eligible persons who were inducted and persons who were draft-exempt but nonetheless chose to serve.) The clearest interpretation of their work is that they implicitly assume treatment homogeneity, a sufficient condition for the validity of (C-1-b).

(4) It is difficult for me to discuss the unpublished papers that I have not seen that are referred to by Angrist and Imbens in their comment. My own work with Robb on the limitations of IV methods in models with utility maximizing heterogenous agents was done in the early 1980s. My paper with Vytlacil (1998) discusses these issues further.

This research was supported by NSF 97-09-873, NICHD R01-HD32058-01A1, NICHD R01-34598-03, and a grant from the American Bar Foundation.

(1.) Galles and Pearl (1998) prove the equivalence of the Rubin model and the recursive structural economic model of simultaneous equations theory.

(2.) For example, Heckman and Robb (1986, Section X, 100-104) discuss how the propensity score (the probability of selection into a program) is used differently in matching and sample selection models, and present a method for introducing the propensity score or polynomials of propensity scores as regressors to estimate selection models. This is an instance of the method of control functions introduced in their paper. Recent work by Angrist (1995) reproduces these results.

(3.) See Chapter II of his book, especially pages 12, 14, and Section 11.5.

References

Aakvik, A., J. Heckman, and E. Vytlacil. 1998. "Local Instrumental Variables and Latent Variable Models For Estimating Treatment Effects." Unpublished manuscript, University of Chicago.

Angrist, J. 1995. "Conditioning on the Probability of Selection to Control for Selection Bias." NBER Technical Working Paper #181, June.

Angrist, J., and G. Imbens. 1995. "Two Stage Least Squares Estimation of Average Causal Effects in Models with Variable Treatment Intensity." Journal of The American Statistical Association 90(430):431-42.

Becker, G. 1975. "Human Capital and The Personal Distribution of Income: An Analytical Approach" (W. S. Woytinsky Lecture). In Human Capital, Second Edition. New York: Columbia University Press.

Becker, G., and B. Chiswick. 1966. "Education and The Distribution of Earnings." American Economic Review 56:358-69.

Card, D. 1995. "Earnings, Ability and Schooling Revisited." In Research in Labor Economics, Vol. 14, ed. S. Polachek. Greenwich, Conn.: JAI Press.

Galles, David, and Judea Pearl. 1998. "An Axiomatic Characterization of Counterfactuals," Foundations of Science 3:151-82.

Glynn, R. J., N. Laird, and D. Rubin. 1986. "Selection Modeling vs. Mixture Modeling with Nonignorable Response." In Drawing Inferences From Self-Selected Samples, ed. H. Wainer. New York: Springer-Verlag.

Hearst, N., T. Newman, and S. Hulley. 1986. "Delayed Effects of the Military Draft on Mortality: A Randomized Natural Experiment." New England Journal of Medicine March 6, 620-23.

Heckman, J. 1990. "Varieties of Selection Bias." American Economic Review 80:313-18.

_____. 1992. "Randomization and Social Policy Evaluation." In Evaluating Welfare and Training Programs, ed. C. F. Manski and I. Garfinkel, 201-30. Cambridge, Mass.: Harvard University Press.

_____. 1993. "Assessing The Case For Randomized Evaluation of Social Programs." In Measuring Labour Market Measures, ed.. K. Jensen and P. K. Madsen, 35-96. Copenhagen, Denmark: Ministry of Labour.

_____. 1997. "Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations." Journal of Human Resources 32(2):441-62.

_____. 2000. "Causal Parameters in Economics: A Twentieth Century Retrospective." Forthcoming.

Heckman, J., and B. Honore. 1990. "The Empirical Content of the Roy Model." Econometrica 58(5): 1121-49.

Heckman, J., R. Lalonde, and J. Smith. 1999. "The Economics and Econometrics of Active Labor Market Programs." In Handbook of Labor Economics, Vol. 3, ed. O. Ashenfelter and D. Card, Chapter 31. Amsterdam: Elsevier.

Heckman, J., L. Lochner, and C. Taber. 1998. "General Equilibrium Treatment Effects: A Study of Tuition Policy." American Economic Review 88(2):381-86.

Heckman, J., and R. Robb. 1985. "Alternative Methods for Estimating The Impact of Interventions." In Longitudinal Analysis of Labor Market Data, ed. J. Heckman and B. Singer, 156-245. New York: Cambridge University Press.

Heckman, J., and R. Robb. 1986. "Alternative Methods For Solving The Problem of Selection Bias in Evaluating The Impact of Treatments on Outcomes." In Drawing Inference From Self-Selected Samples, ed. Howard Wainer, 63-107. New York: Springer-Verlag.

Heckman, J., and J. Smith. 1998. "Evaluating the Welfare State." In Econometrics and Economics in the 20th Century: The Ragnar Frisch Centenary, ed. S. Strom, 241-318. New York: Cambridge University Press for Econometric Society Monograph Series, Monograph 31.

Heckman, J., and E. Vytlacil. 1998. "Instrumental Variables Methods for the Correlated Random Coefficient Model: Estimating The Average Rate of Return to Schooling When The Return Is Correlated with Schooling." Journal of Human Resources 33(4):974-87.

_____. 1999a. "Local Instrumental Variables." Unpublished manuscript, University of Chicago.

_____. 1999b. "The Relationship Between Treatment Parameters Within A Latent Variable Framework," Forthcoming Economics Letters.

_____. 1999c. "Local Instrumental Variables and Latent Variable Models for Identifying and Bounding Treatment Effects." Proceedings of The National Academy of Sciences, Vol. 96, 4730-34.

Holland, P. W. 1986. "A Comment on Remarks By Rubin and Hartigan." In Drawing Inferences From Self-Selected Samples, ed. H. Wainer, 149-52. New York: Springer-Verlag.

Lewis, H. G. 1963. Unionism and Relative Wages. Chicago: University of Chicago Press.

Marshall, A. 1920. Principles of Economics, Ninth Edition, p. 36. London: Macmillan.

Robinson, C. 1989. "The Joint Determination of Union Status and Union Wage Effects: Some Alternative Models." Journal of Political Economy 97(3):639-67.

Rosen, S. 1977. "Human Capital: A Survey of Empirical Research." In Research in Labor Economics, Vol. 1, ed. R. Ehrenberg.

Rosenbaum, P. 1988. "Sensitivity Analysis For Matching With Multiple Controls." Biometrika 75:577-81.

_____. 1991. "Sensitivity Analysis For Matched Case-Control Studies." Biometrics 47: 87-100.

_____. 1995. Observational Studies, Chapter 4. New York: Springer-Verlag.

Roy, A. 1951. "Some Thoughts on the Distribution of Earnings." Oxford Economic Papers 3:135-46.

Rubin, D. 1978. "Bayesian Inference for Causal Effects: The Role of Randomization." Annals of Statistics 6:34-58.

Smith, J., and F. Welch. 1986. "Closing the Gap: Forty Years of Economic Progress for Blacks." Santa Monica, Calif.: Rand Corporation.

White, H. 1984. Asymptotic Theory For Econometricians. New York: Academic Press.

Wicksell, K. 1934. Lectures on Political Economy, Vol. 1, General Theory, translated by E. Classen. London: Routledge and Kegan Paul, Ltd.

Author: Heckman, James J.

Publication: Journal of Human Resources

Date: Sep 22, 1999