The Chances of Explanation.
Until very recently, the philosophical literature on probabilistic causation has developed in almost complete isolation from the techniques for making causal inferences that go under the general rubric of causal modeling: regression analysis, path analysis, factor analysis, and so forth. In my view, this isolation has been extremely unfortunate. Causal modeling techniques raise a number of important philosophical issues in their own right and are the natural place to look in scientific practice for disciplined and mathematically explicit discussion of the connection between causal claims and information about statistical relationships and probabilities. In taking up a number of problems concerning the interpretation of such techniques, and in drawing attention to the connections between these problems and the philosophical literature on causation, Humphreys, along with several other philosophers,(1) has taken an important step toward ending this isolation.
Humphreys' discussion of linear models involves two related enterprises. First, he attempts to motivate standard linear regression models by showing how these models follow from certain very general philosophical assumptions about causality and explanation. Second, he attempts to connect a special case of these regression models with the models of probabilistic causation that are standard in the philosophical literature.
One of Humphreys' guiding ideas throughout CE is that a cause must satisfy an invariance or unconditionality requirement--it must make the 'same' contribution to its effect over some range of circumstances. As we shall see, in the case of the causal modeling techniques Humphreys interprets this idea about invariance as requiring that the functional forms representing causal relationships be additive. In the case of ordinary probabilistic causation, the invariance requirement is cashed out as the claim that if B is a 'direct contributing cause' to A, B must, among other things, 'increase the chance of A in all circumstances Z physically compatible with A and B'. Here chance is physical probability, not relative frequency, and the increase is relative to some suitably specified 'neutral state' meant to represent the case in which B is absent or has zero value.
In his discussion of linear regression, Humphreys assumes, as a point of departure, (1) that some dependent variable Y can be represented as the sum of a deterministic functional F of the variables (X_1 . . . X_n) and a stochastic disturbance term U. He also assumes that the context is one in which each of the independent variables X_i is allowed to vary, one at a time, with the other independent variables held constant. He then imposes various philosophically motivated restrictions on the form of this functional. Humphreys argues that the natural way of representing or implementing the idea that a cause must make an invariant contribution to its effect within the context of (1) is in terms of the requirement that 'the contribution of [each] individual variable X_i must be the same at whatever level the other variables are held constant'. It thus follows that (2) the functional F must be additive in the variables X_1 . . . X_n. Now in much of the causal modeling literature it is assumed that relationships are not just additive, but linear. Unlike the assumption of additivity, which is built into the notion of causality via the invariance requirement, the assumption of linearity cannot, Humphreys thinks, be 'motivated on purely general causal grounds'. Non-linear (but not non-additive) quantitative causal relationships are possible, and because of this (2a) the assumption of linearity, which Humphreys is willing to make in order to recover the standard apparatus of causal models, has a 'conventional role' in his approach. Finally, Humphreys also argues that the idea that 'scientific theories should be such as to account for the regularity of the phenomena as far as possible' yields the requirement that (3) F should take a form 'which [minimizes] the contribution of the stochastic element to changes in Y'.
Humphreys then shows that it follows from these assumptions (1, 2a, and 3) that F will correspond to or have (something like) the standard features of a simple linear regression equation relating Y to X_1 . . . X_n--that is, that the functional F will give the expectation of Y conditional on X_1 . . . X_n and will be of the form
(4) E(Y/X_1 . . . X_n) = Σ_i b_i X_i + a
where (5) b_i = cov(Y, X_i)/var(X_i)
According to Humphreys, (5) 'satisfies a standard definition for regression coefficients'.
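Formula (5) is easy to check numerically. The following Python sketch (my own illustration, not from CE; the data and structural coefficients are invented) generates Y additively from two uncorrelated regressors plus a zero-mean disturbance, and recovers the coefficients from sample covariances and variances:

```python
import random
import statistics

# Hypothetical data: Y = 2.0*X1 - 1.0*X2 + U, with X1, X2 uncorrelated
# and U a zero-mean disturbance, as in assumption (1) of the text.
random.seed(0)
n = 50_000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 0.5) for _ in range(n)]
y = [2.0 * a - 1.0 * b + e for a, b, e in zip(x1, x2, u)]

def cov(v, w):
    mv, mw = statistics.fmean(v), statistics.fmean(w)
    return statistics.fmean([(a - mv) * (b - mw) for a, b in zip(v, w)])

# Formula (5): b_i = cov(Y, X_i) / var(X_i); valid here because the
# regressors are uncorrelated, matching the context of Humphreys' derivation.
b1 = cov(y, x1) / cov(x1, x1)
b2 = cov(y, x2) / cov(x2, x2)
print(round(b1, 2), round(b2, 2))  # close to the structural values 2.0 and -1.0
```

Because the regressors are uncorrelated, each coefficient can be computed one variable at a time; this restriction is lifted in the discussion of correlated regressors below.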
What is the connection between such regression models and probabilistic theories of causation? The probabilistic theories that are most familiar to philosophers in effect assume dichotomous or two-valued variables, corresponding to the occurrence or non-occurrence of some event of interest. By contrast, in the case of regression models one usually assumes that one is dealing with variables that are measurable on an interval scale. Humphreys attempts to connect his treatment of linear regression with probabilistic theories by arguing that if we take I_A, I_B to be indicator variables for the dichotomous variables A and B, and write down a linear regression model relating I_A and I_B, then the regression coefficient of I_A on I_B will be given by P(A/B) - P(A/~B)--the 'relevance difference' that the occurrence of B makes to the probability of A. This is then used to motivate the idea mentioned earlier, that in the use of models with dichotomous variables it is the change in the probability of A as one moves from a case in which B does not occur to a case in which B occurs which is 'central to causation'. B's causing A just is a matter of B's increasing the probability of A.
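The equivalence relied on here can be verified numerically. In the following Python sketch (hypothetical probabilities, invented for illustration), the sample regression coefficient of the indicator I_A on the indicator I_B coincides with the sample estimate of the relevance difference that B's occurrence makes to the probability of A:

```python
import random

# Hypothetical two-valued example in which B raises the chance of A.
random.seed(1)
p_b, p_a_given_b, p_a_given_not_b = 0.4, 0.7, 0.2
n = 200_000
ib = [1 if random.random() < p_b else 0 for _ in range(n)]
ia = [1 if random.random() < (p_a_given_b if b else p_a_given_not_b) else 0
      for b in ib]

mean = lambda v: sum(v) / len(v)
mb, ma = mean(ib), mean(ia)
cov_ab = mean([(a - ma) * (b - mb) for a, b in zip(ia, ib)])
var_b = mean([(b - mb) ** 2 for b in ib])
slope = cov_ab / var_b  # regression coefficient of I_A on I_B

# Sample estimates of P(A|B) and P(A|not-B):
p_hat_b = mean([a for a, b in zip(ia, ib) if b == 1])
p_hat_not_b = mean([a for a, b in zip(ia, ib) if b == 0])
print(round(slope, 3), round(p_hat_b - p_hat_not_b, 3))
```

For a binary regressor the OLS slope is exactly the difference between the two conditional sample means, so the two printed numbers agree up to floating-point error, and both estimate the relevance difference (0.5 with these invented probabilities).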
I strongly recommend CE as a lucid and informed introduction to the literature on causal modeling techniques. None the less, I found some of the details of Humphreys' treatment problematic or at least in need of further discussion. To begin with the most important point: I think Humphreys is quite correct to emphasize the deep connection between the notion of causation and the notion of invariance and to think that this is a connection which deserves more attention from philosophers than it has hitherto received. But my own inclination is to hold that the sort of invariance requirement that is most relevant to understanding causation will take a different form than that emphasized by Humphreys and will be less closely tied to satisfaction of an additivity requirement than he supposes. One of the merits of Humphreys' discussion is that it draws attention to different possible ways of understanding invariance and allows us to explore different forms that the connection between causation and invariance might take.
In my view, the notion of invariance which is most relevant to understanding causation is just the notion which I think is in fact most emphasized in econometrics and the more sophisticated parts of the causal modeling literature--invariant relationships are those relationships which will remain stable or continue to hold under some class of changes or interventions or permutations in initial or background conditions. For example, the relationship expressed by the ideal gas law is invariant or nearly so in this sense--it continues to hold (approximately) if one changes the temperature, pressure, or volume within a certain range, if one changes the spatial position of the gas, or the color of its container, and so on. Relationships having this general character are described in the causal modeling literature by a variety of different terms--they are said to be 'structural' or 'autonomous' relationships, or to satisfy an 'exogeneity' condition.(2) It is these relationships which (or so it is thought) represent causal or nomological connections. By contrast, relationships which hold, but not invariantly--i.e. relationships which can be disrupted by interventions or changes in background circumstances from some appropriate class of interest--are regarded as only accidentally true. As the econometrics literature emphasizes, this conception links invariant relationships (and hence causal relationships) to those relationships which might in principle be used for purposes of manipulation or control: if the relationship between C and E will remain invariant under some class of changes then we may be able to avail ourselves of the stability of this relationship to produce changes in E by producing changes in C. If, on the contrary, the result of changing C is simply to disturb the previously existing relationship between C and E we will not be able to make use of this relationship to bring about changes in E by manipulating C.(3)
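The contrast between invariant and merely accidental relationships can be given a minimal numerical sketch (my own illustration, not drawn from CE or the econometrics literature): here an observed correlation between C and E is produced entirely by a common cause Z, and it disappears when C is set exogenously, i.e. under intervention:

```python
import random
import statistics

def slope(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

random.seed(2)
n = 100_000

# Observational regime: Z is a common cause of both C and E; C itself
# does nothing to E, so the observed C-E relationship is accidental.
z = [random.gauss(0, 1) for _ in range(n)]
c_obs = [zi + random.gauss(0, 0.3) for zi in z]
e_obs = [2 * zi + random.gauss(0, 0.3) for zi in z]

# Intervention regime: C is set exogenously (its dependence on Z is cut),
# and the rest of the system is left exactly as it was.
z2 = [random.gauss(0, 1) for _ in range(n)]
c_int = [random.gauss(0, 1) for _ in range(n)]
e_int = [2 * zi + random.gauss(0, 0.3) for zi in z2]

b_obs = slope(c_obs, e_obs)  # strong relationship in the observed data
b_int = slope(c_int, e_int)  # vanishes once C is manipulated directly
print(round(b_obs, 2), round(b_int, 2))
```

The observational regression of E on C fits the data well, but because the relationship is not invariant under manipulation of C, one cannot change E by changing C--which is exactly the sense in which it fails to be causal.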
A standard econometric illustration of these ideas involves the so-called Phillips curve, which describes the historically observed inverse relationship or 'trade-off' between unemployment and inflation in many Western economies from the mid-nineteenth to the mid-twentieth centuries. According to some Keynesian economic models, this relationship is at least approximately invariant for many policy interventions of the sort Western governments might undertake. If this is correct, there is a causal or lawful relationship and not just an accidental correlation between these variables, and governments may be able to take advantage of this relationship to produce changes in one of these variables by producing changes in the other. For example, governments may be able to lower the unemployment rate by introducing measures which lead to a higher rate of inflation. The burden of an influential criticism of this claim by Lucas (the so-called Lucas critique) is that, contrary to these Keynesian models, it follows from fundamental microeconomic assumptions that the relationship discovered by Phillips is not invariant under such policy interventions: at least in the moderate long run, the result of introducing changes in the inflation rate will not be to change the unemployment rate, but rather simply to produce changes in the relationship specified in the Phillips curve. Arguably, this is just what happened during the stagflation of the 1970s. According to Lucas, the Phillips relationship is thus at best 'accidentally' true; it is not a causal or lawful relationship. It is not the sort of relationship which could be exploited for purposes of manipulation and control and does not support counterfactual claims about what would happen to the unemployment rate under hypothetical alterations of the inflation rate.
This broad notion of invariance (as stability of a relationship under some class of interventions) seems, as I have said, to be the most promising place to look to develop the idea that there is a philosophically interesting connection between causation and invariance. What is the relationship between this notion and Humphreys' additivity requirement? As I understand it, Humphreys' requirement is really a special case of the broad notion of invariance--what Humphreys in effect demands is that causal relationships be representable by an additive function that remains invariant under changes in the values of the various independent variables. When understood in this way, Humphreys' requirement seems unnecessarily narrow. Invariance rather than additivity captures what is crucial to causality and a relationship of any functional form whatsoever, whether or not it is additive, can turn out to be highly invariant if it remains stable under some large class of changes or interventions. Indeed, many of the relationships which figure prominently in economics are not additive and are not readily transformable into relationships having this character, but are none the less relationships which economists hope are roughly invariant under some non-trivial class of changes and which are thus taken to represent causal relationships. There are also many familiar natural scientific examples of laws (Maxwell's equations, the gravitational inverse square law) that describe relationships that are invariant under very large classes of changes in initial conditions, but which are not additive in form, at least under the most natural representation of their variables. 
While it is certainly true there is a (in my view, unfortunate) tendency within much of the causal modeling literature to focus almost exclusively on additive (and, indeed, linear) models, this seems to derive mainly from considerations having to do with computational tractability and from a preference for certain simple statistical techniques and perhaps also from a desire to impose an artificial limit on the space of alternative models among which one needs to search rather than from any assumption that additive relationships are especially suitable for expressing causal connections.(4)
While Humphreys does not suggest otherwise, it is also worth underscoring the point that the mere fact that a relationship is additive in form is not a sufficient condition for it to be invariant in the sense under discussion. The notion of invariance found in the causal modeling literature is a modal or counterfactual notion--it has to do not just with whether a relationship holds under the range of circumstances that actually occur, but with whether that relationship would continue to hold if, contrary to fact, various possible changes or interventions were to occur.(5) It is thus perfectly possible for a relationship to be additive and to fit what has been so far observed very accurately and yet for the relationship to be only accidentally true and thus non-invariant--not such that it would continue to hold if certain changes or interventions were to occur.(6) Indeed, just this possibility is a central concern whenever one constructs an additive or linear model that one wants to interpret causally. Finding a model (or indeed lots of models) which fits the observed data fairly well and which is additive or linear is often easy, but success in this enterprise does not at all show that the relationships postulated in the model are structural or invariant in the sense described above and hence interpretable as reflecting genuine causal connections.(7) I shall return to this point below.
A second general way in which Humphreys' discussion seems problematic has to do with his treatment of the so-called error term U. As we have noted, Humphreys requires that the functional F be such as to minimize this quantity. The more usual approach within the causal modeling literature imposes no such requirement (at least when understood as a requirement that a regression equation must satisfy if it is to have a causal interpretation). Instead, the usual assumptions about the error term are assumptions about its distribution. In the simplest case involving linear regression models, it is standard to assume that the error term satisfies the following conditions.(8)
(a) zero mean: E(u_i) = 0 for all i,
(b) common variance: V(u_i) = σ² for all i,
(c) lack of correlation between (or statistical independence of) the error terms u_i, u_j for i ≠ j,
(d) statistical independence of the error terms u_i and the independent variables x_i.
In my view, these distributional assumptions need not be thought of as resting on the idea that (as Humphreys suggests) it is desirable to minimize the role of chance in constructing explanations. Instead the most important causal motivation underlying the assumptions is roughly this: taken together they help to ensure that no factors which causally affect the dependent variable and which are correlated with the included independent variables have been omitted from the equation. If distributional assumptions like (a)-(d) are violated, then it is likely that some of the causal influence of the omitted variables will be mistakenly assigned to the variables that are included in the equation. Slightly more technically, the usual ordinary least squares (OLS) estimators for the regression coefficients will turn out to be biased in the sense that they need not yield values which reflect the real causal contribution of the various independent variables to the dependent variable. Satisfaction of Humphreys' requirement that the contribution of the error term be minimized is neither a necessary nor a sufficient condition for the avoidance of such bias.
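The omitted-variable point can be made concrete with a small simulation (the structural coefficients are hypothetical, chosen only for illustration): when a cause W of Y that is correlated with the included regressor X is left out of the equation, assumption (d) fails for the short regression, and the regression of Y on X alone absorbs part of W's influence:

```python
import random
import statistics

random.seed(3)
n = 100_000

# Hypothetical structural model: Y = 1.0*X + 2.0*W + noise, where W
# causally affects Y and is correlated with the included regressor X.
# Omitting W puts it into the error term, which is then correlated with X.
x = [random.gauss(0, 1) for _ in range(n)]
w = [0.8 * xi + random.gauss(0, 0.6) for xi in x]
y = [1.0 * xi + 2.0 * wi + random.gauss(0, 0.5)
     for xi, wi in zip(x, w)]

def slope(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Regressing Y on X alone mistakenly assigns part of W's causal
# influence to X: the estimate converges to 1.0 + 2.0*0.8 = 2.6,
# not to the structural coefficient 1.0.
b_short = slope(x, y)
print(round(b_short, 2))
```

Nothing about the fit signals the problem; the short regression fits the data well and minimizes its own residual, which is why minimizing the error term is no protection against this kind of bias.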
Humphreys does mention such distributional assumptions in the course of his treatment of linear models in Chapter 2 of CE, and discusses them at much greater length in an appendix to CE. In particular, his Assumption B is equivalent, or nearly so, to assumptions (c) and (d) above. Moreover, as he also notes, assumption (d) is automatically satisfied in connection with the derivation under discussion because it is assumed that the context is relevantly like an experimental one in which all the independent variables except some single variable X_i of interest are held constant. I therefore do not in any way mean to suggest that Humphreys is unaware of the role played by assumptions like (a)-(d) in the interpretation of causal models, or that he does not accept some such set of assumptions. My worry about this portion of Humphreys' discussion is rather this: first, it seems to me that something like the distributional assumptions (a)-(d) concerning the error term (or some more generalized analogue of these) are required if one is to estimate reliably and interpret causally a linear model, and that this fact does not emerge as clearly as it should in CE. Indeed, unless such assumptions are satisfied, I am puzzled about what entitles one to take the regression equation (4) and the expression (5) for the regression coefficients which Humphreys derives to reflect structural, causal facts about the impact of the independent variables on the dependent variable. As noted above, given a dependent variable and an arbitrary set of independent variables we can always find a linear function relating the two which minimizes U and satisfies (4) and (5). But finding such a function is just an exercise in curve-fitting; it does not show that the resulting equation represents a causal relationship in the invariance-linked sense described above.
It doesn't show, for example, that if in (4) one were to produce a change ΔX_i in X_i with everything else held constant, this would produce a change in Y of b_i ΔX_i, where b_i is given by (5). The role of the distributional assumptions (a)-(d) mentioned above is that they are a proper subset of assumptions thought to be sufficient for this causal interpretation of (4) to be correct.(9)
Second, it seems to me that if these distributional assumptions are satisfied, Humphreys' assumption (3) that F should be chosen to minimize the value of U is otiose--it isn't required for the interpretation or estimation of a causal model. Moreover, this is just as well, since it is hard to see what causal motivation there could be for adoption of assumption (3).(10) By contrast, the underlying causal rationale for the distributional assumptions is, as I have suggested, fairly straightforward. My view is thus that a more perspicuous derivation would focus more directly on these distributional assumptions and omit reference to assumption (3).
Finally, and relatedly, the restriction of the derivation to cases in which the other variables besides X_i are held constant represents an important loss of generality. In the sorts of contexts in which causal models are typically used, one assumes instead that all of the independent variables are free to vary and that they may exhibit non-zero correlations with each other. The expression (5) for the regression coefficients that Humphreys derives is correct only in the special circumstance--assumed in Humphreys' derivation--in which the independent variables are all uncorrelated. In the more usual case in which the independent variables X_1 . . . X_n are correlated, the regression coefficient for X_i will be a function of the covariances between X_i and the other independent variables with which it is correlated (and of their variances), as well as of cov(Y, X_i) and var(X_i). One consequence of the satisfaction of distributional assumptions like (a)-(d) is to allow reliable estimation in the more general case in which there is such correlation.(11)
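A short sketch of this point (invented numbers): when two regressors are correlated, the simple formula cov(Y, X_1)/var(X_1) no longer gives the structural contribution of X_1, though solving the full (here 2x2) system of normal equations does:

```python
import random
import statistics

random.seed(4)
n = 100_000

# Hypothetical structural model: Y = 2.0*X1 + 1.0*X2 + noise,
# with X2 deliberately correlated with X1.
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.7 * a + random.gauss(0, 0.7) for a in x1]
y = [2.0 * a + 1.0 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

def cov(v, w):
    mv, mw = statistics.fmean(v), statistics.fmean(w)
    return statistics.fmean([(a - mv) * (b - mw) for a, b in zip(v, w)])

# Formula (5) applied to X1 alone, valid only for uncorrelated regressors:
# converges to 2.0 + 1.0*0.7 = 2.7 rather than the structural 2.0.
b_simple = cov(y, x1) / cov(x1, x1)

# Multiple regression: solve the 2x2 normal equations for (b1, b2).
s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
s1y, s2y = cov(x1, y), cov(x2, y)
det = s11 * s22 - s12 ** 2
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det
print(round(b_simple, 2), round(b1, 2), round(b2, 2))
```

The multiple-regression coefficients recover the structural values (2.0 and 1.0) because they take the covariance between the regressors into account; the one-variable-at-a-time formula does not.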
I turn next to Humphreys' attempts to extend his regression framework to cases involving dichotomous variables. Unfortunately, there are important technical obstacles to establishing the direct connection Humphreys seeks. While dichotomous independent variables may readily be used in linear models with interval-valued dependent variables, special problems arise when one attempts, as Humphreys does, to use a dichotomous dependent variable in a linear regression model. While the assumption that E(u_i) = 0 can be maintained, the assumption that the u_i have constant variance will be violated--indeed, the variance of the error term will vary systematically with the values of the independent variables. In consequence, OLS estimates of the coefficients, although unbiased, will not be best--that is, they will not have minimal sampling variances. Moreover, estimates of the sampling variances and hypothesis tests based on them will not be correct. More importantly, because the dependent variable is constrained to just two values, the relationship between it and the independent variables may not be approximated very well by a linear relationship.(12) Humphreys acknowledges these difficulties in a footnote (footnote 13, pp. 34-5, referring to comments of mine on an earlier draft of CE), but seems to suggest that these problems can be avoided if (as he intends) one restricts one's attention to cases in which both dependent and independent variables are dichotomous. He also writes that he is concerned to show how regression equations can be given a 'coherent causal interpretation' rather than 'with regression techniques as a method for estimating the parameters in a regression equation' and seems to suggest that the technical difficulties I have mentioned create difficulties for the latter project but not the former.
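The heteroskedasticity point can be made explicit. If Y is dichotomous and E(Y|x) = a + bx = p(x), then conditional on x the error u takes the value 1 - p(x) with probability p(x) and -p(x) with probability 1 - p(x), so Var(u|x) = p(x)(1 - p(x)), which changes with x. A tiny sketch (the coefficients are invented, chosen so that p(x) stays inside (0, 1)):

```python
# Error variance in a linear probability model: conditional on X = x,
# u = Y - p(x) is a two-point variable with variance p(x)(1 - p(x)),
# so the common-variance assumption (b) necessarily fails.
a, b = 0.2, 0.05  # hypothetical intercept and slope
variances = {}
for x in (1, 5, 10):
    p = a + b * x
    variances[x] = p * (1 - p)
    print(x, round(variances[x], 4))
```

The three printed variances (0.1875, 0.2475, 0.21) differ systematically across values of x, which is exactly the violation of assumption (b) described above.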
With regard to the first claim, I am puzzled about what could be meant by the suggestion that the relationship between two variables, both of which can take only two values, is linear. In such a case there will be no intermediate values of either variable that would justify one in regarding any continuous curve (whether a straight line or not) as descriptive of their relationship. By way of contrast, when the dependent variable is dichotomous, but the independent variables interval valued, it is generally supposed that drawing a continuous curve does make sense and in fact in such cases the expected value of the dependent variable will be interpretable as a probability. However, for the reasons described above, it is standard practice in the causal modeling literature to assume that the functional forms relating such variables are nonlinear. For example, a typical approach is to make use of a so-called logit model, in which a logistic curve rather than a linear relationship is fitted to the data (cf. Fox, pp. 302ff.). In any case, dichotomous dependent variables just don't fit comfortably with linear models.
With regard to the second point, I, of course, agree (as my remarks above should make clear) with Humphreys' claim that there is a difference between estimating parameters in a regression equation and interpreting such an equation causally. One can estimate parameters even when no causal interpretation is intended. To do this is to fit a linear relationship to the data without regard to whether the relationship is invariant in the sense described above. But, when causal interpretation is intended, the technical difficulties described above are relevant. In such cases, biased parameter estimates and choice of an inappropriate functional form represent mistakes in causal interpretation.
The technical difficulties under discussion are important not only because they bear on the relationship between linear models and familiar probabilistic theories of causality, but because they also bear on the general issue of how we are to understand invariance. If, as the econometrics literature generally assumes, the preferred functional forms for relating dichotomous dependent variables to interval-valued independent variables are highly nonlinear (and non-additive) and if the assumption that the real relationship between dichotomous dependent and dichotomous independent variables is linear (or additive) makes little sense, then (assuming that we wish to interpret these relationships causally), this provides additional reason for not associating the kind of invariance which is relevant to understanding causation with additivity in the way that Humphreys advocates.
Rather than attempting, as Humphreys does, to connect probabilistic theories of causation involving dichotomous variables with the standard apparatus of linear causal models by treating the former as a special case of the latter, it seems to me more promising to attempt to make the connection by focusing on the notion of structural invariance itself and requiring that some appropriate probabilistic relationship between C and E be invariant. One obvious possibility would be to require that if C causes E, then not only must it be the case that, e.g. (1) P(E/C.K) > P(E/~C.K) or (2) P(E/C.K) > P(E/K) for some set of additional causally relevant factors K, but also that the relationships (1) or (2) must themselves be structurally invariant under some class of changes in initial or background conditions. Alternatively, one might require that the value of the conditional probability P(E/C) be invariant. Proposals regarding probabilistic causality of this second sort have been put forward by Kevin Hoover and by Frank Arntzenius.(13) On this approach it is the invariance of the relationship between C and E, and not the presence of some special functional form (like additivity), that is essential for causal interpretability.
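The first proposal can be sketched as a simple check on simulated data (the probabilities are invented for illustration): for a genuine cause, the probability difference that C makes to E should reappear, stably, in every background context K:

```python
import random

# Simulated data in which C genuinely raises the probability of E,
# and does so by the same amount in each background context K.
random.seed(6)
n = 400_000
counts = {}  # (k, c) -> [trials, occurrences of E]
for _ in range(n):
    k = random.random() < 0.5  # background factor present or absent
    c = random.random() < 0.5  # candidate cause present or absent
    p_e = 0.1 + (0.3 if c else 0.0) + (0.2 if k else 0.0)
    e = random.random() < p_e
    cell = counts.setdefault((k, c), [0, 0])
    cell[0] += 1
    cell[1] += int(e)

# Tabulate P(E|C,K) - P(E|~C,K) in each context K: the difference
# should be positive, and stable, in both.
diffs = {}
for k in (False, True):
    p_with = counts[(k, True)][1] / counts[(k, True)][0]
    p_without = counts[(k, False)][1] / counts[(k, False)][0]
    diffs[k] = p_with - p_without
    print(k, round(diffs[k], 2))
```

With these invented numbers the difference is about 0.30 in both contexts; a relationship that raised the probability of E in some contexts but was disrupted in others would fail this invariance test even if the overall correlation between C and E were strong.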
This having been said, I also want to emphasize that it is a great virtue of this portion of Humphreys' discussion that it draws attention to an issue that deserves more attention from philosophers than it has hitherto received. The causal or nomological relationships on which research focuses in both the natural and social sciences are typically relationships between quantitative magnitudes. As Humphreys supposes, there must be some connection between such quantitative relationships and ordinary qualitative causal talk of the short-circuit-caused-the-fire variety. None the less it is surprisingly difficult to make this connection explicit and precise. The technical problems described above in attempting to extend causal modeling frameworks to causal relationships between dichotomous variables are one expression of this general difficulty, but it seems to me that there are many others. Related problems arise when, for example, one tries to spell out the commonly accepted philosophical idea that 'underlying' every true ordinary (qualitative) causal claim, there must be some law of nature (given that typically such laws will be quantitative in character). For a variety of reasons one can't simply say that the former 'instantiates' the latter.(14) In this, as well as many other cases, troubles arise because qualitative causal talk doesn't fit very readily into a quantitative framework. Humphreys' discussion is a pioneering exploration of one aspect of this general difficulty.
I turn now to some more familiar philosophical themes, having to do with the relationship between causation and probability increase. As noted above, in the context of probabilistic accounts of causation, Humphreys cashes out the idea that a cause must make an invariant contribution to its effect in terms of the idea that B must 'increase the chance of A in all circumstances Z that are physically compatible with A and B'.(15)
This requirement resembles but is not identical with the unanimity requirement imposed in many philosophical accounts of probabilistic causation.(16) We can also think of the requirement as the natural analogue, in the context of such probabilistic theories, of the additivity requirement in the case of linear models with interval-valued variables. In both cases, the underlying idea is that a cause should universally or without exception satisfy a very strong form of context-independence: it should make the same contribution to (or behave in the same way with respect to) its effect in all appropriate background circumstances or in the presence of all additional causally relevant factors. In the case of probabilistic theories, 'same contribution' means 'increase the chance of the effect'. In the case of causal relationships between variables measurable on an interval scale, one natural way of interpreting the notion of 'same contribution independent of context' is, as Humphreys says, in terms of an additivity requirement. Just as with additivity, I think that the requirement of probability increase across all physically compatible background circumstances doesn't adequately capture what is plausible in the underlying idea that there is a close connection between causation and invariance.
In assessing Humphreys' version of the requirement of probability increase, we need to distinguish between (a) claims that a causal connection exists between particular events and (b) claims to the effect that some type of factor causes some type of effect. In familiar terminology, due to Eells and Sober, claims of the first sort involve token-causation, while claims of the second sort involve type-causation. The claim that Smith's smoking caused his lung cancer is a claim of the first sort, while the claim that smoking causes lung cancer is a claim of the second sort. There are thus at least two distinct questions we can ask about the probability increase idea--whether it is plausible in connection with token causal claims and whether it is plausible in connection with type causal claims.
Humphreys clearly intends his discussion to apply to claims of token causation, but, like Eells and Sober, I think there is a fundamental difficulty with this suggestion. It looks as though c and e can both occur, c can boost the probability of e, and yet it can be false that c caused e. To make the point vivid, consider one of Humphreys' own examples. (In what follows I use lower case letters for tokens and upper case letters for types.) Suppose we know from controlled experiments performed separately that each of two different carcinogenic materials C_1 and C_2 can cause cancer in mice (E). Suppose also that the operation of each of these causes is non-deterministic but governed by a stable probability distribution: each increases the probability of cancer, although to some value strictly less than one. Suppose also that there is no evidence for any interaction effect between C_1 and C_2 when both are present. Now suppose that a particular mouse is exposed to individual tokens c_1, c_2 of C_1 and C_2 and develops cancer e. It follows on Humphreys' account that since both c_1 and c_2 increase the probability of cancer, both cause or have causally contributed to the cancer. But why should we believe this? How do we know that the cancer was not instead caused by c_1 alone or by c_2 alone? We know that when tokens of C_1 occur in isolation (without tokens of C_2) on other occasions, it is perfectly possible for them to increase the probability of tokens of E and yet on some occasions to fail to cause tokens of E. Here probability increase is not sufficient for actual token causation. How do we know that the envisioned case is not also one of these cases, in which c_1 fails to cause e (even though it increases the probability of e) and e is instead caused by c_2?
(Remember that we have learned from controlled experiments that tokens of [C.sub.2] can by themselves cause tokens of E--why shouldn't this be one of those occasions?) To suppose, as Humphreys' treatment would require, that whenever (tokens of) [C.sub.1], [C.sub.2] and E occur together, both (tokens of) [C.sub.1] and [C.sub.2] cause (or causally contribute to) E is to suppose that [C.sub.1] behaves very differently in isolation than it does in the presence of [C.sub.2]--that the rate at which [C.sub.1] causes E goes up in the presence of [C.sub.2], even though, by hypothesis, there is no independent evidence of any interaction between [C.sub.1] and [C.sub.2]. It seems to me that one very natural way of understanding the idea that a cause should make an invariant contribution to its effect is that, in the absence of specific evidence for interaction effects, one should presume that the cause will behave in the same way (exhibit the same stable relationship with its effect) whether or not other causes are present. Rather than following from this understanding of invariance, Humphreys' account seems to be inconsistent with it.(17)
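The worry can be made quantitative. The following sketch assigns hypothetical per-exposure success probabilities to the two carcinogens (the values p1 = 0.3 and p2 = 0.4 are invented for illustration and come from nothing in the text) and computes, under the no-interaction assumption, how often a mouse that develops cancer after joint exposure owes it to [c.sub.2] alone:

```python
# Illustrative arithmetic for two independently acting carcinogens.
# p1, p2 are hypothetical probabilities that each token exposure
# actually produces cancer; they are NOT taken from the text.
p1, p2 = 0.3, 0.4

# Probability of cancer when both exposures are present, assuming no
# interaction (cancer occurs iff at least one exposure "succeeds").
p_cancer = 1 - (1 - p1) * (1 - p2)        # 1 - 0.7 * 0.6 = 0.58

# Conditional probability, given that cancer occurred, that c1 in fact
# failed to produce it, so that c2 alone was the token cause.
p_c2_alone = (1 - p1) * p2 / p_cancer     # 0.28 / 0.58, roughly 0.48

print(round(p_cancer, 2), round(p_c2_alone, 3))
```

On these made-up numbers, nearly half of the mice that develop cancer after joint exposure do so through [c.sub.2] alone, even though [c.sub.1] raised the single-case probability of cancer in every case.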
In his discussion Humphreys considers and rejects the suggestion that examples like this show that there is more to token causation than just probability increase. He writes that 'To raise the image of something else occurring after the chemicals have interacted with the cellular structure [of the mouse] and changed the chance is to reify chance into a non-Humean chancy connection'. I agree, of course, that there are not, in addition to the interactions of [c.sub.1] and [c.sub.2] with the cells of the mouse, and the transfers of energy and momentum and changes in molecular structure that these involve, distinct further items in nature corresponding to singular causal connections. To say that it was [c.sub.1] on this particular occasion that caused the cancer and not [c.sub.2] presumably is just to say that some interaction of a relevant sort between [c.sub.1] and the cellular structure occurred, and that no such relevant interaction linking [c.sub.2] to e occurred. The question I mean to pose is whether we can adequately cash out the claim that such an interaction has occurred by invoking the notion of single case probability increase in the way that Humphreys advocates. Is it really true, as Humphreys seems to suggest, that because [C.sub.2] 'contributes to the chance [of e] on this occasion', it follows (given the other features of the example) that [C.sub.2] 'is not (causally) irrelevant on this or any other occasion'? The argument I have given seems to cast doubt on this claim and I do not see how Humphreys' strictures about non-Humean connections address this doubt.
What about the invariant probability increase idea, when understood as a requirement on type-causation? Here, too, I think that there is a good reason for skepticism: if the requirement is taken seriously, most causes will turn out to be unmanageably large and complex and the successful identification of causes will require more information than we usually have. To appreciate the force of this difficulty, consider the familiar point that most causes, as ordinarily described or understood, produce their characteristic effects only in the presence of appropriate sorts of background conditions or in an appropriate causal field. For example, it is usually thought that short circuits cause fires, but of course they can produce this effect only when appropriate background conditions like the presence of oxygen obtain. Since the presence of a short circuit alone thus does not increase the chance of fire across all physically possible background circumstances, it follows that on Humphreys' account the short circuit is not by itself a 'direct contributing cause' of fire--instead the cause, correctly specified, must include the presence of oxygen. Indeed, as Humphreys explicitly notes, on his interpretation of invariance, every such background condition which must be present if a cause is to produce its effect must be regarded as part of the cause. Similarly, if C is a candidate for a cause of E, and K some condition such that in its presence, C lowers the chance of E, then the absence of K will be part of the cause of E. Nor is this all. As Humphreys also notes, if S is some factor which is sufficient for E, then C cannot increase the chance of E in the presence of S. Thus, by itself C cannot qualify as a cause of E, and in specifying a factor which does invariantly increase the chance of E across all relevant background contexts we must also specify that any such competing sufficient cause S is absent.
The upshot of this is that for Humphreys any genuine contributing cause will consist of an extremely long conjunction of factors and conditions, including conditions to the effect that various competing or interacting causes are absent. Most ordinary causal claims will be false, or at least misleading and radically incomplete. Moreover in most contexts in which we attempt to discover causal relationships, we will be unable to discover factors conforming to the invariance condition that Humphreys lays down. For example, while most of us believe that smoking (S) causes lung cancer (C), we are unable to specify a set of conditions K which in conjunction with the presence of smoking invariantly increases the chance of lung cancer in all background circumstances that are compatible with S and C.
Is this a disturbing consequence of Humphreys' theory? Humphreys argues vigorously and entirely persuasively that the mere fact that an account of causation fails to comport with every feature of ordinary usage or with pre-theoretic 'intuition' is no reason to reject that account. The difficulty presently under consideration, however, seems to me to go much deeper than mere failure to conform to intuition or ordinary usage. Indeed, it is essentially the same difficulty which Humphreys himself advances, with devastating effect, against standard models of statistical explanation. Very briefly, Humphreys argues that such models impose completeness conditions (in the form of objective homogeneity or maximal specificity requirements) that, as he puts it, result 'in a hopelessly unrealistic ideal being imposed upon [statistical explanations] which will rarely, if ever, be satisfied'. In a similar vein, it seems to me that Humphreys' version of the invariance requirement also imposes informational demands on the identification of causes which there is often no realistic possibility of satisfying and which, furthermore, scientists don't seem to think that they need to satisfy in establishing causal claims. As a result the requirement doesn't yield a useful guide to the discovery and assessment of causal claims.(18)
To illustrate, consider a prototypical experiment. A population of mice is divided, randomly, into a treatment and control group, with the treatment group being exposed to some putative carcinogenic material C. Suppose that the incidence of cancer does indeed turn out to be higher in the treatment group and, furthermore, that the randomization 'works'; there is no systematic difference between the two groups, other than their exposure to C, which is causally relevant to their developing cancer. If this sort of result (higher incidence of cancer in the treatment group) were to be replicated in other laboratories, with different populations of mice or perhaps with other animals, and in somewhat different background circumstances, this would be exactly the sort of experimental evidence that would convince us that (as we would ordinarily put it) C was a cause of cancer in mice and that the higher incidence of cancer in the treatment group was due to their exposure to C.
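A minimal simulation of such an experiment conveys the shape of the evidence; the incidence rates (0.3 for exposed mice, 0.1 for unexposed) are invented for illustration and do not come from the text:

```python
import random

random.seed(0)
N = 10_000  # mice per group; large so the contrast is stable

# Hypothetical incidence rates: exposure to C triples the cancer rate.
p_treated, p_control = 0.3, 0.1

# Each mouse independently develops cancer with its group's rate.
treated = [random.random() < p_treated for _ in range(N)]
control = [random.random() < p_control for _ in range(N)]

incidence_t = sum(treated) / N
incidence_c = sum(control) / N
# Replication of this contrast across labs is the evidence at issue.
print(incidence_t > incidence_c)
```

Note that nothing in this sketch identifies the full set of background conditions under which C raises the chance of cancer; it records only the contrast between randomized groups.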
However, on Humphreys' account, it is clear that exposure to C is not a contributing cause to cancer in mice, but rather an element or factor in some much more complex set of conditions that constitute such a cause. This is because exposure to C will not increase the chance of cancer in mice unless various other conditions are satisfied--presumably, the mice must be in some appropriate range of normal physiological conditions, any causes which counteract the carcinogenic effects of C must be absent, and so forth. The experiment itself does not tell us what these additional factors are. In short, the causal information which comes out of the experiment is just not information which allows us to identify a factor F which increases the chance of cancer 'in all circumstances which are physically compatible with F and the cancer'. If controlled experiments like that described above are a good strategy for identifying causes (as I think they are) Humphreys' version of the invariance condition is too strong.(19)
We can further bring out what is at issue here by commenting briefly on the implications of Humphreys' account for an important distinction which is frequently made in the literature on experimental design in the social and behavioral sciences--the distinction between internal and external validity.(20) Internal validity has to do with whether some causal agent is in fact responsible for an effect within a given experimental context (or whether the effect instead is due to some other cause, or is a matter of chance). The question of whether treatment with C in fact caused the increased incidence of cancer in the particular group of mice involved in the experiment described above is a question about internal validity. By contrast, external validity has to do with the question of how far this result about the effects of C generalizes outside the particular experimental context in which it was discovered--with whether C causes cancer among other groups of animals and in other circumstances and, if so, what these circumstances are. The distinction between internal and external validity is thus premised on the assumption that an investigation can reliably establish internal validity--can establish that a cause in fact produced an effect of interest within some experimental group--and yet be uncertain about the issue of external validity--uncertain about how to extend the claim about internal validity to a generalization about the behavior of the cause in other circumstances.
It seems to me that it is a troubling feature of Humphreys' characterization of invariance that it threatens to undercut (or at least radically to transform) this distinction. Humphreys' characterization has the consequence that we cannot claim to have successfully identified a cause within a given experimental context unless we know a very great deal about its behavior in a very large number of other possible circumstances. On Humphreys' theory, we cannot establish internal validity without knowing a great deal about external validity.
Does this difficulty mean that we must give up the idea that there is a close connection between causation and invariance? This idea is so intuitively appealing and so plainly exemplified in a wide range of scientific practice that it seems a shame to abandon it. While I lack the space for a detailed discussion, let me conclude by suggesting how we can retain this connection if we are willing to countenance a weaker notion of invariance than that entertained by Humphreys.(21) In identifying a factor as a cause of some effect in many contexts (especially, but not exclusively in ordinary life and in the social and bio-medical sciences) we often are unable to formulate precise generalizations specifying conditions under which the cause deterministically produces its effect and are often unable to specify conditions under which it produces this effect with some stable fixed probability or even conditions under which the cause invariantly increases the probability of this effect. (For this reason we typically cannot identify causes in such contexts in ways which satisfy the strong invariance condition formulated by Humphreys or the unanimity conditions standardly imposed in philosophical discussion.) None the less we typically have some general information about the behavior of such causes--we know at least some of the different circumstances in which the cause can produce or will fail to produce its effect, even if we don't possess precise exceptionless generalizations about how exactly the cause will behave in those circumstances. For example, if C has genuinely caused the increased incidence of cancer in the experiment described above, it at least will be reasonable to expect an increased incidence of cancer if C were administered in similarly designed experiments conducted at different times or places, with other 'normal' animals of the same species and perhaps with animals from closely related species. 
(It is certainly possible and perhaps even likely that we will be in a position to establish that this sort of increased incidence typically occurs across different experimental groups even though we are unable to formulate lawful generalizations specifying conditions under which exposure of an individual animal to C always results in cancer, or always results in some stable probability of cancer.)
We can think of this sort of behavior on the part of C as amounting to the satisfaction of a very weak kind of invariance requirement--the requirement being that the relation between cause and effect must be stable in the sense that even if the operation of the cause is not governed by any law that we are able to formulate, the cause must be able to produce its effect (although perhaps irregularly) across some range of changes in time, place, and background circumstances.
Much familiar talk of 'background conditions' or 'causal fields' in connection with causal claims has, I believe, the role of picking out or suggesting, typically in a vague or imprecise way, some notion of the scope of invariance of causes--some of the domains or regimes in which there is a good reason to believe that the cause can be effective in producing its effect--rather than of describing something which is itself part of the cause. Thus, for example, an experimenter may conclude on the basis of the above experiment that C causes cancer among 'mice of species S in normal physiological conditions', even though he is utterly unable to unpack the quoted phrase in a way that allows for the precise identification of the physiological conditions which must be present if C is to produce cancer.
In this case, it seems to me mistaken (or at least analytically unilluminating) to think of 'in normal physiological conditions' as a description of a part of the cause of cancer. The quoted phrase is rather a description that we use in picking out the circumstances or populations in which C is efficacious in producing cancer. A similar point holds regarding the absence of other sufficient causes of the cancer. The absence of such causes isn't itself causally efficacious in producing cancer (and isn't part of the cause that does produce cancer) but is rather a condition that must obtain if C is to be efficacious in producing cancer.(22)
On this picture then, the connection between causation and invariance is simply this: for a factor to count as a cause of some effect it must be the case that the presence of the factor will be generally followed by an increased incidence of the effect across some non-trivial variety of changes in background circumstances and conditions, but it need not be the case that, as Humphreys requires, the cause increases the probability of the effect across all background circumstances compatible with the cause and effect, or that there exists some precise probabilistic or deterministic law linking cause and effect. Especially in the social and behavioral sciences, the evidence that establishes that a causal claim is true is often evidence for (at best) this weaker invariance claim, rather than evidence for some more strongly invariant or universal relationship, and this weaker invariance claim seems more adequately to capture what is meant when one factor is described as a cause of another.
My discussion has followed the usual convention in philosophy reviews of focusing on those aspects of CE that I found less than fully convincing or that seemed in need of further supporting argument. I hope, however, that I have succeeded in making it clear that CE is a very rewarding book that ought to be read carefully by everyone with an interest in causation, explanation, and related topics. In addition to the material described above, CE contains original and illuminating discussions of a wide variety of other issues--for example, various alternative theories of causation, relative frequency and propensity theories of probability, scientific realism, and alternative models of statistical explanation. Humphreys' own account of causation is applied to a number of standard puzzles and problems in the literature--the direction of explanation, frequency-dependent causation, spurious causation, whether connecting processes or mechanisms are necessary for causation, and so forth. CE will take its place beside Hempel and Salmon as a standard source for future discussions of these matters and in my opinion is comparable in quality and interest to these classics.
1 In addition to CE, other valuable and important work by philosophers of science which makes extensive use of the causal modeling literature includes Nancy Cartwright  and Clark Glymour, Richard Scheines, Peter Spirtes, and Kevin Kelly , as well as a variety of more recent papers by these last authors.
2 The connection between lawfulness (and causation) and invariance under some class of interventions is made explicitly in a great deal of early work in econometrics. A classic statement is Haavelmo . For a more recent discussion, see, for example, Engle, Hendry, and Richard , especially their remarks on what they call 'superexogeneity'. For an excellent introduction to the current state of discussion within econometrics and to the status of the Lucas critique, see Hoover . It is also worth noting that a related connection between causation and invariance is sometimes assumed in discussions of causation within physical contexts. For example, the robustness condition on causation imposed by Redhead  is certainly an invariance condition, although more specific than the econometric condition described above. Invariance conditions (often in the form of symmetry conditions) are of course also standardly imposed on laws in the physical sciences.
Two further points about invariance also deserve explicit emphasis. First, the notion of invariance has the notions of counterfactual dependence and physical possibility built into it--it shouldn't be thought of as part of a program of explaining causal notions in non-causal terms. Second, what is crucial for invariance is stability under actual physical changes or interventions in some system of interest. In the language of Salmon, invariance is an 'ontic' and not an 'epistemic' notion. Invariance is thus not just a matter of, e.g. stability of an investigator's beliefs about a relationship under the addition of further information. Although I lack the space for detailed discussion here, I believe that it is also true that invariance is not just a matter of de facto relationships among conditional probabilities, as is implicitly assumed in many probabilistic theories of causation. For further discussion see Woodward [forthcoming b].
3 This connection between invariance and possible use for purposes of manipulation is made very explicitly in Hoover . I might add that manipulability theories of causation are often criticized on grounds of circularity and this is obviously a cogent criticism if they are taken as attempts to provide reductionist accounts of causation in non-causal terms. But when understood instead in a non-reductionist spirit--as attempts to elucidate the interconnections between a family of related notions--such theories can be quite illuminating and a non-trivial source of constraints on philosophical theories of causation. For example, as I note below, the plausible claim that if a relationship can be used for manipulation and control, then it is causal is inconsistent with the unanimity condition standardly imposed in philosophical discussions of causation and with Humphrey's version of the invariance condition.
4 I should emphasize that Humphreys' additivity requirement is a requirement on causal relationships, not on nomological or lawful relationships. One possible response to the difficulty under discussion is thus to regard non-additive relationships like Maxwell's equations as nomological but non-causal. Although I am quite sympathetic to the idea that there are nomological relationships that are not causal, this particular way of marking off the class of causal relationships seems to me to lack independent motivation.
5 This is clear, for example, in Haavelmo. See especially p. 29 where Haavelmo insists that there is 'clearly a difference' between what he calls the actual persistence of a relationship 'which depends upon what variations actually occur' and the 'degree of autonomy' of a relationship which 'refers to a class of hypothetical variations in structure, for which the relationship would be invariant' (emphasis in original). The modal or counterfactual character of the notion of invariance is discussed in more detail in Woodward (forthcoming b). It is worth noting that since a number of different structural models can be observationally equivalent in the sense that they imply exactly the same facts about statistical relationships among measured variables, the relevant notion of invariance apparently cannot be defined or captured in terms of relationships among conditional probabilities or densities, etc.
6 A parallel point also holds for Humphreys' understanding of invariance. As noted above, Humphreys' general characterization requires not just that the cause in fact make the same contribution to the effect in the actual circumstances, but that it would continue to do so in all other possible circumstances compatible with the occurrence of the cause and effect. The fact that one can fit a curve to a set of variables which is additive in form doesn't show that the relationship is invariant in the sense described by Humphreys' general characterization.
7 Humphreys does discuss the question of when coefficients in a regression equation can be interpreted as 'structural' coefficients in Appendix 2 to CE, but his understanding of this notion seems to be rather different than the interpretation I have advanced above. On my interpretation a regression equation with some particular definite set of regression coefficients should be understood structurally if (and only if) the equation with just those coefficients describes a relationship which not only holds de facto in the actual population on which it is estimated, but which would continue to hold under some suitably large class of changes or interventions in that population and also across different populations. A necessary but not sufficient condition for this to be the case is that the coefficients [b.sub.i] in the equation be constant across all individuals in the population (Humphreys' Assumption A, p. 147). This condition is not sufficient because the constancy in question may hold only accidentally--it may be, for example, that if we were to change [X.sub.i] the value of [Y.sub.i] would not change in the way specified by the coefficient [b.sub.i], but rather that the coefficient [b.sub.i] would itself change. By contrast, Humphreys suggests that the independence of the various independent variables from one another is 'sufficient for the invariance of the structural coefficients [b.sub.i] and this result captures in a completely explicit manner the invariance requirement for genuine causation'. This remark suggests that Humphreys must be working with a notion of structure and invariance which is very different from the notion I have described since the condition he cites is not sufficient (and not even necessary) for invariance in my sense. Although it is difficult to be sure, I also think that the notion I have described above captures roughly what Duncan, whose characterization Humphreys quotes approvingly, has in mind (cf. Duncan, pp. 56ff, pp. 15ff.).
Certainly Duncan intends something much stronger than mere de facto constancy of coefficients across the individuals in a particular population, since this condition can clearly be satisfied by systems of equations he labels nonstructural (cf. pp. 152-3).
8 Cf. Fox. Under these conditions OLS estimates will be BLUE (best linear unbiased estimators) for the coefficients in the regression equation. I write that 'something like' these distributional assumptions must be satisfied, because reliable estimation is certainly possible even if one or more of (a)-(d) is violated, provided appropriate additional information is available. For example, if (b) or (c) is violated, one or another variety of generalized least squares (GLS) estimators may be employed, provided that the full variance-covariance matrix of the disturbance vector is known up to a factor of proportionality or a suitable estimator for this matrix is available. From the point of view of causal interpretability, however, the role of this additional information is essentially the same as that described above in the text.
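To make the estimation step concrete, here is a minimal ordinary-least-squares fit on simulated data in which assumptions like (a)-(d) hold by construction (disturbances are i.i.d., mean zero, homoscedastic, and independent of the regressor); the 'structural' intercept and slope are invented for the example:

```python
import random

random.seed(1)
n = 50_000
a_true, b_true = 1.0, 2.0   # hypothetical structural intercept and slope

x = [random.gauss(0, 1) for _ in range(n)]
# Disturbances: i.i.d., mean zero, constant variance, independent of x.
u = [random.gauss(0, 0.5) for _ in range(n)]
y = [a_true + b_true * xi + ui for xi, ui in zip(x, u)]

# OLS via the closed-form normal equations for a single regressor.
mx = sum(x) / n
my = sum(y) / n
b_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
a_hat = my - b_hat * mx
print(a_hat, b_hat)  # close to the true values when (a)-(d) hold
```

When (b) or (c) fails, the analogous GLS computation reweights the data using the known or estimated disturbance covariance matrix, but the logic of recovering the coefficients is the same.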
9 For relevant discussion, see Fox. Fox writes: '[to] regard a regression model as a structural model, we must be prepared to argue that the aggregated omitted causes of the dependent variable, which comprise the error, are uncorrelated with the independent variables included in the model'. I should emphasize that, as suggested above, the satisfaction of distributional assumptions like (a)-(d) is not itself sufficient for a linear model to have a causal interpretation or, what on my view comes to the same thing, to be structurally invariant. One way of seeing this is to note that, in the case of simultaneous equation models, the so-called reduced form equations may individually satisfy (a)-(d) even though these equations are not generally structural. Similarly--although many philosophers seem to deny that this is a real possibility--it seems to me to be perfectly possible in the case of a simple regression equation like Y = aX + U for U to meet all of the distributional requirements (a)-(d) and yet for Y and X to be merely 'accidentally' correlated (i.e. not correlated in virtue of being directly causally related or effects of a common cause, etc.). Conditions like (a)-(d) are thus really conditions for consistent estimation rather than (sufficient) conditions for structural invariance. For further discussion see Engle, Hendry, and Richard and Woodward [forthcoming b].
10 The requirement that the stochastic element in U be minimized is interpreted by Humphreys to mean that the mean squared error in the predicted value of Y should be minimized; this seems to amount to the requirement that (in more familiar statistical language) the regression equation be chosen so as to maximize [R.sup.2], the percent of variance explained. Although this is a criterion undoubtedly used by many researchers, textbooks usually caution strongly against this practice. Choosing the regression equation or causal model that maximizes [R.sup.2] will often result in data-mining or over-fitting that capitalizes on accidental correlations that happen to be present in the data--variables that maximize in-sample predictive power but are causally irrelevant may be included. Assuming that Humphreys' idea is to find conditions for the choice of F that can be causally motivated or that warrant the causal interpretability of F, this third condition thus seems inappropriate.
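The in-sample character of the difficulty is easy to exhibit: for nested OLS fits, adding a regressor that is pure noise can never lower [R.sup.2] on the sample used for estimation, so an [R.sup.2]-maximizing search will happily admit causally irrelevant variables. A sketch on invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # y genuinely depends only on x
junk = rng.normal(size=n)          # causally irrelevant noise regressor

def r_squared(design, y):
    """In-sample R^2 for an OLS fit of y on the given design matrix."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones(n)
r2_small = r_squared(np.column_stack([ones, x]), y)
r2_big = r_squared(np.column_stack([ones, x, junk]), y)
print(r2_big >= r2_small)  # nested fits: in-sample R^2 cannot decrease
```

Out-of-sample predictive checks, not in-sample fit, are what would expose the junk regressor as irrelevant; that is the standard textbook caution alluded to above.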
11 See Fox (pp. 27-41).
12 For relevant discussion, see Aldrich and Nelson. In comments on an earlier version of CE, I claimed that for a linear regression equation with a dichotomous dependent variable, OLS estimators will be biased--a claim that Humphreys repeats, citing my comments on an earlier draft of CE in his footnote 13, pp. 34-5. This claim is incorrect and I am sorry that I am apparently responsible for its presence in CE. I am grateful to David Grether for calling my attention to this error.
13 As both Arntzenius and Hoover note, if one takes the second approach, requiring that P(E/C) be invariant, this carries with it an added bonus--it yields a natural account of (or at least representation of) the asymmetry of cause and effect. To see this, write P(C/E) as P(C) P(E/C)/P(E).
If C causes E, a natural way of formulating the invariance condition is that P(E/C) must be invariant under changes in the marginal probabilities P(C), and P(E). But if P(E/C) is invariant in this sense, P(C/E) will not be and hence P(C/E) will not represent a causal relationship from E to C. This treatment of causal asymmetry reinforces my sense that there must be something deeply right about the idea that there is a close relationship between causality and invariance. By contrast, it is well known that standard probabilistic theories of causation face a serious prima-facie difficulty in connection with the asymmetry problem since if P(E/C) > P(E), then P(C/E) > P(C).
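The point can be checked with a toy calculation. Hold the conditional probabilities P(E/C) and P(E/not-C) fixed (the values 0.8 and 0.2 are invented for illustration) while intervening on the marginal P(C): P(E/C) is invariant by construction, but P(C/E), computed by the identity above, shifts with P(C):

```python
# Hypothetical fixed 'mechanism': conditional probabilities of E given C.
p_e_given_c = 0.8
p_e_given_not_c = 0.2

def p_c_given_e(p_c):
    """Bayes: P(C/E) = P(C) P(E/C) / P(E), with P(E) by total probability."""
    p_e = p_c * p_e_given_c + (1 - p_c) * p_e_given_not_c
    return p_c * p_e_given_c / p_e

# Intervene on the marginal P(C): P(E/C) stays 0.8 either way,
# but P(C/E) changes, so it cannot represent an invariant E-to-C link.
print(round(p_c_given_e(0.5), 3), round(p_c_given_e(0.1), 3))
```

With P(C) = 0.5, P(C/E) works out to 0.8; with P(C) = 0.1, it falls to about 0.308, so the E-to-C conditional is not invariant under the same interventions.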
14 For discussion illustrating some of these difficulties, see Woodward . The discontinuities between the dichotomous and quantitative case raise the question of whether the standard philosophical tendency to focus almost exclusively on the former case is really a defensible strategy.
15 I should caution that this is just one of a set of conditions that are taken by Humphreys to be jointly sufficient for A to be a 'direct contributing cause' of B. I have omitted Humphreys' remaining conditions for expository convenience but it seems to me that nothing in my subsequent discussion is affected by this omission. In particular I do not see how appeal to these additional conditions can be used to avoid the difficulties described below.
16 See, for example, Eells and Sober. Eells' and Sober's version of the unanimity condition differs from Humphreys' invariance requirement in that the former is meant only to apply to generic or type causal factors and is intended to characterize what it is for such a factor to be a positive causal factor for some effect within some particular population--i.e. the unanimous probability increase conditional on the causal factor need only occur for all background circumstances within this population, not, as for Humphreys, for all background circumstances physically compatible with the occurrence of the cause and effect.
17 For a more detailed development of this argument, see Woodward . I might add that Eells and Sober also hold, on somewhat different grounds, that the requirement that a cause must increase the probability of its effect applies only to type causal claims, not to token causal claims.
18 In connection with standard theories of statistical explanation Humphreys' objection is this: such theories require that the explanans in a statistical explanation permit the derivation of the true or correct probability value for the explanandum outcome. However, the omission of even a single statistically relevant factor from the explanans will result in an incorrect probability ascription and hence a defective explanation. Thus if an explanation is to be non-defective, it must cite all such factors--it must be a complete explanation. Outside of microphysical contexts this is an ideal that is usually not practically possible to satisfy. As Humphreys notes, his account of explanation does not suffer from this particular difficulty. This is because his account does not require true probability values, but only that we have identified some cause that satisfies his invariance condition. As Humphreys puts it:
if one holds that it is causally relevant factors which are explanatory, where a factor is causally relevant if it invariantly changes the propensity for an outcome (i.e., a change in the factor results in a differential change in the propensity, irrespective of what other changes or conditions are also present), then specification of one or some of the causally relevant factors will allow a partial yet true explanation, even in cases where the other factors are not known and the true probability value cannot be calculated.
However, most ordinary explanations do not successfully identify causes that satisfy the invariance condition in parentheses and we are very often unable to expand them into explanations that do. Although the specific problem facing conventional models of statistical explanation is avoided, a very similar, more general difficulty remains--the model places unrealistic demands on what we must know to have even a partial explanation.
19 It is tempting to respond to this objection in the following way: strictly speaking, true or complete causes are complicated conjunctions of factors in just the way Humphreys claims. But in ordinary usage we often speak more loosely, using the word 'cause' to describe just one element or factor in such a conjunction. When someone says that exposure to C causes cancer, this simply means that (1) C is a factor in some very complex conjunction which does invariantly increase the chance of cancer in just the way that Humphreys' characterization requires. The objection in the text thus rests on a purely verbal point about the word 'cause' and can be dealt with simply by conceding that ordinary usage differs from Humphreys' more technical usage in the way described.
I find this quite unsatisfying, primarily because it is not accompanied by any account of how we are able to reason about and test 'ordinary' causal claims, use them in explanations, and so on. To begin with, we need an epistemological story--one that makes it intelligible how the procedures we actually use to infer ordinary causes in fact successfully identify conjuncts in real causes in the way envisioned. For example, given that no one knows how to describe the complex combination of conditions that figures in the antecedent of generalization (1) above, we need to know just what it is about the experiment involving exposure to C that entitles us to believe that (1) is true. More generally, there must be some feature, other than invariant probability increase, by which we are able to identify ordinary causes as such. What is this feature, and why shouldn't we take it rather than invariant probability increase as the distinguishing feature of causation? Similarly, why is the citing of an ordinary cause explanatory, given that ordinary causes do not invariantly increase the probability of their effects? (Recall that Humphreys' official theory of explanation requires such invariant probability increase.) Why, so to speak, does the explanatoriness of an unknown complex conjunction which does invariantly increase the chance of an effect transfer to the ordinary causal claims which specify just one conjunct in this conjunction?
20 See, e.g., Cook and Campbell.
21 For more detailed discussion, see Woodward [forthcoming a].
22 If this is correct, there must be something defensible after all in the old-fashioned and frequently criticized distinction between causes and conditions--a claim which Humphreys explicitly denies, and in fact must deny, given his formulation of the invariance condition. A detailed discussion is beyond the scope of this review, but two additional observations may be helpful. First, contrary to what many philosophers suppose, one does find something very like this distinction in sophisticated scientific discussion--for example, the methodologically important distinction between 'attributes of units' and 'causes' or 'treatments' in Holland is at least closely related to the cause/condition distinction. Similarly, as I have argued elsewhere (Woodward [forthcoming b]), the cause/condition distinction has important implications for the understanding and assessment of causal modeling techniques. Second, I think that the contrast between causes and conditions gets its point at least in part from situations like the following: one knows that the presence of some condition is relevant to or makes a difference for whether some other factor produces a characteristic effect, in the sense that one knows that if the first condition were sufficiently different, the effect also would be very different. However, one is unable to say exactly how the first condition would need to be different to produce a change in the effect, or exactly how the effect would be different were the first condition sufficiently different.
For example, in the case of a drug which causes a characteristic cancer in a mouse in normal physiological condition, it is probably true that if the mouse were in some sufficiently different physiological condition, the drug would not have this characteristic effect, although we may be in no position to identify the precise features of the mouse's physiology that would have to be different for the effect of the drug to be different or to say exactly what effects the drug would have under these sufficiently different physiological conditions. Here 'mouse in ordinary condition' is a vague stand-in for everything we don't know about the mouse's physiology that is relevant to the effects of the drug. I think that the contrast between causes and conditions gets its point at least in part from contexts like these. The quoted phrase doesn't describe a factor with sufficiently determinate effects to qualify as a cause of cancer. Rather the phrase is a way of talking about the circumstances or regimes in which causal agents like C, which do have determinate consequences, are causally effective.
ALDRICH, J. and NELSON, F.: Linear Probability, Logit, and Probit Models. Beverly Hills: Sage Publications.
ARNTZENIUS, F.: 'Physics and Common Causes', Synthese, 82, pp. 77-96.
CARTWRIGHT, N.: Nature's Capacities and their Measurement. Oxford: Oxford University Press.
COOK, T. and CAMPBELL, D.: Quasi-Experimentation. Boston: Houghton Mifflin Company.
DUNCAN, O.: Introduction to Structural Equation Models. New York: Academic Press.
EELLS, E. and SOBER, E.: 'Probabilistic Causality and the Question of Transitivity', Philosophy of Science, 50, pp. 35-57.
ENGLE, R. F., HENDRY, D. and RICHARD, J. F.: 'Exogeneity', Econometrica, 51, pp. 277-304.
FOX, J.: Linear Statistical Models and Related Methods. New York: John Wiley & Sons.
GLYMOUR, C., SCHEINES, R., SPIRTES, P. and KELLY, K.: Discovering Causal Structure. Orlando: Academic Press.
HAAVELMO, T.: 'The Probability Approach in Econometrics', Econometrica, 12 (Supplement).
HEMPEL, C.: Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press.
HOLLAND, P.: 'Statistics and Causal Inference', Program Statistics Research Report No. 85-63. Educational Testing Service, Princeton, NJ.
HOOVER, K.: The New Classical Macroeconomics. Oxford: Basil Blackwell.
HOOVER, K.: 'The Logic of Causal Inference', Economics and Philosophy, 6, pp. 207-34.
HUMPHREYS, P.: The Chances of Explanation. Princeton: Princeton University Press.
LUCAS, R. E.: 'Econometric Policy Evaluation: A Critique', in Vol. 1 of the Carnegie-Rochester Conference on Public Policy, supplementary series to the Journal of Monetary Economics, eds. K. Brunner and A. Meltzer. Amsterdam: North Holland Publishing Company.
REDHEAD, M.: Incompleteness, Nonlocality and Realism. Oxford: Clarendon Press.
SALMON, W.: Scientific Explanation and the Causal Structure of the World. Princeton: Princeton University Press.
WOODWARD, J.: 'Are Singular Causal Explanations Implicit Covering-Law Explanations?', Canadian Journal of Philosophy, 16, pp. 253-80.
WOODWARD, J.: 'Supervenience and Singular Causal Claims', in D. Knowles (ed.), Explanation and its Limits. Cambridge: Cambridge University Press, pp. 211-46.
WOODWARD, J. [forthcoming a]: 'Capacities and Invariance', to appear in J. Earman, A. Janis, G. Massey, and N. Rescher (eds.), Philosophical Problems of the Internal and External Worlds: Essays Concerning the Philosophy of Adolf Grünbaum. Pittsburgh: University of Pittsburgh Press.
WOODWARD, J. [forthcoming b]: 'Causality and Explanation in Econometrics', to appear in D. Little (ed.), The Reliability of Economic Models. Dordrecht: Kluwer Academic Publishers.
Publication: The British Journal for the Philosophy of Science
Article Type: Book Review
Date: March 1, 1994