
Partial convergence and approximate truth

Duncan Macintosh

1 SCIENTIFIC REALISM VS. CONSTRUCTIVE EMPIRICISM

Many Scientific Realists (SRs) believe that the best--indeed, the only non-miracle-involving--explanation for the increasing predictive success of scientific theories is that they are getting closer to the truth. (This is the so-called No Miracles Argument, the NMA.) (See Putnam [1975], p. 1, Boyd [a] (as cited in van Fraassen [1980], p. 219, n. 33), Boyd [1973], Harman [1965], [1968], and Smart [1968].) But such skeptics as Bas van Fraassen and Larry Laudan argue that a theory's success does not imply its truth, and that we don't need its truth to explain its success. We have successful theories simply because we selected them for their observational indistinguishability from the truth, not for the truth of their statements about the unobservable ('U-statements'), which we have no way of telling. We might have inductive evidence that further predictions from a theory will prove true (van Fraassen allows belief in induction to generalities over in-principle confirmable instances), and that is a reason to continue using it. But we needn't believe its U-statements to work with it. Hence van Fraassen thinks we should accept theories, i.e., that we should believe their observational statements ('O-statements') and should develop research programs inspired by their U-statements, but that we should not believe the latter.(1)

But he hasn't explained how some theories manage to be empirically adequate (especially novelly so) and others don't. That we select theories for this property explains how we have them, but not why they have it. That they are observationally indistinguishable from the truth doesn't explain their predictivity. It just restates it. How did they manage to entail true predictions? Now, if a theory is assumed to be true, its own claims explain naturally and in detail why it predicts: the unobservable world is so structured as to produce certain observable phenomena. The theory says true things about that structure and its influences on the observable. So truth-preserving inferences about the observable from descriptions of that unobservable structure and its influences come out true.

Still, it might just be an accident that some theories are predictive. We seem to have no way of confirming their truth as explanational of their predictivity. Indeed, given the underdetermination of theory by data (UDT), I think van Fraassen is right to counsel agnosticism about the exact truth of the U-statements of any given theory. But he tacitly assumes we are only justified in ascribing truth in any degree if we can warrant ascribing it to one theory over all others. With UDT we can't do that. But I will argue that there are contingent, empirically ascertainable conditions under which, though there is more than one empirically adequate theory, we are justified in ascribing--indeed, logically forced to ascribe--approximate truth to U-statements. Briefly, even if adequate theories do not perfectly converge in the limit of scientific inquiry, they may yet partially converge. Since the true theory, whichever it is, would be in this partially convergent set (assuming all possible theories empirically adequate at this world have been unearthed in the limit), all adequate possible theories would then partially converge on the true theory. Thus all would be approximately true--true at least to within the limits of the differences between them. The correct explanation for their empirical success must then be the approximate truth of their U-statements. For if a theory did not have to be verisimilar to predict, there would be a predictive theory the U-statements of which did not partially converge on those of whichever is the true theory. But if they all partially converge, the conclusion that they are approximately true is not an inference to the best explanation, but a simple deduction. (Note that here, that a theory's approximate truth will explain its success is no reason to believe it roughly true. Rather, the partial convergence of all predictive theories entails both their approximate truth and that that explains their predictivity. Note also that this argument does not proceed in a Bayesian way from the--controversial--assignment of a prior probability to a theory to a claimed high posterior probability for it on the evidence; thus it avoids all the difficulties in confirmation theory.)

Larry Laudan, however (in his [1981]), finds the idea of approximate truth in current forms of SR too ill-defined for us to tell whether a theory's having it entails, and so even possibly explains, its empirical success. But progress has been made here. Ilkka Niiniluoto resolved the matter into the logical problem of what it is for a theory to approximate the truth and the epistemic problem of how to tell when it does. On the first, he gave a measure for specifying the closeness of a theory to a theory presumed true. On the second, he identified the expectability of a theory's being true with how well confirmed it is on the evidence. Combining his two principles, a theory may be more strongly expected to be closer to the truth the more closely it approximates a better-confirmed theory. (See Niiniluoto [1987], usefully summarized in Pearce [1987].) Now, on his measures, a theory's approximate truth may only entail the approximate truth of its empirical predictions. But we could firm this up by stipulating that a theory is only to be called approximately true if, among other things, it is at least empirically adequate; and for this, of course, there are empirical tests. (As David Pearce notes in his [1987], p. 152, it is partly a matter of convention how close a theory must be to the empirical truth before it may be called approximately true.)
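Schematically, and suppressing the details of Niiniluoto's apparatus (the notation below is my gloss on his proposal, not a quotation), the combination of the two principles can be displayed as an expected-verisimilitude formula: where h_1, ..., h_n are the mutually exclusive candidate total theories, Tr(g, h_i) is the truthlikeness g would have were h_i true, and P(h_i | e) is h_i's degree of confirmation on evidence e,

\[
\mathrm{ver}(g \mid e) \;=\; \sum_{i=1}^{n} P(h_i \mid e)\,\mathrm{Tr}(g, h_i).
\]

A theory g is thus more strongly expected to be close to the truth the more closely it resembles theories that are well confirmed on e.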

But Laudan has also accused SRs of affirming the consequent in inferring from the empirical adequacy of a theory to its approximate truth. (Thus he would presumably balk at Niiniluoto's criterion for when a theory may be expected to be true on the evidence to the extent Niiniluoto claims anything more than weak confirmation; and I would balk too on that criterion considered independently of the conditions specified in my main argument.) And it may seem that the only way legitimately to infer from empirical adequacy to approximate truth is if truth is construed in an anti-realist way so as to reduce to empirical adequacy--the more empirically adequate a theory, the more it approximates the truth. Of course, given such an identity between approximate truth and empirical adequacy, the former could not non-trivially explain the latter.

But on my proposal, warranted approximate truth is distinguished from empirical adequacy by a condition on a theory's being known to be approximately true: the U-statements of all empirically adequate possible theories (A-theories) must be known to resemble (beyond making the same predictions) by, say, Niiniluoto's measure, M, in some degree m, and no empirically inadequate theories (non-A-theories) may resemble the A-theories by M in degree m or greater. Thus a theory, T, is not known to be approximately true just if it is adequate to the evidence, but just if it and all and only theories like it in the U-statements they make in degree m by M are. Since the true theory must be among those T resembles, T must resemble it. Ascribing approximate truth does not reduce to ascribing empirical adequacy since T may be adequate and yet there be no evidence that T is approximately true. This holds if theories wildly unlike T by M are also empirically adequate, which fact would also falsify the SR's claim that all A-theories are approximately true. Since a theory's being approximately true is not simply its adequacy, it can genuinely explain its adequacy: if only possible theories resembling the true theory are empirically adequate, their adequacy must be caused by, and so explained by, their resemblance to the true theory. We retain a realist conception of truth.

In what follows, I detail and defend these arguments.

2 SOME ASSUMPTIONS ABOUT THE NATURE OF THEORIES

Concede the UDT in the limit. Concede to van Fraassen that theories are literal in meaning, putatively referring, true or false statements about the unobservable world; that something justifies a belief only if it entails or probabilifies its truth; that truth depends only on the world.(2) Now assume we have a theory of the following sort in the limit: it is an empirically perfect 'total' scientific theory, a theory of all the evidence. It is thus a finite conjunction of first-order claims jointly sufficient to entail the evidence given initial conditions. There is no empirical telling whether this theory or one of its limit competitors is true. It contains both O- and U-statements. Its U-statements are not exhaustively translatable into, reducible to, or merely instrumental permutations of, O-statements. They are to be taken literally, with sui generis meanings and putative references at least partially different from those of the theory's O-statements; they have truth values, even if we don't know which truth values they have; their truth conditions differ, at least in part, from those of O-statements. In short, assume all but epistemic forms of realism for scientific theories; we leave it open whether we can justify belief in the truth or approximate truth of the U-statements of theories. (These assumptions could, of course, be challenged. But I want to set things up so that skepticism is the only issue before us.)

Since I will later use the empirical adequacy of theories as part of a test for the approximate truth of their U-statements, evidence must so connect to theory that it truly tests it and can weakly confirm it; the theory can only be true given certain evidence. Thus the theory (given facts about initial conditions) must entail the predictions; but given the UDT, these will not be identical to the theory, to the background conditions, or to their conjunction. The theory must be internally consistent (or it is simply false), and consistent with all of its predictions (or the data would not truly test the theory since it could still be true even if 'its' predictions proved false). Further, no contingent claim in the theory may be simply excised and the theory remain adequate to all evidence (for otherwise the evidence does not weakly confirm the whole theory). This means that in the context of the theory, every one of the theory's (non-analytic) statements has observational significance, even if it is not exhaustively definable in observational terms. Given the UDT, there will be more to the significance of a U-statement than its observational significance for the theory, but there must be some such significance. Thus the theory has no functional 'danglers'--no statements inessential to its being predictive. The evidence thus infirms or weakly confirms the whole theory. Note then how I use holism, the view that theories encounter data only in the whole: no contingent sentence is empirically significant independently of its embeddedness in a theory, and all make an empirical difference. Finally, note that given the foregoing conditions a theory's U-statements plus facts about initial conditions would have entailed novel true O-statement predictions and retrodictions.

3 SOME CONDITIONS ON THE RELATIONS BETWEEN THEORIES

When a theory meets all the above requirements, have we reason to think its U-statements at least truish? Only given five conditions. First (to reiterate), the theory must be empirically adequate; it can't be truth-like unless its observational consequences are true. (Empirically inadequate theories can, presumably, be better or worse, closer to or farther from the truth. But as their falseness is directly apparent from their empirical inadequacy, I ignore them here.) Second, all possible A-theories must co-resemble. Third, the resemblances must exceed the theories' making the same O-predictions, else truth (or approximate truth) is empirical adequacy, contradicting the realist view that truth is independent of our epistemic limitations.(3) Let me develop this point.

Inference from the observable to the unobservable might proceed either analytically or ampliatively. The former entails an epistemically constrained theory of truth (where something is true only if empirically/knowably true). But the latter leaves Scientific Realism a realist doctrine. The inference won't be ampliative (won't go beyond the empirical evidence) if approximate truth follows trivially on empirical adequacy merely because the latter is a component in the truth, any A-theory (plus background information) logically containing such a component and so being ipso facto truish. However valid that inference, we want the theory's being approximately true to consist in more than its sharing empirical predictions with the true theory; we want it to be contingently because the theory's U-statements were truth-like (in their own right) that they generated true O-predictions. Thus while empirical adequacy is necessary for approximate non-empirical truth, it must not be sufficient. Approximate truth must not reduce to empirical adequacy.

The fourth condition is that the respects of resemblance must be logically relevant to the truth of the theories, must consist in a resemblance between the truth-conditions (as such) of their U-statements (since I want to establish their resemblance to the truth). Fifth, no non-A-theory may so resemble any A-theory. For if the resemblance is shared by empirically inadequate theories (ones known to be false) it won't distinguish ones especially near the truth.

There have been many obstacles to understanding the similitude of theories: relativist and incommensurabilist concerns, problems in the theory of translation, in the notion of meaning, in the countability of truths and truth-conditions, in the notions of reduction and subsumption, in the giving of quantitative interpretations of qualitative difference. But I shall simply assume that Niiniluoto's ingenious measures of the similitude of theories overcome these. The details do not matter here, only that he relates the approximate truth of a theory to its truth-conditions. A theory is approximately true if it would be made true by roughly the same conditions as make whichever is the true theory true. More fashionably: total theories describe possible worlds, the true theory, the actual world. A theory is closer to being true the more its world is like the actual world. (See Pearce [1987], pp. 151-2.) In any case my interest is in the epistemic problem about approximate truth, not in the logical problem. For van Fraassen would think that even granting the comparability of theories for similitude we still can't justifiably ascribe any interesting approximate truth to their U-statements; it is this reservation I wish to assuage.
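To fix ideas, this possible-worlds picture can be given a toy formulation (an illustrative schema only, not Niiniluoto's own definition): letting w_T be the world a total theory T describes, w_* the actual world, d a distance measure on worlds, and d_max the maximal distance,

\[
\mathrm{Tr}(T) \;=\; 1 - \frac{d(w_T, w_{*})}{d_{\max}},
\]

so that T counts as approximately true just in case Tr(T) is at least 1 - epsilon for some conventionally fixed epsilon (cf. Pearce's point, noted above, that the threshold is partly a matter of convention).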

4 THE CONDITIONAL ARGUMENT FOR THE ASCRIPTION OF APPROXIMATE TRUTH

On satisfaction of the foregoing conditions we are justified in ascribing approximate truth to all the A-theories and so to any given one of them. For the true total theory would be empirically adequate. A theory approximates the truth if it resembles the true theory. We may never know which theory is true. We don't know (yet) in which respect(s), if any, a given theory resembles whichever is the true theory. But suppose all the possible theories which had any chance of being the true theory--i.e., the A-theories--co-resembled beyond sharing empirical predictions, i.e., that the truth-conditions of their U-statements resembled, and that this resemblance was not found in non-A-theories. Then since the true theory is necessarily among the possible A-theories, all such theories would be approximately true. For in approximating each other, all would approximate whichever among them was true.(4) Empirical adequacy would then be a rough index of non-empirical truth. Whether all possible A-theories co-approximate is knowable in advance of knowing which is true by comparing the theories themselves; we needn't know which U-statements are true to know which are kindred.
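The deduction can be put schematically, using the measure M and degree m introduced above (the symbols here are mine; T* names whichever total theory is in fact true, and A is the set of all possible A-theories):

\[
\begin{aligned}
&\text{(1)}\quad T^{*} \in A,\\
&\text{(2)}\quad M(T_i, T_j) \ge m \quad \text{for all } T_i, T_j \in A,\\
&\text{so}\quad M(T_i, T^{*}) \ge m \quad \text{for every } T_i \in A.
\end{aligned}
\]

The conclusion follows merely by instantiating (2) with T_j = T*; nothing beyond ordinary instantiation is needed, which is why, as claimed earlier, the inference is a deduction and not an inference to the best explanation.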

Thus belief in the approximate truth of an A-theory is justified when we have reason to believe in the approximate truth of all A-theories. It is contingent (or at least only inductively ascertainable) whether that condition is satisfied; so ascribing approximate truth to a theory is not trivial. Nor does approximate truth reduce to empirical adequacy, since that is logically distinct from a theory's kindredness in other respects with theories that may be true. To be sure, if all the A-theories exactly co-converge they are exactly true. For they are then just, at worst, different phrasings of the one true theory. But we do not reduce truth to approximate truth. Truth is not empirical adequacy plus the resemblance of all A-theories; that is only detectable truthlikeness. If all A-theories resemble, a theory's (in principle) detectable truthlikeness is a condition of its being true. If they don't, various clusters of A-theories might still contain theories resembling others in that cluster (indeed, any A-theory will have infinitely many resemblers), but not in other clusters, and one cluster might contain the true theory (indeed, at least one must) so that its cluster-mates would be truish. But absent universal resemblance, we could not identify that happy cluster. And it would not then be an epistemic condition of a theory's being a good truth-candidate that it resemble theories in all other clusters. But given the co-resemblance of all A-theories, a theory can only be true if it is at least truthlike. We thus have a realist conception of truth, but an (unsurprisingly and unworrisomely) epistemic conception of warrant for credence.

The important point to recognize in seeing that I have not created an antirealist conception of truth is that I do not define 'true' as meaning 'empirically adequate and resembling all other such theories'. For I allow it as logically possible that the true theory does not so resemble all the A-theories as to vindicate my speculation and argument. Indeed, the claim that all A-theories are approximately true is falsifiable: the more non-convergent any two A-theories, the less nearly true at least one of them is (for one would less approximate the truth were the other true). Thus if totally different theories prove empirically adequate, they can't both be nearly true, and empirical adequacy isn't a reliable indicator of even approximate truth. But if all A-theories converge in some common ways, all are approximately true, and the more co-convergent they are, the better empirical adequacy is as an index of exact truth. Conversely, it confirms the approximate truth of A-theories if theories non-convergent with them are non-A-theories.

But how could it be that all the possible A-theories would actually nonempirically resemble, given that the only thing making them A-theories is their empirical adequacy? If evidence does not restrict membership in the A-set except by empirical criteria, what else could result in its being a set with further non-observational restrictions? Empirical evidence does not test for nonempirical resemblance, so how can it yield it?

Answer: evidence tests theories. Thus it constrains the theories which could be adequate. Further, the connection between a theory and its evidence is not magical or arbitrary. It is deductive. As such, the evidence a theory needs to deduce to be adequate constrains the structure and content of the U-statements in the theory from which the evidence must be deduced, and so also the structures in the world that could be responsible for the production of empirical phenomena. The remaining question is whether evidence constrains theory enough to produce significant resemblances among the U-statements of adequate theories. This may be the same as the question whether the unobservable and its influences on the observable have a definite structure, one at least partly revealed in the structure of the observable phenomena. What I have proposed is a test for this.

Now to connect the explanational power of approximate truth and the NMA with the foregoing conditional argument for theories being approximately true. The NMA says it would be a miracle if theories were getting to be better predictors without their U-statements getting more true. The skeptic says that for all we know it is a miracle. That would be for theories to be predictive independently of the degree to which their claims about the unobservable are true, for it to be a fluke that their U-statements predict. But the above conditions rule this out, for then only theories resembling the true theory are perfectly predictive. Were their predictivity an accident they should not have to resemble the true theory to be predictive. Their having to is just what it is for them to be good predictors not by accident, but because their U-statements resemble the true ones and so are roughly true. This must then be the correct explanation of their being predictive. The more perfect the convergence between theories, the more exactly do they approximate the truth, and the more detailed will their so approximating make the explanation of their predictivity.

Several things should be noted about this argument. Before, we did not know what was making a theory empirically adequate; but now (given the above conditions) we know it is no fluke, but rather the resemblance of its U-statements to those of all other A-theories, and so to the truth. The absence of this resemblance explains the failure of the non-A ones. Since empirical test reduces the range of permitted variation in U-statements, we may ascribe truth to the survivors within the limit of accuracy of their divergence from each other.

Before, we thought that if the approximate truth of a theory is not entailed by its empirical adequacy, the additional ascription is unjustified. We thought that no valid epistemology could take us beyond the observational facts. But now we know one can. For it is empirical whether a given theory is empirically adequate. After that, it is a priori whether all the A-theories are similar. We needn't know if they are true to know this: we only need to know what they assert. Yet a theory's warranted approximate truth is more than its empirical adequacy, due to the additional resemblance condition.

But does this really take us beyond the observational facts the theory entails? If all A-theories resemble in the limit, aren't they then just empirically proved to be exactly right in their resembling claims? Haven't I simply mislabeled these as 'U-statements' when they are really just more O-statements?(7) No. They are not O-statements because it is not observationally decidable which is true; any given one of them asserts more than observation warrants. (Observation only warrants belief in the exact truth of their disjunction--i.e., that there exists something, either an x, a y, or a z... with either properties F, G, or H... in a magnitude not greater than n and not less than m--and in the falseness of the non-A-theories.) But their approximate truth is empirically proved under the conditions I describe. Of course if all A-theories co-resembled only in sharing some so-called U-statement then it would in fact be an O-statement. But not if such theories merely have similar, not identical, U-statements.

But isn't it unjustified to infer the approximate truth of a whole theory from a resemblance involving only some of its parts? Perhaps. Maybe we should only speak of the approximate truth of the resembling U-sentences. On the other hand, the objection implies that only the resembling U-sentences are responsible for a theory's empirical adequacy. But holism means we can't so allot this responsibility. Rather, every sentence in the theory shares in it since we stipulated that if you rip any one out of the theory, the theory won't remain empirically adequate. So maybe truth-likeness holds of the totality of U-statements of each theory; it's just that the measure of truth-likeness for the whole is found in the degree of similarity of such of its parts as resemble corresponding parts in all other such theories. But we might take a harder line: if it makes sense to speak of the approximate truth of whole theories (rather than just of some specific claims in theories), then there must be a resemblance between each U-statement of any given approximately true theory and a corresponding one in each other such theory. Still, even here, parts of theories could be known to be approximately true. On to other issues.

I've said we need 'merely' to compare theories for resemblance to decide whether they are approximately true. However, since infinitely many theories will be in the set of possible A-theories (the A-set), this comparison will be indefinitely complex, inductive rather than enumerative. But no matter. For whether any two theories co-resemble is in principle discoverable, and if many do, induction may warrant believing all do. Recall that van Fraassen permits induction to generalities over confirmables. He only balks at believing statements for which, according to their embedding theories, there can be no direct evidence; but there can be direct evidence for partial convergence.

Perhaps A-theories must independently resemble to generate all the same predictions.(9) It may even be a priori true and logically demonstrable. But I make only the weaker claim that they might resemble and that there could be evidence to that effect. It is by now, however, a tired a priori truism that everything resembles everything in some respects. (See, e.g., Davidson [1978], Hampshire [1950], and Searle [1979].) And since, therefore, there has to be a resemblance between all members of a given set, e.g., the A-set, how can I hold it contingent and non-trivial whether A-theories are approximately true? Well, that truism does not imply that any collection of things will always co-resemble in just any ways, only in some. Who knows whether the U-statements of A-theories will resemble in ways not shared by those of non-A-theories?

But if it were not contingent that theories empirically alike would be otherwise convergent, would there still be independent content to the ascription of approximate truth to a theory, over and above its empirical adequacy? Would the inference from the latter to the former still be ampliative? Sure. To say that a theory is empirically adequate is to say something about its relation to the observable; that it resembles other A-theories, something about its relation to other theories; that it is approximately true, something about both. If all A-theories partially converge, that implies something about their relation to the unobservable. (The chain of reasoning is the original argument, above.) Now even if, somehow, a set of theories being empirically adequate guaranteed their non-empirical resemblance and approximate truth, the inference from some particular theory being empirically adequate to its being approximately true would still be ampliative. The inference would go: (a) this theory is empirically adequate; (b) some other theories are too; (c) all the possible A-theories non-empirically resemble; (d) therefore, this theory resembles all other A-theories; (e) if all A-theories resemble, all are approximately true; (f) therefore this theory is approximately true. (e) is analytic. (c) may be too, or at least a priori (e.g., if it really is true that everything--e.g., the U-statements of A-theories--significantly resembles everything). But even so, (f) is still contingent because it is true only if (a) and (b) are, and they are contingent. And the inference to the approximate truth of A-theories in general from their adequacy would remain ampliative in that even were it guaranteed that A-theories would non-empirically resemble, that involves a separate truth from their empirical truths. So empirical adequacy and approximate truth would still be separate properties.

A more important worry: since everything resembles everything in some ways, any number of empirically inadequate theories will resemble all the A-theories in some ways. So, while A-theories may co-resemble, they will also always resemble non-A-theories. Thus I must call both kinds approximately true if either, and empirical adequacy no longer indexes a special closeness to the truth.(10)

But this mistakes my hypothesis. I claim that a resemblance defined on exactly the A-theories may not be shared by the non-A ones. The adequate ones would thus truth-resemble in a way the inadequate ones wouldn't, and so would be approximately true while the inadequate ones would not. Thus, define a resemblance between the A-theories, e.g., that they all claim either proposition 1, 2, or 3. Of course if some inadequate theory claims proposition 4, the adequate and inadequate theories will resemble in claiming either 1, 2, 3, or 4. But this doesn't show that adequate and inadequate theories will thus always so resemble that the adequacy of the former fails to identify them as especially close to the truth. For that to be true, the inadequate theory would have to satisfy the condition of claiming either 1, 2, or 3 (but not 4). Otherwise, there is a resemblance holding only between A-theories, vindicating my speculation and supporting the claim that the adequate theories approximate the true theory more closely than the inadequate ones. Some inadequate theories may resemble the adequate ones in respects definable on exactly the adequate theories. But no matter. What is important is that there be at least one respect in which only the adequate theories resemble. We are justified in ascribing approximate truth in ever more significant degrees and respects the more of such similarity there is, for it then stands as the distinguishing property and common cause of the predictivity of the theories having it.

5 SOME OBJECTIONS ANTICIPATED

Laudan (in his [1981]) objects as follows to the Scientific Realists: first, all infer from the empirical adequacy of a theory (or from the empirical success of science in general) to its approximate truth (or increasing verisimilitude) as the best explanation of its adequacy. But this fallaciously affirms the consequent; that a theory entails the observational truths doesn't mean it is true or even approximately true. Second, they generally do not define approximate truth sufficiently for us to tell whether it entails (and so explains) a theory's being empirically adequate. Third, so far as they do define it, e.g., as implying that such theories refer, we have counter-examples to SR in theories which, though we think they referred, were not empirically adequate, and also in A-theories which did not refer. So far as we are told what is meant by 'approximately true', then, it seems no inferences may be drawn either from or to the empirical adequacy of theories.

How do I escape these objections? First, I do not infer the approximate truth of a theory from its empirical adequacy, but from the independent similitude of its U-statements to those of all other A-theories and from their dissimilarity to those of the inadequate ones. The inference Laudan criticizes is fallacious but this one is valid. Second, in my terms, approximately true theories must be empirically adequate; for a theory is approximately true just in case, first, it is empirically adequate, and second, it is so because its U-statements resemble those of all other A-theories. Third, I agree that a theory's empirical adequacy does not entail that it refers and that its referring does not entail its empirical adequacy. But if all possible A-theories in the limit resemble in their putative references, then since theories which don't aren't empirically adequate, the ones which are adequate are so at least in part because of their putative referents. Thus there must then exist objects roughly like those purportedly referred to in the surviving theories. Moreover if we have similitude exclusively among A-theories in the limit then we know a theory consisting entirely of approximately true U-statements is empirically successful, for we work from the latter to the former.

Finally, Laudan concedes that approximate truth might be clarified so as to entail the empirical adequacy of a theory, as on, say, Niiniluoto's account (perhaps with some amendments). He only objects that this in itself does not tell us how to identify the true or approximately true theories, only how to specify the extent to which a given theory approximates one we take for true. But that theory might well be false. However, if Niiniluoto has said what it is for a theory to be like a theory presumed true, we can use his account of similitude to compare A-theories in the limit and to see if they all co-converge. If so, since the true theory must be among them, all must be at least approximately true.(11)

On Laudan's counter-examples: referring theories that are not predictive and predictive theories that are not referring falsify neither the thesis that science is increasing in verisimilitude, nor the thesis that all successful theories are approximately true. Genuine counter-examples would be two non-resembling theories, equally predictive of the same phenomena, or two resembling theories in the same problem area, one predictive, the other not. I don't know if science's history contains such examples; there is historical work to be done applying (say) Niiniluoto's measures to theories to search for examples bearing on the conjecture. On to other objections.

Doesn't Craig's Theorem (CT) refute me? For any A-theory, CT shows we can generate an empirically equivalent one consisting of a statement asserting what the theoretical postulates are and an open set of statements asserting the observational consequences of the former plus initial conditions. (CT supposedly shows that such a set is isolable from the total set of consequences.) The A-set might thus contain a 'Craigified' theory, Tc, making no claims about unobservables. If I deny Tc membership in the A-set, then it is not logically exhaustive of the possibly true total theories, with the true theory necessarily contained in it. For Tc might be the total truth. But if I admit Tc to guarantee membership of the true theory in the A-set, Tc will only have its empirical adequacy in common with the other theories (since it makes only empirical claims). It will then be an a priori exception to the contingency of all theories in the A-set partially converging in their U-statements. Ascription of approximate truth is then never justified.(12)

But making no claims about the domain of the unobservable, a Craigified theory does not compete with any of the theories in the original A-set, for it is logically compatible with all of them, not an alternative to them. Put colloquially, the Craigifier says, 'well, I don't know about the rest, but I think the empirical consequences of the theories in the initial A-set are all true'. Well, all those theories imply that much. Thus Craigified theories are not so much different theories from their originals as expressors of different attitudes towards them; to affirm one is just to affirm the empirical part of its original. Since no Craigified theory logically competes with any of the theories in the A-set, it may legitimately exclude them without failing logically to exhaust the possibly true theories.

But doesn't CT convert all a theory's claims about the unobservable, positive or negative, into 'functional danglers' vis-a-vis the theory's predictive adequacy, so that the empirical facts no longer tell us whether the theory's U-statements are approximately true? No. A functional dangler statement can be excised from a theory without affecting its predictivity; a functionally essential statement cannot. Now consider two possible interpretations of CT: on the first, a Craigified theory, Tc, is just the total theory expressed in a sub-vocabulary of the original. But Tc still makes U-statements if T did, only by translating them into statements with an empirical vocabulary and deploying negation (van Fraassen [1980], pp. 53-6). Thus we still have U-statements, only differently expressed; they have not become functional danglers. On the second interpretation, Tc omits all putative reference to the unobservable, even 'hamstrung' (van Fraassen's word) reference in sentences using negation and empirical predicates. But the theorem still doesn't make the U-statements of T functional danglers (if, as I stipulated above, they weren't such before 'Craigification') because it 'decides' which O-statements to make by deducing them from T's U-statements plus descriptions of observable initial conditions. Thus it must use those statements in affirming its empirical consequences. Tc is only empirically adequate if T is.

But though a theory's U-statements could be methodologically indispensable to the generating of predictions, they can be false and yet its O-predictions be true. Thus even though as a matter of socio-psychological accident we can't crank out predictions without postulating about the unobservable, couldn't the predictions thus generated be true independently of whether those postulates were true or approximately true? So aren't a theory's U-statements functionally irrelevant to the truths of its empirical predictions? Sure, but no matter; I am not inferring from the predictivity of a theory to the approximate truth of its U-statements, but rather from the partial convergence of all A-theories. Some collections of U-statements in some theories will still entail true predictions, others not. If all A-theories partially converge we can still call them approximately true as before.

But a Craigified theory is epistemically 'safer' than normal theories while yet doing all their predictive work. And we can create such a safe theory from any A-theory. For any A-theory, T, making claims about the unobservable, a Craigified version of that theory, Tc, simply consists in the following: 'The empirical consequences of T are true.' Tc makes no claims about the unobservable, yet will be empirically adequate if T is. So shouldn't we refrain from the further epistemic risk of ascribing approximate truth to pre-Craigified theories?

No. Even if CT produces an epistemically 'safe' theory from any A-theory, that is no argument against the additional approximate truth of the U-statements of theories in my original logically exhaustive A-set. For Craig's theorem does not produce a theory which denies all positive U-statements, only one which doesn't affirm them. That doesn't preclude the true theory's making positive U-statements. One can still justify holding such theories true to within the limit of their differences from each other. Indeed, ascribing approximate truth to theory T under the conditions I impose is no less epistemically safe than ascribing truth simpliciter to Tc, since it asserts nothing non-empirical. It merely asserts what the Craigified theories assert--the empirical consequences of the originals--plus the logical consequences of the presumptively empirically vindicated (in the limit) claim that all A-theories co-resemble. (Of course it is epistemically risky to hazard that all A-theories will independently resemble. It is not systematically risky, however: rather, it is confirmable to an arbitrary degree.)(13)

But look, can't one just use CT to extract the empirical consequences of an A-theory, and then just write a new theory with those consequences, but with U-statements unsimilar to those of the other A-theories? If so, a little creative theory writing can always guarantee the non-co-convergence of the A-theories' U-statements.(14) This possibility is abetted by W. V. O. Quine, Wilfrid Sellars, Richard Rorty, Thomas Kuhn, Paul Feyerabend, N. R. Hanson, David Bloor, Barry Barnes, et al., all of whom claim theoretical truth is underdetermined by objective evidence, and some of whom even claim that the evidence underdetermines the observational truths and that logic is soft. Perhaps then with sufficient ingenuity one can tell any story whatever and make it fit any evidence.

All those thinkers, however, suppose that the world and the preconditions of meaningful discourse somehow and somewhat constrain the form of possible adequate theories. All grant some external constraint on what one can take as the totality of evidence, and that once you take some sentences to report empirical facts, you are limited in what else you can credibly say. Given some constraints on the sentences one can reasonably accept, there are limits on the theories one can write up. There may remain an immense flexibility. But it is open whether it would permit completely non-resembling sets of U- and O-statements all fully adequate to the evidence. So it is contingent (or at least open) whether, granting underdetermination, there will be sufficient slack in the relation between evidence and putatively evidence-reporting sentences, and between those sentences and those of higher theory, to permit either theories resembling only in the ways in which they are empirically adequate, or theories everywhere different. And even if theories differ on the data, we may still have arguments for the approximate truth of theories relative to stipulations of the data. The kinds of constraints on theories these writers envisage may not be congenial to realism about truth, but my argument should work even for anti-realism if it sees some constraint on theory.

Isn't this conception of approximate truth methodologically useless? For it seems we can only identify a theory as approximately true after all scientific investigation is complete.(15) But this might have been said of truth simpliciter, and of the kind of truth found in some more standard forms of SR. If the claims about truth found in yet other forms of SR seemed methodologically more useful, that was only because truth was there (anti-realistically) defined in terms of its methodological utility.

Still, we can tie our speculations into present concerns. We can have inductive grounds for believing theories are (ever more closely) approximative of the truth prior to the limit of inquiry. For if empirically adequate total theories are sure to be approximately true if they co-resemble, then as theories increase in empirical adequacy they can be held to increase in approximate truth and to exhibit empirical progress because of that increase, provided that the only theories able empirically to improve on any given theory co-resemble (in spite of our best efforts to make them not) and provided that all theories identically adequate to the same data equally resemble. The rationale is the same as for total ideal theories. So were there some value to having that knowledge sooner than later, we could justify guesses on it sooner. Further, it needn't be useful to know that a theory is approximately true for one to be able to know that it is; it might just be a fact which we are warranted in believing.

Finally (a point of which I am less sure), to have solid inductive evidence that a theory will remain predictive is to have evidence that its predictivity is no mere fluke, and this requires that one have evidence that only theories like it will be predictive, i.e., that it is approximately true, that the phenomena in the world responsible for its observable regularities are roughly as the theory says. So even van Fraassen's pragmatic attitude towards theories is logically dependent on the concept and ascription of approximate truth.

But what should we believe now? We need to know a lot to decide between total skepticism and credence in some degree of approximate truth for U-statements. Well, I have no conclusive reasons to expect significant partial convergence in A-theories, but nor has any skeptic conclusive reasons to expect divergence. We must investigate.

6 FINAL REFLECTIONS

Perhaps it will be objected that everyone might have agreed that of course if theories are partially convergent in the limit they are approximately true. But van Fraassen assumes UDT, and UDT is just the thesis that theories will not converge, partially or otherwise. All van Fraassen ever meant to claim is that to the extent that UDT holds, ascription of truth is unwarranted. I think, however, that this is too charitable to the defenders and presumers of UDT. What they thought is that if a claim's truth is not empirically decidable, we should not believe it true. But I say that even if a claim's truth is not empirically decidable, its approximate truth may be strongly confirmable, namely, by whether it partially converges on the claims of all empirically adequate possible theories. I don't think the literature really considers the possibility of partial convergence in my sense. And since the view that the truth is undecidable in the limit is consistent with the view that approximate truth is decidable, arguments for the undecidability of truth in the limit are not necessarily arguments for the undecidability of approximate truth. Moreover, no one, to my knowledge, has offered an argument (or not a good one, at any rate) to the effect that there will be no degree of convergence among the U-statements of A-theories.

Even granting UDT, then, there are conditions under which belief in the approximate truth of theories can be justified. If they are satisfied, SR's theory of science's success goes through. It then explains the success of theories, which van Fraassen cannot. And we are then logically forced to believe the SR's explanation for theory success. Likewise, epistemic realism towards theories is then vindicated. The skeptic cannot demur, since the credence I advocate in theories does not exceed what can be justified empirically or logically. One should only believe that theories are true to within the limits of empirical test, and only so far as all A-theories can be empirically discovered or logically shown to co-resemble. But that is an attitude of relative credence in theories, not just of pragmatic acceptance.(16)

Department of Philosophy, Dalhousie University

REFERENCES

BOYD, RICHARD [a]: Realism and Scientific Epistemology. Cambridge: Cambridge University Press, forthcoming.

BOYD, RICHARD [1973]: 'Realism, Underdetermination, and a Causal Theory of Evidence', Nous, 7, pp. 1-12.

CHURCHLAND, PAUL [1979]: Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.

DAVIDSON, DONALD [1978]: 'What Metaphors Mean', in S. Sacks (ed.) [1978], pp. 29-46.

HAMPSHIRE, STUART [1950]: 'Skepticism and Meaning', Philosophy, 25, pp. 235-46.

HARMAN, GILBERT [1965]: 'The Inference to the Best Explanation', Philosophical Review, 74, pp. 88-95.

HARMAN, GILBERT [1968]: 'Knowledge, Inference, and Explanation', American Philosophical Quarterly, 5, pp. 164-73.

LAUDAN, LARRY [1981]: 'A Confutation of Convergent Realism', Philosophy of Science, 48, pp. 19-49.

MATHESON, CARL [1986]: 'Why the No Miracles Argument Fails'. Unpublished manuscript.

NIINILUOTO, ILKKA [1987]: Truthlikeness. Boston: Reidel.

ORTONY, ANDREW (ed.) [1979]: Metaphor and Thought. Cambridge: Cambridge University Press.

PEARCE, DAVID [1987]: 'Critical Realism in Progress: Reflections on Ilkka Niiniluoto's Philosophy of Science', Erkenntnis, 27, pp. 147-71.

PUTNAM, HILARY [1975]: Mathematics, Matter and Method. Vol. I. Cambridge: Cambridge University Press.

SACKS, SHELDON (ed.) [1978]: On Metaphor. Chicago: University of Chicago Press.

SMART, J.J.C. [1968]: Between Science and Philosophy. New York: Random House.

SEARLE, JOHN R. [1979]: 'Metaphor', in A. Ortony (ed.) [1979].

VAN FRAASSEN, BAS C. [1980]: The Scientific Image. Oxford: Clarendon Press.

1 See van Fraassen [1980], Chs. 1, 2, 3.4, Sections 1 and 4, and Ch. 7. He prefers to say that theories have models, some of the substructures of which are observable according to the theory itself, others unobservable. For me, a U-statement directly describes the latter, an O-statement, the former. I invoke no a prioristic or syntactic differentia.

2 While defending a version of SR and of the NMA, then, I don't accept all of the claims about theory virtues made by various SRs, especially not those in, say, Churchland [1979]. I disdain relativistic conceptions of truth in a theory; I eschew such methods of fixing belief as trying to maximize such global properties of theories as simplicity, convenience, unity, fecundity, beauty, or explanational adequacy. They have no obvious connection with what it is for a theory to be true. Nature need not be economic, beautiful, simple; it need not satisfy demands for explanation conceived a priori as having certain ideal forms. Moreover, logically incompatible theories could equally satisfy all of these desiderata. But only one total theory can be the truth, so there is no logical connection between a theory's having these properties and its U-statements being true. These conditions, then, neither entail nor probabilify a theory's truth; so neither do they justify belief in it.

3 My thanks to Carl Matheson and Terry Tomkow for pressing me on this point.

4 The reader for this Journal suggests that we can state this argument formally using Niiniluoto's two alternative ways of appraising truthlikeness on empirical evidence, expected truthlikeness ver and probable approximate truth PA: if C_1, ..., C_k are all the complete theories compatible with evidence e, and if these theories resemble each other, so that the truthlikeness of C_i relative to C_j is high (Tr(C_i, C_j) >= 1 - epsilon for all i, j), then for all i [the concluding mathematical expression is omitted in the source; a plausible reconstruction follows below].
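One natural way to complete the omitted expression, assuming Niiniluoto's standard definitions of expected verisimilitude (ver) and probable approximate truth (PA), and assuming the probability on e is distributed wholly over C_1, ..., C_k (this is my reconstruction, not a quotation), is: for all i,

\[
\mathrm{ver}(C_i \mid e) \;=\; \sum_{j=1}^{k} P(C_j \mid e)\,\mathrm{Tr}(C_i, C_j) \;\ge\; 1-\epsilon
\qquad\text{and}\qquad
\mathrm{PA}_{\epsilon}(C_i \mid e) \;=\; \sum_{\{\,j\,:\,\mathrm{Tr}(C_i, C_j)\,\ge\,1-\epsilon\,\}} P(C_j \mid e) \;=\; 1 .
\]

That is, on the hypothesis of co-resemblance each complete theory compatible with the evidence has expected truthlikeness of at least 1 - epsilon and is certain, on that evidence, to be approximately true to within epsilon.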

5 Thanks to Stephen Monk for this worry.

6 My thanks to Julia Colterjohn for discussion on this point.

7 Thanks to Stephen Monk and Terry Tomkow for this objection.

8 Thanks to Stephen Monk for this worry.

9 My thanks to Tom Vinci for this suggestion, following ones made by various Scientific Realists.

10 My thanks to Terry Tomkow for this worry.

11 See footnote 4.

12 My thanks to Thomas Vinci for this objection, and to Robert Bright, Julia Colterjohn, and Neera Badhwar for discussion of points in my reply.

13 It is standardly said of Craigified versions of theories that they are uneconomic, parasitic on the theories from which they are generated, involve converting a theory into a practically unmanageable, open-ended collection of infinitely many O-statement axioms, cannot generate novel predictions, and are not suggestive of further lines of inquiry. Further, Craig's Theorem alone is no longer thought to separate empirical from non-empirical claims in theories a priori, for that is no longer thought to be an a priori distinction of syntax or vocabulary. Indeed, one needs T to say which sub-vocabulary of T happens to refer to observable substructures of models of T. And even if one could by such a theorem isolate the empirical vocabulary of T, one could still defeat the whole enterprise by using those terms together with negation to make claims about the unobservable, e.g., 'There exists an object of which no empirical predicates hold at t'. (See van Fraassen [1980], pp. 53-6.) My objections to thinking that Craigified theories are serious alternatives to ordinary theories have been somewhat different.

14 My thanks to Robert Martin for this objection.

15 Thanks to Stephen Monk for this worry.

16 My thanks to Carl Matheson, whose [1986] manuscript, criticizing the so-called No Miracles Argument for the truth of science, helped structure the present work, and who wrote comments on earlier versions. My thanks also to Neera Badhwar, William Barthelemy, Richmond Campbell, Julia Colterjohn, Randall Keen, Victoria McGeer, Bob Martin, and Kadri Vihvelin for discussion of this material, and especially to Robert Bright, Danny Goldstick, Stephen Monk, Terry Tomkow, Tom Vinci, and an anonymous referee for this Journal, all of whom wrote me comments. I am grateful to Dalhousie University for the Killam Post-Doctoral Fellowship which funded the initial work on this paper.