
Induction and indefinite extensibility: the Gödel sentence is true, but did someone change the subject?

Over the last few decades Michael Dummett developed a rich program for assessing logic and the meaning of the terms of a language. He is also a major exponent of Frege's version of logicism in the philosophy of mathematics. Over the last decade, Neil Tennant developed an extensive version of logicism in Dummettian terms, and Dummett influenced other contemporary logicists such as Crispin Wright and Bob Hale. The purpose of this paper is to explore the prospects for Fregean logicism within a broadly Dummettian framework. The conclusions are mostly negative: Dummett's views on analyticity and the logical/non-logical boundary leave little room for logicism. Dummett's considerations concerning manifestation and separability lead to a conservative extension requirement: if a sentence S is logically true, then there is a proof of S which uses only the introduction and elimination rules of the logical terms that occur in S. If basic arithmetic propositions are logically true--as the logicist contends--then there is tension between this conservation requirement and the ontological commitments of arithmetic. It follows from Dummett's manifestation requirements that if a sentence S is composed entirely of logical terminology, then there is a formal deductive system D such that S is analytic, or logically true, if and only if S is a theorem of D. There is a deep conflict between this result and the essential incompleteness, or as Dummett puts it, the indefinite extensibility, of arithmetic truth.

When an expression, including a logical constant, is introduced into the language, the rules of its use should determine its meaning, but its introduction should not be allowed to affect the meaning of sentences already in the language. If, by its means, it becomes possible to derive certain such sentences from other such sentences, then either their meanings have changed, or those meanings were not, after all, fully determined by the use made of them. (Dummett 1991b, p. 220)

Dummett ... suggests ... that the requirement of harmony ... can be made more precise by saying that it is equivalent to requiring that the addition of the expression to a language should not license a use of the old vocabulary which was not already licensed in the original language. This can hardly be correct, however, because from Gödel's incompleteness theorem we know that the addition to arithmetic of higher order concepts may lead to an enriched system that is not a conservative extension of the original one in spite of the fact that some of these concepts are governed by rules that must be said to satisfy the requirement of harmony. (Prawitz 1994, p. 374)

1. Manifestation, logicism, logic

Dating at least from "The philosophical basis of intuitionistic logic" (1973), Michael Dummett provides a rich program for assessing the meaning of the terms of a language. The program provides a handle on the notion of an analytic truth, a proposition true in virtue of the meaning of its terms. This is despite the fact that, at least in North America, many philosophers regard the analytic/synthetic distinction as untenable, due to deep unclarities in notions like "meaning". The major influence, of course, is W. V. O. Quine.(1)

In addition to the seminal work in philosophy of language and philosophy of logic, Dummett is a major exponent of Frege's views in the philosophy of mathematics (see, for example, Dummett 1991a). Frege is noted for his articulation and defense of logicism, the thesis that mathematics--or at least arithmetic--is part of logic. Logicism usually involves the thesis:

Terms:
The basic terms of arithmetic--"natural number", "zero", "successor", "addition", "multiplication", etc.--are logical terms or are definable from logical terms alone;

and/or one of the following:

All Truths:
Every true sentence composed entirely of arithmetic and (other) logical terminology is a logical truth.

Basic Truths:
A core subset of the arithmetic truths contains only logical truths.

If we grant that all logical truths are analytic, then it follows from (All Truths) that every arithmetic truth is analytic, and it follows from (Basic Truths) that the core truths of arithmetic are analytic. Thus, the main logicist contention is that either all the truths of arithmetic or a core set of arithmetic truths are true in virtue of the meaning of certain logical terms alone.

Of course, the tenability of (Terms) depends on the location of the logical/non-logical boundary, and the tenability of (All Truths) and (Basic Truths) depends on what a logical truth is. In both cases, we query the limits of logic. With hindsight, we know that if the set-theoretic membership relation is logical and if the axioms of (say) Zermelo set theory are logically true, then (Terms) and at least (Basic Truths) hold, and logicism is established. However, if set theory counts as logic, then one can wonder what the point of logicism is and why we should care about the logical/ non-logical boundary.

Many, perhaps most, contemporary philosophers restrict "logic" to first-order logic, in which case the only logical terms are the signs for the standard propositional connectives, the first-order quantifiers, and perhaps identity. The only logical truths are the theorems of standard, first-order predicate calculus. From this perspective, logicism is a non-starter, since one cannot capture much arithmetic with a first-order language, especially if non-logical terminology is not employed (see Shapiro 1991, Ch. 5). Frege's own development invoked higher-order logic, and contemporary logicists (or "neo-logicists") like Crispin Wright (e.g. 1983) and Bob Hale (1987) follow suit. These programs have a chance to fulfil their own aims only if second-order logic is in fact logic, which contravenes Quine's claim (1986, Ch. 5) that second-order logic is set theory in disguise, a "wolf in sheep's clothing".(2)

For present purposes, we assume that full, standard second-order logic and even full type theory (as in Martin-Löf 1984, for example) is logic and so is available to a prospective logicist. In particular, we presuppose a language with a "ground-type" of variables ranging over ordinary objects like cars, cats, and continents, as well as variables ranging over properties (or sets) and relations on ordinary objects, and possibly also variables ranging over properties of properties, etc. Concerning expressive resources, full higher-order logic is reducible to second-order logic (see Hintikka 1955 and Ch. 6, §6.2 of Shapiro 1991).

No matter. Our issues lie elsewhere. Over the last decade, Neil Tennant (1987, 1997b) has developed an extensive, Dummettian program in defense of anti-realism and logic revision. Recently, he defended a version of logicism in straightforward Dummettian terms (Tennant 1997a; see also Tennant 1997b, Secs. 9.5-9.7). Dummett has also influenced other contemporary logicists, like Wright and Hale. The purpose of this paper is to explore the prospects for logicism within a broadly Dummettian framework. My conclusions are mostly negative. Dummett's views on analyticity and the logical/non-logical boundary leave little room for logicism. Something has got to give.

Dummett's views on logic revision and on logical terminology flow from his general orientation to language. The meaning of an expression is what a competent user of the expression knows. One who understands a sentence must grasp its meaning, and one who learns a sentence thereby learns its meaning. As Dummett (1973) put it, "a model of meaning is a model of understanding". Since language serves the purpose of public communication, the meaning of a statement is determined by its use:

... if two individuals agree completely about the use to be made of [a] statement, then they agree about its meaning. The reason is that the meaning of a statement consists solely in its role as an instrument of communication between individuals ... An individual cannot communicate what he cannot be observed to communicate: if an individual associated with a mathematical symbol or formula some mental content, where the association did not lie in the use he made of the symbol or formula, then he could not convey that content by means of the symbol or formula, for his audience would be unaware of the association and would have no means of becoming aware of it.

To suppose that there is an ingredient of meaning which transcends the use that is made of that which carries the meaning is to suppose that someone might have learned all that is directly taught when the language of a mathematical theory is taught to him, and might then behave in every way like someone who understood the language, and yet not actually understand it, or understand it only incorrectly.

It follows that a competent language user can show that he understands the meaning of the terms he uses. That is, the grasp of meaning can be made fully manifest in behavior. This is Dummett's manifestation requirement. In particular, the meaning of the logical terms must be manifest in inferential behavior.

Dummett combines this with an anti-holist, compositional approach to semantics. Accordingly, the grasp of the meaning of any primitive logical term should be local--the understanding need not depend on the grasp of the meanings of any other terms. A subject should be able to manifest his understanding of, say, "and" without presupposing that he understands "or", "for all", "natural number", "mountain range", etc. The behavior by which one manifests grasp of a logical term should isolate the role that term plays in inferential practice. As Tennant (1997b, p. 315) put it:

... the analytic project must take the operators one-by-one. The basic rules that determine logical competence must specify the unique contribution that each operator can make to the meanings of complex sentences in which it occurs, and derivatively, to the validity of arguments in which such sentences occur. This is the requirement of separability.

It follows from separability that one would be able to master various fragments of the language in isolation, or one at a time. It should not matter in what order one learns ... the logical operators. It should not matter if indeed some operators are not yet within one's grasp. All that matters is that one's grasp of any operator should be total simply on the basis of schematic rules governing instances involving it.

Dummett (1991b, p. 251) wrote: "Gerhard Gentzen, ... by inventing both natural deduction and the sequent calculus, first taught us how logic should be formalized." Accordingly, the Dummettian argues that the introduction and elimination rules of logical terminology are so basic as to be meaning-constituting. In other words, for each logical term, the introduction and elimination rules constitute a complete analysis of the meaning of that term.(3) The Dummettian holds that a necessary and sufficient requirement for a term to be logical is that there be introduction and elimination rules for the term which are harmonious with each other, conservative, and fully constitute the meaning of the term (see Tennant 1997b, Ch. 10). Hacking (1979) proposes a similar criterion, in the context of classical logic. It will suffice for present purposes that the condition be necessary: for each logical term, there are introduction and elimination rules that fully constitute the meaning of the term, and one's grasp of these rules can be made fully manifest.

Two corollaries to these Dummettian theses are crucial for present purposes (Tennant 1997b, p. 318). The first is the sub-formula requirement: if an inference from a set Γ to a sentence Φ is deductively valid, then there is a proof of Φ from Γ in which every line is either a sub-sentence of Φ or a sub-sentence of a member of Γ. A fortiori, if Φ is a logical truth, then there should be a proof of Φ (with no undischarged premises) in which every line is a sub-formula of Φ.

The second corollary is the conservative extension requirement: if an inference from a set Γ to a sentence Φ is deductively valid, then there is a proof of Φ from Γ which involves only the introduction and elimination rules for the logical terminology that occurs in the conclusion Φ and the members of Γ. Thus, if Φ is a logical truth, then there should be a proof of Φ from the introduction and elimination rules of the logical terms that occur in Φ.

The conservative extension requirement is a sharpening of the thesis that an analytic sentence is true on the basis of the meaning of its terms. Since, according to our Dummettian, the meaning of a logical term is constituted by its introduction and elimination rules, the introduction and elimination rules of the terms in Φ should suffice to establish Φ. The requirement entails that the result of adding a logical operator to a language should be a conservative extension of the original language: in the expanded system, one cannot derive a sentence that lacks the new term unless that sentence was derivable in the old system. The passage from Dummett (1991b, p. 220) at the top of this paper puts it clearly.

The Dummettian argument against classical logic proceeds in these terms. The sub-formula and conservative extension requirements are violated by classical logical truths like (Φ → Ψ) ∨ (Ψ → Φ) and (((Φ → Ψ) → Φ) → Φ), whose natural deduction proofs usually involve a detour through negation. Moreover, the law of excluded middle (Φ ∨ ¬Φ) sins against separability. Tennant (1997b, p. 317) calls excluded middle an "unstable" and "shoddy marriage of convenience".
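For illustration, here is a minimal Lean 4 sketch (my rendering, not anything from Dummett or Tennant) of the detour at issue: the classical truth (Φ → Ψ) ∨ (Ψ → Φ) is obtained by invoking excluded middle, rather than from the introduction and elimination rules for → and ∨ alone.

```lean
-- A hedged sketch: the classical tautology is proved via Classical.em,
-- the law of excluded middle, not from the →/∨ rules by themselves.
example (φ ψ : Prop) : (φ → ψ) ∨ (ψ → φ) :=
  (Classical.em φ).elim
    (fun hφ => Or.inr (fun _ => hφ))               -- if φ, then ψ → φ
    (fun hnφ => Or.inl (fun hφ => absurd hφ hnφ))  -- if ¬φ, then φ → ψ vacuously
```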

2. Logic and what there is

The stage is set for our first conflict with logicism. In the jargon of contemporary philosophy, arithmetic has ontological presuppositions--the natural numbers. If someone accepts the logicist thesis (Basic Truths) and takes the language of arithmetic at face value, then she will hold that the existence of numbers is a logical truth, and is thus analytic. Frege certainly held this view, as does Tennant (1997b, pp. 299-302):

[Natural numbers] are necessary because they must be in any world that admits of conceptualization and of thought about it ... That is to say, they must be in any world.

The logically necessary existents would be those to which commitment is incurred by discourse obeying conceptual controls linking it with ordinary discourse about ... ordinary things ... Commitment to numbers is immanent in any conceptual scheme allowing for the discrimination of particulars.

See also Tennant (1997a).

Tennant (1997b, pp. 303-4) notes that this runs against a widely-held view that nothing exists as a matter of meaning or logic alone. In other words, no non-trivial existential proposition is analytic.(4) He argues that this "dogma of existence" should be rejected:

Having existential commitments does not entail syntheticity ... If, by the use of certain expressions in one's language, one commits oneself to (better: accommodates oneself to, or acknowledges) the existence of certain entities that exist necessarily anyway, one is not going beyond the meanings of the expressions involved if one says, by means of those expressions, that those entities exist ... If we know that the entity e exists necessarily, and that it is to be the function of an expression "E" to pick out e, then the statement "E exists" will be true solely in virtue of its meaning--that is, it will be analytic.

Tennant holds that one example of such an "E" is the numeral for the number zero. He argues that "0 exists" is an analytic truth.

Be this as it may, there is serious tension between logicism and the foregoing Dummettian doctrines. Suppose that the predicate N for being a natural number, as well as the terms 0 for zero and s for the successor function, are logical terms. Then according to the Dummettian, there are introduction and elimination rules for these terms that fully constitute their meaning, or else these items are definable from other terms which themselves have introduction and elimination rules that fully constitute their meaning. Presumably, it follows from such rules, and the rules for negation and identity, that 0 ≠ s0. It seems clear that the sentence 0 ≠ s0 should be a basic truth of arithmetic, and so a theorem in any logicist system. Thus, according to our logicist, this simple arithmetic sentence is analytic and logically true.

Most logicists and neo-logicists, including Frege, Wright, Hale, and Tennant, hold that the natural numbers are in the ground-type and thus in the range of the first-order variables. That is, zero and one are in the same variable-range as cars, cats, and continents. We consider alternatives to this in due course, but in the present framework, the sentence (0 ≠ s0) entails the following, by existential introduction:

Two:
∃x∃y(x ≠ y).

The sentence (Two) does not have any arithmetic terminology. Although it is a statement that there are at least two objects, (Two) is composed only of standard logical terminology: negation, identity, and first-order existential quantifiers. Since (Two) is a logical consequence of 0 ≠ s0, according to our logicist (Two) is itself analytic and logically true, provided that analyticity and logical truth are closed under logical consequence, or at least the introduction rule for the first-order existential quantifier. Thus, by the sub-formula requirement and the conservative extension requirement, there should be a proof of (Two) in which each line is a sub-formula of (Two) and which uses only the introduction and elimination rules for negation, identity, and the first-order existential quantifier. Clearly, however, there is no such proof and (Two) does not follow from these rules alone. For example, there are structures that satisfy the relevant introduction and elimination rules and yet have only one element. In such models, (Two) is false.
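To fix ideas, here is a minimal Lean 4 sketch of the inference just described (the rendering and names are mine): a generic type α stands in for the ground-type, and two applications of existential introduction take the premise 0 ≠ s0 to (Two), leaving no arithmetic vocabulary in the conclusion.

```lean
-- A hedged sketch: ∃-introduction applied twice, witnessed by zero and s zero.
example {α : Type} (zero : α) (s : α → α) (h : zero ≠ s zero) :
    ∃ x y : α, x ≠ y :=
  ⟨zero, s zero, h⟩
```

The point of the countermodel in the text is that nothing in the rules for negation, identity, and ∃ forces the existence of the two witnesses used here; they come from the arithmetic premise.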

A logicist might retort that a structure whose domain has only one element is not a logically possible model, since it does not contain the necessarily existing natural numbers. Thus, such a model fails to show that (Two) is not a logical truth. Perhaps, but the present point does not directly concern the logical status of (Two). The issue is whether (Two) is deducible on the basis of the introduction and elimination rules for negation, identity, and the first-order existential quantifier. Even if a one-element model is not a logically possible universe (so to speak), such a model does satisfy the relevant introduction and elimination rules. The fact that (Two) is false in such models does show that (Two) is not deducible from those rules alone, and so we have a violation of separability.

The same argument shows that even if "natural number", "zero" and "successor" are logical terms, they are not definable using negation, identity, and the first-order existential quantifier alone. If they were, then the natural numbers would be definable in finite models. The requirement of separability entails that a subject should be able to master the meaning of the logical terms one at a time, in any order, and that this meaning should suffice to determine the truth of any analytic truth involving just those terms. Just as a subject should be able to manifest her understanding of "and" without presupposing that she understands "or", she should be able to manifest her understanding of negation, identity, and first-order existential quantifiers without yet grasping "natural number", "zero", and "successor". Since this primitive understanding of the rules for negation, identity, and first-order existential quantifiers is, by itself, not sufficient to determine that (Two) is true, we see that (Two) is not an analytic truth after all--logicism or no logicism.

This problem affects anybody who accepts the Dummettian framework for analyticity and logical truth and also holds that there are (at least two) ground-type objects whose existence can be established by logic alone. The latter, of course, is held by any Fregean logicist, Tennant in particular. On such views, there are sentences like (Two) which (1) consist of logical terminology alone, (2) follow from a logical truth, and (3) violate separability. If negation, identity, and the first-order existential quantifier are indeed logical terms, and if the analytic (or logical) truths are closed under consequence, then no analytic (or logical) truth can entail (Two). This is the same as saying that no analytic truth can have existential consequences; no objects exist as a matter of logic alone. In short, Dummett's doctrine of separability entails what Tennant (1997b, p. 303) calls the "dogma of existence", the thesis that analytic truths do not have existential consequences.

I already noted that logicism does not have a chance if the logic is first-order. The availability of other logical resources, such as higher-order variables and quantifiers, does not affect the present conclusion. According to our Dummettian, this extended terminology should also be separable from negation, identity, and the first-order quantifier, and so our full logical system should satisfy the dogma of existence. There are no nontrivial existential consequences concerning the ground-type. Virtually every contemporary treatment of higher-order logic satisfies this "dogma" (up to the point of note 4 above). The issue right now is not the availability of powerful logical resources, but the assumption that the natural numbers are in the ground-type.

There are several ways to relieve the tension, and I leave it to the reader to decide which of them does the least damage to the underlying motivations and arguments. One obvious and crude resolution would be to give up on (Fregean) logicism and to reject the very prospect of objects whose existence can be established by logic alone. This is to accept the dogma of existence. The reason that logicism looks so implausible to contemporary students is that they encounter it only after they have learned their logic using systems with no object-language, ground-type ontological presuppositions. Also, we have seen how the Dummettian framework yields the dogma of existence.

To continue to wield a blunt knife, another way out of the dilemma is to reject the entire Dummettian framework, and the ideal of separability in particular. This would undercut Dummett's argument for logic revision as well. This blunt "fix" is also common. The central notion of analyticity is widely regarded as suspect. If that notion is rejected as incoherent or useless, as it is in much of North America, the Dummettian project goes with it.

2.1. Multiple sorts, special types

Someone who wants to keep both logicism and the Dummettian framework on the table a while longer needs a more subtle way to resolve the present dilemma. One route is to propose a modification of the status--or nature--of either the ground-type identity relation or the ground-type (i.e. first-order) existential quantifier. The idea is to maintain that either there is no unrestricted (logical) identity relation at the ground-type or there is no unrestricted first-order existential quantifier. Geach (1967, 1968) proposes something like the former, in a different context. A bare statement "a is the same as b" invites the retort: "Same what?". Two different copies can be the same book; two different chunks of molecules can be the same person (at different times); two different masses of water can be the same river. Kraut (1980) develops a similar idea, aiming towards an ontological relativity. In the present context of Dummettian logicism, the idea is that the identity symbol has to be attached to a sortal, and so a sentence like (Two) is elliptical. From 0 ≠ s0, we should conclude ∃x∃y(x ≠^N y). The explicit reference to N keeps us from violating separability.

In second-order languages, identity (x = y) can be defined as indiscernibility: ∀F(Fx ≡ Fy) (see Shapiro 1991, p. 63). Thus, if the second-order quantifier is unrestricted, then so is the identity relation (over the ground-type). In that framework 0 ≠ s0 entails ∃F(F0 & ¬Fs0), which has no reference to natural numbers. So if our logicist restricts ground-type identity to avoid the violation of separability, then he must restrict the second-order quantifiers and variables as well.
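As a sketch of this step, the following Lean 4 fragment (my rendering; the names are illustrative) defines ground-type identity as second-order indiscernibility and derives from 0 ≠ s0 an existential that mentions no natural numbers.

```lean
-- A hedged sketch of identity as indiscernibility: x = y iff ∀F(Fx ↔ Fy).
def indisc {α : Type} (x y : α) : Prop := ∀ F : α → Prop, F x ↔ F y

-- From 0 ≠ s0, a purely second-order consequence: ∃F(F0 & ¬Fs0).
example {α : Type} (zero s0 : α) (h : zero ≠ s0) :
    ∃ F : α → Prop, F zero ∧ ¬ F s0 :=
  ⟨fun y => y = zero, rfl, fun hy => h hy.symm⟩  -- witness: the property "= zero"
```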

In response to sentences like (Two), Tennant (1997b, p. 318) makes a similar proposal for the first-order existential quantifier:

One possible way to meet this objection ... would be to insist that all quantifications should be "explicitly sortal" or typed. Then, instead of having quantifications over objects tout court, one would need to be more specific in the sentence saying that there are at least two of them: "two of what type of object?" would be the legitimate response. When the answer "natural numbers" is provided, then this licenses appeal to the rules governing the predicate N( ); for that predicate now occurs explicitly in the claim of twoness. Thus, it is suggested, we would regain conformity with the sub-formula principle.

Tennant does not say any more about how this is to be developed, but I imagine the plan is for each first-order quantifier to be marked with a sortal. Let T be a symbol for a sortal. The introduction rule would be that one can infer ∃^T xΦ(x) from Φ(t) and T(t). The elimination rule would be that if one can infer Ψ from Φ(a) and T(a), then one can infer Ψ from ∃^T xΦ(x), with the usual restrictions.
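Here is a hedged Lean 4 rendering of those two rules (the encoding and names are mine, not Tennant's): the sorted quantifier is defined so that its introduction rule demands a witness of sort T, and its elimination rule hands the sort premise back to the user.

```lean
-- A hedged sketch: ∃^T x φ(x), read as "some x of sort T satisfies φ".
def ExSort {α : Type} (T φ : α → Prop) : Prop := ∃ x, T x ∧ φ x

-- Introduction: from φ(t) and T(t), infer ∃^T x φ(x).
theorem exSort_intro {α : Type} {T φ : α → Prop} (t : α)
    (hT : T t) (hφ : φ t) : ExSort T φ :=
  ⟨t, hT, hφ⟩

-- Elimination: if ψ follows from φ(a) and T(a) for arbitrary a,
-- then ψ follows from ∃^T x φ(x).
theorem exSort_elim {α : Type} {T φ : α → Prop} {ψ : Prop}
    (h : ExSort T φ) (k : ∀ a, T a → φ a → ψ) : ψ :=
  h.elim fun a hp => k a hp.1 hp.2
```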

According to this plan, there are two quantifiers for each sortal, rather than a single pair of unrestricted first-order quantifiers over the whole ground-type. The proper conclusion from 0 ≠ s0 is ∃^N x∃^N y(x ≠ y). As Tennant notes, this does not violate the sub-formula requirement, due to the explicit mention of N.

On this program, we cannot go on to introduce bound variables over sorts. If we did, the following would be a consequence of 0 ≠ s0:

∃X∃^X x∃^X y(x ≠ y),

and we would have the makings of another violation of separability. That is, ∃X∃^X x∃^X y(x ≠ y) contains no occurrences of "N" and so it should be derivable from the introduction and elimination rules for negation, identity, and the relevant quantifiers alone. In effect, the phrase ∃X∃^X x would be an unrestricted existential first-order quantifier in purely logical, non-arithmetic terms. Thus, a logicist who takes this rescue cannot adopt unrestricted second-order logic, or at least he cannot have variables ranging over all ground-type sorts.

In any case, the relativity is foreign to the Fregean program, and to the neo-logicism of Wright and Hale. Frege assumes that there is a single quantifier that ranges over all objects, and an unrestricted identity relation. It is part of Frege's scheme that logical terms are universally applicable. The argument that natural numbers are logical is that arithmetic is universally applicable. One can count anything. As we saw above, Tennant argues similarly. Frege also held that identity is logical since one can formulate coherent identity statements between any two singular terms--sorts or no sorts.(5)

An advocate of this rescue must show what is wrong with the usual development of identity and the quantifiers. After all, before the introduction of the natural numbers as logical objects, we had coherent introduction and elimination rules for unrestricted first-order quantifiers and for unrestricted identity over the ground-type (and we still do). These rules are in harmony with each other, and the sub-formula and conservative extension properties hold for the system developed that far (i.e. before the natural numbers were introduced). It would help if we had an independent argument for the relativity, in Dummettian terms, beyond the ad hoc maneuver to avoid the dogma of existence.

Another moral of this maneuver is that we can never be sure whether a given term is logical. Suppose that we have harmonious, separable rules for a class of operators, and have established that the sub-formula and conservative extension properties hold for the language as developed that far. For a Dummettian, this is a most pleasing situation, and it is all we can do to establish that the operators are logical. However, we cannot rule out the possibility that we will later encounter a logical operator that mucks up the conservative extension property for the present operators. This is what happened with the unrestricted identity and/or unrestricted quantifiers. This gesture toward holism is not in the spirit of Dummettian separability, or the analytic program generally.

Tennant's proposal that quantifiers be sortal "or typed" suggests a variation on this rescue theme. Our logicist can maintain a single ground-type with only one first-order sort, but locate numbers elsewhere in the type-hierarchy. On this plan, numbers are not in the same logical category as cars, cats, and continents--contra Frege. Hodes (1984), for example, follows the lead of Whitehead and Russell (1910) and defines numbers to be certain properties of properties. Standard treatments of type theory and higher-order logic already have different quantifiers and different variables and identity relations at each type. So on this option, the distinctions needed to maintain separability are less ad hoc.

However, none of the variations on this rescue-theme succeed in resolving the dogma of existence in favor of the logicist. Frege, Wright, Hale, and Tennant all develop the arithmetic of natural numbers in terms of identity at the ground-type and first-order quantifiers. These treatments collapse once we either introduce sorts at the ground-type or move the numbers up the type-hierarchy. For example, Tennant presents principles that the number of F is zero if and only if there are no Fs, the number of F is one if and only if there is exactly one F, and the number of F is two if and only if there are exactly two Fs:

#x(Fx) = 0 ≡ ¬∃xFx
#x(Fx) = s0 ≡ ∃x(Fx & ∀y(Fy → y = x))
#x(Fx) = ss0 ≡ ∃x∃y(x ≠ y & Fx & Fy & ∀z(Fz → (z = x ∨ z = y)))

These principles are theorems in the systems of Frege, Wright, and Hale as well. Following Frege, the existence of the number zero is established by instantiating F with the predicate (y ≠ y) in the first principle. The existence of the number one follows by instantiating F with (y = 0) in the second principle, and the existence of the number two follows by instantiating F with (y = 0 ∨ y = s0) in the third principle. And on it goes.

According to all of the present rescues, however, the quantifiers and identity signs on the right side of these principles must be marked with a sign for a sort or a type. Presumably, the "#"-sign on the left must also be marked, since it binds a variable. The principles become:

#^T x(Fx) = 0 ≡ ¬∃^T xFx
#^T x(Fx) = s0 ≡ ∃^T x(Fx & ∀^T y(Fy → y =^T x))
#^T x(Fx) = ss0 ≡ ∃^T x∃^T y(x ≠^T y & Fx & Fy & ∀^T z(Fz → (z =^T x ∨ z =^T y)))

where all of the variables are of type or sort T.

What of the identity sign on the left side of these principles? In particular, what is the type or sort of the natural numbers? Can we limit the numbers to a single such type or sort, with or without typical ambiguity?

Suppose, first, that our logicist has opted to define the natural numbers at a type other than the ground-type. If we instantiate the first principle with (y ≠¹ y), where the variable and identity are of the ground-type, we obtain the existence of zero as a property of properties of objects of ground-type. That is, zero is a third-order property, as in Hodes (1984) (and Whitehead and Russell 1910, after a fashion). To follow the Fregean plan, we instantiate the second principle with (y =³ 0), but this variable y and this identity are third-order. So we obtain the existence of the number one as a fifth-order property--a property of properties of properties of properties of ground-type objects. The next step is blocked altogether. The string (y =³ 0 ∨ y =⁵ s0) is not well-formed, since the first variable "y" is third-order and the second "y" is fifth-order. We do not have a monadic predicate to put into the third principle. The simple arithmetic truth 0 ≠ s0 is also ill-formed, since 0 and s0 are of different types. This, of course, is the reason why Whitehead and Russell added an axiom asserting the existence of infinitely many objects at the ground-type, so that all numbers are third-order items. They concede that the principle of infinity is a postulate, not a logical truth. The sentence 0 ≠ s0 comes out well-formed and true, but we have no reason to believe that it is analytic.
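The type mismatch can be made vivid in Lean 4, where it is literally a type error (this reconstruction and its names are mine, not Hodes's or Russell's):

```lean
-- A hedged sketch of the type-raising problem, with α as the ground-type.
-- Zero: the third-order property of having no instances.
def Zero3 {α : Type} : (α → Prop) → Prop :=
  fun F => ¬ ∃ x, F x

-- Following the Fregean plan, "one" lands two orders higher: a fifth-order
-- property saying that exactly one third-order item falls under G.
def One5 {α : Type} : (((α → Prop) → Prop) → Prop) → Prop :=
  fun G => ∃ H, G H ∧ ∀ H', G H' → H' = H

-- A statement like `Zero3 ≠ One5` does not type-check: the two terms inhabit
-- different types, the formal residue of the ill-formedness of 0 ≠ s0.
```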

A logicist might attempt a Fregean account of arithmetic by developing a cumulative type theory, so that there is a single identity relation and a single quantifier ranging over all finite types. An arithmetic sentence like ∀x∃y(y > x) would assert the existence of a type higher than that of any given type. I submit that the relevant "type-existence" principles make the envisioned theory a lot like Zermelo set theory,(6) and so would beg any questions. It is not surprising that arithmetic can be developed in cumulative type theory, since the structure of the natural numbers is explicitly built right into the structure of the types. But why think that Zermelo set theory or this cumulative type theory passes Dummettian muster as logic?

There remains the option of taking numbers to be a special sort of ground-type objects. This is probably what Tennant had in mind, in the above passage. Let T, T' be two different sorts and let =^T, =^T' be the respective identity predicates. We have #^T x(x ≠^T x) = 0 and #^T' x(x ≠^T' x) = 0. Are these zeros the same or different? That is, do we have different numbers for each ground-type sort?

Suppose, first, that we do indeed have different zeros for different ground-type sorts. We are then in pretty much the same boat as with the numbers-as-types approach. On this plan, there is no unique zero, but rather a different zero for each sort. We should write: #^T x(x ≠^T x) = 0^T and #^T' x(x ≠^T' x) = 0^T'. The sort of 0^T would be "number of T", or NT, and the sort of 0^T' would be NT'. We have no argument that 1^T and 1^T' exist. To follow Frege, we can inquire after #^NT y(y =^NT 0^T). This gives us a number 1^NT of type "number of number of T", or NNT. Again 0^T ≠ 1^NT is not well-formed. What is the sort of this identity? So we cannot develop arithmetic on this approach, at least not on Fregean lines.

The last case is where we identify all of these groups of "natural numbers". We have #^T x(x ≠^T x) = 0 and #^T' x(x ≠^T' x) = 0, with this single zero being of the sort "natural number", N. So perhaps we should write #^T x(x ≠^T x) =^N 0 and #^T' x(x ≠^T' x) =^N 0. Now we can proceed with Frege, Wright, Hale, and Tennant. We get that #^N y(y =^N 0) =^N s0, and that #^N y(y =^N 0 ∨ y =^N s0) =^N ss0. And 0 ≠^N s0 comes out well-formed and true.

However, we can now define an unrestricted first-order quantifier: ∃xFx is equivalent to #^N x(Fx) ≠^N 0. This undermines the motivation we had for introducing sorts and relative quantification in the first place. That is, to maintain separation and the sub-formula principle, our logicist followed Tennant and declared that there is something wrong with our old, unrestricted quantifiers--despite the fact that we had (and still have) harmonious introduction and elimination rules for them. So be it. But in the end, he puts unrestricted quantifiers back into the system, defined in terms of "natural number". But then what was wrong with the original, unrestricted quantifiers?

2.2. Another way out: grades of analyticity

Let us try another option for the Dummettian logicist who maintains an unrestricted identity relation on the ground-type objects and unrestricted first-order quantifiers, and holds that numbers are ground-type objects. The same goes for a Dummettian logicist who holds that there is only one type/sort of natural numbers and maintains unrestricted identity and quantifiers over that type/sort.

The problem was that since the sentence (Two) is an immediate consequence of 0 ≠ s0, (Two) should be analytic and logically true, violating the sub-formula and conservative extension requirements. Our logicist can try to develop a sense in which (Two) is not analytic and/or not logically true, despite the fact that it follows from 0 ≠ s0 which, by hypothesis, is analytic and logically true. Surely, it would do too much damage to our web of concepts to suggest that the logical truths are not closed under logical consequence. Logical reasoning should at least preserve logical truth. So, if 0 ≠ s0 is a logical truth, then so is (Two), despite the fact that it is not a theorem of the standard (classical or intuitionistic) predicate calculus.(7)

The fix is to maintain that, under the relevant notion of analyticity, the analytic truths are not closed under logical consequence. Let us say that a sentence S is strictly analytic if its truth can be determined from the meanings of the terms in S alone--independent of the meaning of any other terms. If S is composed entirely of logical terms, then S is strictly analytic if and only if its truth can be determined from the introduction and elimination rules for the terminology in S. That is, S is strictly analytic only if it is a theorem in the deductive system composed of the introduction and elimination rules for those terms. Call a true sentence loosely analytic if it is a logical consequence of a set Γ of sentences all of which are strictly analytic. Roughly, a sentence is loosely analytic if its truth can be determined from meaning and logic alone.

Tennant (1997b, p. 294) mentions this distinction, but does not develop it:

A directly analytic sentence would be one that could be proved using only rules governing expressions occurring within the sentence. An indirectly analytic sentence would be one that could be proved a priori, and indeed by means of a proof all of whose steps involve only meaning-determining rules; but not by means only of the rules governing only such expressions as occur within the sentence itself.

On the present plan, and under the logicist theses, the sentence (Two) is not strictly analytic, but it is loosely analytic. Thus, our logicist can still maintain that (Two) is both necessary and a priori. An a priori warrant for (Two) would consist of the establishment of logicism (on a priori grounds), plus the inference from 0 ≠ s0 to (Two). The sentence (Two) has no particular subject matter--since it is composed of logical terms alone--and so we do not "change the subject" in order to establish it. However, something like a change of subject matter has occurred. The supposed a priori warrant for (Two) involves a detour through arithmetic.

Perhaps our Dummettian logicist can live with logical truths that are not strictly analytic, consoling himself with the fact that all logical truths are at least loosely analytic. However, the admission of this distinction does damage to the Dummettian framework. The Dummettian must admit that the requirement of separability only applies to strict analyticity. Perhaps this articulation is sensible anyway, since the dogma of existence follows from separability, and our logicist has to reject the dogma of existence. As long as there are objects whose existence is a matter of logic, there will be logical inferences that do not preserve strict analyticity. However, the patch knocks most of the teeth out of the crucial lemmas. When the conservative extension requirement is reformulated in terms of strict analyticity, it becomes a mere truism, a virtual tautology:

If an inference from a set Γ of sentences to a sentence Φ can be established from the introduction and elimination rules of the logical terminology that appear in Φ and the members of Γ, then there is a proof of Φ from Γ which involves only the introduction and elimination rules for that logical terminology.

Big deal. Here is the sub-formula requirement, rephrased in terms of strict analyticity:

If an inference from a set Γ to a sentence Φ can be established from the introduction and elimination rules of the logical terminology that appear in Φ and the members of Γ, then there is a proof of Φ from Γ in which every line is either a sub-sentence of Φ or a sub-sentence of a member of Γ.

This much is all but self-evident, but it is no longer a powerful principle. Under the foregoing Dummettian theses and logicism, there is neither a conservative extension property nor a sub-formula property for logical truth and loose analyticity.

On the combination of views under study, the Dummettian case for logic revision must be formulated in terms of strict analyticity--not mere logical truth or loose analyticity. The relevant thesis is that each logical principle, properly so-called, should fall out of the introduction and elimination rules for the logical terminology invoked in that principle. The idea would be that the logical principles should themselves be strictly analytic. Then the Dummettian argues that the law of excluded middle, double negation elimination, etc. are not strictly analytic, and so classical logic does not enjoy the same kind of justification as the intuitionistic logical principles. However, the Dummettian is no longer in a position to rule out the possibility that the law of excluded middle is loosely analytic, nor that it is logically true--unless he claims some sort of access to all logical terms. He cannot very well complain that classical logic is unjustified because it violates the sub-formula and conservative extension properties, since under logicism, intuitionistic logic also violates these principles.(8)

3. Incompleteness and manifestation

The sentences constructed as part of Gödel's incompleteness theorem make for a rich extension of the foregoing considerations. First, we need a preliminary concerning effectiveness. Both traditional intuitionists and Dummettians argue that mathematical statements should be understood in terms of proof conditions, rather than mind-transcendent truth conditions. This is the basis for Heyting semantics (see Dummett 1977, pp. 12-26).

Someone might suggest that Gödel's incompleteness theorem undermines the identification of truth with provability. Consider the following argument:

Let U be the Gödel sentence of a standard formalization of intuitionistic arithmetic (and assume that the system is consistent). We know that U is not provable, by the incompleteness theorem. We also know that U is true, since U encodes the statement that U is not provable. Thus, arithmetic truth cannot be identified with provability.

This quick and dirty argument is a fallacy. The intuitionists are unanimous that the notion of "provability" invoked in Heyting semantics is inherently informal, and cannot be captured in any formal system. They clearly do not identify truth with provability in a fixed formal system.(9) In an early paper (1963, p. 186) Dummett argues that "our notion of `natural number' ... cannot be fully expressed by means of any formal system". An intuitionist may very well be convinced that the Gödel sentence U is true, which is to hold that U is provable. Perhaps the reasoning expressed in the quick and dirty argument is the seed of a proof of U (Dummett 1963, but see Wright 1994 and Dummett's 1994 reply). At this point, our intuitionist calmly points out that no proof of U can be captured in the formal system under study. She takes the incompleteness theorem to confirm the insight that intuitionistic proof is not formalizable.

Of course, this ground is well-trodden. I submit, however, that the intuitionist's maneuver is not available to a Dummettian when it comes to logical terminology and strict analyticity. On Dummett's view, a subject who understands the meaning of a term must be able to make this understanding fully manifest. In light of separability, this understanding cannot depend on the subject's grasp of any other terms. Recall also that the Dummettian holds that the meaning of logical terminology is constituted by introduction and elimination rules. If the introduction and elimination rules are not formalizable, then the extension of the term is not recursively enumerable. How can a subject grasp such rules, let alone manifest his grasp of the rules? It is extremely counterintuitive for there to be a logical term whose meaning-constituting rules are manifestable but not formalizable.

From the definition of strict analyticity, we thus have the following lemma:

Strict:
If Φ is a sentence composed entirely of logical terminology, then there is a formal deductive system S such that Φ is strictly analytic if and only if Φ is a theorem of S. The system S can be formulated as a natural deduction system consisting of the introduction and elimination rules for the terminology in Φ.

The logicist thesis (Terms) is that the basic terms of arithmetic are either logical or are definable from logical terminology alone. Let us suppose that the basic terminology of arithmetic includes symbols for zero, successor, addition, and multiplication. It follows from (Terms) and (Strict) that there is a formal system A which is composed of the introduction and elimination rules for these arithmetic terms, or the logical terms used to define them. An arithmetic sentence Φ is strictly analytic only if it is a theorem of A. In other words, the set of strictly analytic arithmetic truths is recursively enumerable. Since the set of arithmetic truths is not recursively enumerable, the version of (All Truths) formulated in terms of strict analyticity is ruled out. Our logicist can still claim the version of (Basic Truths) formulated in terms of strict analyticity, provided that he can maintain that the theorems of A form a core subset of the truths of arithmetic. Tennant (1997b, pp. 291-2) writes:

We have axioms ... that are justifiable by analytic reflection on the content of the arithmetical notions ... [W]hat the incompleteness theorem tells us is that we should not, as analyticity theorists or logicists, expect ... to wring from these axioms ... the full set of first-order truths about the natural numbers ... [We] can still maintain that the truth of statements in an extensive and important fragment is logically immanent in the content of the ... Dedekind-Peano axioms ... [T]he analyticity theorist can rescue a significant part of arithmetic as analytic.

It remains to be seen how stable this position is.

Our logicist must confront something like the quick and dirty argument above. Since A is a formal system, it has a Gödel sentence G. The incompleteness theorem is that G is not derivable in A, provided that A is consistent. Since an inconsistent set of rules cannot constitute the meaning of any group of logical terms, we can assume that A is consistent. Thus, G is not derivable in A, and so G is not strictly analytic. We also conclude that G is true, since G just codes the fact that G is not derivable in A. So G is an arithmetic truth that is not strictly analytic.

Our Dummettian logicist is a semantic anti-realist, holding that all truths are provable. So he must hold that G is provable. However, since G is not strictly analytic, any proof of G must take a detour and invoke terminology not contained in G. Although the sentence G consists only of arithmetic terminology, to establish G we must invoke something other than the meanings of the arithmetic terminology. In a sense, we have to change the subject (hence the subtitle of this article).

What do we change the subject to? And can the Dummettian logicist still maintain that G is a logical truth, or that it is at least loosely analytic? If not, then how do some arithmetic truths get to be synthetic a priori? What synthetic maneuvers are involved in the proof?

The following is a standard recipe for constructing a reasonable deductive system in which G is a theorem. Augment the basic deductive system A with a new predicate T, to represent arithmetic truth. The intuitive meaning of a formula Ta is that a is the Gödel number of a true sentence in the language of arithmetic. Of course, we do not violate Tarski's theorem, since the range of T includes only (Gödel numbers of) sentences containing the terminology of arithmetic, not sentences containing T itself. Let the deductive system A' consist of the axioms and inference rules of A, together with the following rules:

(TI)
From a sentence Φ, infer T⌜Φ⌝, where ⌜Φ⌝ is the Gödel number of Φ.

(TE)
From a sentence T⌜Φ⌝, infer Φ, where ⌜Φ⌝ is the Gödel number of Φ.

It seems to me that in some form or other, the induction principle is essential to the natural numbers. Without it, a system has clearly fallen short of capturing the basic truths of arithmetic. Thus, if our logicist is to have any sort of claim on (Basic Truths), then some version of the induction principle must be among the theorems of A. The instances of the induction principle have the following form:

IND-P
(P0 & ∀x((Nx & Px) → Psx)) → ∀x(Nx → Px).

Which instances of this are among the core truths of arithmetic--which instances are implicit in the very meaning of the arithmetic terminology? The first-order induction scheme includes instances of (IND-P) in which the predicate variable P is replaced with a formula consisting of arithmetic terminology, variables, connectives, and first-order quantifiers. Clearly, those instances are among the core truths, and so must be theorems of A.

Suppose that the induction scheme in A' is the same as the induction scheme in A. That is, suppose that A' does not include any instances of the induction principle (IND-P) in which P is replaced by a formula that contains the new T predicate. Then A' is a conservative extension of A. If Φ is a sentence containing only arithmetic terminology (and thus no occurrences of T) then Φ is a theorem of A' only if Φ is a theorem of A. Given all the talk of conservation, this may be welcome news, but it indicates that A' is too weak. Specifically, our target sentence G is not a theorem of A', since G is not a theorem of A.

Let A" be the extension of A' in which we expand the induction principle to include sentences with T. That is, A" includes as theorems instances of (IND-P) in which P is replaced with formulas containing the new predicate T. Then one can do inductions involving truth, and, in particular, one can prove in A" that every theorem of A is true. The consistency of A follows, and so does G, our Godel sentence for A. The derivation of G in A" captures the informal reasoning that establishes the truth of G.

So what is our logicist to conclude about the status of A" and thus of the Gödel sentence G? Tennant (1997b, pp. 293-4) suggests that the detour through arithmetic truth may make G synthetic (but still a priori):

There is ... a principled reason why the ... theorist should ... concede syntheticity to, say, any true Gödel sentence that is unprovable in that system. Such a sentence has the form ∀n G(n). We can prove (in the metatheory) that ⊢ G(0), ⊢ G(s0), ⊢ G(ss0), ... Thus, so we wish to conclude, each of the instances is true; whence the universal quantification is true. With each numerical instance, we make a move from provability (in the system we start with) to truth; and then infer the truth of the unprovable sentence by mathematical induction in the metatheory. We can formalize this reasoning by extending the vocabulary of the original theory so as to include a primitive truth-predicate. This yields an extended system, for it allows one to form new instances of mathematical induction. So the reasoning for the truth of the formerly unprovable Gödel sentence can now go through in the extended system. But this means that the proof of the Gödel sentence thereby obtained ... has to contain occurrences of items of non-logical vocabulary (namely, the truth-predicate in question) that are not involved in the sentence itself. Thus, grasp of the meaning of the sentence itself is not sufficient for one to be warranted in asserting it; that is, the sentence is not epistemically analytic, even though its truth has been established a priori.

Tennant thus proposes that the truth predicate is non-logical and that G is synthetic. However, the predicate T is governed by an introduction rule (TI) and an elimination rule (TE). One can argue that for present purposes at least, the rules fully constitute the meaning of T. We do not invoke anything else about T in any of the foregoing derivations. Thus, on the Dummett-Tennant-Hacking view, the predicate T qualifies as logical. I might add that T satisfies the traditional criterion of topic-neutrality. We can add a truth predicate to any language and deductive system that does not already contain one.

Like its cousin A, the first extended system A' consists solely of introduction and elimination rules for logical terminology, and any theorem of A' (containing all of the relevant logical terminology) is strictly analytic. By hypothesis, the axioms and rules of A" are just the introduction and elimination rules of logical terminology together with some instances of the induction principle consisting of formulas containing that very logical terminology. I submit that our logicist should hold that the relevant new instances of the induction principle are strictly analytic as well. Anyone who understands the meaning of the arithmetic terminology and the various terms, including T, should agree to these instances of the induction scheme. As Shaughan Lavine (1994, 231 n. 24) put it: "Part of what it is to define a property of natural numbers is to be willing to extend mathematical induction to it. To fail to do so is to violate our rules for extending and further specifying our arithmetical usage." Dummett (1994, p. 337) echoes the same idea: "It is part of the concept of natural number, as we now understand it, that induction with respect to any well-defined property is a ground for asserting all natural numbers to have that property" (my emphasis).

Suppose, as a thought experiment, that a language learner is being taught arithmetic. He learns the system A and has no trouble applying the induction principle (IND-P) to any formula in the language of arithmetic. That is, suppose our subject grasps first-order arithmetic. Now suppose that he becomes convinced that a new property M has a determinate extension over the natural numbers, and suppose that he refuses to apply induction to formulas containing a symbol for M until he is convinced that M is definable in A. As indicated by the quote from Dummett, it is part of the current or initial understanding of the natural numbers that induction applies to any well-defined property. So our language learner does not understand the natural numbers after all, or at least his understanding is deficient. Dummett is correct that to fully understand the natural numbers is to understand that induction holds over any well-defined arithmetic property, not just those we are able to define in first-order arithmetic (or in any fixed framework for that matter).

Let us now consider a subject who does grasp the concept of "natural number". It follows, from the requirement of manifestability, that this subject can fully manifest this grasp. How? The most straightforward way would be for her to assert, and show that she understands, the full second-order induction principle (see Shapiro 1991, Chs. 4-5):

IND
∀X[(X0 & ∀x((Nx & Xx) → Xsx)) → ∀x(Nx → Xx)]

The principle (IND) is a single sentence (in the object language) in which X is a predicate variable. So let us assume that our original system A contains this full second-order principle.

It is essential to the meaning of the second-order variables that they range over any property over the domain of discourse, not just those definable in this or that language. This feature is registered in the comprehension scheme of standard second-order logic:

COMP

∃X∀x(Xx ↔ Φ(x)),

one instance for each formula Φ not containing X free. In Dummettian terms, the idea is that a subject who grasps the meaning of the terminology in Φ is in position to know that there is a property coextensive with Φ(x). Then, if he understands the natural numbers, he can apply the induction principle to the resulting formula.(10)
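An example may help (the instantiating formula is my own): take Φ(x) to be a formula of the extended language saying that x is (a code of) a true sentence, so that Φ contains T. The relevant instance of COMP, ∃X∀x(Xx ↔ Φ(x)), delivers a property coextensive with Φ, and instantiating the initial universal quantifier of (IND) to that property yields induction for Φ with no further stipulation.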

These (Dummettian) considerations indicate that our logicist should allow that the axioms and rules of A" do not go beyond the meaning of the terminology in the extended language. Since the Gödel sentence G (for the system A) is a theorem of A", it follows that G is loosely analytic and logically true. Tennant (1997b, p. 294) suggests this as well, as an alternative to the view that G is synthetic.

The Dummettian logicist framework is open to further refinements and adjustments, and it is less Dummettian than it appears. To sum up where we are: the Gödel sentence G is not strictly analytic, and the prevailing proof of G makes a detour through the notion of arithmetic truth. Just as we invoke the natural numbers to establish the logical truth of (Two), we invoke arithmetic truth to establish the logical truth of G. We "change the subject" in moving from A (through A') to A". To be precise, we change the subject at the point when we extend the induction principle to apply to properties defined in terms of arithmetic truth.
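To make the detour explicit (a standard sketch; "Prov_A" is a hypothetical provability predicate for A, not notation used above): within A" one proves, by induction on the length of derivations--an induction whose formula contains T--that every theorem of A is true:

∀x(Prov_A(x) → T(x)).

Instantiating at ⌜0 = 1⌝, and noting that (TE) together with ¬(0 = 1) gives ¬T(⌜0 = 1⌝), we obtain ¬Prov_A(⌜0 = 1⌝), the consistency of A; and since G is provably equivalent, already in a system like A, to that consistency statement, G follows. Everything after the induction is routine; the T-laden induction carries the weight.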

Suppose that we had expanded the original system A by adding terminology for geometric figures, Zermelo-Fraenkel sets, or even physical objects, and suppose that the result was not a conservative extension of A. Then everyone could cheerfully admit that the subject had changed. We learn some new truths about the natural numbers via a detour through geometric figures, sets, or physical objects. Our Dummettian logicist must make the same claim concerning arithmetic truth and the system A". Our logicist learns that G is true via a detour through arithmetic truth. Since G is a theorem of A", but not of A, our logicist is forced to hold that the notion of arithmetic truth is not implicit in arithmetic. In grasping the deductive system A", our subject is taking on a genuinely new concept, one that he had no grasp of before. It is precisely his grasp of arithmetic truth that allows him to establish a sentence about the natural numbers that does not contain the truth predicate.

In another paper (1998), I argue that the failure of conservativeness of theories like A" tells against so-called "deflationist" accounts of truth. How "thin" or "insubstantial" can the notion be if we can use a truth predicate to obtain new knowledge (e.g., about the natural numbers)? Nevertheless, there is a natural intuition that in adding a truth predicate to A, and moving to A", one is not really changing the subject. The idea is that the notion of arithmetic truth is somehow implicit in the arithmetic concepts. Just above, we saw a reason to regard the truth predicate as a logical constant. Primarily, this is because truth has no subject matter. By way of analogy, no one would think of the addition of a Sheffer stroke to a language and deductive system as an extension of the subject matter. Similarly, arithmetic truth is just arithmetic plus a topic-neutral operator.

4. Perspective

The intuition that arithmetic truth is implicit in the concept of the natural numbers is reflected in another key notion in Dummett's writings, that of indefinite extensibility. Dummett notes that one essential feature of a concept consists of a criterion for a given object's falling under that concept. This criterion, however, is not the whole story concerning the meaning of a predicate expressing the concept. Dummett adds a further "criterion for asserting something of all objects falling under a concept" and argues that this latter is also "an essential feature of that concept". The criterion for asserting something of all objects falling under a concept "is not automatically given with the criterion for a given object's falling under" the concept (Dummett 1994, p. 338).(11)

Dummett (1963, p. 196) writes that there is a tendency among philosophers "to think of the meaning of a predicate as constituted wholly by the criterion for its application". Rather than pursue this potential dispute over what "meaning" is, Dummett offers to reformulate the thesis: "... if the meaning of a predicate is taken [to be constituted wholly by the criterion for its application], what must then be acknowledged is that the meaning of the quantifiers whose variables range over the extension of the predicate is not fully determined by the meaning of the predicate".

In the earlier paper, Dummett (1963, p. 193) illustrates the thesis of indefinite extensibility with a scenario:

... for two people might agree in their dispositions to recognize something as belonging to [a] totality, and still differ on the criteria they accepted for asserting something to be true of all the members of that totality. Still more is this true if the totality is infinite.

Arithmetic truth is indefinitely extensible, because the criterion for saying something about all natural numbers outruns the criterion for recognizing natural numbers (in counting, for example). Recall the thought experiment mentioned above (§3). Suppose two people agree on the envisioned logicist system A, or perhaps on the axioms and rules of first-order arithmetic. Then they agree on the criteria for the application of the term "number". In a sense, they agree on what it is to be a number. However, the two subjects may not agree on the extensions of the induction principle to properties not definable in the language of A. As above, if one of them refuses to apply induction to some predicate (after admitting that the predicate has a determinate extension) then he does not fully understand the concept "natural number". In Dummett's terminology, he does not grasp the criterion for asserting something about all numbers. This dovetails with the above claim that the (full, second-order) induction principle is part of the criterion for asserting something of all natural numbers--and the induction principle cannot be fully captured in the original theory.

Although it is easy to state the induction principle in a formal second-order language, Dummett (1994, p. 337) argues that the induction principle cannot "be characterized once and for all, because the notion of a well-defined property is itself indefinitely extensible". Second-order logic is itself indefinitely extensible. In coming to grasp the concept natural number, and, in particular, in coming to see that the system A (or first-order arithmetic) is intuitively correct for the natural numbers, our language learner learns the formal language in which A is expressed. She thereby has the resources to formulate the property of "being a true sentence of that language, or, equally, some weaker property of being intuitively correct". To paraphrase Wittgenstein, when our subject then incorporates this truth concept into the induction principle, she is "going on as before". There is no detour through a new concept. The relevant notion of truth is implicit in the original understanding of the natural numbers, once we see that understanding as including the "criterion for asserting something of all natural numbers", the induction principle in particular.

Dummett argues that if the relevant arithmetic concepts were definite (i.e. not indefinitely extensible), then the part of the use of arithmetic sentences

... that relates to our capacity to recognize the truth of number-theoretic statements ought to be capable of encapsulation in a formal theory. Gödel's theorem shows this not to be so: the problem [is] to explain this fact without abandoning the principle that a grasp of meaning is a mastery of use ... (Dummett 1994, p. 335)

What needs to be explained ... is the general applicability of Gödel's theorem to every intuitively correct formal system; the fact that no such system can embody all that we wish to assert about the natural numbers. (Dummett 1963, p. 194)

In other words, indefinitely extensible concepts are not formalizable or, to take the contrapositive, any concept that is formalizable is not indefinitely extensible.(12)

Where does this leave our Dummettian logicist? We have seen that in Dummett's framework logical terminology is formalizable, or at least the strictly analytic truths containing only logical terms are formalizable. Recall the fundamental logicist thesis (Terms):

The basic terms of arithmetic--"natural number", "zero", "successor", "addition", "multiplication", etc.--are logical terms or are definable from logical terms alone.

This must be severely restricted if it is to have a chance of being true. At best, the envisioned logicist deductive system A can capture the criteria for a given object being a natural number. Somebody who grasps the system is competent to count his toes and perhaps balance a checkbook. However, this formal system cannot capture the indefinitely extensible criterion for asserting something of all natural numbers. This last, Dummett claims, is essential to our understanding of the concept "natural number". Moreover, the criterion for asserting something of all natural numbers is surely essential to any serious mathematical use of arithmetic. Thus, if Dummett is correct, the most a logicist can hope to establish is that a small part of the meaning of "natural number" is given by introduction and elimination rules. This restriction, the rejection of (All Truths), and the restrictions on (Basic Truths) indicate just how far our Dummettian logicist has drifted from the original dream of logicism.(13)

STEWART SHAPIRO Department of Philosophy The Ohio State University at Newark Newark, Ohio 43055 USA shapiro+@osu.edu

(1) Quine's main targets are the positivist programs that emerged from the Vienna Circle. Tennant (1997b) makes an extensive attempt to revive a positivist program by invoking a Dummettian notion of analyticity. In the present article, focus is restricted to the class of logical truths, which is supposed to be a subclass of the analytic truths. Quine (1986) himself accords a special status to the logical truths, albeit in a non-traditional manner.

(2) Friend (1997) offers an elaborate defense of the doctrine that second-order logic is logic in the relevant sense. I have defended the Tarskian thesis that there is no sharp border between mathematics and logic (Shapiro 1991). Quine argues that there is no philosophically significant border between mathematics and almost anything else. Mathematical presuppositions occur throughout the web of belief. Why should logic, especially the logic of mathematics, be different? That perspective, of course, makes little sense of the traditional question of logicism, and is thus far from the present issues concerning Frege, Dummett, and their followers. As far as I know, Wright and Hale do not provide an extensive treatment of the boundaries of logic.

(3) See, for example, Tennant (1997b, p. 296 n. 24). Tennant (1997b, p. 314) argues that the introduction rule is the more basic of the two. This subtle point does not affect the present discussion.

(4) We have to waive the nicety that ∃x(x = x) is a logical truth in most classical and intuitionistic systems. This, of course, is not a deep metaphysical commitment on the part of logicians, but only an artifact of the inconvenience of free-logic deductive systems and models whose domain is empty.

(5) On the other hand, this very thesis led to the infamous Caesar problem. Frege held that his account of arithmetic is not complete until there are clear and determinate criteria distinguishing a number from any object whatsoever. The account must show how and why the number 2 is different from Julius Caesar.

(6) Almost. The envisioned theory would presumably have infinitely many types, but it may not have any "transfinite" types. Thus, the theory would resemble Zermelo set theory without the axiom of infinity.

(7) The completeness theorem will have to be given up, since, on the present proposal, the usual deductive system for first-order predicate calculus does not have, as theorems, all and only the logical truths in the relevant vocabulary. In light of the incompleteness of arithmetic, a logicist who accepts (All Truths) should not expect logic to be complete anyway. This fact will occupy us shortly.

(8) I do not claim to have an argument in favour of the loose analyticity, or logical truth, of the classical principles. The present point is that once the Dummettian allows the distinction between strict and loose analyticity, he does not have a knock-down argument against classical logic even in his own framework. The question of classical logic must be regarded as (eternally?) open. In later writing (e.g. 1991b, Ch. 12), Dummett himself seems to have retreated from the strong conclusion. He points out that the case against classical logic hinges on what he calls the fundamental assumption, a thesis that if one can establish a sentence S whose main connective is a logical operator, then one can in principle establish S via the introduction rule for that operator. So, if we can establish a sentence of the form Φ ∨ Ψ, then either we can establish Φ or we can establish Ψ. This makes excluded middle problematic, to say the least. However, Dummett proceeds to raise doubts about the fundamental assumption. Prawitz (1977, 1994) also notes that doubts about conservativeness attenuate the case against classical logic. Dummett's and Prawitz's current conclusion, I take it, is that intuitionistic logic enjoys a certain level of justification and that a similar justification for classical logic is most unlikely. Prawitz (1977, p. 39) writes: "Clearly, we know procedures that make all intuitionistic inference acceptable ... but not any procedure that makes the rule of double negation elimination acceptable ..."

(9) Prawitz (1977, p. 29) links the informality of the notion of proof to the failure of conservativeness. The proof condition for a sentence of the form (A → B) is that there is an effective method for transforming any canonical proof of A into a canonical proof of B. Prawitz writes that

although the general form of the condition for something to be a canonical proof of a sentence A → B is formally stated ... there is no formal system generating all the procedures that transform canonical proofs of A to canonical proofs of B, and it is left open what more complicated sentences can be involved in such procedures. For instance, such a procedure may be definable in an extension of a certain language without being definable in the language itself, and hence ... the extension of a language obtained by introducing new logical constants may not be a conservative extension of the original language.
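Schematically (my paraphrase, with "p ⊩ A" abbreviating "p is a canonical proof of A"; this is not Prawitz's own notation):

p ⊩ (A → B) iff p is an effective method such that, for every q with q ⊩ A, p(q) ⊩ B.

Since "effective method" is not relativized to any fixed formal system, the clause resists formalization, and conservativeness can fail in just the way the quotation describes.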

(10) With second-order logic, there is no need for the intermediate deductive system A'. Once the predicate T is added, the induction principle can automatically be applied to formulas containing T. This is probably what underlies the passage from Prawitz (1994) quoted at the top of this paper.

(11) This is an important part of Dummett's second assault against classical mathematics. He argues that classical logic is not appropriate for indefinitely extensible concepts.

(12) Suppose, once again, that the natural numbers are objects at a given type (perhaps the ground-type) and that the language has an unrestricted quantifier over that type (defined before the numbers were introduced). If the predicate N for "natural number" is logical, then the criterion for "asserting something of all objects falling under" the concept "natural number" would simply fall out of the introduction and elimination rules for N, the relevant quantifiers, and material implication. That is, "all numbers have Φ" would be ∀^Tx(Nx → Φ(x)), where "∀^Tx" is the previously given quantifier over the relevant type T. This leaves no room for indefinite extensibility. A mere combination of three effective pairs of rules yields an effective system, not an indefinitely extensible one. The best prospect for our logicist would be to deny that there is an independent quantifier over objects of the same type as the natural numbers. The natural numbers are their own type, or sort--as suggested in §2.1 above--in which case there is no prior quantifier ranging over a type or sort containing the natural numbers. In these terms, Dummett's point is that there are no effective rules that give the entire meaning of the new quantifier ∀^N. I am indebted to a referee for this point.

(13) Many thanks to Crispin Wright and Neil Tennant for useful conversations and feedback on previous versions of this paper. I especially appreciate the spirit of collegiality. I am indebted to the members of a seminar I gave at Ohio State during the summer of 1996. The main participants were Jon Cogburn, Roy Cook, Cathy Hyatt, and Joe Salerno. Thanks also to two anonymous referees, one of whom forced me to restructure the article to improve readability and to sharpen the argument in two crucial places (and to correct two embarrassing typos).

REFERENCES

Dummett, M. 1963: "The Philosophical Significance of Gödel's Theorem", in his 1978, pp. 186-201. Originally published in Ratio, 5.

--1973: "The Philosophical Basis of Intuitionistic Logic", in his 1978, pp. 215-47.

--1977: Elements of Intuitionism. Oxford: Oxford University Press.

--1978: Truth and Other Enigmas. Cambridge, MA: Harvard University Press.

--1991a: Frege: Philosophy of Mathematics. Cambridge, MA: Harvard University Press.

--1991b: The Logical Basis of Metaphysics. Cambridge, MA: Harvard University Press.

--1994: "Reply to Wright", in McGuinness and Oliveri 1994, pp. 32938.

Friend, M. 1997: Second-order Logic is Logic. Ph.D. Dissertation, The University of St. Andrews.

Geach, P. 1967: "Identity". Review of Metaphysics, 21, pp. 3-12.

--1968: Reference and Generality. Ithaca: Cornell University Press.

Hacking, I. 1979: "What is Logic?". The Journal of Philosophy, 76, pp. 285-319.

Hale, B. 1987: Abstract Objects. Oxford: Basil Blackwell.

Hintikka, J. 1955: "Reductions in the Theory of Types". Acta Philosophica Fennica, 8, pp. 61-115.

Hodes, H. 1984: "Logicism and the Ontological Commitments of Arithmetic". Journal of Philosophy, 81, pp. 123-49.

Kraut, R. 1980: "Indiscernibility and Ontology". Synthese, 44, pp. 113-35.

Lavine, S. 1994: Understanding the Infinite. Cambridge, MA: Harvard University Press.

Martin-Löf, P. 1984: Intuitionistic Type Theory. Napoli: Bibliopolis.

McGuinness, B. and Oliveri, G. (eds.) 1994: The Philosophy of Michael Dummett. Dordrecht: Kluwer Academic Publishers.

Prawitz, D. 1977: "Meaning and Proofs: On the Conflict between Classical and Intuitionistic Logic". Theoria, 43, pp. 2-40.

--1994: Review of Dummett (1991b). Mind, 103, pp. 373-6.

Quine, W. V. O. 1986: Philosophy of Logic, second edition. Englewood Cliffs, NJ: Prentice-Hall. Originally published in 1970.

Shapiro, S. 1991: Foundations Without Foundationalism: A Case for Second-order Logic. Oxford: Oxford University Press.

--1998 forthcoming: "Truth and Proof: Through Thick and Thin". Journal of Philosophy.

Tennant, N. 1987: Anti-realism and Logic. Oxford: Oxford University Press.

--1997a: "On the Necessary Existence of Numbers". Nous, 31, pp. 307-36.

--1997b: The Taming of the True. Oxford: Oxford University Press.

Whitehead, A. N., and B. Russell 1910: Principia Mathematica 1. Cambridge: Cambridge University Press.

Wright, C. 1983: Frege's Conception of Numbers as Objects. Aberdeen: Aberdeen University Press.

--1994: "About `The Philosophical Significance of Godel's Theorem': Some Issues", in McGuinness and Oliveri 1994, pp. 329-38.