# Hintikka's revolution: *The Principles of Mathematics Revisited*

The title of this book refers to Bertrand Russell's *The Principles of Mathematics*, but it does not deal with Russell's ideas. Jaakko Hintikka sees his book in a similar role to that enjoyed by Russell's, almost a century ago. It is 'crisis science', aimed at establishing a new paradigm in the foundations of mathematics. Hintikka presents a new formalism and a new logic, together with a new semantics.

Whether this grandiose claim is borne out or not, the new 'paradigm' is provocative and insightful, and a dedicated reader will learn a lot from a careful study of this work. However, it is not an easy read. The formalisms and some of the ideas are not developed fully, and so the reader is left to supply many of the details, often via guesswork. The model is *Principles of Mathematics* and not *Principia Mathematica*. Unfortunately, there are also a number of substantial errors. The process of working through these was most educational for both of us, but we were not able to resolve every case. In addition, the book is marred by many typos, some of which destroy the intended sense.

Chapter 1 concerns the purposes of logic in foundational studies, and sets the stage for the technical development to follow. Hintikka focuses on two distinct but related goals of logical studies. One is to codify the process of inference within a branch of mathematics, and the other is descriptive, where one develops a formal language to describe mathematical structures. In contemporary logic, the first goal is served by proof theory and the second by model theory. Hintikka argues that logicians have placed too much emphasis on the proof-theoretic aspects of logic at the expense of the model-theoretic. He does his part to reverse this imbalance by almost ignoring the deductive aspects of the language he develops in this book.

Chapter 2 contains a brief account of Hintikka's game-theoretic semantics. Begin with a standard, first-order language whose logical terminology is {∨, &, ~, ∃, ∀}. Let M be a model for the language. For each sentence S, define a game G(S) between two players, a verifier and a falsifier. The game G(S) is defined by recursion, from the 'outside in':

If S is S₁ ∨ S₂, the verifier chooses Sᵢ (i = 1 or i = 2), and they continue with G(Sᵢ).

If S is S₁ & S₂, the falsifier chooses Sᵢ (i = 1 or i = 2), and they continue with G(Sᵢ).

If S is ~T, the players reverse roles, and they continue with G(T).

If S is ∃xT(x), the verifier chooses a member m of the domain of M, and they continue with G(T(b)), where b is a name of m.

If S is ∀xT(x), the falsifier chooses a member m of the domain of M, and they continue with G(T(b)), where b is a name of m.

If S is atomic, then the verifier wins if S is true and the falsifier wins if S is false.

A sentence S is true in the model M if the (initial) verifier has a winning strategy, a set of functions such that the verifier will always win as long as he uses the functions to make the choices. A sentence S is false in M if the (initial) falsifier has a winning strategy. For example, if Axy is atomic, then ∀x∃yAxy is true if there is a function f such that for any choice b (by the falsifier), Abf(b) is true.
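The recursive game clauses can be sketched for finite models. The following is our illustrative Python sketch, not anything from the book: the tuple encoding of formulas, the names, and the use of determinacy of finite perfect-information games for the role-swap clause are our assumptions.

```python
def verifier_wins(formula, model, env):
    """True iff the current verifier has a winning strategy for G(formula)."""
    op = formula[0]
    if op == 'atom':      # ('atom', relation_name, var, ...): game is over
        rel = model['relations'][formula[1]]
        return tuple(env[v] for v in formula[2:]) in rel
    if op == 'or':        # verifier chooses a disjunct
        return any(verifier_wins(f, model, env) for f in formula[1:])
    if op == 'and':       # falsifier chooses a conjunct
        return all(verifier_wins(f, model, env) for f in formula[1:])
    if op == 'not':       # role swap; finite perfect-information games are
                          # determined, so the new verifier wins iff the old one loses
        return not verifier_wins(formula[1], model, env)
    if op == 'exists':    # ('exists', var, body): verifier picks a witness
        var, body = formula[1], formula[2]
        return any(verifier_wins(body, model, {**env, var: m})
                   for m in model['domain'])
    if op == 'forall':    # falsifier picks the challenge
        var, body = formula[1], formula[2]
        return all(verifier_wins(body, model, {**env, var: m})
                   for m in model['domain'])
    raise ValueError(f"unknown operator: {op}")

# forall x exists y Succ(x, y), with Succ as successor mod 3 on {0, 1, 2}
model = {'domain': [0, 1, 2],
         'relations': {'Succ': {(0, 1), (1, 2), (2, 0)}}}
s = ('forall', 'x', ('exists', 'y', ('atom', 'Succ', 'x', 'y')))
print(verifier_wins(s, model, {}))  # → True
```

Because the evaluation is by brute force over all choices, this only works for finite domains; the existence of a winning strategy in the infinite case is exactly where the set-theoretic questions discussed below arise.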

If we assume the axiom of choice (in the meta-theory), then game-theoretic semantics agrees with the usual model-theoretic one. Hintikka argues that the principle of choice is involved in the very meaning of the quantifiers - in unpacking the ∀∃ combination, for example. Thus, he agrees with Hilbert and perhaps Zermelo that choice is a principle of logic.
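Over a finite domain the agreement can be checked directly; this sketch (our illustration, with an arbitrary relation A) compares the Tarskian reading of ∀x∃yAxy with the existence of a Skolem function f satisfying A(x, f(x)) for all x. For infinite domains, passing from the first condition to the second is precisely where choice enters.

```python
from itertools import product

domain = [0, 1, 2]
A = lambda x, y: y == (x + 1) % 3   # 'y is the successor of x mod 3'

# Tarskian reading: every x has some witness y.
tarski_true = all(any(A(x, y) for y in domain) for x in domain)

# Game-theoretic reading: some single function f picks a witness uniformly.
skolem_true = any(all(A(x, f[x]) for x in domain)
                  for f in (dict(zip(domain, imgs))
                            for imgs in product(domain, repeat=len(domain))))

print(tarski_true, skolem_true)  # → True True
```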

Hintikka holds that the relevant version of the axiom of choice is (or should be) unproblematic in constructive contexts. This is incorrect, as intuitionistic mathematics is usually understood. Let Gxy be the binary relation over the real numbers, 'y is a natural number and y > x'. Clearly, ∀x∃yGxy holds in intuitionistic analysis. However, a winning strategy for this sentence would be a function f such that for every real number x, f(x) is a natural number greater than x. Such a function cannot be continuous, and so this instance of choice contradicts Brouwer's theorem that every real-valued function is continuous.

Hintikka might reply that Brouwer was mistaken, and that he should have accepted choice and discontinuous functions. As we see it, the problem is that the game-theoretic quantifier rules seem to assume extensionality, since the outcome of a game should not depend on which name is picked for an object. As Hintikka presents the system, the functions that figure in winning strategies are extensional. In contrast, intensionality is inherent in the intuitionistic framework, due in part to the very meaning of the quantifiers. Clearly, there is a constructive procedure that, given any real number x, produces a natural number that is greater than x. This procedure is not extensional in that it may produce different outputs for different names of the same real number. Thus, this procedure does not compute a function.

The axiom of choice is not necessary for the game-theoretic semantics. Following Krynicki and Mostowski, define an (n + 1)-place relation R to be a dependency relation on a given domain D if for every a₁, …, aₙ in D there is at least one b in D such that Ra₁…aₙb holds. By defining winning strategies in terms of dependency relations instead of functions, one can develop game-theoretic semantics without assuming choice.
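The point of the dependency-relation formulation is that no selection of a unique witness is ever made. In this illustrative sketch (our own; the helper names are assumptions), the relation for ∀x∃yAxy is simply the set of all satisfying pairs, and truth amounts to that relation being total:

```python
def dependency_relation(domain, A):
    """All pairs satisfying A -- no witness is singled out, so no choice is used."""
    return {(a, b) for a in domain for b in domain if A(a, b)}

def is_total(domain, R):
    """Every a must be related to at least one b (Krynicki-Mostowski condition)."""
    return all(any((a, b) in R for b in domain) for a in domain)

domain = range(5)
A = lambda a, b: b == (a + 1) % 5    # 'b is the successor of a mod 5'
R = dependency_relation(domain, A)
print(is_total(domain, R))           # → True
```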

In Chapter 3, we encounter the first of the new object languages. Hintikka points out that the games associated with ordinary first-order (or higher-order) sentences assume perfect information in the sense that at each step, the relevant player knows all previous moves and choices. Consider, for example, the sentence:

∀x∃y∀z∃wAxyzw

When the verifier comes to choose the value for w, he knows what the falsifier chose for both x and z (and what he chose for y). Hintikka calls this assumption 'Frege's fallacy', although Frege was not thinking in game-theoretic (or even model-theoretic) terms. Hintikka proposes a new kind of quantifier (and connective) to overcome Frege's fallacy. In the game associated with the sentence

∀x∃y∀z(∃w/∀x)Axyzw

the verifier must choose a value for w without knowing the previous choice for x. A winning strategy for the verifier would be a pair of functions, one providing the value of y given a value for x and the other providing the value of w given only a value for z. The sentence is similar to one with branching quantifiers (see Krynicki and Mostowski):

∀x∃y
        Axyzw
∀z∃w

Hintikka calls his system independence-friendly first-order logic, or IF first-order logic for short.

Hintikka adds a proviso that 'moves connected with existential quantifiers are always independent of earlier moves with existential quantifiers' (p. 63). However, if this is the intended rule, then a sentence like ∃x∃y(x = y) would not be true over the natural numbers, contradicting Hintikka's claim that game-theoretic semantics agrees with ordinary semantics on first-order sentences. We suggest that the correct proviso is that an independent quantifier in the form (∃y/∀x) should be independent of any existential quantifiers that depend on the universal (∀x). Thus, in the game associated with the above ∀x∃y∀z(∃w/∀x)Axyzw, the choice for w should be independent of the previous choice for y. This would match the syntax and semantics of branching quantifiers, where dependencies are indicated with a partial order.

On the game-theoretic semantics, ordinary first-order languages are bivalent, in that for any first-order sentence, if there is no winning strategy for the verifier in the corresponding game, then there is a winning strategy for the falsifier. In contrast, IF first-order logic is not bivalent. Consider the simple sentence:

(∀x)(∃y/∀x)(x = y),

over the natural numbers. In the associated game, the falsifier picks a value for x, and then the verifier picks a value for y without knowing what the falsifier did. The verifier wins if the two numbers are identical, otherwise the falsifier wins. Clearly, there is no winning strategy for the verifier (unless he cheats), and there is no winning strategy for the falsifier either. Thus, the sentence is neither true nor false.
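This reasoning can be checked by brute force over a two-element domain. The sketch below is our illustration, not Hintikka's: because the verifier moves blind, a verifier strategy is just a constant c, while a falsifier strategy is a choice of a.

```python
domain = [0, 1]

# Verifier wins with constant c only if c matches every possible falsifier pick.
v_wins = any(all(a == c for a in domain) for c in domain)

# Falsifier wins with pick a only if a differs from every possible constant c.
f_wins = any(all(a != c for c in domain) for a in domain)

print(v_wins, f_wins)  # → False False
```

With more than one element in the domain, neither condition can hold, so the sentence is neither true nor false, exactly as the review describes.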

Hintikka adopts the convention that two sentences are equivalent if they are true in the same models, even if they may not be false in the same models. Thus, ∃y∀x(x = y) is equivalent to (∀x)(∃y/∀x)(x = y). Both are true in models whose domain has only one element; in all other models the former is false and the latter is neither true nor false.

Throughout the main text, Hintikka only considers sentences in negation normal form - where negations occur only in front of atomic formulas. The result is that the initial verifier and initial falsifier do not switch roles until after all choices are made. In the syntax, only existential quantifiers and disjunctions can be made independent of previous choices (and so only the verifier is handicapped by independence). It follows that IF first-order sentences are not closed under negation, even syntactically, and thus many instances of excluded middle cannot be expressed in Hintikka's IF first-order languages.

In an appendix to the book, Gabriel Sandu presents a language which can express independence for both the verifier and the falsifier. This language is closed under negation, and excluded middle can be expressed. Nevertheless, the system is still not bivalent for the reason that Hintikka gives. A sentence S ∨ ~S is never false, and is true only in those models in which one of the players has a winning strategy for the game associated with S.

When we formalize the statement that there is a winning strategy for a given IF first-order sentence, the result is a standard Σ¹₁-sentence. Thus, every IF first-order sentence is equivalent, in Hintikka's sense, to a Σ¹₁-sentence. Hintikka reports that the converse holds as well. This tidy fact entails that the (downward) Löwenheim-Skolem theorem holds for the language.

Chapter 5 presents the case for the grand claim that the 'true basic logic is not ordinary first-order logic but independence-friendly first-order logic'. The argument, which Hintikka takes to be conclusive, consists of a few examples of independence in mathematical practice. The one closest to the mainstream of mathematics is uniform differentiability. The statement that a function f is pointwise differentiable on an interval begins with a prefix like ∀x∃y∀ε∃δ, where y is the value of the derivative. Standard textbooks state (in natural language) that the function is uniformly differentiable if one can pick the δ independent of x. In Hintikka's language, the prefix would be ∀x∃y∀ε(∃δ/∀x). Lacking this notation, it is common to use the standard first-order prefix ∀ε∃δ∀x∃y∀z, but this might give the impression that the value of the derivative (y) depends on the choice of ε. Although one can show that the ordinary, first-order formulation works, the IF prefix better renders what is going on.

Hintikka writes that 'examples are easily multiplied. In many cases, the use of IF quantifiers is hidden by the use of function symbols'. This 'use of function symbols' points toward second-order logic (or set theory) as much as toward IF first-order logic. The issue would be the extent to which the use of functions in mathematics is limited to the expression of independence, or whether functions play a more direct role. Prima facie, the notion of function is central to modern mathematics.

There is no effective deductive system for logical truth and logical consequence in Hintikka's system - for much the same reason that second-order logic is not complete (see Shapiro, Ch. 4). The detailed and insightful discussion of the role and importance of completeness in Chapter 5 is an elaboration of Hintikka's preference for the descriptive aspect of logic, via model theory. According to Hintikka, the most important type of 'completeness' comes with categoricity. In this respect, at least, Hintikka agrees with most advocates of higher-order logic (e.g. Shapiro). Since the downward Löwenheim-Skolem theorem holds for IF first-order logic, its expressive power is limited. No uncountable structure can be characterized up to isomorphism with an IF first-order theory.

Chapter 6 contains one of the most interesting and important results of the study: a truth definition. Let L be an IF first-order language of arithmetic, with the natural numbers as the model. Using a second-order meta-theory, Hintikka shows that there is a Σ¹₁-formula that states that there is a function f that codes a winning strategy for the sentence whose Gödel number is x. Since, as noted above, any Σ¹₁-formula is equivalent to an IF first-order formula, we have that L contains a truth predicate for L. So the IF first-order system avoids what Hintikka calls 'Tarski's curse': we get a truth predicate without increasing the expressive resources of the language.

A careful reader will wonder about Tarski's theorem concerning the undefinability of truth. What of Liar sentences? The issue is addressed in Chapter 7. Let T(x) be the truth predicate for the IF first-order language of arithmetic, in the IF first-order language itself. The normal way to proceed would be to construct a sentence L equivalent to ~T(⌜L⌝), so that L 'says' that L is false. Hintikka incorrectly indicates that this construction goes through for his language. The problem is that the formula ~T(x) is not well-formed, since it is not in negation normal form, and there is no IF first-order formula equivalent to it. Recall, however, that Sandu's language is syntactically closed under negation. In the Appendix, Sandu shows that there is a Σ¹₁-truth predicate for his language. Thus, Sandu's language has a truth predicate for his (and Hintikka's) language, and the fixed-point construction can be carried out there. So Sandu's language does have a Liar sentence.

The resolution is that the Liar sentence is neither true nor false. In other words, neither the initial verifier nor the initial falsifier has a winning strategy for the associated game - the game of the Liar. Thus, by giving up bivalence and constructing a sufficiently expressive language, Hintikka and Sandu have provided a tidy resolution to the problem of defining truth. The language is semantically closed, after a fashion, and yet it is not overly strong - just minimally beyond first-order.

The careful reader may go on to wonder about a 'strengthened Liar' paradox, a sentence that says that it is either false or neither true nor false. Hintikka's and Sandu's languages each have a 'falsity' predicate, a formula that asserts that there is a winning strategy for the falsifier for the relevant game. However, neither language has a 'non-truth' predicate, a formula that asserts that there is no winning strategy for the verifier. Also, neither language has a 'truth-value gap' predicate, a formula that asserts that there is no winning strategy for either player.

Chapter 7 also contains a lengthy discussion of different kinds of negations. Hintikka develops an extended IF first-order language, which contains a second negation symbol, which he calls 'contradictory negation'. If S is a sentence, then its contradictory negation ¬S is true if and only if S is either false or neither true nor false. That is, ¬S is true if there is no winning strategy for the verifier in the game G(S). Since there is no game associated with ¬S, the extended language has a mixture of game-theoretic and ordinary truth-valued semantics.

The extended system loses many of the pleasing model-theoretic properties of IF first-order languages (such as the downward Löwenheim-Skolem theorem), but it gains in expressive resources. There is no truth definition for the extended language in the extended language. If there were, the strengthened Liar would rear its head.

Throughout the book, Hintikka asserts that IF first-order languages are first-order. A sceptic might argue that the system is really higher-order since the official game-theoretic semantics invokes winning strategies, which are functions (or relations). Hintikka would reply that the languages themselves do not invoke functions. The object language has no explicit reference to functions or other higher-order items, and there is a truth definition for the unextended language in the object language. Hintikka admits that it is convenient to discuss the language by invoking functions and using equivalent Σ¹₁-sentences, but functions, relations, and sets are not needed to understand it. The language is learnable directly.

Be this as it may, it is harder to maintain that the extended IF language is not implicitly higher-order. A sentence of the form ¬S is understood as asserting the nonexistence of a winning strategy. Thus, ¬S is an almost explicit quantification over functions. Also, if we can learn an (extended or unextended) IF first-order language directly, then we should be able to reason with the language. In Chapter 4, Hintikka sketches part of a deductive system for the IF first-order languages. The elimination rules for the independent existential quantifier require the introduction of function letters (to express the independence).

The topic of Chapter 8 is axiomatic set theory, the erstwhile paradigm in the foundations of mathematics, soon to be replaced by independence-friendly first-order logic. Here we pass over the wealth of material considered in this chapter.

Chapter 9 concerns the expressive resources of IF first-order languages. Hintikka shows how to characterize central mathematical notions (along the lines of Shapiro, Ch. 5, concerning second-order logic). For example, the extension of a predicate P is (Dedekind-)infinite if there is a one-to-one function from P to P which is not onto P:

∃f[∀x∀y(fx = fy ↔ x = y) & ∀x(Px → Pfx) & ∃t(Pt & ∀x(fx ≠ t))].

This is equivalent (in Hintikka's sense) to the IF first-order sentence:

∃t∀x∀z(∃y/∀z)(∃w/∀x)[(x = z ↔ y = w) & (Px → Py) & (Pt & y ≠ t)].

(The sentences (3.48) on p. 64 and (9.4), (9.5) on p. 187 are incorrect.) However, there is no IF first-order formula that expresses the finitude of P (or the finitude of the universe). For this, Hintikka moves to the extended IF first-order language, which has contradictory negation:

¬∃t∀x∀z(∃y/∀z)(∃w/∀x)[(x = z ↔ y = w) & (Px → Py) & (Pt & y ≠ t)].

Similarly, there is an IF first-order sentence that says that a relation R is not a well-ordering, and an extended IF first-order sentence that says that R is a well-ordering. The negation of the principle of induction for arithmetic can be expressed with an IF first-order sentence (since it is Σ¹₁). The extended language can thus express induction (which is Π¹₁). Thus, there is a categorical characterization of arithmetic (and analysis, Euclidean geometry, etc.) in the extended language.
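As a sanity check on the Dedekind characterization above, a brute-force sketch (our illustration; `dedekind_infinite` is a hypothetical helper) confirms that no finite extension admits an injective, non-surjective self-map, so the defining condition correctly fails on finite sets:

```python
from itertools import product

def dedekind_infinite(P):
    """True iff some f : P -> P is one-to-one but not onto (finite P only)."""
    P = list(P)
    n = len(P)
    # Enumerate all n**n functions f : P -> P as tuples of images.
    for images in product(P, repeat=n):
        f = dict(zip(P, images))
        injective = len(set(f.values())) == n
        onto = set(f.values()) == set(P)
        if injective and not onto:
            return True
    return False

print(dedekind_infinite({'a', 'b', 'c'}))  # → False
```

On a finite set, every injection is a permutation and hence onto, so the search always fails; only on an infinite set (where the enumeration is of course impossible) can the condition hold.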

Hintikka's 'Reductions in the Theory of Types' contains a theorem that for any nth-order sentence S, there is a second-order sentence S+ such that if S is satisfiable then so is S+, and S is a logical truth if and only if S+ is a logical truth. This is a significant reduction of standard higher-order logic to second-order logic (see also Shapiro, Ch. 6, §6.2). In the present chapter, Hintikka recapitulates this result and shows that S+ can be formulated as a Σ¹₁-sentence. Thus, there is a sense in which every higher-order sentence is equivalent to an (unextended) IF first-order sentence.

This most intriguing result shows that IF first-order logic enjoys most of the expressive power of full second-order logic. We can have our cake and eat it too: dispense with the troublesome second-order notions ('all subsets', or 'all functions') and still get the advantages of second-order logic. This sounds too good to be true. One might take the result as evidence that IF first-order logic is no more tractable than full second-order logic, confirming the suspicion that the reference to winning strategies is a quantifier over functions. Less contentiously, we can agree that IF first-order logic is a natural extension of first-order logic. It has enormous expressive resources without going very far into the analytic hierarchy.

In the final two chapters, Hintikka argues that the underlying philosophies and orientations of constructive and intuitionistic mathematics are flawed, but there are legitimate insights which are properly understood only within the framework of game-theoretic semantics and independence-friendly logic. The resulting system does not match intuitionistic logic. Space considerations preclude a detailed treatment here.

Department of Philosophy The Ohio State University University Hall 230 North Oval Mall Columbus, OH 43210 U.S.A.

Department of Philosophy The Ohio State University Newark Campus 1179 University Drive Newark, OH 43055-1797 U.S.A.

References

Hintikka, J.: 'Reductions in the Theory of Types', Acta Philosophica Fennica, 8, pp. 61-115.

Krynicki, M. and Mostowski, M.: 'Henkin Quantifiers', in M. Krynicki, M. Mostowski, and L. Szczerba (eds), Quantifiers: Logics, Models and Computation 1, Dordrecht, Kluwer Academic Publishers.

Shapiro, S.: Foundations without Foundationalism: A Case for Second-order Logic, Oxford, Oxford University Press.
COPYRIGHT 1998 Oxford University Press

Roy Cook and Stewart Shapiro, The British Journal for the Philosophy of Science, June 1998.
