
Abstract Complexity Theory and the Mind-Machine Problem

In this paper we interpret the characterization given in Solomon [1978] of the Gödel speed-up phenomenon as providing support for the 'Nagel-Newman thesis' (see Nagel and Newman [1958], pp. 100-2) that human theorem recognizers differ from mechanical theorem recognizers (i.e. programs which recognize theorems) in that the former do not seem to be limited by Gödel's incompleteness theorems whereas the latter do seem to be thus limited.

However, we also maintain that (currently non-existent) programs which are open systems in that they continuously interact with, and are thus inseparable from, their environment, are not covered by the above (or probably any other recursion-theoretic) argument.

The Gödel incompleteness theorem states that arithmetic truth is too powerful for any reasonable (i.e. consistent and recursive) axiomatic system to capture completely. More precisely, given any reasonable axiomatization for arithmetic, there exist arithmetic truths which are not provable within that axiomatization (a formal statement is displayed below). Nagel and Newman claim that all computer programs behave like rules of inference, with certain data (possibly encoded within the program) presumably playing the role of axioms. Therefore, they believe that the incompleteness theorem limits the power of mechanical theorem recognizers along with all other axiomatic systems. This they contrast with the human mathematician who can transcend any particular axiomatization, since

. . . the resources of the human intellect have not been, and cannot be, fully formalized, and that new principles of demonstration forever await invention and discovery. (Nagel and Newman [1958], p. 101)
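
For reference, the theorem appealed to above can be stated as follows, with 'reasonable' spelled out as consistent and recursively axiomatized. This is a standard rendering of the first incompleteness theorem, not a quotation from Nagel and Newman:

    \[
    \text{for every consistent, recursively axiomatized } T \text{ containing elementary arithmetic, there is a sentence } \sigma
    \text{ such that } \mathbb{N} \models \sigma \text{ and } T \nvdash \sigma.
    \]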

An obvious possible defence of Nagel and Newman is supplied by the theorem (see Rogers [1967], pp. 96-7) which states that the set of all arithmetic truths is not recursively enumerable, i.e., cannot be recognized by any computer program in the sense of being the domain of the program. Since we may be reluctant (as are Nagel and Newman) to claim that 'the' human mind is capable of recognizing all arithmetic truths, this theorem does not in itself support Nagel and Newman. However, it does lend some support because it can be proven as a consequence of the representability of any recursively enumerable set of strings S within any reasonable arithmetic axiomatization T. More precisely, given a Gödel numbering g, there exists a formula F in T such that s ∈ S iff F(g(s)) is provable in T. Now if the set of arithmetic truths were recursively enumerable, there would exist a formula F in T such that s is an arithmetic truth iff F(g(s)) is provable in T. Assuming that T' is obtained from T by adding an inference rule which derives s from F(g(s)), every arithmetic truth would then be provable in T', which would contradict the Gödel incompleteness theorem. Thus, what can be recognized by programs is limited by what can be proven in T', which is, in turn, limited by the Gödel incompleteness theorem.
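
Schematically, the argument of the preceding paragraph runs as follows (the labelling is ours; Tr denotes the set of arithmetic truths):

    \begin{align*}
    &\text{(1) Suppose the set } \mathrm{Tr} \text{ of arithmetic truths were recursively enumerable.}\\
    &\text{(2) By representability, there is a formula } F \text{ of } T \text{ with } s \in \mathrm{Tr} \iff T \vdash F(g(s)).\\
    &\text{(3) Let } T' \text{ be } T \text{ plus the rule: from } F(g(s)) \text{ infer } s\text{. Then } T' \vdash s \text{ for every } s \in \mathrm{Tr}.\\
    &\text{(4) } T' \text{ is still a reasonable system, so (3) contradicts the incompleteness theorem; hence } \mathrm{Tr} \text{ is not r.e.}
    \end{align*}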

While this argument does support the view that mechanical theorem recognizers are limited by Gödel's incompleteness theorem, it suffers from the following weakness: it does not show that each mechanical theorem recognizer is bound to some fixed set of axioms, but only that the theorems recognized could have been obtained alternatively by effective productions from such a set of axioms. It would be much more satisfying to show that all mechanical recognizers accomplish their recognition by behaving as if they were bound to a fixed axiomatization, and in a way that is clearly different from the human mathematician's.

We shall now argue that the characterization of the Gödel speed-up phenomenon given in Solomon [1978] provides evidence of just such behaviour. In Solomon [1993], it is pointed out that the characterization of Gödel speed-ups in Solomon [1978] implies that the theorems of a theory T_2 are 'significantly easier' to recognize mechanically (as being true) than the theorems of a theory T_1 iff T_2 is 'significantly less abstract' than T_1. By significantly less abstract we mean that T_2 contains an information-theoretically 'large' (i.e., non-r.e.) set of theorems which T_1 does not contain, so that T_2 has a much narrower class of models than T_1. (Thus, T_2 possesses more detail and a much narrower context than T_1.)
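
In symbols, and reading 'contains a non-r.e. set of extra theorems' as the difference of the two sets of theorems being non-r.e., the relation just described can be paraphrased roughly as follows (this is our paraphrase; Solomon's precise formulation, in terms of abstract complexity measures, is not reproduced here):

    \[
    \mathrm{Thm}(T_2) \text{ is significantly easier to recognize mechanically than } \mathrm{Thm}(T_1)
    \iff
    \mathrm{Thm}(T_2) \setminus \mathrm{Thm}(T_1) \text{ is not recursively enumerable.}
    \]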

Moreover, the property of T_2 being significantly easier to recognize mechanically than T_1 is program independent, in that if there exists a mechanical theorem recognizer for T_2 which runs significantly faster than some mechanical theorem recognizer for T_1 on many theorems which the theories have in common, then any mechanical theorem recognizer for T_2 runs significantly faster than any mechanical theorem recognizer for T_1 on many such theorems.

Thus the obvious fact that making many extra theorems available is the only way to drastically shorten proof lengths, and therefore drastically speed up the run times of programs which recognize the truth of theorems by 'blindly' enumerating proofs (in order of increasing length), automatically implies (by the above 'program independence') that this is also the only way to drastically speed up any mechanical theorem recognizer. In other words, any method of mechanical theorem recognition has the same behaviour, with respect to 'drastic speed-up', as 'blind' proof enumerators.
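
To fix ideas, here is a minimal sketch in Python of what 'blind' enumeration amounts to. The proof-checking predicate is a stand-in (any sound, decidable check of candidate derivations would do); nothing here is intended to reproduce an actual theorem prover.

    from itertools import product

    # Alphabet over which candidate proofs are spelled out.  This is a
    # stand-in: an actual recognizer would enumerate well-formed
    # derivations of its proof calculus rather than raw strings.
    ALPHABET = "0123456789()+*=~&|>x, "

    def checks_out(candidate, theorem):
        # Hypothetical decidable proof check: does `candidate` encode a
        # correct derivation of `theorem` from the fixed axioms?  Left
        # unimplemented; any sound, decidable checker could be plugged in.
        raise NotImplementedError("supply a proof checker for the chosen calculus")

    def blind_recognizer(theorem, max_length=None):
        # Enumerate candidate proofs in order of increasing length and halt
        # as soon as one checks out.  The run time is therefore governed by
        # the length of the shortest available proof, which is why supplying
        # extra theorems (i.e., shortening proofs) is the only route to a
        # drastic speed-up for such a recognizer.
        length = 1
        while max_length is None or length <= max_length:
            for letters in product(ALPHABET, repeat=length):
                candidate = "".join(letters)
                if checks_out(candidate, theorem):
                    return candidate  # theorem recognized as true
            length += 1
        return None  # bounded search exhausted without finding a proof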

On the other hand, although detail can speed up the human mathematician in recognizing the truth of some theorems, he often passes to a more abstract theory in order to ignore a great many details which may serve to obscure many truths. In passing to a more abstract theory, he not only speeds up the recognition of many theorems by the avoidance of confusing detail, but also speeds up the recognition of many other theorems because of the availability of additional models in which to experiment with examples. In fact, some of the greatest advances in mathematics (e.g., Galois theory) have amounted to finding the appropriate abstract theory to use in solving problems which arise in more concrete theories.(1)

We now raise a possible objection to our above argument. The argument given in Solomon [1993], on which our above argument depends, in turn depends on [T.sub.1] and [T.sub.2] being deductively closed (in first order logic). Therefore, even granting that we have successfully demonstrated that all such mechanical theorem recognizers are blind, one may still object that these theories being recognized must contain many extremely artificial and non-intuitive theorems (such as complicated numerical forms of propositional tautologies); and indeed the only way, for human or machine, to have the potential to recognize all the theorems of the theories (even under the ideal conditions of unlimited time and space) is by some boring and blind procedure. The behaviour of these programs on 'interesting and intuitive' fragments of the theories might indeed more closely resemble a human's behaviour.

However, we assume that the 'asymptotic' behaviour of mechanical theorem recognizers on complete theories is indicative of their behaviour on more reasonable fragments of the theories. We believe that we are justified in making such an assumption since it is merely an instance of a basic presupposition on which the relevance of abstract complexity theory itself is dependent. For example, in Young [1977] this presupposition is described as

the usual recursion-theoretic assumption that . . . infinite functions and the ultimate behavior of run times on large arguments yield useful insights into computational complexity theory.

Thus we are presupposing no more in this regard than any investigator in abstract complexity theory who believes his field at all relevant to computer science. Of course, this common presupposition may be false; but until it is rejected by computer science, we do not feel unjustified in deriving some prima facie philosophical interpretations from it. Furthermore, if we conceive of Church's Thesis as asserting that a function is 'intuitively' computable if and only if it is a partial recursive function (and this is surely a common conception of Church's Thesis), then the presupposition in Young [1977] amounts to no more than the application of the if direction of Church's Thesis to the resource bounded computations of complexity theory.

Now consider a program which continuously interacts with its environment through a steady stream of sensory inputs from sophisticated (and currently non-existent) input facilities, which allow it visual, auditory and tactile inputs. Suppose that this program evolves over time into successively more powerful (or at least different) programs through such interactions.

This 'program' is actually a sequence of programs, each program possibly more powerful than the previous one in the sequence. In the limit, this sequence of programs is infinite, and hence has the potential for being non-computable. In fact it's neither unreasonable nor unnatural to consider this sequence to be non-computable, since the programs are 'evolving' through interaction with an environment presumably containing random (i.e., surprising) elements, and one may interpret randomness in terms of non-computability (see, e.g., Chaitin [1974], pp. 405-8 and Soare [1987], pp. 71, 207).
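
The picture just described can be made a little more concrete with a toy sketch in Python (ours, purely illustrative). Each stage is an ordinary, fixed program; the environment is simulated here by a pseudorandom source, which is of course computable, whereas genuine non-computability of the environmental stream is exactly what no single simulating program could supply.

    import random

    def make_recognizer(principles):
        # Each stage of the 'evolving program' is an ordinary, fixed program
        # determined by its current stock of principles.  The recognition
        # test here is a trivial placeholder.
        principles = frozenset(principles)
        def recognizes(sentence):
            return sentence in principles
        return recognizes

    def next_stage(principles, sensory_input):
        # The transformation from one stage to the next is driven by the
        # environment.  In the picture sketched in the text this
        # transformation need not be effective; here it merely adds
        # whatever the environment happens to offer.
        return set(principles) | ({sensory_input} if sensory_input else set())

    # The 'program' of the text is really the sequence stage_0, stage_1, ...
    principles = {"0=0"}
    stages = []
    for step in range(5):
        stages.append(make_recognizer(principles))
        # Simulated environment: occasionally supplies a new 'principle'.
        observation = "principle_%d" % step if random.random() < 0.5 else None
        principles = next_stage(principles, observation)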

Thus, if we treat mechanical theorem recognizers as open systems, continually interacting with their environment, they may enjoy increases in power (possibly through 'random inspirations' from this environment) which occur in a 'surprising', i.e., non-computable, manner. This non-computability could indeed spare these open mechanical theorem recognizers from being limited by recursion-theoretic arguments, such as the one we gave above (see also Boden [1977], p. 474 and Solomon [1979]). The Connection Machine, for instance, has been proposed as one such computational model for the mechanical processing of sensory inputs in Hillis [1991]; see Lyngzeidetson [1990] for a discussion of the Connection Machine in the context of the mind-machine problem.

We may interpret the limiting theorems of logic, such as Gödel's incompleteness theorem, as emphasizing a sharp distinction between closed systems, which can be considered to be separate from their environment, and open systems, which cannot: namely, limits of the former can be precisely proven, whereas limits of the latter cannot. Then these limiting theorems can be considered to demonstrate a distinction between human and mechanical theorem recognizers only insomuch as the mechanical theorem recognizers are closed systems, whose inputs have to be painstakingly entered at the terminal keyboard.

Thus, open systems that are interacting with an environment that contains non-computability can themselves be non-computable (say, via the non-effective transformation of the program that drives the system). We have argued that humans differ from (currently existing) machines in that humans have more sensitive, highly developed bodies, not, as Gödel apparently felt (see Gödel [1972], p. 306 and Wang [1974], pp. 324-6) because they possess a 'mind' that transcends the material 'brain' and can develop non-mechanically through reflection.(2) This sensitivity can allow humans to be affected by non-computable aspects of their environment such as randomness (which we referred to above) or the possibility that a bounded interval of time can consist of an uncountable set of temporal points.

ALBERT E. LYNGZEIDETSON Department of Philosophy, Florida Atlantic University, Davie, Florida, USA

MARTIN K. SOLOMON Department of Computer Science and Engineering, Florida Atlantic University, Boca Raton, Florida, USA

1 For related discussions, the reader is referred to Pt. IV, sections 45-6 in Wittgenstein [1956], sections 1-3 in Frege [1953], and chapter VII in Wang [1974].

2 However, the reader should contrast Gödel's non-materialist view of mind related in Wang ([1974], p. 326) with his conjecture as described by Wang ([1974], p. 85).

. . . that some physical organ is necessary to make the handling of abstract impressions (as opposed to sense impressions) possible, because we have some weakness in the handling of abstract impressions which is remedied by viewing them in comparison with or on the occasion of sense impressions.

REFERENCES

BODEN, M. [1977]: Artificial Intelligence and Natural Man. New York: Basic Books Inc.

CHAITIN, G. J. [1974]: 'Information-Theoretic Limitations of Formal Systems', Journal of the Association for Computing Machinery, 21, pp. 403-24.

FREGE, G. [1953]: The Foundations of Arithmetic. Oxford: Basil Blackwell.

GÖDEL, K. [1972]: 'Some Remarks on the Undecidability Results', in S. Feferman et al., eds., Kurt Gödel: Collected Works, Vol. II. New York: Oxford University Press, 1990, pp. 305-6.

HILLIS, W. D. [1991]: 'The Connection Machine', Scientific American Special Issue on Science in the 20th Century, 3, pp. 174-82.

LYNGZEIDETSON, A. E. [1990]: 'Massively Parallel Distributed Processing and a Computationalist Foundation for Cognitive Science', The British Journal for the Philosophy of Science, 41, pp. 121-7.

NAGEL, E. and NEWMAN, J. R. [1958]: Gödel's Proof. New York: New York University Press.

ROGERS, H. JR. [1967]: Theory of Recursive Functions and Effective Computability. New York: McGraw-Hill.

SOARE, R. I. [1987]: Recursively Enumerable Sets and Degrees. New York: Springer-Verlag.

SOLOMON, M. K. [1978]: 'Some Results on Measure Independent Gödel Speed-ups', The Journal of Symbolic Logic, 43, pp. 667-72.

SOLOMON, M. K. [1979]: 'Complexity of Formal and Natural Systems', Proceedings of the 23rd Annual Meeting of the Society for General Systems Research, pp. 119-23.

SOLOMON, M. K. [1993]: 'Measure Independent Gödel Speed-ups and the Relative Difficulty of Recognizing Sets', Mathematical Logic Quarterly, 39, in press.

WANG, H. [1974]: From Mathematics to Philosophy. New York: Humanities Press.

WITTGENSTEIN, L. [1956]: Remarks on the Foundations of Mathematics. Oxford: Basil Blackwell.

YOUNG, P. R. [1977]: 'Optimization Among Provably Equivalent Programs', Journal of the Association for Computing Machinery, 24, pp. 693-700.