
Artificial Intelligence and Scientific Method.

Artificial Intelligence and Scientific Method, by Donald Gillies. Oxford: Oxford University Press, 1996. Pp. xii + 176. H/b £35, P/b £11.99.

Artificial Intelligence (henceforth "AI") has always attracted philosophical interest, and Donald Gillies's new book is a worthy addition to the literature. Gillies treats his reader to a fascinating and occasionally bold investigation of an ongoing, two-way interaction between, on the one hand, AI and, on the other, the philosophical understanding of science and logic. The first three-quarters of the book explore the implications, for logic and scientific method, of two areas of AI: machine learning and logic-programming. Gillies argues for three major claims: (i) that work in machine learning suggests that Baconian induction is part of contemporary scientific method, (ii) that work in logic-programming tells in favour of the empiricist, rather than the a priori, conception of logic, and (iii) that work in logic-programming suggests a new framework for logic, one which has the potential to support the development of an inductive logic similar in form to deductive logic. In the last quarter of the book, Gillies examines certain Gödel-inspired arguments which conclude that mathematical logic can be used to identify the limits of AI. Here he argues for his fourth major claim: (iv) that whilst arguments making use of Gödel's first incompleteness theorem do not establish that human minds are superior to computers, nevertheless human thinking will not be rendered obsolete by the AI-systems of the future.

I shall concentrate on the first three of these claims. Space restrictions force me to be selective, and Gillies's analysis of the potential impact of AI on the philosophies of science and logic is, I think, his more distinctive contribution. Suffice it to say that those immersed in the Gödel-AI debate will find much food for thought in Gillies's arguments for claim (iv), and, in particular, in his intriguing suggestion that it is the "political" superiority of humans over computers (we build them to perform tasks for us) which ensures that Gödel-like arguments can be applied. (Thus, on Gillies's account, Gödelian arguments do not prove the human mind to be superior to computers, because human superiority is a prerequisite for the application of those arguments.)

Let's return to Gillies's first claim. Inductivists in the philosophy of science hold that scientific procedure is a process of inference based on many observations (that is, of induction). Anti-inductivists hold that it is not. To stage the debate between these two opposing camps, Gillies selects two champions, Bacon and Popper. Bacon was a committed inductivist. In contrast, Popper argued (famously) that scientific procedure is a process not of induction, but of conjecture and refutation. The human scientist uses his or her creative insight to produce a conjecture which is accepted unless or until it is falsified by empirical test.

To show how developments in AI might bear on this debate, Gillies introduces "The Turing Tradition" in machine learning, an approach distinguished by its emphasis on logic and its interest in solving practical problems. AI systems in this tradition have, we are told, already become part of scientific method. They have been used to confront practical problems (in, for example, medical diagnosis) and one system (GOLEM) has even discovered a minor law of nature which can be used to predict the secondary structure of a protein from its primary structure. Gillies argues that although these systems have certain Popperian features, they are fundamentally Baconian in nature. Therefore, he concludes, Baconian induction is part of contemporary scientific method.

The argument here relies on three key features which, on Gillies's analysis, the systems of interest possess: (a) at the heart of the learning algorithms lie inductive rules of inference which are mechanically applied; (b) these rules have a falsification component; (c) the form of these rules diverges from that of standardly conceived inductive procedures, in that generalizations are derived not solely from the data, but from the data plus a body of background knowledge, given to the program by the human designer.

Feature (a) is strong evidence for the Baconian interpretation: not simply for the obvious reason that the rules in question are inductive, but because Bacon himself thought of induction as a mechanical (that is, a rule-governed) procedure. However, at first sight, feature (b) appears to be evidence for a straightforwardly Popperian interpretation. Falsification is, after all, the engine of Popperian science. Gillies dissolves this worry by drawing on Bacon's personal account of the inductive procedure by which he arrived at his theory of heat. It becomes clear that Baconian induction involves a stage (which Bacon himself called "exclusion and rejection") in which unsupported conjectures are falsified. Thus falsification, as long as it is part of a mechanical procedure of inferring generalizations from data, is perfectly Baconian.

Gillies's treatment of feature (c), the role of background knowledge, is less compelling. His suggestion is that feature (c) "definitely favours Popper and goes somewhat against Bacon" (p. 69). This judgment depends on a move in which Gillies identifies the idea of "background knowledge" with what Popper meant by "a theory" in his claim that a scientist cannot make observations without already having a theory of the domain under investigation (p. 70). But elsewhere Gillies claims that background knowledge consists of sets of heuristics (for example, p. 52). In AI, heuristics are standardly thought of as informal rules of thumb which are used to guide programs towards solutions, especially in problem-spaces so large that an exhaustive search is impractical. It is far from clear to me that the three notions at issue (background knowledge, theory, and set of heuristics) are equivalent. Indeed, there is evidence from Gillies's own text that the proposed equivalence is problematic. In the discussion of the discovery of sulphonamide drugs, the relevant background knowledge is the thought that dyes capable of staining textiles might also have useful therapeutic properties (p. 12). This is a heuristic, but should not, I think, be counted as a theory (in the relevant sense). Later, during the discussion of ID3 (a machine learning program), the relevant background knowledge is a set of attributes, given in advance to the program, and on which the program depends when it constructs its generalizations (pp. 39-40). This time the background knowledge looks like a theory of the domain, not a set of heuristics. Things are, I fear, more complicated than Gillies suggests, so I would have welcomed a more detailed discussion of the notions in play, even if the eventual conclusion regarding feature (c) had remained the same.

One effect of Gillies's decision to focus on "The Turing Tradition" in machine learning is that his text makes no mention at all of connectionism, the form of AI in which simple processing units are "wired-up" (virtually) into supposedly brain-like networks. Although such networks are currently all the rage as psychological models, this is not their only claim to fame. Their ability to extract generalizations from data is the stuff of legend in the machine learning community. Prima facie, if anything looks like Baconian induction, then connectionist learning does. Because of this, some discussion of connectionism would, I think, have been appropriate.

When Gillies turns to the implications, for the philosophy of logic, of work in logic-programming, the star of the show is undoubtedly PROLOG, an AI programming language developed from classical logic. Gillies argues as follows (pp. 66-8). Classical logic can be seen as "mechanising" the process of checking the validity of a proof. PROLOG goes one stage further, by mechanising the control of the construction of proofs (that is, which rules of inference to apply). Thus PROLOG should be thought of as a system of logic in which control has been introduced into deductive logic. This suggests a new way of thinking about logic, as essentially "inference plus control". With this idea in place, Gillies argues for his second and third major claims (as identified earlier).
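
To make the "inference plus control" slogan concrete, here is a minimal propositional sketch in Python; it is my illustration, not Gillies's text or PROLOG itself, and the goal names and knowledge base are invented. The inference step (a goal is proved if some clause for it has a provable body) stays fixed, while the order in which clauses are tried plays the role of the control component; real PROLOG adds variables and unification, and fixes control as depth-first, left-to-right search over clauses in the order they are written.

    # Knowledge base: goal -> alternative clause bodies (each body a list of subgoals).
    # A fact is a goal with one empty body. All atoms here are invented examples.
    KB = {
        "flight_booked": [["seat_available", "payment_accepted"]],
        "seat_available": [[]],                    # fact
        "payment_accepted": [["card_valid"]],
        "card_valid": [[]],                        # fact
    }

    def prove(goal, kb):
        """Backward chaining: a goal is proved if some clause for it has a
        provable body. Trying clauses depth-first, in the order they appear,
        is the 'control' component; the rule of inference never changes."""
        for body in kb.get(goal, []):
            if all(prove(subgoal, kb) for subgoal in body):
                return True
        return False

    print(prove("flight_booked", KB))    # True: proved via the chain of clauses above
    print(prove("upgrade_granted", KB))  # False: no clause for this goal

Reordering the clauses, or swapping the search strategy, changes which proof is found first without touching the underlying rules of inference; that separation is what the slogan "inference plus control" captures.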

The control aspect of PROLOG endows it with a non-classical form of negation. The details need not concern us here. What is important is that, because of this feature, the inferences drawn by PROLOG are not certain (as they are in classical logic), but "merely" reasonable, given what else the system knows. This is not necessarily a weakness. In fact it makes PROLOG highly suitable for solving certain practical problems, such as finding flights using airline timetables, where what the user expects is sufficient reliability, not absolute certainty (flights might always be cancelled). Gillies takes all this as evidence for the empiricist, rather than the a priori, conception of logic, because (he argues) it suggests that different logics may be appropriate in different domains, and that which logic is appropriate can be decided only by looking at the empirical results in a particular domain (p. 97). One might complain here that Gillies has evidence for only the first of these two steps, because the possibility is left open that although there may be a plurality of logics, one could always tell, by carrying out some form of a priori analysis of the proposed logic and the problem domain, whether or not that logic is appropriate for that domain. Later in the book Gillies notes this criticism, but attempts to deflect it by appealing to an example of AI-research in which two logics were pitted against each other in an empirical play-off. Unfortunately the observation that he makes is less than conclusive: "there seems to be no way in which the relative successes of [the two logics] in the two domains examined could have been predicted a priori" (p. 111). Even if one has sympathy with Gillies's conclusion (as I do), one might have hoped for something more here, such as a discussion of exactly what it is about certain domains that makes them resistant to any suitable a priori analysis.
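
The non-classical negation at issue is negation as failure: "not p" succeeds when every attempt to prove p from the program fails. A small sketch, again in Python and with an invented timetable (the example is mine, modelled on the review's airline case), shows why the resulting answers are reasonable rather than certain: absence from the database is treated as falsity (the closed-world assumption), so conclusions hold only relative to what the system currently knows.

    # Invented timetable of (origin, destination) pairs; purely illustrative.
    TIMETABLE = {("LHR", "JFK"), ("LHR", "CDG"), ("CDG", "JFK")}

    def flight(origin, dest):
        return (origin, dest) in TIMETABLE

    def no_flight(origin, dest):
        # Negation as failure: "no flight" succeeds simply because the attempt
        # to find a listing failed, not because absence has been demonstrated.
        return not flight(origin, dest)

    print(flight("LHR", "JFK"))     # True: listed in the timetable
    print(no_flight("LHR", "SFO"))  # True, but only given what the system knows

A classical reading would demand a proof that no such flight exists; the PROLOG-style reading settles for the failure to find one, which is exactly the trade of certainty for sufficient reliability that Gillies highlights.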

Attention then turns to the current state of inductive logic. Gillies observes that most work in deductive logic has focussed on the analysis of inference. It has been AI (specifically PROLOG) which has introduced control into deductive logic. Things are precisely the reverse in inductive logic, where philosophers of science, having despaired of finding inductive rules of inference, have concentrated on finding methods for calculating the degree to which a piece of evidence confirms a hypothesis. According to Gillies, such confirmation values are part of the control aspect of inductive logic, because they tell us which hypotheses ought to be preferred for making predictions. Thus, in philosophy, the control aspect of inductive logic has been under active investigation, whilst the inference component has been set aside. But now recall that recent advances in machine learning have shown that inductive rules of inference are, in fact, possible. So, concludes Gillies, by conceptualizing both deductive and inductive logic as "inference plus control", and by appealing to developments in AI, the possibility of developing an inductive logic, similar in form to deductive logic, looks much more promising. The challenge is to integrate the lessons from machine learning with those from confirmation theory. Here I am inclined to sound a pessimistic note. Inductive rules of inference, as deployed in AI, are less than completely understood, and confirmation theory is, to be generous, underdeveloped. The road to success looks long and difficult. Of course Gillies is well aware that problems exist (pp. 104-5). Only time, plus a great deal of dedicated effort, will tell whether or not they can be overcome. At the very least, Gillies's suggested framework may have provided a context within which progress can be made.

Finally a note on style: it is an unfortunate fact of academic life that one cannot be confident that a text with an interdisciplinary content will enjoy a style of presentation which facilitates interdisciplinary comprehension. Given this, one should applaud the following qualities of Gillies's presentation: first, whenever he discusses an AI-system, he provides a largely non-technical yet nicely detailed explanation of that system's workings. Second, he takes great pains to render accessible, to the non-specialist, the techniques and results in logic to which he appeals in his argument. Third, his exposition involves a number of clear and engaging analyses of key episodes in the intellectual history of science.

Gillies has produced an insightful, well-written book, which will be welcomed as a useful contribution to contemporary debate by philosophers of logic, philosophers and historians of science, philosophers of AI, and AI researchers "on the ground". The study of logic and the study of scientific method are old philosophical bedfellows. If Gillies is right, they must now make room for a third partner: AI.

MICHAEL WHEELER
Department of Experimental Psychology
University of Oxford
South Parks Road
Oxford OX1 3UD
UK
