
Artificial intelligence as the year 2000 approaches.

I am aware that I have acquired a reputation for being critical of the claims made for artificial intelligence (AI). It is true that I am repelled by some of the hype that I hear and by the lack of self-criticism that it implies. However, the underlying philosophical questions have long been of deep interest to me. I can even claim to be the first AI professional; my definition of a professional being someone who performs a service and accepts a fee in return. I had read Alan Turing's paper "Computing Machinery and Intelligence" when it appeared in Mind in 1950, and was deeply impressed. Without attempting to emulate Turing's erudition--or the wit in which he clothed it--I wrote a short article on the same subject and offered it to the editor of The Spectator, a British journal devoted to contemporary affairs. The editor accepted it and I duly received a fee.

Apart from his mathematical papers, I still consider the paper in Mind to be the best thing Turing ever wrote. He began with the question "Can machines think?" In that form, he found the question to be unsatisfactorily formulated. An attempt to extract the essential underlying point of interest led him to propose the famous Turing test. Instead of asking whether a particular machine could think, he suggested that one should instead ask whether it could pass this test. The test involved the machine posing as a human being and defying an interrogator to determine whether it was a machine or a person.

Turing admitted that he had no strong arguments to put forward in favor of the view that a digital computer would one day be able to pass his test, although he was inclined to believe that it would. He made the interesting suggestion that the machine might be equipped with a learning program and then taught like a child. The way he put it was that the machine might be programmed to simulate a child's brain rather than an adult's brain. This brings out the point that thinking is intimately connected with learning. Turing was quite aware that the education of a machine would, like that of a child, be a long, drawn-out process. He also saw practical difficulties. The computer would not have legs and could not be sent to school like a normal child. Even if this deficiency could be overcome by "clever engineering" he was afraid the other children might make excessive fun of it!

There is no difficulty in writing a program that will exhibit a simple form of learning, e.g., learning to recognize abbreviations for people's names. The program would contain a list of the abbreviations it already understood. Given an unfamiliar abbreviation, the program would make a guess. It would be told whether it was right or wrong, and would update its list accordingly. This can fairly be called learning, although there is nothing deep about it.
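
As a rough present-day sketch of such a program (my illustration, not one of the originals; the stored abbreviations and the fallback guessing rule are invented), the whole of its "learning" amounts to updating a list:

    # Sketch of a simple learning program: guess the name behind an
    # abbreviation, be told the right answer, and remember it.
    class AbbreviationLearner:
        def __init__(self):
            # Abbreviations the program already understands.
            self.known = {"Wm.": "William", "Robt.": "Robert"}

        def guess(self, abbreviation):
            # Use stored knowledge if available; otherwise make a crude guess.
            return self.known.get(abbreviation, abbreviation.rstrip("."))

        def learn(self, abbreviation, correct_name):
            # Feedback step: update the list so the next guess is right.
            self.known[abbreviation] = correct_name

    learner = AbbreviationLearner()
    print(learner.guess("Chas."))      # crude guess: "Chas"
    learner.learn("Chas.", "Charles")  # told the correct expansion
    print(learner.guess("Chas."))      # now answers "Charles"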

Stimulated by Turing's paper, my colleagues and I tried our hands at writing various learning programs of the kind I have just described. Their limitations soon became obvious. They did what they had been written to do, but no more. For that reason, they were uninteresting as soon as they had been run for the first time. I soon appreciated that a breakthrough was required in the direction of what I called generalized learning programs which would go on learning new things. Perhaps it would have been better to have called them unrestricted learning programs.

If computers had existed in the late seventeenth century and people had known how to write unrestricted learning programs, then a machine equipped with such a program would have been ready to absorb the work of Newton when it was published, and later that of Faraday and Einstein. It would now be doing its best with black holes! It would have read the novels of Dickens, and would be able to engage in the sort of half-teasing dialogue that Turing's fertile mind delighted in inventing (see box).

At first, I had hoped that when a sufficient number of simple learning programs had been written, it might be possible to discern what they had in common and, armed with this insight, write an unrestricted learning program. This did not happen and it soon became clear to me that it would not happen unless a genius were to arise who could turn the whole subject inside out.

To my surprise, others did not immediately take the point. The programs we wrote were the work of two weeks; others went on to write programs taking two years or longer, but with the same result as far as I could see.

The problem is illustrated well by Arthur Samuel's pioneering work on programs for playing checkers. His main interest was in exploring the method of recursive board search later taken up with such success by writers of chess programs. But he was also interested in seeing whether he could make his program learn from its experience and play a better game in the future.
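
Recursive board search of the kind Samuel pioneered can be indicated in a few lines (a generic minimax outline in present-day code, not Samuel's program; the game-specific helpers legal_moves, apply_move and evaluate are hypothetical placeholders supplied by the caller):

    # Generic recursive board search (minimax). The helpers legal_moves,
    # apply_move and evaluate are hypothetical, game-specific functions.
    def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)  # score the board at the leaves
        child_scores = (
            minimax(apply_move(position, m), depth - 1, not maximizing,
                    legal_moves, apply_move, evaluate)
            for m in moves
        )
        return max(child_scores) if maximizing else min(child_scores)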

An essential and frequently repeated step in Samuel's program was to ascribe a measure of goodness to a board position. He devised a number of different ways of computing such a measure. Instead of settling for one of these, he used a linear combination, with a weight assigned to each measure. He found that he could devise ways in which the weights could be automatically adjusted to optimize performance, but no way in which the program could invent an entirely new measure and add it to the set it already possessed. In other words, he could write a program to deal with mathematical forms, but not with intellectual concepts.
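
The structure can be shown schematically (my sketch, with invented measures, not Samuel's actual terms): a fixed set of measures combined linearly, where the program may tune the weights but has no means of adding a new measure to the list:

    # Sketch of a Samuel-style evaluation function: a fixed set of board
    # measures (invented here for illustration) combined linearly with
    # weights that the program itself can adjust.
    MEASURES = [
        lambda b: b["my_men"] - b["their_men"],            # material
        lambda b: b["my_kings"] - b["their_kings"],        # kings
        lambda b: b["my_mobility"] - b["their_mobility"],  # freedom to move
    ]
    weights = [1.0, 1.5, 0.2]

    def goodness(board):
        # The measure of goodness ascribed to a board position.
        return sum(w * m(board) for w, m in zip(weights, MEASURES))

    def adjust_weights(board, target_score, rate=0.01):
        # The learning step: nudge the weights toward better agreement;
        # note that no new measure can ever be invented here.
        error = target_score - goodness(board)
        for i, m in enumerate(MEASURES):
            weights[i] += rate * error * m(board)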

Some very complex learning programs have been written and have been highly impressive when demonstrated to a nontechnical audience. However, they have always turned out to be programs that optimize their performance by modifying their internal state, either by adjusting parameters or by updating data structures.

A high water mark of interest in spectacular AI programs of this type was reached in 1971 with a program written by Terry Winograd, whose purpose was to control a robot capable of stacking blocks. The robot was simplified down to the bare essentials and was simulated by computer graphics. This was a striking feature at the time, since the use of computer graphics was then rather uncommon. Commands were given to the robot in English and the serious purpose of the program was to try out a novel method of extracting information from English sentences. However, the combined appeal of the graphics and the natural language interface caused the work to be widely acclaimed as a demonstration of machine intelligence.

Expert Systems and Turing's Dream

Originally, the term AI was used exclusively in the sense of Turing's dream that a computer might be programmed to behave like an intelligent human being. In recent years, however, AI has been used more as a label for programs which, if they had not emerged from the AI community, might have been seen as a natural fruit of work with such languages as COMIT and SNOBOL, and of the work of E.T. Irons on a pioneering syntax-directed compiler. I refer to expert systems.

In simple expert systems, all the knowledge is incorporated by the programmer in the program, as indeed the alternative name knowledge-based systems clearly brings out. It is as though a child were taught the multiplication table by having a surgical operation performed on its brain. In more elaborate expert systems, some updating of an internal data base takes place during the lifetime of the system. These systems exhibit the same form of learning as the programs discussed earlier, and have the same limitations. Expert systems are indeed a valuable gift that the AI community has made to the world at large, but they have nothing to do with Turing's dream.
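
A toy fragment (the rules are made up, and far simpler than anything in a real expert system) makes the point concrete: the knowledge is written into the program by the programmer, not acquired by the program itself:

    # Toy rule-based fragment: every rule is supplied by the programmer.
    RULES = [
        # (condition on the known facts, conclusion to draw)
        (lambda f: f.get("fever") and f.get("rash"),     "consider measles"),
        (lambda f: f.get("fever") and not f.get("rash"), "consider influenza"),
    ]

    def advise(facts):
        # Fire every rule whose condition is satisfied by the facts.
        return [conclusion for condition, conclusion in RULES if condition(facts)]

    print(advise({"fever": True, "rash": False}))  # ['consider influenza']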

Turing predicted in 1950 that his dream would be realized within 50 years; specifically, that it would be realized on a computer with a storage capacity of about 10^9 bits (roughly 128 MB). Fifty years brings us to the year 2000, and it is clear that Turing's prediction will not be realized. Indeed, it is difficult to escape the conclusion that, in the 40 years that have elapsed since 1950, no tangible progress has been made towards realizing machine intelligence in the sense that Turing had envisaged. Perhaps the time has come to face the possibility that it never will be realized with a digital computer.

We use the term digital computer to describe the computers we build, although in fact a true digital computer is an abstraction. Any real digital computer must be composed of analogue circuits. Turing realized this and indeed it was very apparent to anyone of his period who witnessed the struggles which engineers were having to make vacuum tube circuits behave in a digital manner. The results were only just good enough as was shown by the tendency of the very early digital computers to make occasional mistakes. At the basic circuit level, engineers still need to recognize that what they design are fundamentally analogue devices, and they make routine use of analogue simulators and other analogue design tools.

A digital computer perforce works within a logical system, and it is known that within such a system there are things that cannot be done. We do not need to invoke Gödel's theorem to prove this; Desargues' theorem about triangles in perspective will do. However, I am not sure of the relevance of arguments at these levels.

On a practical level, the fact that there are limitations to what digital computers can do is illustrated by their inability to solve differential equations directly. The usual way around this difficulty is to replace the differential equation by a difference equation. Unfortunately, difference equations and differential equations are creatures of quite distinct species, and have different properties. For example, a second order linear differential equation with two-point boundary conditions has an infinite number of independent solutions, whereas the corresponding difference equation has a finite number. This is a difference of kind which cannot be removed by going to a smaller interval in the argument. One of its practical consequences is that parasitic solutions intrude themselves and plague the life of the numerical analyst. Parasitic solutions are artifacts that arise purely and simply from the replacement of the differential equation by a difference equation. The mathematician can make the difference equation and the differential equation come together by proceeding to a limit, but this does not help the worker in the numerical domain for whom limits are not accessible.
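
A standard example (my illustration, not one given in the text) makes the contrast concrete. The two-point boundary value problem

    y'' + λy = 0,    y(0) = y(π) = 0

has the infinite family of independent solutions y_n(x) = sin(nx) with λ_n = n², n = 1, 2, 3, ..., whereas its central-difference replacement on N interior mesh points,

    (y_{j+1} - 2y_j + y_{j-1}) / h² + λy_j = 0,    y_0 = y_{N+1} = 0,

is an N-by-N matrix eigenvalue problem and therefore possesses only N independent solutions, however small the mesh interval h is made.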

Brain--Digital or Analogue?

If we are prepared to regard the human brain as a machine, then we have an existence proof that machines can exhibit intelligence. However, this will not help with the problem of whether digital computers can exhibit intelligence unless we are prepared to assert that the human brain is digital in action. If we do this, we are faced by a purely practical consideration. The human neuron is about five orders of magnitude slower than the gates in a modern digital computer; how would it be possible for the brain, if it were digitally organized, to be sufficiently fast? Those who think of the brain as digital will usually say that it must make up for what it lacks in speed by possessing a high degree of parallelism. However, massively parallel computers find it hard to gain a factor of 100 or even 10 in speed. Even the most determined enthusiast for parallel computation may balk at a factor of 100,000.
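
The arithmetic behind these figures (rough values of my own, not the author's): a neuron takes on the order of a millisecond (10^-3 s) to fire, while a logic gate of the early 1990s switched in something like ten nanoseconds (10^-8 s), giving

    10^-3 s / 10^-8 s = 10^5,

i.e., the factor of 100,000 referred to above.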

However, the argument is academic since there is no reason why we should regard the human brain as a digital device. Indeed, the digital vs. analogue dichotomy is wholly inappropriate as an approach to the functioning of the human brain. As I have pointed out, the digital computer is an abstraction--one which a human designer finds useful as a way of organizing his thoughts. There is no reason why a non-human designer should operate the same way. On the evolutionary hypothesis, it is even an error to regard the brain as having been designed to meet a stated requirement. "Blind evolution stumbled on . . . lo! there were men and men could think."

I would suggest that the section of the AI community interested in rivaling the action of the human brain would do well to include some analogue element in its machines. I am without great enthusiasm for neural networks; nevertheless, I observe that neural nets can include analogue discriminating circuits.

I do not wish to give the impression that I think Turing's dream will come true, but with analogue rather than digital machines. I make no such prediction. Indeed, it may be that the sort of analogue machines that we are able to construct are themselves subject to limitations, which may or may not parallel those of digital machines. I do, however, suggest we take as a working hypothesis that intelligent behavior in Turing's sense is outside the range of the digital computer.

A negative principle can be of great value in guiding research. If it were not for the First Law of Thermodynamics, all bright students of mechanical engineering would want to work on perpetual motion! A recognition that Turing's dream is not going to be realized with a digital computer would perhaps help students avoid unpromising lines of research.

Maurice V. Wilkes, Communications of the ACM, August 1, 1992