
First specialize, then generalize.

The four articles in this special issue provide evidence that Polya's advice for solving difficult problems also describes the history of developments in the logic programming research community. Starting from an extremely general and ambitious goal--the automatic proof of theorems in the full predicate calculus--researchers in the field were able to discover a meaningful related problem--that of proving theorems expressible by Horn clauses. (1) A thorough study of this related problem opened new vistas for attempting to tackle the difficult original problem of automating the formal methods envisioned by Frege in the late nineteenth century.

Alan Robinson, the author of the first article in this issue, is credited with taking the first steps toward logic programming (LP). In essence, his 1965 Journal of the ACM paper showed the way for the developments that occurred in the 1960s and early 1970s, leading to the birth of the programming language Prolog. Colmerauer and Kowalski discovered the insightful related problem around 1974 by noting that most of the statements used by their problem-solving theorem provers consisted of Horn clauses, or could easily be placed into that special form (see Communications of the ACM 31, 1 (Jan. 1988), 38-43).

In his ground-breaking paper of 1965, Alan Robinson had the amazing foresight to distinguish the importance of two components in automatic theorem proving: a single inference rule, called resolution, and a test for the equality of trees, called unification. The wisdom of this decision has been evident throughout the development of logic programming, and it has acquired a special significance in recent work on constraint logic programming, in which unification is replaced by tests of satisfiability of constraints in various domains.

Alan Robinson's article in this issue first presents a cogent summary of the history of logic programming. He then provides novel presentations of unification and resolution based on the possibility of implementing these basic operations using massively parallel architectures. In brief, Robinson's article presents a glimpse of the past and points to the future.

Resolution and unification appear in different guises in various algorithms used in computer science. In the propositional calculus a simple form of resolution is expressed by the inference rule:

if a → b and b → c then a → c, or, equivalently, (¬a ∨ b) ∧ (¬b ∨ c) implies (¬a ∨ c)

Observe the similarity between resolution and the elimination of variables in algebra, e.g.,

a + b = 3 and -b + c = 5 imply a + c = 8

Another intriguing example occurs in matching a procedure definition with its call. Consider, for example:

procedure b: a

. . .

call b ; c

in which a is the body of b, and c is a continuation of a call to b. If one views the definition of b and its call as complementary, a (pseudo) resolution yields:

a ; c

in which concatenation is noncommutative and the resolution corresponds to replacing a procedure call by its body. (Actually, this last example provides an intuitive procedural view of resolution as used in Prolog.)
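To make the analogy concrete, here is a minimal Prolog rendering of it (the predicates a, b, and c are illustrative placeholders, not taken from any of the articles):

    a.              % a fact making the body of b true
    c.              % a fact making the continuation true
    b :- a.         % the "procedure definition" of b

    % Resolving the query  ?- b, c.  against the clause for b replaces
    % the call b by its body a, yielding the intermediate query
    % ?- a, c.  which then succeeds using the two facts above.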

The operation of unification is akin to pattern matching, which is ubiquitous in computer science. Robinson's unification algorithm brings rigor to the notion of pattern matching, and it has a deep meaning: it replaces a set of fairly intricate predicate calculus axioms specifying the equality of trees with an efficient algorithm that is easily implemented on a computer. The subject of unification, as considered by Alan Robinson, brought about two remarkable instances of the interaction between experimental and theoretical computer science. The first is the elimination of the so-called occur-check, which prevents infinite trees from being unified. This fairly expensive test is theoretically needed to ensure the correctness of proofs. The audacious step of eliminating it, as initially suggested by practical considerations, turned out to be an incentive for finding efficient algorithms for matching cyclic structures. That, in turn, suggested the extensions (and replacements) of unification now in use in constraint languages.

The second instance of the interaction between experimental and theoretical views of unification is its parallel implementation. Theoretical results have shown that there are extreme cases in which unification is essentially sequential. Nevertheless, as Robinson's current article mentions, there can be substantial speed-ups when unification is implemented on massively parallel computers to solve practical problems. (This situation parallels that of the simplex algorithm, which is known to be efficient in practice but costly in special cases that are mostly of theoretical interest.)
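As a small illustration of the occur-check mentioned above, consider the following Prolog fragment (the queries are mine; the behavior without the check varies somewhat among Prolog systems):

    % Ordinary unification in most Prolog systems omits the occur-check,
    % so the goal X = f(X) "succeeds" by constructing a cyclic term.
    cyclic(X) :- X = f(X).

    % The ISO built-in unify_with_occurs_check/2 performs the check and
    % therefore rejects the same unification.
    checked(X) :- unify_with_occurs_check(X, f(X)).

    % ?- cyclic(X).    succeeds in systems that support cyclic (rational) trees
    % ?- checked(X).   fails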

The second article in this issue is by John Grant and Jack Minker. The latter author has played a key role in the development of the area of LP dealing with databases. Here, too, the community's efforts follow the Polya paradigm. In this case the specialized related problem consists of Horn clauses involving only variables and constants. Unification then simplifies considerably: it becomes a simple assignment of a variable to another variable or to a constant. The resulting Prolog-like language is known as Datalog.
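A small Datalog-style program of my own devising (not taken from the Grant-Minker article) shows the restriction: only constants and variables appear as arguments, so unification amounts to binding a variable to a constant or to another variable.

    parent(tom, ann).                           % facts contain constants only
    parent(ann, joe).

    ancestor(X, Y) :- parent(X, Y).             % rules contain only variables
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % ?- ancestor(tom, Who).
    % Who = ann ;
    % Who = joe.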

Grant and Minker present a concise, easily readable, and rigorous introduction to this area. Their article contains several examples illustrating the basic theorems relating the declarative, procedural, and fixpoint semantics applicable to this class of Prolog-like languages. (This material should become required reading in more advanced courses in LP.) The introduction of negation in Prolog represents a first step toward generalizing Polya's related problem (Horn clauses) back to the more ambitious problem of the full first-order predicate calculus. The various facets of negation are also covered in the Grant-Minker article.
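In Prolog, negation takes the form of negation as failure: a negated goal succeeds precisely when no proof of the goal can be found. A minimal illustration (my example, not one from the article):

    enrolled(ann).                    % who is enrolled ...
    enrolled(bob).
    paid(ann).                        % ... and who has paid

    % \+ paid(X) succeeds when no proof of paid(X) exists in the program.
    owes_fees(X) :- enrolled(X), \+ paid(X).

    % ?- owes_fees(Who).
    % Who = bob.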

Under the heading "Applications," Grant and Minker present several salient topics, including integrity constraints, query optimization, and updates. These are particularly important practical and theoretical problems pertaining to the implementation and maintenance of very large databases. In the last section of their article, entitled "General Deductive Databases," the authors describe state-of-the-art research attempting to generalize the described theoretical foundations by making them applicable to non-Horn clauses. As Polya said: after the solution of the related problem you may find courage to attack your original problem.

The article by Koichi Furukawa discusses the role of logic programming in Japan's ambitious Fifth Generation Project. As one of the leaders of that project, he is in a privileged position to present examples of the latest Japanese efforts. His article stresses the application aspects of concurrent variants and extensions of LP, which have been vigorously pursued by three groups: one at ICOT, Japan, and the other two at Imperial College, England, and the Weizmann Institute, Israel.

A premise of their effort can be stated as follows: a programming language worth its salt should be expressive enough to allow its users to write complex but efficient operating systems software (as is the case with the C language). With that goal in mind, they incorporated into LP the concept of don't-care non-determinism, as advocated by Dijkstra. The resulting languages are called concurrent LP languages. The variants proposed by these groups were implemented and refined; they have now converged to a common model which is a specialized version of the original designs (Polya again!). Some critics may claim that the introduction of don't-care non-determinism takes the L out of LP. That is an extreme statement. The success of a language depends on the ease with which relatively difficult problems can be expressed using it. Furukawa's article provides examples that illustrate the language's effectiveness in solving problems that would be awkward to express in classic Prolog. Recent concepts in computer science, such as message passing and lazy evaluation, are inherent to concurrent LP.
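To give a flavor of committed choice, here is a sequential Prolog sketch of a stream merge; it only approximates, by means of the cut, the commit operator that the concurrent LP languages provide, and it ignores their concurrent execution of goals:

    % Once a clause matches, the cut commits to it and the alternative
    % ways of merging are discarded: the "don't care" choice.
    merge([X|Xs], Ys, [X|Zs]) :- !, merge(Xs, Ys, Zs).
    merge(Xs, [Y|Ys], [Y|Zs]) :- !, merge(Xs, Ys, Zs).
    merge([], [], []).

    % ?- merge([1,2], [3], Z).
    % Z = [1, 2, 3].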

The examples presented in Furukawa's article leave no doubt that the Japanese are cognizant of the latest developments in LP and are eager to incorporate novel ideas into their software. This is shown in the examples describing routing in VLSI, genome analysis, and the use of database operations in theorem proving, culminating in a final example that combines constraint and concurrent logic programming. An interesting artifice described in Furukawa's article is the simulation of don't-know non-determinism using don't-care non-determinism. (This can be viewed as one more instance of following Polya's advice.)

David Scott Warren is the author of the final article in this special issue. He is among the early American researchers to become involved in LP, and has made extensive contributions to the area of analysis of logic programs. Warren's article contrasts with the others in that it focuses on three specific research topics of special significance, whereas the preceding articles are mostly of general interest.

The first topic is the bottom-up evaluation of logic programs. To motivate its importance, it is worthwhile to review the relationship between the Prolog inference mechanism and parsing as used in compilers. The operational behavior of a top-down recursive descent parser resembles the inference rule used by Prolog interpreters. In the former, procedures are called to determine if an input string can be parsed using context-free grammar rules specified by the procedures. In the latter, the procedures are called to test if a query can be 'derived' from a set of program rules resembling those of a context-free grammar.
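The correspondence is most visible in Prolog's grammar-rule (DCG) notation. The toy grammar below (my example) is executed by a Prolog interpreter in exactly the manner of a recursive descent parser:

    sentence    --> noun_phrase, verb_phrase.   % each rule acts as a procedure
    noun_phrase --> [the], noun.
    verb_phrase --> [sleeps].
    noun        --> [cat].
    noun        --> [dog].

    % ?- phrase(sentence, [the, cat, sleeps]).
    % true.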

Early bottom-up parsers were less efficient than their top-down counterparts. However, extensive research in bottom-up parsing contributed to reversing this situation: compiler generators (such as YACC) now use very efficient bottom-up techniques. A similar situation may well occur in the development of Prolog processors. Techniques known as magic sets, mentioned briefly in the Grant-Minker article, are described in detail in Warren's article. They may enable the development of processors (perhaps using massively parallel architectures) which operate bottom-up: that is, starting from a selective set of facts, they construct selective sets of true predicates providing answers to a query. In his article, Warren shows that bottom-up processors can be more general than their top-down counterparts, since they are able to avoid certain non-terminating situations. One may hope that bottom-up processors will achieve efficiencies significantly surpassing those of current Prolog interpreters.
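A standard illustration of the termination point (my example, not Warren's): the left-recursive program below loops under Prolog's top-down strategy, whereas bottom-up evaluation, starting from the edge facts, terminates after deriving every reachable pair.

    edge(a, b).
    edge(b, c).

    % The recursive call precedes the edge/2 goal, so a query such as
    % ?- path(a, Y).  sends a top-down Prolog interpreter into an
    % infinite loop; bottom-up evaluation derives path(a,b), path(b,c),
    % path(a,c) and then stops.
    path(X, Y) :- path(X, Z), edge(Z, Y).
    path(X, Y) :- edge(X, Y).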

The second topic considered by Warren is that of abstract interpretation. Work in this area has been of great importance in the optimization and parallelization of logic programs. For example, it is with the help of such techniques that one may achieve high efficiencies when compiling deterministic logic programs, which could then approach the execution speeds of similar Lisp programs generated by optimizing compilers.

The third and last topic presented in Warren's article is the crafty and fascinating work on partial evaluation, which, in the case of LP, is referred to as partial deduction. Partial evaluation is a program transformation technique that compiles a given program, together with some known input data, into an equivalent specialized program which is considerably more efficient than the original. For example, such techniques have been used to automatically generate very efficient pattern matchers from naive matchers for which the pattern is known, but not the text. The generated matchers achieve the sophistication and efficiency of those proposed by Knuth, Morris, and Pratt. Another startling example of partial deduction is one provided by my colleague Timothy Hickey. He considered a logic program involving boolean operations simulating the addition of two binary numbers specified by two lists X and Y. By specializing this program for the case where X equals Y, partial evaluation automatically generated an equivalent program which essentially performs a shift of X to produce 2X, thus bypassing all boolean operations in the original program. These two examples illustrate the great potential of this fairly recent research area.
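For readers unfamiliar with the technique, a textbook-style illustration (simpler than Hickey's adder, and not drawn from the article) is the specialization of append/3 for a first argument known at specialization time:

    append([], Ys, Ys).
    append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

    % Unfolding the call  append([a,b], Ys, Zs)  completely at
    % specialization time performs the list traversal once and for all,
    % leaving the residual, specialized fact:
    append_ab(Ys, [a, b|Ys]).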

To summarize: the quality and clarity of the described articles, and their focus on the topics of greatest current interest, should make this special issue a valuable reference for years to come.

To end this introduction, I venture to make some speculations on the future of LP. Some fascinating developments are within sight. The last few years have shown that:

1. Massively parallel architectures are becoming commonly available and the LP research community is heavily involved in parallelism.

2. The development of new constraint languages offers the possibility of extending Prolog to deal with various domains such as reals, booleans, and texts. There is increasing hope that such languages will be useful in symbolic and natural language processing, operations research, numerical analysis, VLSI design, etc. (A small constraint query is sketched after this list.)

3. Successful applications will play a substantive role in helping decide the worthiness of such new languages.

4. The numerous proposed generalizations of Horn clauses will bring LP increasingly close to the original goal of programming in the full predicate calculus.
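As a small taste of point 2, here is a finite-domain constraint query written in the notation of a present-day constraint library (such as SWI-Prolog's library(clpfd)); the example is mine and is meant only to suggest the style of such languages:

    :- use_module(library(clpfd)).

    % Two linear constraints over the integers; label/1 searches for a
    % concrete solution once the constraints have pruned the domains.
    solve(X, Y) :-
        X in 0..100, Y in 0..100,
        X + Y #= 10,
        X - Y #= 2,
        label([X, Y]).

    % ?- solve(X, Y).
    % X = 6, Y = 4.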

An important consequence of these developments is that formal methods of verification and specification of programs may become practical in the development of large software packages, thus rendering programming less error prone.

To conclude: in the twenty-first century we may come closer to Frege's vision of having the predicate calculus as a universal language for expressing logical reasoning. Even if that does not materialize, refined variants of such a language will benefit from the experience gained by a full century of combined scientific endeavors in logic and in computer science.

JACQUES COHEN is the Zayre/Feldberg professor in the Michtom School of Computer Science at Brandeis University. He holds doctorates from the University of Illinois (Engineering) and the University of Grenoble, France (Computer Science). He has held visiting positions at MIT, Brown, and, more recently, the University of Aix-Marseilles, where the language Prolog originated. His current research interests are in the areas of microanalysis of programs and compiler development using logic programming, with special emphasis on the use of parallel computers. Author's Present Address: Michtom School of Computer Science, Ford Hall, Brandeis University, Waltham, MA 02254; email: jc@cs.brandeis.edu.
