PARLOG86 AND THE DINING LOGICIANS

A classic problem in concurrent programming is that of the "dining philosophers," which challenges the power of any aspiring concurrent programming language. Recently, a growing number of logic programming languages have been refined to handle concurrent programming; one in particular is Parlog86.

With the increased sophistication demanded of software, in particular systems and environment software, programs are increasingly complex and all too often bug-ridden. In theory, many approaches have been proposed for debugging and increasing confidence in the correctness of a program, some dealing with programming methodology and others with algebraic proof that a program does what the specification intends of it. In practice, for large programs, the most widely used methods are comprehensive test data and tracing of the computation. The art of bug detection with profilers, mutations, bounty hunters, and black teams has become a jargon generator in its own right. Increasingly, software developers begin with a specification of the functionality required by the software. Automatic verified code generation from a specification appears to be an emerging trend, with sales of programmer workbenches gaining ground. An attractive alternative is a runnable specification combined with automatic, logically sound transformation from working specifications to efficient programs. There is a body of opinion (e.g., [9] and [20]) holding that logic programs provide such runnable specifications.

Insight into the relationship between specifications and programs can be drawn from some definitions proposed by Hoare [16] in relation to the concurrent programming language CSP. "A specification is a predicate describing all observations of a complex system." The predicate could be generalized into a set of logical formulas or sentences, but Hoare goes on to effectively include this possibility: "Specifications of complex systems can be constructed from specifications of their components by connectives in the Predicate Calculus." Some would argue that the limitation to predicate calculus should be relaxed to encompass other more general forms of logic, for example, modal logics. However, for the purposes of efficient mechanical theorem proving only a subset of predicate calculus, Horn clause calculus, has as yet proved manageable. (Horn clause logic is the subset of predicate calculus that is the theoretical basis of Prolog.) Continuing from the previous definitions, Hoare defines a program: "A program is just a predicate expressed using a restricted subset of connectives, codified as a programming language." If "a restricted subset of connectives" is replaced by "a restricted subset of predicate calculus," namely, Horn clause logic, then the specification is a logic program, and no coding is necessary.

It is now becoming recognized that Prolog is a good thing for database applications, problem solving, expert systems, natural-language parsing, and compiler writing. However, the present fashion for logic programming in response to the Japanese Fifth Generation initiative has been widely criticized. Typical of the criticism is the following, taken from the Guardian (May 23, 1985): "There is a small class of problems for which Prolog works great and you can do beautiful demonstrations on these problems. But when you have to deal with time-dependent behavior like an operating system, you get away from the nice qualities of Prolog, you have to use the ugly features and you lose the advantage. For commercial users of a language it is such large operating system-type software which they most need to program. Will they want an interpreted language which can't usefully be used in many problems?"

The above criticism of Prolog, while not totally justified, is unjustly extended by association to logic programming. This misconception stems from a confusion between the two: Prolog is a manifestation of logic programming, not the embodiment of it. In its various guises, Prolog is an approximation to the aspirations of logic programming tailored for efficient implementation on sequential architectures. Although Prolog is not well suited to concurrent programming, an inclusive term for system-type software, there are a growing number of other manifestations of logic programming that do fit. These languages are intended for implementation on parallel architectures, and one such concurrent logic programming language, Parlog, is being developed at Imperial College in London. We will discuss the latest refinement, Parlog86.

This account is not intended to be comparative but pedagogical, so discussion of the features of other concurrent logic programming languages, such as Concurrent Prolog [29] and GHC [32], and of previous manifestations of Parlog that might confuse the initiate is postponed until the end of the article. As Prolog is better known than its theoretical foundation, logic programming, it is more appropriate to introduce Parlog86 from Prolog than from their common parent. It is assumed that the reader has a knowledge of Prolog such as could be obtained from reading the first part of [1]. The declarative reading of a logic program gives partial information about the run-time behavior of the program; it bounds the possible answer set. Semantics based on logical consequence is much more natural than other styles of programming-language semantics. In particular, the meaning of a logic program does not, of necessity, require the construction of models using category theory, as do those based on the work of Scott and Strachey or those in the abstract-data-types literature based on initial and final algebras. The logical consequences of a logic program are all those theorems that must be true whenever the premises that constitute the program are true. This is one of the reasons why logic languages are known as very high level and why they have been chosen by the Japanese as the foundation of their Fifth Generation initiative. In concurrent programming, the run-time behaviors of concurrent programs, logic or otherwise, are complex. This complexity is illustrated by an explanation of the dining philosophers problem. The penultimate section of this article gives one solution to the dining philosophers problem written in Parlog86. This is the motivation for the title of the article. In conclusion, the history of the development of Parlog86, its influences, and its relation to the other concurrent and parallel logic programming languages are, without being economical with the truth, explained.

We only hint at many of the difficulties of logic programming and concurrency; some are not discussed at all. A fuller account of these problems and the semantics of Parlog86 can be found in Ringwood [25].

THE DINING PHILOSOPHERS PROBLEM

Though the problem of the dining philosophers [10] appears to have greater entertainment value than practical importance, it has had huge theoretical influence. The problem allows the classic pitfalls of concurrent programming to be demonstrated in a graphical situation. It is a benchmark of the expressive power of new primitives of concurrent programming and stands as a challenge to proposers of these languages.

The problem is set in the monastery of a contemplative order of five monks. Each monk is a philosopher who would be content to engage solely in deep thought if it were not occasionally necessary to acquiesce to the worldly desire for sustenance (a sort of National Security Council). The life of a philosopher is an endless round of eating and thinking. Each philosopher behaves independently of his brothers. Thus, the activities of the inmates of the monastery are in general asynchronous.

A philosopher wishing to eat enters the dining room, takes a seat, eats, and then returns to his cell. The communal dining arrangements in the refectory are shown in Figure 1. Five bowls and five chopsticks are arranged alternately around a circular table. Each philosopher has his own place at the table. In the center of the table is a large bowl of rice that is continually replenished. To eat rice requires two chopsticks. (In the author's experience, eating rice requires at least two chopsticks.) A philosopher may use only those two chopsticks that are on either side of his bowl. A chopstick can only be used by one philosopher at a time.

The following scenarios in the monastery illustrate three classic problems of concurrency: mutual exclusion, deadlock, and lockout.

While one philosopher is using a chopstick to eat, it cannot be used by another philosopher. Thus, two philosophers are competing for each chopstick. This is an example of mutual exclusion: no two philosophers may use the same chopstick simultaneously.

A philosopher can take one chopstick (thus excluding his neighbor from using it) while waiting for a second chopstick to become available. When a philosopher gets possession of a chopstick, he can retain it until he has finished eating. (Philosophers are not generally noted for their social graces.) Given perfect synchronization (the philosophers have pondered the vexed question of the suitability of synchronized swimming for inclusion in the Olympic Games), a scenario could occur in which each philosopher simultaneously takes the chopstick on his left. All philosophers are then waiting for the chopstick on their right, and no philosopher (typically) is prepared to give up the chopstick they already hold. This is an example of deadlock: all philosophers die waiting for a situation that will never occur, such as their neighbor giving up the chopstick on their right.

Deadlock is a situation in which through obduracy all the philosophers starve to death. A more sinister scenario can occur in which two philosophers conspire to starve their mutual neighbor. Suppose that in order to avoid deadlock the philosophers agree (detente) that no one will lay claim to a chopstick unless claim can be made to both required chopsticks simultaneously. A scenario can occur in which, say, philosophers 0 and 2 (the philosophers have also considered the advantages of modulo arithmetic) eat alternately. This prevents philosopher 1 from eating since, when philosopher 0 is eating, philosopher 1's chopstick on the left-hand side is occupied and, when philosopher 2 is eating, philosopher 1's chopstick on the right-hand side is in use. Again, perfect synchronization between philosophers 0 and 2 will ensure that the two chopsticks that philosopher 1 requires to eat with will never be available at the same time. This is an example of lockout: Philosophers 0 and 2 conspire in the demise of philosopher 1 (see Figure 2).
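
The deadlock scenario can be checked with a small model. The following sketch is in Python, since Parlog86 is not widely runnable; the names left, right, and free_right_chopsticks are illustrative assumptions, not taken from the article. It represents the five chopsticks of Figure 1 and confirms that once every philosopher holds his left chopstick, no right chopstick remains free:

```python
# A minimal, single-threaded model of the refectory: five philosophers and
# five chopsticks numbered 0..4 (modulo arithmetic, as the philosophers would
# approve). It checks the deadlock scenario: if every philosopher has
# simultaneously taken the chopstick on his left, no right chopstick is free.

N = 5  # philosophers (and chopsticks)

def left(i):
    """Chopstick to the left of philosopher i."""
    return i

def right(i):
    """Chopstick to the right of philosopher i."""
    return (i + 1) % N

def free_right_chopsticks(holder):
    """holder[c] is the philosopher holding chopstick c, or None if free.
    Returns those philosophers whose right chopstick is still available."""
    return [i for i in range(N) if holder[right(i)] is None]

# Perfect synchronization: each philosopher takes his left chopstick at once.
holder = [None] * N
for i in range(N):
    assert holder[left(i)] is None  # mutual exclusion: one user per chopstick
    holder[left(i)] = i

print(free_right_chopsticks(holder))  # → [] : deadlock, everyone waits forever
```

A real concurrent implementation would interleave the acquisitions nondeterministically; the model only exhibits the worst-case schedule described above.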

THE SCOPE FOR PARALLELISM IN PROLOG

An example of a "naive" quick-sort program in Prolog will serve both to motivate parallelism and to introduce the syntax. (The more sophisticated difference-list version is not used because the simpler form better illustrates the points to be made.)

sort([ ], [ ]);
sort([Pivot ~ List], Sorted) <
    partition(List, Pivot, Lesser, Greater) &
    sort(Lesser, Lsorted) &
    sort(Greater, Gsorted) &
    concatenate(Lsorted, [Pivot ~ Gsorted], Sorted).

partition([ ], Pivot, [ ], [ ]);
partition([Item ~ List], Pivot, Lesser, [Item ~ Greater]) <
    Pivot =< Item &
    partition(List, Pivot, Lesser, Greater);
partition([Item ~ List], Pivot, [Item ~ Lesser], Greater) <
    Item =< Pivot &
    partition(List, Pivot, Lesser, Greater).

concatenate([ ], List, List);
concatenate([Item ~ List1], List2, [Item ~ Lists]) <
    concatenate(List1, List2, Lists).

The logical reading is that the second argument of sort is an ordered list, ordered by the relation =<, with precisely the same elements as the list in the first argument. Predicate names will be printed in boldface. Lists are denoted recursively, [Head ~ Tail], where Head is the first item on a list and Tail is the remainder of the list. The constant [ ] represents the empty list. The arrow symbol, <, stands for logical implication, and the ampersand, &, for logical conjunction. (The use of the ampersand for logical conjunction and the semicolon as clause separator is different from Edinburgh Prolog syntax. The reason for this seemingly perverse change will become apparent when parallel conjunctions and clause separators are introduced shortly.) In the predicate partition, the nonexclusive nature of the last two clauses, introduced by the =< predicate when Pivot and Item are equal, is intentional.

Prolog tackles a call such as < sort([2, 1, 3], Sorted) by following calls depth first in textual order with backtracking on failure. The computational result is that the list to be sorted is first partitioned, the lower sublist of the list is sorted, the upper sublist of the list is sorted, and finally the two sublists are concatenated. This computation appears ripe to take advantage of multiprocessor architectures. Naively, if there were two processors, once the list is divided, one processor could sort one sublist, and the other processor could sort the other sublist. This suggests the introduction of a control primitive, parallel conjunction, denoted by a comma (,). This is used to indicate that two calls can be evaluated in parallel if there are processors available.

(Before proceeding, a word about the difference between concurrency and parallelism is in order [25]. The word concurrent more properly refers to dependent parallelism, where dependent processes need to concur at points in their computation. The word parallelism can then be used to distinguish independent parallelism, where processes are totally independent and never need to cooperate. The two words tend to be used interchangeably, however, to avoid undue repetition of one or the other. This article will persist in this confusion.)

Taking advantage of as many processors as are available, the program becomes

sort([ ], [ ]);
sort([Pivot ~ List], Sorted) <
    partition(List, Pivot, Lesser, Greater) &
    sort(Lesser, Lsorted),
    sort(Greater, Gsorted) &
    concatenate(Lsorted, [Pivot ~ Gsorted], Sorted).

(Unfortunately, the conjunction symbol & has the same control meaning as in Edinburgh Prolog, namely, for sequential goal satisfaction; but in concurrent logic languages and herein, , is used to denote parallel conjunction. This use is due to an historical mistake.) The operational reading of the program now becomes "Partition the list to be sorted. When complete, if possible (subject to availability of processors), sort the two sublists in parallel, and finally, when this is complete, concatenate the two lists." (If no additional processors are available, the two sort processes could possibly be interleaved in some way, coroutined, sharing a single processor.) Parallel conjunction is assumed to bind more tightly than sequential conjunction, thus reducing the necessity for parentheses. This form of concurrency is called `and-parallelism' [25].
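
The independent and-parallelism of the two recursive sorts can be sketched in Python (not Parlog86): once partition has split the list, the two sublists share no variables, so the recursive sorts may proceed on separate processors. Only the top-level pair is farmed out to threads here; the function names are illustrative assumptions.

```python
# A sketch of independent and-parallelism in the sort clause: after the
# partition completes, the two recursive sorts are independent and may run
# concurrently. Only the outermost pair uses threads, for simplicity.
from concurrent.futures import ThreadPoolExecutor

def partition(items, pivot):
    """Split items below the pivot from the rest (the Pivot =< Item clause)."""
    lesser = [x for x in items if x < pivot]
    greater = [x for x in items if pivot <= x]
    return lesser, greater

def sort(items):
    """Sequential quicksort: the declarative reading of the sort relation."""
    if not items:                     # sort([], [])
        return []
    pivot, rest = items[0], items[1:]
    lesser, greater = partition(rest, pivot)
    return sort(lesser) + [pivot] + sort(greater)  # concatenate

def parallel_sort(items):
    """Farm the two independent recursive sorts out to separate threads."""
    if not items:
        return []
    pivot, rest = items[0], items[1:]
    lesser, greater = partition(rest, pivot)
    with ThreadPoolExecutor(max_workers=2) as pool:
        lo = pool.submit(sort, lesser)   # sort(Lesser, Lsorted),
        hi = pool.submit(sort, greater)  # sort(Greater, Gsorted) in parallel
        return lo.result() + [pivot] + hi.result()

print(parallel_sort([2, 1, 3]))  # → [1, 2, 3]
```

Because the two calls share no variables, the threads cannot interfere; this is exactly the safety of the independent case, lost as soon as variables are shared.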

In this example the two concurrent calls are totally independent; that is, they have no shared variables. If variables are shared between a parallel group of calls and allowed to execute asynchronously, there will be competition between the calls to instantiate the shared variables, a race condition. If the calls try to instantiate the shared variables to different terms, there is a conflict of interest, and the conjunction will fail. The failure may occur only after a great deal of work has been done.

Even when instantiations of shared variables by concurrent calls are apparently trying to cooperate, there can still be conflict, as illustrated by the next modification to the program:

sort([ ], [ ]);
sort([Pivot ~ List], Sorted) <
    partition(List, Pivot, Lesser, Greater),
    sort(Lesser, Lsorted),
    sort(Greater, Gsorted),
    concatenate(Lsorted, [Pivot ~ Gsorted], Sorted).

where all the body calls are tried in parallel. The variable Lsorted is shared by sort and concatenate, which resolve asynchronously. If at some stage concatenate is called with an uninstantiated variable as its first argument, it can resolve with the first clause of the relation, perhaps prematurely terminating the list so that the length of the sorted list is less than that of the list to be sorted. Alternatively, if the second clause for concatenate were used, the list can overrun so that the length of the sorted list exceeds the length of the list being sorted. This problem is, naturally, called the binding conflict of and-parallelism.
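
The conflict itself can be pictured with a single-assignment variable, sketched here in Python; the LogicVar class is an illustrative assumption, not a Parlog86 construct. Concurrent calls that agree on a shared variable's binding cooperate; differing bindings fail the conjunction.

```python
# A sketch of the binding conflict of and-parallelism: a shared logic
# variable may be instantiated once; a second, differing instantiation by a
# racing call is a conflict of interest and fails the whole conjunction.

class Conflict(Exception):
    """Raised when cooperating calls bind a shared variable inconsistently."""

class LogicVar:
    """A single-assignment variable: repeated bindings must be consistent."""
    UNBOUND = object()

    def __init__(self):
        self.value = LogicVar.UNBOUND

    def bind(self, term):
        if self.value is LogicVar.UNBOUND:
            self.value = term            # first instantiation wins
        elif self.value != term:
            raise Conflict(f"{self.value!r} vs {term!r}")

shared = LogicVar()
shared.bind([1, 2])         # one call instantiates the shared variable
shared.bind([1, 2])         # a cooperating call agrees: no conflict
try:
    shared.bind([1, 2, 3])  # a conflicting instantiation: the conjunction fails
except Conflict as reason:
    print("conjunction fails:", reason)
```

As the text notes, the failure may surface only after a great deal of work has already been done by both calls.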

Despite this drawback with the eagerness (as just explained) of and-parallelism, there is still potential for another form of parallelism. In Prolog the alternative clauses for partition are tried in turn. Only if one fails is the next in textual order considered. Parallelism can be realized by pursuing the alternative branches of the search tree together. This leads to the introduction of another control primitive, denoted by a period, . , which indicates that clauses should be tried simultaneously if processing capacity is available. (A semicolon is as before used to represent a sequential clause trial. Again this is contrary to the use of the period in Edinburgh Prolog and due to another historical mistake.) For example, the control reading of

partition([ ], Pivot, [ ], [ ]);
partition([Item ~ List], Pivot, Lesser, [Item ~ Greater]) <
    Pivot =< Item &
    partition(List, Pivot, Lesser, Greater).
partition([Item ~ List], Pivot, [Item ~ Lesser], Greater) <
    Item =< Pivot &
    partition(List, Pivot, Lesser, Greater).

would be "Try the first clause, and only if this fails, try the following two clauses in parallel." Again, it is assumed that parallel control instructions bind tighter than sequential ones. This form of concurrency is, unnaturally, called `or-parallelism.' (Unnaturally because the clause separator does not represent logical disjunction [25].) In this particular example, there is no reason why all three clauses should not be tried concurrently.

In addition to the difficulties of general and-parallelism, or-parallelism presents well-known practical difficulties because of the combinatorial explosion of the search space. Each alternative branch will develop its own bindings for variables. In general, a profusion of alternative bindings must be recorded and continually tracked. If or-parallelism were simulated by coroutining on a uniprocessor machine, it would quite likely take the form of breadth-first search. Because of the combinatorial explosion, breadth-first search was considered impractical for logic programming interpreters. For Prolog, with only sequential clause trial (depth-first search), the space problem is traded against time. On failure, bindings that were accumulated by following one alternative branch of the search tree are rescinded before the next alternative branch is tried. Genuine or-parallelism requires maintaining multiple environments, and this is technically difficult to do efficiently in shared-memory multiprocessor architectures and more particularly in distributed systems.

PARLOG86: CONCURRENT LOGIC PROGRAMMING

The emphasis in systems programming is somewhat different from artificial intelligence or database applications. In the latter, the problem is usually to perform some search to find all possible solutions to a query. For Prolog this search strategy is naive: all possible avenues are explored. In systems programming the problem is not to find all solutions to a problem, but to find one as efficiently as possible. For instance, only one solution as to how a file can be written to disk is required. Without appropriately guiding the search and detecting redundant computation, much of the power of parallel processing may be directed toward fruitless search. This suggests following only one alternative branch of the Prolog search tree. This section describes an incremental reconstruction of Parlog86 from this seed of an idea to meet the needs of systems programming.

As a first attempt then, out of all those candidate clauses that could resolve with a call, one clause, the first to successfully unify perhaps, is committed to. This design decision brings indeterminism to logic programming. In the theory of automata, a determinate automaton (e.g., see [12]) traverses one specific path for each possible input. An indeterminate automaton has points in the computation where it chooses one out of several alternative branches to follow. This choice is not dependent on the original input. When a nondeterminate automaton reaches a branch point, it traverses all possible branches either in turn or together. (In logic programming, indeterminism has been called "don't-care nondeterminism" and nondeterminism has been known as "don't-know nondeterminism." The logic-programming literature is, unfortunately, littered with the inappropriate choice of words. This is possibly due to the rapid expansion of the subject.) In this sense Prolog is nondeterministic: all the alternative clauses are tried. Parlog86 is indeterminate: only one clause is pursued.

Allowing the candidate clauses to be those for which the call unifies with the head is too liberal a regime for indeterminism. For example, in the partition relation, if one of the two parallel clauses is a candidate, the other must necessarily be so. An arbitrary choice at this stage has up to a 50 percent chance of failing. An operating system that had a 50 percent chance of failing should be unthinkable. (In the author's experience, some manufacturers have seriously considered it.) In the partition example, the appropriate choice is quickly established by the call to the first relation, =<. If the items compared are different, then one branch will, necessarily, fail. If the items are the same, then it is irrelevant which branch commits. This suggests delaying the choice of which clause to commit to until after the order test. The alternative branches can be pursued in parallel up to and including the =< relation. This requires a new form of control, the commit conjunction, denoted by :. The colon divides the body of a clause into a guard, the extent to which one is prepared to let the or-parallelism continue, and the trust, the part of the clause on which there is no recanting. (The word trust is used in preference to body, as the body of a clause has a perfectly well-defined meaning in Prolog: the consequent of a clause. Here, then, the body of a clause is composed of the guard and trust.)

partition([ ], Pivot, [ ], [ ]).
partition([Item ~ List], Pivot, Lesser, [Item ~ Greater]) <
    Pivot =< Item :
    partition(List, Pivot, Lesser, Greater).
partition([Item ~ List], Pivot, [Item ~ Lesser], Greater) <
    Item =< Pivot :
    partition(List, Pivot, Lesser, Greater).

The control reading of the new construct is "Try all three clauses in parallel as far as the guard." That is, if the head of a clause can unify with the call, the calls in alternative guards are tried in parallel. If the guard of a clause is empty, the commit conjunction is implicit, as in the first clause. Candidate clauses for committal are those with successful guards. From the candidates, commit to one, and proceed with the calls in the trust of the chosen clause, discarding the results of any computation of the others. The guard concept introduces an intentional indeterminacy into logic programming: You don't care which of the possible solutions is found. The control effect of the commit conjunction may be compared with the cut of Prolog, but unlike Prolog it is not treated syntactically as if it were a predicate, but rather as a sequential conjunction (cf. &). Furthermore, each clause above has exactly one commit conjunction, whereas in Prolog there can be zero or more cuts in each clause. This modification makes or-parallelism more practical, the overhead being the need to maintain much-reduced multiple environments. The above intentional incompleteness demands greater care on the part of the programmer than in Prolog. It should be noted that Prolog is unintentionally incomplete due to the possibility of infinite branches of the search tree. Although any solution found will necessarily be sound, a solution may not be found even when one is implied by the logic. The guard should be carefully chosen so that either

(1) a solution to the call can definitely be found using that clause, or
(2) no solution can be found using any other clause.

This guarantees that if a solution to a call exists then the call will not fail (see [25]).
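
The guard evaluation and committal just described can be sketched operationally in Python (not Parlog86; the commit function and the clause encoding are illustrative assumptions): the guards of all candidate clauses run in parallel, the call commits to one whose guard succeeds, and the remaining computations are discarded.

```python
# A sketch of committed choice: try every clause's guard concurrently and
# commit to one that succeeds, running only that clause's trust.
from concurrent.futures import ThreadPoolExecutor, as_completed

def commit(clauses, call):
    """clauses is a list of (guard, trust) callables over the call's bindings."""
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(guard, call): trust for guard, trust in clauses}
        for done in as_completed(futures):
            if done.result():               # a successful guard: commit
                return futures[done](call)  # run the trust of the chosen clause
    return None                             # no candidate clause: the call fails

# The guarded partition clauses from the text, for a call with Item and Pivot:
clauses = [
    (lambda c: c["Pivot"] <= c["Item"],   # guard: Pivot =< Item
     lambda c: ("greater", c["Item"])),   # trust: Item joins the Greater list
    (lambda c: c["Item"] <= c["Pivot"],   # guard: Item =< Pivot
     lambda c: ("lesser", c["Item"])),    # trust: Item joins the Lesser list
]

print(commit(clauses, {"Item": 3, "Pivot": 7}))  # → ('lesser', 3)
```

For unequal items exactly one guard succeeds, so the choice is forced; for equal items either clause may commit, which is precisely the don't-care indeterminism of the text.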

A basic paradigm of concurrent programming is that of the producer-consumer. Something of this paradigm is seen in Prolog, despite the language being inherently sequential. The producer is placed textually before the consumer in a conjunction. Because of the textual order of evaluation in Prolog, the flow of activity is necessarily from producer to consumer. (The logic variable may somewhat confuse the novice because a variable may be produced by the producer that is later given a value by the consumer. The rediscovery of this much-used Prolog mechanism by concurrent logic programmers has, unfortunately, been renamed back-communication.) In a general logic program, there is little element of control available to the programmer. Prolog provides the missing element of control in refutation-based theorem provers that prevents floundering. The sequencing in Prolog, however, only allows one production line: a single producer produces a, perhaps, partially instantiated structure that is then consumed whole by a consumer. The more general situation of a number of active production lines, active producers and consumers, is not possible in Prolog without overriding the depth-first search strategy. In order to achieve multiple concurrent producer-consumers, what is required is general and-parallelism with cooperating calls. However, as the example of sort and concatenate indicates, there is nothing to prevent the consumer from running arbitrarily and conflictingly ahead of the producer. Concatenate may build up a list of variables that are later instantiated by sort; this is acceptable behavior, but it is improbable that concatenate will terminate having produced the correct number of variables to be instantiated by sort. It is more likely that the eagerness of concatenate will overrun or curtail the sorted list, in which case the two calls will be in conflict. What is lacking in general and-parallelism, when control synchronization in the form of sequencing is not available, is some form of synchronization that will restrain the eagerness of consumers.

In and-parallelism it is the eagerness of resolution that is responsible for conflict situations. If a particular pattern does not occur in a call, unification creates it, for example, the premature curtailment or overrunning of the sorted list by concatenate. The unification of a call with the head of a clause serves two functions: input and output. Input is used for parameter passing, syntactic equality testing, and eliminating noncandidate clauses. Output is used for constructing data structures; it is the eagerness of the output of unification that generally leads to conflict. The problem with eager output and backwards reasoning (as performed by Prolog) is that the output substitution is only logically justified if the new resolution-derived goals turn out to be true. If indeterminate control, as opposed to nondeterminate control of Prolog, is employed, once the new resolution-derived goal is found to fail, it is too late: Committal has already taken place; there is no second chance--again, the premature termination or overrunning of concatenate. The safest way to solve the problem of the output eagerness without compromising resolution appears to be to separate the two functions of unification.

Head unification can be restricted to input pattern matching. In this restriction of unification, no variables in the call are bound, only variables in the head of the clause. If arguments in the call are not sufficiently instantiated to identify the call with the head of a clause, the call suspends, waiting for other cooperating calls to instantiate the call's variables. Thus, head input term matching only serves the purposes of parameter passing, syntactic equality testing, and eliminating noncandidate clauses; no output is attempted. This makes the program clauses lazy, in the sense of data synchronization (or rather lazier; those with only variables in the head, that is, no patterns to match, are still eager). The replacement of unification by term matching does not compromise the soundness of resolution applied to predicate calculus, but does compromise completeness. (For more details on the theory of concurrent logic programming, see [25].) It may never happen that the variables of the call are sufficiently instantiated for the suspended clause to be a candidate, and this clause will never be chosen for committal. Logical incompleteness is not a new consideration, as completeness has already been sacrificed by the committed-choice guard control. The benefit is that it allows a data-flow mechanism of synchronization to replace control synchronization as the prime element of control: replacement of the call by the trust of the clause is suspended until the arguments of the call are sufficiently instantiated and the guard is successful.
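
Input term matching and suspension can be sketched with an explicit term representation, which is an assumption made for illustration (uppercase strings for variables, ('.', Head, Tail) for list cells, '[]' for the empty list); Parlog86 itself builds this into head unification.

```python
# A sketch of head input term matching: only variables in the clause head may
# be bound. An unbound variable in the call, where the head expects structure,
# causes suspension rather than an output binding.

SUSPEND = "suspend"

def is_var(term):
    """Variables are represented as strings beginning with an uppercase letter."""
    return isinstance(term, str) and term[:1].isupper()

def match(call, head, bindings):
    """Return True (head variables bound), False (mismatch), or SUSPEND."""
    if is_var(head):                 # head variable: input parameter passing
        bindings[head] = call
        return True
    if is_var(call):                 # call under-instantiated: suspend
        return SUSPEND
    if isinstance(head, tuple) and isinstance(call, tuple) and len(head) == len(call):
        for c, h in zip(call, head):
            result = match(c, h, bindings)
            if result is not True:
                return result
        return True
    return call == head              # syntactic equality test on atoms

b = {}
# Head [Pivot ~ List] against the instantiated call argument [2, 1, 3]:
print(match(('.', 2, ('.', 1, ('.', 3, '[]'))), ('.', 'Pivot', 'List'), b))  # → True
# Head pattern [ ] against a still-unbound call argument: the clause suspends.
print(match('Xs', '[]', {}))  # → suspend
```

A mismatch eliminates the clause outright, while suspension leaves it waiting for a cooperating producer; that distinction is the data-flow synchronization described above.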

Having replaced unification by term matching, there still remains the problem of how to effect output. This is solved by noting that some Prolog programs use the identity (syntactic identity) predicate, defined by X = X, which produces output by unifying its two arguments. In Parlog86 this predicate will be assumed to be system defined with the privilege of producing output bindings.

This separation of input and output has an additional advantage in that it can be used to eliminate the multiple environments problem of or-parallelism by delaying any output until a clause has committed. More properly, the replacement of the call by the trust is postponed until the arguments of the call are sufficiently instantiated and the guard succeeds. Using these control mechanisms, the complete concurrent program for sort becomes

sort([ ], Sorted) < Sorted = [ ].
sort([Pivot ~ List], Sorted) <
    partition(List, Pivot, Lesser, Greater),
    sort(Lesser, Lsorted),
    sort(Greater, Gsorted),
    concatenate(Lsorted, [Pivot ~ Gsorted], Sorted).

partition([ ], Pivot, Lesser, Greater) <
    Lesser = [ ], Greater = [ ].
partition([Item ~ List], Pivot, Lesser, Greater) < Pivot =< Item :
    Greater = [Item ~ Upper],
    partition(List, Pivot, Lesser, Upper).
partition([Item ~ List], Pivot, Lesser, Greater) < Item =< Pivot :
    Lesser = [Item ~ Lower],
    partition(List, Pivot, Lower, Greater).

concatenate([ ], List, Total) < Total = List.
concatenate([Item ~ List1], List2, Total) <
    Total = [Item ~ List],
    concatenate(List1, List2, List).

With the new form of control, partition builds up its sublists item-by-item as they become available, sort filters them item-by-item, and concatenate consumes them item-by-item. The whole flow of control is activated by the availability of items: There is a steady stream of items from chains of producer-consumers. If there are no items available, the consumers sort and concatenate suspend. This reconstructed form of concurrency has been called stream-and-parallelism and is a form of dynamic pipelining.
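
The item-by-item flow can be sketched with Python generators (not Parlog86). Generators give demand-driven rather than data-driven streams, but the effect is similar: concatenate consumes the sorted sublists one item at a time, suspending whenever its producer has not yet yielded the next item.

```python
# A sketch of stream pipelining: each relation becomes a lazy process that
# consumes its input stream item-by-item and produces its output item-by-item.
from itertools import chain, tee

def sort(items):
    """Quicksort as a lazy stream of sorted items."""
    it = iter(items)
    try:
        pivot = next(it)        # sort([], Sorted): an empty stream ends here
    except StopIteration:
        return
    rest1, rest2 = tee(it)      # two readers of the remaining stream
    lesser = (x for x in rest1 if x < pivot)
    greater = (x for x in rest2 if pivot <= x)
    # concatenate(Lsorted, [Pivot ~ Gsorted], Sorted):
    yield from concatenate(sort(lesser), chain([pivot], sort(greater)))

def concatenate(stream1, stream2):
    """Join two streams, consuming each item-by-item as it becomes available."""
    yield from stream1
    yield from stream2

print(list(sort([2, 1, 3])))  # → [1, 2, 3]
```

Pulling one item at a time with next() would show the whole pipeline advancing just far enough to produce it, the generator analogue of suspension on an empty stream.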

The completeness sacrificed by replacing call-head unification by data-flow call-head term matching manifests itself in the restricted mode of use of program clauses. Although the purpose of the Prolog sort program was to find the sorted list, because of the two-way nature of unification, clauses can be used "backwards." That is, the sort program could be used to find all permutations of a sorted list. This behavior is often cited as one of the great advantages of logic programming. In all but the simplest database examples of Prolog, however, this attribute tends to be sacrificed for the sake of efficiency by the use of the cut, or more seriously because of an infinite branch of the search tree. In the above Parlog86 example, program relations were used in only one mode. Although a specific mode of use is implied by individual clauses, it is not true of relations as a whole. Concatenate was only used to join two lists. By adding further clauses, it can be used to indeterminately split a given list as well.

concatenate([ ], List, Total) < Total = List.
concatenate([Item ~ List1], List2, Total) < Total = [Item ~ List],
    concatenate(List1, List2, List).
concatenate(List1, List2, [Item ~ Total]) < List1 = [Item ~ Rest],
    concatenate(Rest, List2, Total).
concatenate(List1, List2, Total) < List1 = [ ], List2 = Total.

Clearly, there are sometimes dangers in this because of the indeterminism introduced by commitment to one clause. If used for testing

< concatenate([1, 2], [3, 4], [1, 2, 3, 4]).

the call could fail because of an inappropriate commitment to the third or fourth clause. Furthermore, the last call is eager while the others are data triggered. Thus, there is the danger of reintroducing the eagerness that was a criticism of Prolog. In fact, with the exception of the search, the full modal use of Prolog clauses could be reintroduced by making the arguments of the head of all clauses variables and using the equality predicate in the bodies of clauses to achieve unification. This illustration of the nonstrict modes of Parlog also emphasizes that failure in concurrent logic programming should be regarded as not proven rather than not true. Negation as failure does not have anything like the power it has in Prolog. This philosophy is expanded in [25]. The ability to have multimode relations is, however, useful, as the following relation for plus illustrates. It has the three expected modes of use, one mode for each clause:

plus(X, Y, Z) < integer(X), integer(Y): Z is X + Y.
plus(X, Y, Z) < integer(X), integer(Z): Y is Z - X.
plus(X, Y, Z) < integer(Y), integer(Z): X is Z - Y.

The primitive integer is similar to its Edinburgh counterpart, except that it suspends while its argument is a variable. The arithmetic primitive is is the same as its Edinburgh counterpart; along with the syntactic identity predicate, =, it is privileged in being able to perform unification. In this example, the different modes are used safely, but from the preceding example, the concurrent logic programmer clearly has to be more vigilant than his or her Prolog counterpart. There is a greater need for efficient debuggers and some form of type checking in concurrent logic programming than in Prolog.
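The three modes can be mimicked in Python by a function that dispatches on which arguments are instantiated (None standing in for an unbound variable); the clause-per-mode structure is preserved, although Python cannot genuinely suspend, so the all-unbound case simply raises:

```python
def plus(x=None, y=None, z=None):
    """One 'clause' per mode, selected by which arguments are bound,
    mirroring the three guarded clauses of the Parlog86 relation.
    Returns the fully instantiated triple (x, y, z)."""
    if isinstance(x, int) and isinstance(y, int):
        return x, y, x + y           # mode: Z is X + Y
    if isinstance(x, int) and isinstance(z, int):
        return x, z - x, z           # mode: Y is Z - X
    if isinstance(y, int) and isinstance(z, int):
        return z - y, y, z           # mode: X is Z - Y
    # A real Parlog86 call would suspend here until more arguments
    # become instantiated; this sketch can only fail loudly.
    raise RuntimeError("suspend: fewer than two arguments instantiated")
```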

CONCURRENT PROGRAMMING PARADIGMS

A parallel conjunction of goals can conveniently be represented pictorially. A simple producer-consumer call in Parlog86

< producer(Channel), consumer(Channel).

is illustrated in Figure 3, where producer is specified by

FIGURE 3. The producer-consumer Call

producer(Channel) <
    Channel = ["Hello World" ~ Restchannel],
    producer(Restchannel).

and illustrated in Figure 4.

FIGURE 4. The producer

The output is written under the rewrite arrow,==>, and is to be understood as a postcondition of the committal.

The code and illustration for consumer are similar (see Figure 5)

FIGURE 5. The consumer

consumer([Message ~ Channel]) <
    consumer(Channel).

except that there is a precondition to the rewrite, matching the pattern [Message ~ Channel], and no postcondition. The input precondition is denoted by the input arrow on consumer.

According to the logic program computation model, a call is ephemeral: it can only reduce (rewrite) itself to other calls. However, from an intuitive point of view a call that calls itself recursively can be viewed as a long-lived process. As described in the previous section, a Prolog program can only have one long-lived process at a time without overriding the depth-first execution strategy. Parlog86 can, however, implement many such processes. Producer produces a continuous stream of "Hello World" messages that are received by consumer. This example is the archetype of concurrent programming whereby processes communicate by sending messages over a communication channel.

Parlog86 processes communicate via shared logical variables. Logical variables are single assignment: they can be either instantiated or uninstantiated, but once instantiated their value cannot be modified. A shared logical variable can be implemented either as a memory cell that can accept only one value or as a message buffer. The Parlog86 computation model is thus indifferent to the distinction between the concurrent programming concepts of communication by message passing and communication by shared variable. Message channels (streams) are achieved in Parlog86 by recursive processes sharing partially instantiated recursive terms (most commonly lists). As processes resolve they peel messages off the stream one-by-one.
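A single-assignment shared variable of this kind can be sketched in Python with a threading.Event: readers block until some producer binds the cell, and a second, conflicting binding is a failure. The class name LogicVar is this sketch's own invention:

```python
import threading

class LogicVar:
    """Single-assignment cell: readers block until a producer binds it,
    and conflicting rebinding raises (the analogue of unification
    failure when multiple producers produce non-unifiable terms)."""
    def __init__(self):
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        if self._bound.is_set():
            if self._value != value:   # productions must be unifiable
                raise ValueError("failure: conflicting bindings")
            return                     # binding to the same value succeeds
        self._value = value
        self._bound.set()

    def value(self):
        self._bound.wait()             # consumer suspends until bound
        return self._value
```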

In this process interpretation of a logic program, a conjunction of calls such as

< producer1(Channel, ...), producer2(...), ..., consumer(Channel, ...).

represents a network of communicating processes with concurrency and sequencing achieved by logical conjunctions. Multiple producers of the same data are possible because output is achieved by unification. The only restriction, if the goal is not to fail, is that what the multiple producers produce must be unifiable. That is, the resultant production is the term formed by unifying all the productions.

In general, resolution creates new processes as in

process(...) < new-process1(...), new-process2(...), ...

and process termination can be brought about by resolution with unit clauses

process(...).

which terminate without any offspring. Long-lived processes are, as stated, simulated by recursion.

The primary form of synchronization in Parlog86 is suspension on insufficiently instantiated call variables. This has the intended effect that each producer process is eager and each consumer is lazy. The input matching restrictions of Parlog86 have an analogy with the concept of blocking receive in concurrent programming message-passing systems. There is, however, no analogy for blocking send in Parlog86, so Parlog86 encounters the difficulty found in all conventional concurrent programming without blocking send: namely, the producer can run arbitrarily far ahead of the consumer. This problem is treated in Parlog86 (as in conventional concurrent programming languages) by reversing the roles of producer and consumer.

producer([Message ~ Channel]) < Message = "Hello World",
    producer(Channel).

consumer(Channel) <
    Channel = [Message ~ Restchannel],
    consumer(Restchannel).

In this turnaround, consumer outputs a stream of variables that are instantiated by producer. Now consumer can run arbitrarily far ahead of producer. In between these two extremes is the bounded buffer, allowing producer to run ahead of consumer up to some fixed limit, or alternatively, consumer to run ahead of producer up to some fixed limit. These methodologies of conventional concurrent programming with blocking receive can always be programmed in Parlog86.
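The bounded buffer can be sketched in Python with a fixed-capacity queue.Queue: put blocks once the producer is a fixed number of items ahead, roughly the behavior a Parlog86 program obtains by limiting the stock of unbound variables on the stream. The message format here is invented for illustration:

```python
import queue
import threading

def producer(channel, n):
    for i in range(n):
        channel.put(("Hello World", i))  # blocks once the buffer is full
    channel.put(None)                     # close the stream

def consumer(channel, received):
    # Data-triggered: get() suspends until a message is available.
    while (msg := channel.get()) is not None:
        received.append(msg)

channel = queue.Queue(maxsize=3)   # producer may run at most 3 items ahead
received = []
t1 = threading.Thread(target=producer, args=(channel, 10))
t2 = threading.Thread(target=consumer, args=(channel, received))
t1.start(); t2.start(); t1.join(); t2.join()
```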

Traditionally, the proponents of declarative programming have emphasized the stateless, side-effect-free nature of logic programming. This emphasis is justified when the problem to be solved can be declared without reference to the state of the computation. In concurrent programming the very nature of the problem usually contains reference to the state of the computation. In a logic program, a call cannot change its state, but only reduce itself to other calls. Realizing long-lived processes as recursive calls, however, permits the interpretation of local, unshared variables as state variables. The general scheme of a process responding to a message is

process(State1, Inchannel, ..., Outchannel, ...) <
    receive_a_message(Message, Inchannel, Newinchannel) &
    other_conditions(...):
    transform_state(...),
    send_reply(Reply, Outchannel, Newoutchannel),
    process(State2, Newinchannel, ..., Newoutchannel, ...);
process(State, Channel, ...) <
    default(...).

The process interpretation of this clause is "If a process in a state State1 receives a message then, subject to other_conditions, which will, in general, depend on the state and the message, send a reply and transform to a new state." The communication channel, or rather the incoming message, is realized as a difference list between Inchannel and Newinchannel in the relation receive_a_message. This skeleton program shows how, by means of local (unshared) variables and recursion, concurrent logic programs admit the notion of the state of a process in a side-effect-free way. Again, this is not a useful interpretation in Prolog, where only one long-lived process at a time is possible. To summarize, shared variables are interpreted as communication buffers, and unshared variables as states. Communication channels are achieved using recursive data structures and recursive calls.
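The skeleton can be transcribed into Python, with a while-loop standing in for the tail-recursive call and a local variable for the unshared state; the process name adder_process and its trivial transform_state step are invented for illustration:

```python
import queue

def adder_process(inchannel, replies):
    """A long-lived process with local state. The loop plays the role
    of the tail-recursive call process(State2, Newinchannel, ...):
    state is carried unchanged in name but updated in value from one
    'recursive call' (iteration) to the next."""
    state = 0
    while (message := inchannel.get()) is not None:
        state = state + message    # transform_state
        replies.append(state)      # send_reply on the out-channel

inchannel, replies = queue.Queue(), []
for m in (3, 4, 5, None):          # None closes the channel
    inchannel.put(m)
adder_process(inchannel, replies)
```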

Alternative clauses and committed choice allow general indeterminacy, a feature that is considered desirable for systems programming [11]. The processes receive_a_message and the other_conditions in the guard can be used to implement another feature of concurrent programming: selective acceptance of messages and rescheduling of messages. The sequencing of clauses in this example (the semicolon clause separator) can be used to provide a default case if all other alternative guards fail.

Another important paradigm of concurrent programming is the client-server. Many problems occur in which several processes share the same resource:

server([Transaction ~ More_transactions], Data, ...) <
    respond(Transaction, Data, Newdata, ...):
    server(More_transactions, Newdata, ...).

Of the two arguments of server shown, the first is a stream of transactions from client, and the second a data structure, a local variable, that is to be manipulated according to the required form of transaction. Each client could be allocated its own channel for communicating with a resource, but more likely it is convenient to determine or change at run time the number of processes communicating with one another. One way of achieving this is by merging streams:

merge([Item ~ Stream1], Stream2, [Item ~ Outstream]) <
    merge(Stream1, Stream2, Outstream).
merge(Stream1, [Item ~ Stream2], [Item ~ Outstream]) <
    merge(Stream1, Stream2, Outstream).
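An indeterministic merge of this kind can be sketched in Python with two feeder threads writing to a single queue; the interleaving is left to the scheduler, but the relative order of items within each input stream is preserved, as in the Parlog86 relation:

```python
import queue
import threading

_END = object()   # private end-of-stream sentinel

def merge(stream1, stream2):
    """Indeterministic two-way merge: feeder threads interleave their
    items onto one output queue in whatever order the scheduler
    allows."""
    out = queue.Queue()
    finished = []
    def feed(stream):
        for item in stream:
            out.put(item)
        finished.append(True)
        if len(finished) == 2:     # both feeders done: close the stream
            out.put(_END)
    for s in (stream1, stream2):
        threading.Thread(target=feed, args=(s,)).start()
    while (item := out.get()) is not _END:
        yield item
```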

The merge predicate can be used to indeterministically merge messages from two clients to a server. The problem of how to route the response back to client in the client-server example is conveniently solved by the logical variable:

client([transaction(Reply) ~ More_transactions], ...) <
    wait(Reply), ...

wait("Roger Roger").

Client sends a transaction that contains an uninstantiated variable to server. When server responds, it instantiates the variable to "Roger Roger". The reply is implicitly routed back to client, where the wait process is suspended on Reply. This so-called back-communication (which is familiar to Prolog programmers, but never interpreted as such) has an analogy in conventional concurrent programming in the remote procedure call.
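Back-communication can be sketched in Python using a concurrent.futures.Future as the uninstantiated Reply variable: the client blocks on result() just as wait suspends on Reply, and the server's set_result plays the role of instantiation. The message format is invented for illustration:

```python
import queue
import threading
from concurrent.futures import Future

def server(transactions):
    # Each transaction carries an unresolved Future, standing in for
    # the uninstantiated Reply variable; resolving it routes the answer
    # straight back to the waiting client.
    while (t := transactions.get()) is not None:
        request, reply = t
        reply.set_result("Roger Roger")

transactions = queue.Queue()
threading.Thread(target=server, args=(transactions,), daemon=True).start()

reply = Future()                   # the logical variable in the message
transactions.put(("transaction", reply))
answer = reply.result(timeout=5)   # client suspends until server replies
```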

The process interpretation of logic programming is summarized in Table I.

For perpetual processes with unbounded data structures, a query may never terminate; that is, the truth of the query is not computable. However, any stage of a derivation can be regarded as a state of the computation. Each state of the computation represents a qualified answer to the initial query, or a theorem derived from the program. (More details of this and other semantic issues of Parlog86 are given in [25].) A feature whose absence from Prolog makes it unsuitable for systems programming is the ability to observe the arguments from outside a recursion. Because of concurrency, Parlog86 has this ability, and a declarative interpretation can be given to it by qualified answers or derived theorems [25]. It may turn out that the process interpretation of a logic program together with the semantics of qualified answers will become as important as the procedural interpretation [19].

A PARLOG86 SOLUTION TO THE DINING PHILOSOPHERS PROBLEM

One of the more successful ways proposed for resolving the deadlock situation in the dining philosophers problem (i.e., one that does not give rise to the problem of lockout) is not to allow more than four monks into the refectory at any one time. The goal is represented by the following conjunction and portrayed more vividly in Figure 6.

< refec(0, Queue, Hungry_monks, Replete_monks, Musing_monks),
  table(Hungry_monks, Replete_monks,
        [chopstick(0), chopstick(1), chopstick(2), chopstick(3),
         chopstick(4) ~ Returned],
        Returned),
  cloister([monk(0), monk(1), monk(2), monk(3), monk(4) ~ Musing_monks],
           Queue).

The five monks are initially on the stream to the cloister from which they will go to their cells to cogitate. The chopsticks are initially on the stream to the table indicating that they are available for use. The integer 0 in Figure 6 shows the initial number of monks in the refectory. This is represented as a local variable as it is not shared by any other call. Not all clauses will be illustrated, only those that have informative diagrams. The communication between calls is often much more easily comprehended by means of such illustrations.

FIGURE 6. The Dining Philosophers

Monks entering the cloister retire to their individual cells to think for an arbitrary duration, as illustrated in Figure 7.

FIGURE 7. The cloister Clause

cloister([monk(M) ~ Replete_monks], Queue) <
    cell(monk(M), Hungry_monk),
    cloister(Replete_monks, Rest_queue),
    merge(Hungry_monk, Rest_queue, Queue).

Again the first argument of the cell clause is not shared with any other call and so is a local variable. The monks eventually emerge from their cells driven by pangs of hunger:

cell(monk(M), Hungry_monk) <
    Hungry_monk = [monk(M)].

In a simulation of this problem, the arbitrary duration of thought can be effected by a random number generator, and the whole simulation can be event driven [14].

On emerging from their cells, monks enter the queue for the refectory. To avoid the deadlock situation described earlier in "The Dining Philosophers Problem," a limit of four monks in the refectory at any one time is imposed:

refec(N, [monk(M) ~ Queue], Hungry_monks, Replete_monks, Musing_monks) <
    N < 4 :
    Hungry_monks = [monk(M) ~ Rest_hungry_monks],
    refec(N + 1, Queue, Rest_hungry_monks, Replete_monks, Musing_monks).

The precondition N < 4 is written above the rewrite arrow in Figure 8.

Having satisfied their worldly needs, monks leave the refectory for the cloisters where they can resume their cognitive processes.

refec(N, Queue, Hungry_monks, [monk(M) ~ Replete_monks], Musing_monks) <
    Musing_monks = [monk(M) ~ Rest_musing_monks],
    refec(N - 1, Queue, Hungry_monks, Replete_monks, Rest_musing_monks).

The second refec clause, shown in Figure 9, also ensures that a call to refec will never fail since it will always suspend on the fourth argument when called with a variable. Thus, when there are already four monks in the refectory and a fifth is in the queue, a call to refec will suspend.
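The admission limit can be sketched in Python with a counting semaphore initialized to four: with five chopstick locks and at most four competing monks, some monk can always acquire both chopsticks, so the sketch is deadlock-free. The names and single-meal simplification are this sketch's own:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
refectory = threading.Semaphore(N - 1)     # refec: at most four monks inside
meals = []

def monk(m):
    with refectory:                         # queue outside the refectory
        with chopsticks[m]:                 # locate chopstick(M)
            with chopsticks[(m + 1) % N]:   # locate chopstick(M add_mod_5 1)
                meals.append(m)             # eat
    # leaving the with-blocks returns the chopsticks and frees a place

monks = [threading.Thread(target=monk, args=(m,)) for m in range(N)]
for t in monks:
    t.start()
for t in monks:
    t.join()
```

Were the semaphore initialized to N, all five monks could simultaneously pick up their left chopstick and deadlock; the N - 1 bound is exactly the "four monks in the refectory" rule.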

Immediately a monk takes his place at the table, he begins to find his chopsticks.

table([monk(M) ~ Hungry_monks], Newly_replete_monks,
      Available_chopsticks, Newly_returned_chopsticks) <
    finder(monk(M), Available_chopsticks, Remainder_chopsticks,
           Replete_monk, Used_chopsticks),
    table(Hungry_monks, Replete_monks, Remainder_chopsticks,
          Returned_chopsticks),
    merge(Replete_monk, Replete_monks, Newly_replete_monks),
    merge(Used_chopsticks, Returned_chopsticks, Newly_returned_chopsticks).


The available chopsticks are channeled through the chopstick finder (the text for this communication was prepared by the monks on a Macintosh-minus) for each monk. Figure 10 illustrates how the used chopsticks, returned to the table when the monk has finished eating, are (unhygienically) merged back into the stream of chopsticks returned by the other monks. When a monk is finished eating, he merges back into the stream of other replete monks.

The stream of chopsticks is filtered by locate:

finder(monk(M), Available_chopsticks, Remainder_chopsticks,
       Replete_monk, Used_chopsticks) <
    locate(chopstick(M), Available_chopsticks, One_less_cs),
    locate(chopstick(M add_mod_5 1), One_less_cs, Remainder_chopsticks) &
    eat(monk(M), Replete_monk, Used_chopsticks).

The operator add_mod_5 is used to denote addition modulo 5, and cs is used as an abbreviation for chopstick. The stream of chopsticks is diverted through one and then another locate, filtering the chopsticks required by that particular monk. This allows monks not at the head of the queue, but whose chopsticks are available, to begin eating before monks who arrived earlier, but whose two chopsticks are not yet available. Lockout is prevented, however, because as soon as any one chopstick is available a monk takes it, and the monks locate individual chopsticks on a first-come-first-located basis (the queue of returned chopsticks).

locate(Item, [Item ~ Rest], Rest);
locate(Item, [Different_item ~ Rest], Remainder_of_list) <
    Remainder_of_list = [Different_item ~ Remainder],
    locate(Item, Rest, Remainder).
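A rough Python analogue of locate treats the chopstick stream as a lazy iterator and filters out the first occurrence of the wanted item. Unlike the Parlog86 relation, this sketch cannot suspend, so an absent item simply leaves the stream unchanged:

```python
def locate(item, stream):
    """Yield the stream with the first occurrence of `item` removed;
    laziness stands in for suspension on an as-yet-unproduced tail."""
    stream = iter(stream)
    for x in stream:
        if x == item:
            break                  # item found and removed
        yield x                    # pass other items downstream
    yield from stream              # the rest of the stream, untouched
```

Chaining two calls, as finder does, diverts the stream through one filter and then the other.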

Monks eat for an arbitrary amount of time. (Again, for simulation this can be effected by a random number generator.)

eat(monk(M), Replete_monk, Used_chopsticks) <
    Replete_monk = [monk(M)],
    Used_chopsticks = [chopstick(M), chopstick(M add_mod_5 1)].

This implementation is not of the form of the more usual one [15] where chopsticks and philosophers are processes. Needless to say, the problem can be programmed in this form in Parlog86. This form and the present program indicate the duality between data structures and processes in logic programming, a feature that is known as the reflection principle.

THE DECLARATIVE LANGUAGE RELIGIONS

To the uninitiated, the origins and relationships of logic programming languages may appear as shrouded in mystery as those of the Judeo-Christian religions. Both have had their prophets, heresies, and schisms. (With logic programming, however, the time scale has been somewhat shorter.) With continuing irreverence, the history of the logic religions can be divided into two eras: BPM, before the parallel machine, and AD. A single parent family tree (Figure 11) shows the relationships among the main branches of concurrent logic programming languages.

In the beginning (1972) there was Prolog. A direct ancestor of Parlog86 was IC-Prolog [6], which was definitely BPM and is now considered a minor prophet. With the advantage of hindsight, this variation of Prolog can be viewed as an unsuccessful experiment to break out of the straitjacket of depth-first, textual-order control; it implemented coroutining, now found in a different form in NU-Prolog and Prolog II. An early investigation into parallel Prolog, the topic introduced earlier in "The Scope for Parallelism in Prolog," was [24], and the most recent book on the subject is by Connery [8]. The origin of the concurrent logic languages, of which Parlog86 is the latest member, was described earlier in "Parlog86: Concurrent Logic Programming." The parallel descendants of Prolog will not be mentioned further, and it will be left to someone else to fill in this branch of the family tree. The history of sequential Prolog has been chronicled by at least three authors [7, 21, 26].

The founding member of the family to which Parlog86 belongs, the concurrent logic languages, was first described by van Emden and de Luceana [34]. This data-driven computational formalism was based on a parallel functional programming model due to Kahn [18]. The Relational Language [2] was progressive in introducing guards to the model, but retrograde in abandoning the logical variable. The Relational Language was to all intents and purposes an indeterminate functional language (if that is not a contradiction, e.g., [13]) masquerading as a logic language. Guards were inspired by CSP [15], an imperative concurrent programming language that was fashionable at the time. In a logic programming setting, indeterminate guarded commands become indeterminate guarded inferences.

Shapiro [29] suggested a method for reintroducing the logic variable into the Relational Language, and so began the era of restoration. Machismo and differing views as to how the logic variable should be restored led to a schism. The restoration in Concurrent Prolog was motivated more by idealism than practicality. In Concurrent Prolog the logic variable is almost reinstated to its glory in Prolog by the use of multiple-environment or-parallelism (the approach taken by parallel Prologs). The eagerness of unification is restrained by the addition of annotated read-only variables: Unification suspends on an attempt to bind a read-only variable to a nonvariable term. The restoration in Parlog83 [3] was hampered by its functional chains.

The difficulties of implementation and problems with the semantics of the read-only variable of Concurrent Prolog [27] led to a withdrawal to a restrictive subset of the language, Flat Concurrent Prolog (FCP) [22], in which guard calls are confined to prescribed primitives. FCP still, however, tenaciously clung onto the dubious read-only variable. The multiple environments were ditched in favor of a single backtrackable stack, as in Prolog. The bindings are, however, local until committal. There have been other proposed successors to this subfamily, for example, Dual FCP and CP [28], that have either disappeared into oblivion or remain obscure because of combinatorial complexity. The development of Concurrent Prolog has followed the classic Prolog form of search: backtracking on failure.

The development of Parlog has followed a path of grudging restoration of the logic variable. This development was dragged along by the expressive power and return to logic programming of Concurrent Prolog, but constrained by Parlog's functional programming proclivities. Parlog83 [3] was inspired by Concurrent Prolog, but still kept to the aberration of embracing the Eastern Religions (East Anglia and Kent Universities are strongholds of functional programming in the United Kingdom). Parlog84 [4], under the growing influence of Concurrent Prolog, finally abandoned the functional religions. The only vestige of functional programming, the three-argument metacall, is useful for systems programming. This metacall always succeeds, but instantiates an additional argument to the result of the computation of its subgoal, success or failure. In addition, it allows the programmer to access the basic control mechanisms of the inference engine. The metacall provides a virtual Parlog machine that can be compared with the virtual machine concept of systems programming. The failsafe metacall does, however, have semantic difficulties. Parlog84 retained the unattractive mode declarations from Parlog83. These were similar to mode declarations that had previously been used as compiler directives in optimized implementations of Prolog. An attempt to clarify the semantics of Parlog84 was the introduction of a modeless language, Kernel Parlog [5], into which Parlog can be compiled. This language, however, made programs almost impossible to understand. In Parlog84, output was performed by assignment to a variable (as in imperative programming languages). Logic variables are, however, single assignment; in Parlog84, an exception was generated on an attempt to assign to a nonvariable, so that multiple producers were not possible.

The initial flirtation of Japan's Institute for New Generation Computer Technology (ICOT) with Concurrent Prolog found it wanting [31, 33], and subsequent analysis led to the development of Guarded Horn Clauses (GHC) [32, 33]. Although GHC was claimed to be inspired by Concurrent Prolog, it resembles Parlog84 more closely in that it has a single binding environment. It does not have the read-only variable of Concurrent Prolog; a call suspends if an attempt is made to bind a nonlocal variable. Difficulty in implementing the suspension mechanism has led ICOT to withdraw to a subset, Flat Guarded Horn Clauses (FGHC), in analogy with Concurrent Prolog's successor, FCP. At the present time, it appears as if FGHC, and not GHC, is the great white hope of the parallel inference machine (PIM) being developed as part of the Japanese Fifth Generation initiative.

Some superficial differences between Parlog84 and Parlog86 are accounted for by a syntax change closer to Edinburgh Prolog, and the explicit precedence of parallel connectives over sequential ones. The semantics of Parlog86 is a result of combining the attractive features of GHC and Parlog84. The removal of mode declarations reveals that argument positions can be used in different modes in the same clause. The replacement of single assignment by unification makes the language more expressive, allowing multiple producers, provided, of course, that what they produce is unifiable.

The principal difference among the three main branches of the church lies in the way in which synchronization is achieved (transubstantiation). In GHC a call suspends if call-head unification attempts to bind a nonlocal variable. For Concurrent Prolog a call suspends if unification attempts to bind a read-only variable to a nonvariable term. Having a distinguished class of variables allows test unification of non-read-only variables. That is, the ability of non-read-only variables to unify can be used as a discriminator of which clause to commit to. In Parlog86 the head of a clause is only used for pattern matching. Term matching suspends waiting for nonvariable input.

The development of the concurrent logic programming languages can be compared with the development of Communicating Sequential Processes (CSP) [15, 17] and the Calculus of Communicating Systems [23]. Programming styles, expressiveness, examples, theoretical considerations, and the clarity of one have led to modifications in another. This mutual monitoring has brought the strands of the concurrent logic programming church closer together, but has brought about political differences of opinion as to who invented, or more truthfully introduced, what and when. Parlog86 is much more expressive and easier to understand than its progenitor, the Relational Language, whereas FCP and FGHC are, respectively, less expressive than their idealized parents. Parlog86 is as yet the only one of the trio to stubbornly retain nonflat guards. Flat Parlog (FP) without the addition of metacalls and FGHC differ only in syntax and nomenclature. Despite attempts to overcome the semantic difficulties of the read-only variable [30], FCP programs have less clarity than their corresponding Parlog86 or GHC equivalents. A step toward understanding the semantics of Parlog86 can be found in [25], where atomic flat guards are advocated. Preliminary investigations with the implementation of Parlog on parallel machines suggest that FP may be the way to progress. The lack of expressiveness of FGHC is leading ICOT to reconsider the metacall of Parlog. The three politically driven research groups may yet agree on a common language. Let us hope this happens before the demise of concurrent logic programming (cf. the lockout problem of the dining philosophers).

AUTHOR: G.A. Ringwood

Author's Present Address: G.A. Ringwood, Department of Computing, Imperial College of Science and Technology, University of London, 180 Queen's Gate, London, SW7 2BZ England.

Acknowledgments. The author is indebted to Hitachi for the impetus to write down his ideas on Parlog86 in the form of a much needed tutorial introduction in January 1986. The present article represents changes to the Hitachi report as a result of constructive criticism from many sources. In particular, the author wishes to acknowledge all the current, sometime, onetime, parttime, and honorary members of the Parlog research group, Alastair Burt, Reem Bahgat, Keith Clark, Tom Conlon, Jim Crammond, Andrew Davison, Pat Easton-Orr, Ian Foster, David Gilbert, Michael Hirsch, Steve Gregory, Matthew Huntbach, Yonit Keston, Tony Kusalik, Mellisa Lam, Yuji Matsumoto, and Ken Satoh.

REFERENCES

1. Bratko, I. Prolog Programming for Artificial Intelligence. Addison-Wesley, Reading, Mass., 1986.

2. Clark, K.L., and Gregory, S. A relational language for parallel programming. In Proceedings of the ACM Conference on Functional Programming Languages and Computer Architecture (Portsmouth, N.H., Oct. 18-22). ACM, New York, 1981, pp. 171-178.

3. Clark, K.L., and Gregory, S. Parlog: A parallel logic programming language. Res. Rep. DOC 83/5. Dept. of Computing, Imperial College, London, 1983.

4. Clark, K., and Gregory, S. PARLOG: Parallel programming in logic. ACM Trans. Program. Lang. Syst. 8, 1 (Jan. 1986), 1-49. Res. Rep. DOC 84/4. Dept. of Computing, Imperial College, 1984.

5. Clark, K.L., and Gregory, S. Notes on the implementation of Parlog. J. Logic Program. 2(1) (1985), 17-42.

6. Clark, K.L., and McCabe, F.G. The control facilities of IC-Prolog. In Expert Systems in the Micro-Electronic Age, D. Michie, Ed. Edinburgh University Press, Edinburgh, U.K., 1979.

7. Cohen, J., A view of the origins and development of Prolog. Rep. Dept. of Computer Science, Brandeis Univ., Waltham, Mass., 1986.

8. Connery, J.S. Parallel Execution of Logic Programs. Kluwer, Deventer, The Netherlands, 1987.

9. Davis, R.E. Runnable specification as a design tool. In Logic Programming, K.L. Clark and S.A. Tarnlund, Eds. Academic Press, New York, 1982, pp. 141-149.

10. Dijkstra, E.W. Hierarchical ordering of sequential processes. Acta Inf. 1 (1971), 115-138.

11. Dijkstra, E.W. Guarded commands, nondeterminacy and formal derivation of programs. Commun. ACM 18, 8 (Aug. 1975), 453-457.

12. Filman, R.E., and Friedman, D.P. Coordinated Computing. McGraw-Hill, New York, 1984.

13. Friedman, D.P., and Wise, D.S. An indeterminate constructor for applicative programming. In Proceedings of the 7th ACM Symposium on the Principles of Programming Languages (Las Vegas, Nev.). ACM, New York, 1980, pp. 245-250.

14. Gregory, S., Neely, R., and Ringwood, G.A. Parlog for specification verification and simulation. In CHDL 85 (7th International Conference on Computer Hardware Description Languages), C.J. Koomen and T. Moto-oka, Eds, North-Holland, Amsterdam, 1985, pp. 139-148.

15. Hoare, C.A.R. Communicating sequential processes. Commun. ACM 21, 8 (Aug. 1978), 666-677.

16. Hoare, C.A.R. Specifications, programs and implementations. Tech. Monogr. PRG-29, Programming Research Group, Oxford Univ., Oxford, England, 1982.

17. Hoare, C.A.R. Communicating Sequential Processes. Prentice-Hall, Englewood Cliffs, N. J., 1985.

18. Kahn, G. The semantics of a simple language for parallel programming. In Proceedings of IFIP 74. North-Holland, Amsterdam, 1974, pp. 471-475.

19. Kowalski, R. Predicate logic as a computational formalism. In Proceedings of IFIP 74. J.L. Rosenfeld, Ed., North-Holland, Amsterdam, 1974, pp. 569-574.

20. Kowalski, R. The relation between logic programming and logic specification. Phil. Trans. R. Soc. Lond. A 312(1984),345-361.

21. Kowalski, R. The early history of logic programming. Res. Rep. Dept. of Computing, Imperial College, London, 1984.

22. Mierkowsky, C., Taylor, S., Shapiro, E., Levy, J., and Safra, M. The design and implementation of Flat Concurrent Prolog. Tech. Rep. CS85-09. Dept. of Applied Maths, Weizmann Institute, Israel, 1985.

23. Milner, R. A Calculus of Communicating Systems. Lecture Notes in Computer Science, vol. 92. Springer-Verlag, New York, 1980.

24. Pollard, G.H. Parallel execution of Horn Clause programs. Ph.D. thesis. Dept. of Computing, Imperial College, London, 1981.

25. Ringwood, G.A. Pattern-directed, Markovian, linear, guarded definite clause resolution. J. Logic Program. To be published.

26. Robinson, J.A. Logic programming: Past, present and future. Tech. Rep. TR-015. ICOT, Tokyo, 1983.

27. Saraswat, V.J. Problems with Concurrent Prolog. Tech. Rep. CS-86-100. Dept. of Computer Science, Carnegie-Mellon Univ., Pittsburgh, Pa., 1986.

28. Saraswat, V.J. The concurrent logic programming language CP: Definition and operational semantics. In Proceedings of the 14th ACM Symposium on the Principles of Programming Languages. ACM, New York, 1987, pp. 49-62.

29. Shapiro, E.Y. A subset of Concurrent Prolog and its interpreter. Tech. Rep. TR-003. ICOT, Tokyo, 1983.

30. Shapiro, E.Y. Concurrent Prolog: A progress report. Tech. Rep. CS86-10. Dept. of Applied Maths, Weizmann Institute, Israel, 1986.

31. Ueda, K. Concurrent Prolog re-examined. Tech. Rep. TR-102. ICOT, Tokyo, 1985.

32. Ueda, K. Guarded Horn Clauses. Tech. Rep. TR-103. ICOT, Tokyo, 1985.

33. Ueda, K. Guarded Horn Clauses. Ph.D. thesis. Univ. of Tokyo, Tokyo, Japan, 1986.

34. Van Emden, M.H., and De Lucena, G.J. Predicate logic as a language for parallel programming. Rep. CS79-15. Dept. of Computer Science, Univ. of Waterloo, Waterloo, Ontario, 1979. (Also in Logic Programming, K.L. Clark and S.A. Tarnlund, Eds. Academic Press, New York, 1982, pp. 189-198.)

CR Categories and Subject Descriptors: D.1.3 [Programming Techniques]: Concurrent Programming; D.1.4 [Programming Techniques]: Sequential Programming; D.3.2 [Programming Languages]: Language Classifications--very high-level languages; I.2.3 [Artificial Intelligence]: Deduction and Theorem Proving--logic programming

General Terms: Languages

Additional Key Words and Phrases: Parallel programming

Received 4/7/86; revised 11/10/86; accepted 7/30/87

Further information concerning publications relating to Parlog and details of the software available can be obtained by writing to Secretary to the Parlog Group, Department of Computing, Imperial College, 180 Queen's Gate, London SW7 2BZ, England.

Copyright 1988 ACM 0001-0782/88/0100-0010 $1.50.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

FIGURES

1. The Seating Arrangements

2. A Day in the Life of a Monastery

3. The producer-consumer Call

4. The producer

5. The consumer

6. The Dining Philosophers

7. The cloister Clause

8. The First refec Clause

9. The Second refec Clause

10. The table Clause

11. The Main Branches of the Concurrent Logic Programming Family

TABLES

Table I The Process Interpretation of Logic Programming

Author: Ringwood, G.A.
Publication: Communications of the ACM
Date: Jan 1, 1988