
Ken Kahn.

When ICOT was formed in 1982, I was a faculty member of the University of Uppsala, Sweden, doing research at the Uppsala Programming Methodology and Artificial Intelligence Laboratory (UPMAIL). The creation of ICOT caused great excitement in the laboratory because we shared with the Fifth Generation project a basic research stance: that logic programming could offer much to AI and, more generally, to symbolic computing. Koichi Furukawa (at that time an ICOT lab manager, now deputy director of ICOT) and some of his colleagues visited UPMAIL that year to present the plan for the Fifth Generation project and to explore possible collaborations.

About a year later I was invited to be a guest researcher at ICOT for a month. My research at that time was on LM-Prolog, an extended Prolog well integrated with Lisp and implemented on MIT-style Lisp Machines (LMI and Symbolics) [1]. One of the driving motivations behind this work was that there were lots of good things in Prolog, but Prolog could be much better if many of the ideas from the Lisp and object-oriented programming communities could be imported into the framework. I was also working on a partial evaluator for Lisp written in LM-Prolog [11]. This program was capable of automatically specializing Lisp programs. One goal of this effort was to generate specializations of the LM-Prolog interpreter, each of which could only interpret a single LM-Prolog program. These specialized interpreters performed comparably to the compiled versions of the programs they interpreted.

Researchers at ICOT were working on similar things. There was good work going on in partial evaluation of Prolog programs. There was work on ESP, a Prolog extended with objects and macros [2]. Efforts on a system called Mandala had begun which combined ideas of metainterpretation and object-oriented programming in a logic programming framework [5].

While my demonstrations and seminars about LM-Prolog and partial evaluation went well and my discussions with ICOT researchers were productive, the most important event during my visit was my introduction to Concurrent Prolog. Ehud Shapiro, from the Weizmann Institute of Science in Israel, was visiting then, working closely with Akikazu Takeuchi of ICOT. Concurrent Prolog was conceived as an extension(1) of Prolog that introduced programmer-controlled concurrency [20]. It was based on the concept of a read-only variable, which I had found very confusing when I read about it before my ICOT visit. Part of the problem was simple nomenclature: a variable does not become read-only; rather, certain occurrences of a variable carry only a read capability, instead of the usual situation in which all occurrences have read/write capabilities.

Shapiro and Takeuchi [21] had written a paper about how Concurrent Prolog could be used as an actor or concurrent object language. I was very interested in this, since I had worked on various actor languages as a doctoral student at MIT. Again, my difficulty in grasping read-only variables interfered with a good grasp of the central ideas in this article. I understood it only after Shapiro carefully explained the ideas to me. After understanding the paper, I felt that some very powerful ideas about concurrent objects or actors were hidden under a very verbose and clumsy way of expressing them in Concurrent Prolog. The idea of incomplete messages in which the receiver (or receivers) fills in missing portions of messages was particularly attractive. Typically, there are processes suspended, waiting for those parts to be filled in. It seemed clear to me that this technique was a good alternative to the continuation passing of actors and Scheme.
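The incomplete-message idea has a close modern analogue in promises and futures. As a rough, hypothetical sketch in Python (not Concurrent Prolog, where the missing portion would be an unbound logic variable inside the message term), the sender embeds an unfilled slot in its message, and any process that touches the slot suspends until the receiver fills it in:

```python
from concurrent.futures import Future
from queue import Queue
from threading import Thread

def counter(inbox: Queue) -> None:
    """A server process: repeatedly handles (op, reply) messages whose
    reply slot arrives unfilled."""
    count = 0
    while True:
        msg = inbox.get()
        if msg is None:              # conventional shutdown signal
            return
        op, reply = msg              # reply is the hole in the incomplete message
        if op == "inc":
            count += 1
            reply.set_result(count)  # the receiver fills in the missing portion

inbox: Queue = Queue()
Thread(target=counter, args=(inbox,), daemon=True).start()

# The sender includes an unfilled slot (a Future) in its message; reading
# the slot suspends the reader until the receiver fills it in.
slot: Future = Future()
inbox.put(("inc", slot))
print(slot.result())                 # suspends, then prints 1
inbox.put(None)
```

In Concurrent Prolog the reply slot, the stream, and the suspension would all be expressed with logic variables and read-only occurrences; the future here only mimics that behavior.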

At this time the Fifth Generation project was designing parallel hardware and its accompanying kernel language. A distributed-memory machine seemed to make the most sense since it could scale well, while shared-memory machines seemed uninteresting because they were limited to a small number of processing elements.

Shapiro was working on a parallel machine architecture called the Bagel [19]. I collaborated with him on a notation for mapping processes to processors based on the ideas of the Logo turtle. A process had a heading and could spawn new processes forward (or backward) along its heading and could change its heading.

At this time it seemed that single-language machines were a good idea. There was lots of excitement about Lisp machines, which benefited from a tight integration of components and powerful features. During my visit to ICOT it seemed clear to most people that building a Prolog or Concurrent Prolog machine was the way to go. And unlike the Lisp Machines, these new machines would be designed with parallelism in mind.(2)

As I recall, there was some debate at that time about whether the kernel language of parallel inference machines should be based on a parallel ESP (Prolog extended with objects) or something similar to Concurrent Prolog. I argued for Concurrent Prolog because it did actors so well, and it seemed clear to me that actors should be the base concept for programming parallel machines.

Soon afterward, ICOT did decide to go with concurrent logic programming as the basis for their kernel language (KL1), but chose to base it on Guarded Horn Clauses [23] instead of Concurrent Prolog. GHC was based on the idea that unification in the guard must be passive, i.e., produce only local information (bindings of local variables). A passive unification that attempts to bind a nonlocal variable suspends until that variable has received a binding from another process. This notion of passive unification in GHC replaces the more powerful Concurrent Prolog notion of an atomic transaction for the unification of terms containing read-only variables.
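This suspension behavior can be illustrated with a hedged analogy in Python (not GHC itself): model the logic variable as a single-assignment cell whose readers suspend until some other process supplies the binding.

```python
import threading

class LogicVar:
    """A single-assignment variable: reading suspends until it is bound."""
    def __init__(self):
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        """Bind the variable (done elsewhere by an 'active' unification)."""
        self._value = value
        self._bound.set()

    def read(self):
        """A passive use: suspend until another process supplies a binding."""
        self._bound.wait()
        return self._value

x = LogicVar()
results: list = []

def consumer(v: LogicVar) -> None:
    results.append(v.read() + 1)   # suspends here until v is bound

t = threading.Thread(target=consumer, args=(x,))
t.start()
x.bind(41)                         # the producer supplies the binding
t.join()
print(results[0])                  # prints 42
```

What the sketch cannot capture is the contrast drawn above: GHC suspension is one-directional waiting, whereas Concurrent Prolog's unification of terms with read-only variables was an atomic transaction over an entire term.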

It was not until about six years later that I came to agree that this was the wisest choice. At the time I believed the group at the Weizmann Institute was right in its belief that GHC was too weak for supporting systems programming and distributed computing, since it lacked what I believed were some very important features of Concurrent Prolog, such as atomic unification and dynamic read/write capabilities. This mistaken view was held by all the members of the project I led at Xerox PARC during the late 1980s.

A few months after my visit to ICOT I was visiting the Free University of Brussels. I gave a seminar about my research and then was asked to give another one about the Fifth Generation project. There was tremendous interest in this project. I recall explaining the "middle-out" strategy of the project. ICOT concentrated initially on the language and operating system side of things and then moved upward toward applications and downward toward hardware. This seemed like a good strategy at the time, and looking at it today, I think the most significant successes of the Fifth Generation project are in the middle.

At the Free University of Brussels I visited Luc Steels and his students. They were interested in actors but failed to get very excited about Concurrent Prolog. I felt they should be excited, and my response to this was to try to develop a higher-level actor-like language which compiled directly into Concurrent Prolog. I did not get too far in the short time I was there and dropped the idea when I returned to Uppsala, since there was much to do on other projects. It was not until two years later that I again took up the task of designing a higher-level language that preserved the power of actors in Concurrent Prolog, while avoiding the clumsy, verbose means of expression.

The Pre-Vulcan Period at PARC

A few months later I joined Xerox PARC. Multiparadigm programming was very much in style. Danny Bobrow, Mark Stefik, and Sanjay Mittal had developed Loops at PARC to support Lisp, objects, and rules. My first task at PARC was to replace the rule component. My design was based on my experience with LM-Prolog and work on integrating logic programming and objects. I was also asked to consult on the joint project between Xerox and Quintus Computer Systems to build Xerox Quintus Prolog, a microcode implementation of Prolog on the Xerox Lisp Machines, based on the Warren Abstract Machine (WAM). At this point, Symbolics was selling a very similar product.(3) Comparing these different systems and running various InterLisp-D benchmarks, I became discouraged because I saw that the rule component of Loops, implemented in Lisp, could never approach the performance of these other systems.

Long before I came to PARC, Larry Masinter at Xerox PARC had been trying to convince the Loops group that the object component should be based on the idea of generic functions and multimethods. He quickly convinced me, partly because it fit much better with the Prolog style of defining predicates. The idea (which later became incorporated in InterLoops, CommonLoops, and the Common Lisp Object System (CLOS)) was that a function could be defined as a collection of methods, each one of which could perform type discrimination on any of its arguments to determine if the method was applicable. Conventional object-oriented programming is based on discriminating only on the first argument. This proposal both generalized object-oriented programming and fit better with Lisp's functional style than earlier Lisp/object combinations. The caller of a function should not need to know whether that function was implemented as ordinary Lisp code or as a collection of multimethods. In addition, I was excited about how Prolog-like computation could fit in by having multimethods that do pattern matching or unification instead of (or in addition to) the type discrimination of object-oriented programming. Methods could be combined in the object-oriented manner, so the most specific method was chosen, or in the Prolog manner, with a backtrackable choice where upon failure another method could be tried.

While doing this work on what became CommonLoops (and CLOS) and on my extension, which I called CommonLog, I became more and more dissatisfied with the complexity of the resulting language. And on top of all this complexity, it did not seem (and still does not seem to me) that it could be extended to give good support for parallel and distributed computation. I also began to question the very premise of multiparadigm programming: was it so desirable to mix, at a very fine grain, very different ways of thinking about computation? How easy is it to understand code in which a single line can mix functional, object-oriented, and logic-programming styles?

The Vulcan Years

Because of my dissatisfaction with CommonLoops, I returned to the idea of building a higher-level language for Concurrent Prolog. My PARC colleagues Mark Miller and Danny Bobrow and I quickly designed a language in early 1986, which we called Vulcan. We wrote a paper about it [14], and suddenly there was lots of interest; the paper was reprinted in several books. Other PARC researchers (Eric Dean Tribble and Curtis Abbott) joined the group. Ehud Shapiro began to consult for us. Students (Jim Rauen and Andy Cameron) joined the project. A DARPA official encouraged us to write a grant proposal.(4) I think the strong interest can be explained by the fact that the project combined, in a coherent and simple manner, several fashionable items: object-oriented programming, parallel programming, distributed applications including groupware, and Fifth-Generation computing.

During the Vulcan project there was much important research going on in this area at other research sites. Closely related research was under way in the Parlog group at Imperial College, London. In fact, the precursor to Concurrent Prolog, the Relational Language, was developed there [3]. We followed their work and exchanged visits, but we were not as strongly influenced by it. Their Parlog system was too large and complex for our tastes. We did interact with Andrew Davison on his Pool and Polka languages, which were object-oriented extensions of Parlog. Years later we were strongly influenced by the Strand language, which was largely based on the work on Flat Parlog [4] done at Imperial College and the Weizmann Institute.

In addition to the Imperial College Parlog group, there are other important threads not dealt with in this article. The Swedish Institute of Computer Science (SICS) has had important collaborations with ICOT and PARC, especially on AKL (the Andorra Kernel Language) [8] which is a promising attempt to achieve the initial goal of Concurrent Prolog: the addition of programmer-controlled concurrency to Prolog without the sacrifice of any of its power. The constraint logic programming group at the IBM Watson Research Center is also an important thread whose work and collaborations have influenced and contributed to the research at PARC, ICOT, SICS, and the Weizmann Institute. Instead of breadth of coverage, this article focuses on the interactions among three research groups: ICOT, PARC, and the Weizmann Institute.

During this period we actively interacted with related research efforts at ICOT and the Weizmann Institute. We developed a vision of distributed open-systems computing which placed Flat Concurrent Prolog (FCP) as the foundation. (Flat Concurrent Prolog is a subset of Concurrent Prolog which permits a much more lightweight and simple implementation.) We saw FCP as a small yet powerful kernel on which to build many layers of abstraction. We foresaw tools and programming methodologies which relied on the dual readings of logic programs: declarative and process oriented. Clients and servers could be built in FCP in a portable fashion as well as connected together by FCP. The boundary between client and server computation could be very flexible and could be specified at run time. We saw unification as a single, conceptually simple, computational mechanism that provided the functionality of assignment, binding, argument passing, return of values, interprocess communication, atomic transactions, and the construction, testing, and access of records. While our colleagues at ICOT and Weizmann were exploring how these languages could support the exploitation of parallel hardware, we saw the same languages supporting distributed computing. We saw how these languages support secure encapsulated state, which is necessary for distributed applications. We were excited about ICOT and Weizmann Institute research, which demonstrated how easy it was to implement various abstractions as short and simple metainterpreters. We saw how these languages are well suited for partial evaluation, which could make both code reuse and metainterpreters more practical.

The view taken by the Vulcan Project was to start with observations about large-scale distributed computations and ask whether some of the elements of distributed computing could usefully be "pushed down" to the language level, thereby supporting their use for a wide range of scales from global networks of services to the objects and methods of small-scale computation. One example was the work on secrecy and trust [12,16], where the important large-scale computing concerns of public keys and electronic money were brought down to the programming-language level. Another example is the work on agoric systems [16], where the marketplace of computational services that exists across organizational boundaries is brought down to the scale at which it is of value for adaptive, dynamic, resource management within a single program, so that processing, memory, and communications resources can be dynamically traded off in a more optimal manner.

The Vulcan project continued to grow. In 1988 we hired Jacob Levy, whose doctoral dissertation [15] at the Weizmann Institute was on the implementation and comparison of concurrent logic-programming languages. As part of his dissertation he had built a parallel functional-programming language on top of FCP. We hired Vijay Saraswat, whose CMU dissertation [17] was on concurrent constraint programming: an elegant synthesis of concurrent logic programming and constraint logic programming. Saraswat had proposed a language he called Herbrand, which we favored since it was just slightly weaker than FCP but much cleaner, simpler, and probably more efficient to implement.

The Vulcan project had become rather large, and PARC management began to question why Xerox should be funding this. It was a project whose results could clearly benefit the world at large, but it was not very clear how Xerox would benefit in particular. Management lost patience waiting for DARPA, and the project was stopped. Three of us (Saraswat, Levy, and I) continued doing related research.

The Post-Vulcan Years

Saraswat and I began work on a visual syntax for concurrent logic programs. The syntax was based on the topology of drawings. It was designed so that it was well suited not just for program sources but also as the basis for generating animations of program executions [13]. One discovery was that object-oriented programs did not come out so clumsy and verbose when they were drawn instead of typed.

I collaborated with Shapiro on a preprocessor for logic programs designed to support object-oriented programming. My thinking had changed from believing that concurrent logic programs were too low level, to believing they just needed some simple syntactic support. I came to realize that the Vulcan language, by trying to be an actor language, had sacrificed some very important expressive power of the underlying language. In 1989 I presented an invited paper at the European Conference on Object-Oriented Programming on this topic [10]. The essence of the paper is that concurrent logic programming, by virtue of its first-class communication channels, is an important generalization of actors or concurrent objects. Multiple input channels are very important, as is the ability to communicate input channels. During this period I interacted with Kaoru Yoshida of ICOT during her development of A'UM, an object-oriented system on top of FGHC which retains the power of multiple, explicit, input channels [24]. Andrew Davison at Imperial College also made this design decision early in his work on Pool and Polka.
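One way to see why first-class communication channels generalize actors: a channel can itself travel inside a message, so a process can acquire new input channels at run time. Here is a hypothetical Python sketch (queues standing in for the streams of FGHC or FCP; the names are mine, not from any of these systems):

```python
from queue import Queue
from threading import Thread

def collector(main: Queue, out: list) -> None:
    """Waits on its main channel; a message may carry another channel,
    which the process then also reads from -- channels are first-class."""
    tag, payload = main.get()
    if tag == "chan":
        side: Queue = payload       # an input channel received in a message
        out.append(side.get())      # read from the newly acquired channel

main: Queue = Queue()
side: Queue = Queue()
out: list = []
t = Thread(target=collector, args=(main, out))
t.start()
main.put(("chan", side))            # send an input channel itself as data
side.put("hello")                   # a value then flows over that channel
t.join()
print(out[0])                       # prints hello
```

In a concurrent logic program, both channels would simply be logic variables bound to streams, so communicating a channel requires no special mechanism at all; that uniformity is the expressive power the paper argued for.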

We hosted an extended visit by Kazunori Ueda and Masaki Murakami from ICOT. Ueda, the designer of Flat Guarded Horn Clauses (FGHC), slowly won us over to the view that his language, while weaker than Herbrand, was simpler and that there were programming techniques that compensate for its weaknesses. Essentially, we moved from the view of unification as an atomic operation to viewing it as an eventual publication of information. I began to program in the FGHC subset of the Weizmann Institute implementation of FCP. I would have seriously considered using an ICOT implementation had one been available for Unix workstations. (ICOT today is working on porting their work to commercially available multiprocessors and Unix workstations and has made its software freely available.)

AI Limited in the United Kingdom then announced a commercial concurrent logic programming language called Strand88. We became a beta test site and received visits by one of the language designers (Steve Taylor) and later by officials of the company. We were very eager to collaborate because the existence of a commercial product gave these languages a certain realness and respectability within Xerox.(5)

Our first reaction was that they had simplified the language too much: they had replaced unification by single assignment and simple pattern matching. What we once believed was the essence of concurrent logic programming was gone. As was the case with FGHC, we were won over to the language by being shown how its deficiencies were easily compensated for by certain programming techniques and that there were significant implementation and/or semantic advantages that followed. I stopped using the FGHC subset of FCP and became a Strand programmer. I even offered a well-attended in-house tutorial on Strand at PARC.

Saraswat quickly became disenchanted with Strand because the way it provided assignment interfered with fitting it into his concurrent constraint framework and thereby giving it a good declarative semantics. Strand assignment is single assignment, so it avoids the problems associated with assignment in a concurrent setting. But Strand signals an error if an attempt is made to assign the same variable twice. A symptom of these problems is that in Strand, X := Y, where X and Y are unbound, is operationally very different from Y := X. We wanted := to mean the imposition (or "telling") of equality constraints between its arguments.

Saraswat then discovered a two-year-old paper by Masahiro Hirata from the University of Tsukuba in Japan on a language called DOC [9]. The critical idea in the paper was that if every variable had only a single writer then no inconsistencies or failures could occur.

Saraswat, Levy, and I picked up on this idea and designed a concurrent constraint language called Janus [18]. We introduced a syntax to distinguish between an asker and a teller of a variable. We designed syntactic restrictions (checkable at compile time) which guarantee that a variable cannot receive more than one value. We discovered that these restrictions also enable some very significant compiler optimizations, including compile-time garbage collection.
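The asker/teller split is essentially what is now called a promise/resolver pair: two distinct capabilities over one variable, with the write capability usable at most once. A hypothetical Python sketch (names are mine; it checks the single-writer property at run time, whereas Janus's syntactic restrictions check it at compile time):

```python
import threading

def new_channel():
    """Split one logic variable into its two capabilities: an asker
    (read) and a teller (write, usable at most once)."""
    ready = threading.Event()
    cell = {}

    def tell(value):
        # Janus guarantees a single teller syntactically, at compile time;
        # this sketch can only check the property at run time.
        if ready.is_set():
            raise RuntimeError("variable already has a value")
        cell["value"] = value
        ready.set()

    def ask():
        ready.wait()               # suspend until the teller acts
        return cell["value"]

    return ask, tell

ask, tell = new_channel()
tell(7)
print(ask())                       # prints 7
```

Because every variable has exactly one teller, no two processes can ever disagree about its value, which is what rules out inconsistency and failure and is also what makes optimizations such as compile-time garbage collection possible.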

Because we lacked the resources to build a real implementation of Janus, we started collaborative efforts with various external research groups (the University of Arizona, McGill University, and the University of Saskatchewan). Jacob Levy left and started a group at the Technion in Israel.(6)


Today work continues on Janus implementations. David Gudeman and others at the University of Arizona have produced an excellent high-performance serial implementation [6]. ICOT research on moded FGHC attempts to achieve the goals of Janus by sophisticated program analysis rather than syntactic restrictions [22]. An interesting aspect of this work is how it manages, when appropriate, to borrow implementation techniques from object-oriented programming. Also, work on directed logic variables at the Weizmann Institute was inspired by Janus [15].

Saraswat has had a major impact on the research community with his framework of concurrent constraint programming. He is active in a large joint Esprit/NSF project called ACCLAIM based on his work. At ICOT there is a system called GDCC which directly builds on his work. His work has also had significant impact on the theoretical computer science community interested in concurrency. Saraswat and a student (Clifford Tse) are working on the design and parallel implementation of a programming language called Linear Janus, which addresses the same goals as Janus within the concurrent constraint framework but is based on linear logic.

The work of the Vulcan project on exploring the feasibility and advantages of using concurrent logic programming as the foundation for building distributed applications has strongly influenced Shapiro's group at the Weizmann Institute. In recent years they have moved away from a focus on parallel computing to a focus on distributed-computing foundations and applications.

After leaving Xerox, Tribble and Miller focused primarily on large-scale hypertext systems. However, they have been designing a programming language for distributed computing called Joule, which combines many of the insights from concurrent logic-programming language design, higher-order programming, actors, and capability-based operating systems (in particular the KeyKOS system [7]).

Until very recently, I concentrated my efforts on building an environment for Pictorial Janus, a visual syntax for Janus. The system accepts program drawings in PostScript, parses them, and produces animations of concurrent executions. A postdoc (Markus Fromherz) is using Pictorial Janus to model the behavior of paper paths in copiers. I see this work as making the ideas behind concurrent logic programming more accessible. Programs and their behaviors are much easier to understand when presented in a manner that exploits our very capable perceptual system.

In September 1992, I left Xerox to start my own small company to develop ToonTalk--an animated programming language for children based on the concurrent logic programming research at ICOT, Weizmann, Xerox PARC, and elsewhere. My belief is that the underlying concepts of concurrency, communication, synchronization, and object orientation are not inherently difficult to grasp. What is difficult is learning how to read and write encodings of behaviors of concurrent agents. My research on Pictorial Janus convinced me that encoding the desired behaviors as static diagrams was a good step in the right direction, but not a large enough one.

I believe the next step is to make heavy use of animation, not just to see how a program executes but also to construct and view source programs. In the process of doing my dissertation work on the creation of computer animation from story descriptions, I took several animation courses and made a few films. An important lesson I learned was how effectively well-designed animation can communicate complex dynamic behaviors. I believe the designers of programming language syntax and programming environments should be studying Disney cartoons and home entertainment such as Super Mario Brothers.

When I was a member of the Logo project at MIT, I recall Seymour Papert describing the Logo language as an attempt to take the best ideas in programming language research and make them accessible to children. In the late 1960s Lisp was the best source of these ideas. More than 20 years later, I see myself as making another attempt, taking what I see as the best in programming language research--concurrent logic programming, constraint programming, and actors--and, with the help of interactive animation, making these powerful ideas usable by children. If successful, concurrent logic programming will soon be child's play.

A Personal View of the Project

So what is my view of the Fifth Generation project after 10 years of interactions? Personally, I am very glad that it happened. There were many fruitful direct interactions and I am sure several times as many indirect positive influences. Without the Fifth Generation project there might not have been a Vulcan project, or good collaborations with the Weizmann Institute, or the Strand and Janus languages. More globally, the whole computer science research community has benefited a good deal from the project. As Hideaki Kumano, director general, Machinery and Information Industries Bureau, Ministry of International Trade and Industry (MITI) said during his keynote speech at the 1992 FGCS conference:

Around the world, a number of projects received their initial impetus from our project: these include the Strategic Computing Initiative in the U.S., the EC's ESPRIT project, and the Alvey Project in the U.K. These projects were initially launched to compete with the Fifth Generation Computer Systems Project. Now, however, I strongly believe that since our ideal of international contributions has come to be understood around the globe, together with the realization that technology cannot and should not be divided by borders, each project is providing the stimulus for the others, and all are making major contributions to the advancement of information processing technologies.

I think the benefits to the Japanese computer science community were very large. Comparing the visits I made to Japanese computer science laboratories in 1979, 1983, 1988, and 1992, I have seen tremendous progress. When the project started there were few world-class researchers in Japan on programming language design and implementation, on AI, on parallel processing, and so forth. Today the gap has completely disappeared; the quality and breadth of research I saw in 1992 is equal to that of America or Europe. I think the Fifth Generation project deserves much credit for this. By taking on very ambitious and exciting goals, they got much further than if they had taken on more realistic goals. I do not believe the Fifth Generation project is a failure because it failed to meet many of its ambitious goals; I think it is a great success because it helped move computer science research in Japan to world-class status and nudged computer science research throughout the world in a good, exciting direction.

(1) There never was an implementation of Concurrent Prolog that retained Prolog as a sublanguage. Eventually, Concurrent Prolog was redefined as a different language which provided concurrency and sacrificed the ability of Prolog programs to do implicit search.

(2) With the advantage of hindsight, this was a mistake because it cut off FGCS research from tools and platforms of other researchers. This approach was too closed, and only now is ICOT doing serious work on porting their software to standard platforms.

(3) I believe both of these implementations owe their existence to the interest in Prolog that the Fifth Generation project had generated.

(4) In 1987, we did submit a large project proposal just before there were a series of major management changes at DARPA. Each change delayed a decision by many months. After more than a year PARC lost patience and withdrew the proposal.

(5) Xerox also had a long history of business relations with AI Limited on other products.

(6) He now works at Sun Microsystems and is implementing Janus in his spare time.


[1.] Carlsson, M. and Kahn, K. LM-Prolog user manual. Tech. Rep. 24, UPMAIL, Uppsala University, 1983.

[2.] Chikayama, T. Unique features of ESP. In Proceedings of the International Conference on FGCS. 1984.

[3.] Clark, K.L. and Gregory, S. A relational language for parallel programming. Tech. Rep. DOC 81/16, Imperial College, London, 1981.

[4.] Foster, I. and Taylor, S. Flat Parlog: A basis for comparison. Int. J. Parall. Programm. 16, 2 (1988).

[5.] Furukawa, K., Takeuchi., A., Kunifuji, S., Yasukawa, H., Ohki, M. and Ueda, K. Mandala: A logic based knowledge programming system. In Proceedings of the International Conference on Fifth Generation Computer Systems. 1984.

[6.] Gudeman, D., De Bosschere, K. and Debray, S.K., JC: An efficient and portable sequential implementation of Janus. In the 1992 Joint International Conference and Symposium on Logic Programming (Nov. 1992).

[7.] Hardy, N. Keykos architecture. Oper. Syst. Rev. (Sept. 1985).

[8.] Haridi, S. and Janson, S. Kernel Andorra Prolog and its computation model. In Proceedings of the Seventh International Conference on Logic Programming (June 1990).

[9.] Hirata, M. Programming language DOC and its self-description, or, x = x considered harmful. In the Third Conference Proceedings of the Japan Society for Software Science and Technology (1986), pp. 69-72.

[10.] Kahn, K. Objects--A fresh look. In Proceedings of the Third European Conference on Object-Oriented Programming. Cambridge University Press, Cambridge, Mass., 1989, pp. 207-224.

[11.] Kahn, K. The compilation of Prolog programs without the use of a Prolog compiler. In Proceedings of the Fifth Generation Computer Systems Conference. 1984.

[12.] Kahn, K. and Kornfeld, W. Money as a concurrent logic program. In Proceedings of the North American Conference on Logic Programming. The MIT Press, Cambridge Mass., 1989.

[13.] Kahn, K. and Saraswat, V. Complete visualizations of concurrent programs and their executions. In Proceedings of the IEEE Visual Language Workshop. IEEE, New York, (Oct. 1990).

[14.] Kahn, K., Tribble, E., Miller, M. and Bobrow, D. Vulcan: Logical concurrent objects. In Research Directions in Object-Oriented Programming. The MIT Press, Cambridge, Mass., 1987. Also in Concurrent Prolog, MIT Press, Ehud Shapiro, Ed.

[15.] Levy, Y. Concurrent logic programming languages: Implementation and comparison. Ph.D. dissertation, Weizmann Institute of Science, Israel, 1989.

[16.] Miller, M.S. and Drexler, K.E. Markets and computation: Agoric open systems. In The Ecology of Computation. Elsevier Science Publishers/North-Holland, Amsterdam, 1988.

[17.] Saraswat, V. Concurrent constraint programming languages. Ph.D. dissertation, Carnegie-Mellon University, Pittsburgh, Pa., 1989.

[18.] Saraswat, V.A., Kahn, K. and Levy, J. Janus--A step towards distributed constraint programming. In Proceedings of the North American Logic Programming Conference. MIT Press, Cambridge, Mass., 1990.

[19.] Shapiro, E.Y. Systolic programming: A paradigm of parallel processing. In Proceedings of the Fifth Generation Computer Systems Conference. (1984).

[20.] Shapiro, E. A subset of Concurrent Prolog and its interpreter. Tech. Rep. CS83-06, Weizmann Institute, Israel, 1983.

[21.] Shapiro, E. and Takeuchi, A. Object oriented programming in Concurrent Prolog. New Gener. Comput. 1, (1983), 25-48.

[22.] Ueda, K. A new implementation technique for Flat GHC. In Proceedings of the Seventh International Conference on Logic Programming (June). 1990.

[23.] Ueda, K. Guarded Horn Clauses. Tech. Rep. TR-103, ICOT, 1985.

[24.] Yoshida, K. and Chikayama, T. A'um--A stream-based concurrent object-oriented language. In Proceedings of the International Conference on Fifth Generation Computer Systems, 1988, pp. 638-649.
COPYRIGHT 1993 Association for Computing Machinery, Inc.

Title Annotation: Technical; The 5th Generation Project: Personal Perspectives
Author: Kahn, Ken
Publication: Communications of the ACM
Date: Mar 1, 1993