
Abstracts from other ACM publications.

Petri-Net-Based Hypertext: Document Structure with Browsing Semantics

P. David Stotts and Richard Furuta

We present a formal definition of the Trellis model of hypertext and describe an authoring and browsing prototype called αTrellis that is based on the model. The Trellis model not only represents the relationships that tie individual pieces of information together into a document (i.e., the adjacencies), but specifies the browsing semantics to be associated with the hypertext as well (i.e., the manner in which the information is to be visited and presented). The model is based on Petri nets, and is a generalization of existing directed-graph-based forms of hypertext. The Petri net basis permits more powerful specification of what is to be displayed when a hypertext is browsed and permits application of previously developed Petri net analysis techniques to verify properties of the hypertext. A number of useful hypertext constructs, easily described in the Trellis model, are presented. These include the synchronization of simultaneous traversals of separate paths through a hypertext, the incorporation of access controls into a hypertext (i.e., specifying nodes that can be proven to be accessible only to certain classes of browsers), and construction of multiple specialized (tailored) versions from a single hypertext.
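
The browsing semantics can be made concrete with a small sketch. The following Python fragment is our illustration, not the authors' αTrellis code, and all names in it are assumptions: places hold tokens, a marked place is a displayed piece of content, and firing a transition follows a link. A transition with several input places fires only when all of them are marked, which is exactly the synchronization construct mentioned above.

    class PetriNetHypertext:
        """Places model content; a marked place is currently displayed."""

        def __init__(self):
            self.marking = {}             # place name -> token count
            self.transitions = {}         # link name -> (inputs, outputs)

        def add_place(self, place, tokens=0):
            self.marking[place] = tokens

        def add_link(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def enabled(self, name):          # a link can be followed only if
            inputs, _ = self.transitions[name]   # every input place is marked
            return all(self.marking[p] > 0 for p in inputs)

        def follow(self, name):           # firing a transition = following a link
            inputs, outputs = self.transitions[name]
            assert self.enabled(name)
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] += 1

        def displayed(self):              # the browsing semantics: what is shown
            return sorted(p for p, k in self.marking.items() if k > 0)

    net = PetriNetHypertext()
    net.add_place("intro", tokens=1)
    net.add_place("details")
    net.add_link("more", inputs=["intro"], outputs=["details"])
    net.follow("more")
    print(net.displayed())                # ['details']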

For Correspondence: Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742.

Formative Design Evaluation of SuperBook

Dennis E. Egan, Joel R. Remde, Louis M. Gomez, Thomas K. Landauer, Jennifer Eberhardt, and Carol C. Lochbaum

SuperBook is a hypertext browsing system designed to improve the usability of conventional documents. Successive versions of SuperBook were evaluated in a series of behavioral studies. Students searched for information in a statistics text presented either in conventional printed form or in SuperBook form. The best version of SuperBook enabled students to answer search questions more quickly and accurately than they could with the conventional text. Students wrote higher quality "open-book" essays using SuperBook than they did with the conventional text, and their subjective ratings of the documentation strongly favored SuperBook.

This work is a case study of formative design evaluation. Behavioral evaluation of the first version of SuperBook showed how design factors and user strategies affected search and established baseline performance measures with printed text. The second version of SuperBook was implemented with the goal of improving search accuracy and speed. User strategies that had proved effective in the first study were made very easy and attractive to use. System response time for common operations was greatly improved. Behavioral evaluation of the new SuperBook demonstrated its superiority to printed text and suggested additional improvements that were incorporated into "MiteyBook," a SuperBook implementation for PC-size screens. Search with MiteyBook proved to be approximately 25 percent faster and 25 percent more accurate than that obtained with a conventional printed book.

For Correspondence: Bellcore, 2L-347, 445 South St., Morristown, NJ 07960.

Context and Orientation in Hypermedia Networks

Kenneth Utting and Nicole Yankelovich

The core of hypermedia's power lies in the complex networks of links that can be created within and between documents. However, these networks frequently overwhelm the user and become a source of confusion. Within Intermedia, we have developed the Web View, a tool for viewing and navigating such networks with a minimum of user confusion and disorientation. The key factors in the Web View's success are a display that combines a record of the user's path through the network with a map of the currently available links; a scope line that summarizes the number of documents and links in the network; and a set of commands that permit the user to open documents directly from the Web View.

For Correspondence: Institute for Research in Information and Scholarship, Brown University, P.O. Box 1946, Providence, RI 02912; kdu@iris.brown.edu, ny@iris.brown.edu.

A Data Model for Flexible Hypertext Database Systems

Frank Wm. Tompa

Hypertext and other page-oriented databases cannot be schematized in the same manner as record-oriented databases. As a result, most hypertext databases implicitly employ a data model based on a simple, unrestricted graph. This paper presents a hypergraph model for maintaining page-oriented databases in such a way that some of the functionality traditionally provided by database schemes can be available to hypertext databases. In particular, the model formalizes identification of commonality in the structure, set-at-a-time database access, and definition of user-specific views. An efficient implementation of the model is also discussed.

For Correspondence: Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1.

Journal of the Association for Computing Machinery July 1989

Recognizing Circle Graphs in Polynomial Time

Csaba P. Gabor, Kenneth J. Supowit, and Wen-Lian Hsu

The main result of this paper is an O(|V| · |E|) time algorithm for deciding whether a given graph is a circle graph, that is, the intersection graph of a set of chords on a circle. The algorithm utilizes two new graph-theoretic results, regarding necessary induced subgraphs of graphs having neither articulation points nor similar pairs of vertices. Furthermore, as a substep of the algorithm, it is shown how to find in O(|V| · |E|) time a decomposition of a graph into prime graphs, thereby improving on a result of Cunningham.
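
To make the definition concrete, the following Python sketch (ours, not the paper's algorithm) builds the intersection graph of a given set of chords by brute force; the paper solves the much harder inverse problem of recognizing whether an abstract graph arises this way.

    from itertools import combinations

    def crossing(c1, c2):
        # Two chords cross iff exactly one endpoint of the second lies
        # strictly between the endpoints of the first along the circle.
        (a, b), (c, d) = sorted(c1), sorted(c2)
        return (a < c < b) != (a < d < b)

    def circle_graph(chords):
        """chords: list of (endpoint, endpoint) pairs, with endpoints given
        as distinct positions around the circle."""
        return [(i, j) for (i, c1), (j, c2)
                in combinations(enumerate(chords), 2) if crossing(c1, c2)]

    # Chords 0-2 and 1-3 interleave and therefore cross; 4-5 crosses neither.
    print(circle_graph([(0, 2), (1, 3), (4, 5)]))   # [(0, 1)]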

For Correspondence: C. P. Gabor, Department of Computer Science, Princeton University, Princeton, NJ 08544; K. J. Supowit, Department of Computer and Information Science, Ohio State University, Columbus, OH 43210; W.-L. Hsu, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL 60208.

Efficient Implementation of Graph Algorithms Using Contraction

Harold N. Gabow, Zvi Galil, and Thomas H. Spencer

A graph problem, which we call the (component) merging problem, is defined. Versions of the problem appear as bottlenecks in various graph algorithms. It is shown how to solve the problem efficiently, and how to derive improved solutions for two important special cases of the problem. The solutions make use of several new data structures. The performance of the data structures is sped up by introducing a new algorithmic tool called packets.

New observations are made about B-trees and F-heaps. Specifically, B-trees can be used simultaneously as a priority queue and a concatenable queue. It is also shown that F-heaps support some kinds of split operations with no loss of efficiency.

An immediate application of the solution to the simplest version of the merging problem is an O(t(m, n)) algorithm for finding minimum spanning trees in undirected graphs without using F-heaps, where t(m, n) = m log_2 log_2 log_d n, the graph has n vertices and m edges, and d = max(m/n, 2). Packets also improve the F-heap minimum spanning tree algorithm, giving the fastest algorithm currently known for this problem.
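
For readers who want the bottleneck made concrete, the sketch below shows component merging in its most familiar guise, Kruskal's minimum spanning tree algorithm with union-find. It is a baseline illustration only and uses none of the paper's packet machinery; every union below is a merge step of exactly the kind the paper speeds up.

    def kruskal(n, edges):                # edges: (weight, u, v) triples
        parent = list(range(n))

        def find(x):                      # find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                  # the component-merging step
                parent[ru] = rv
                tree.append((w, u, v))
        return tree

    print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (1, 2, 3)]))
    # [(1, 0, 1), (1, 2, 3), (2, 1, 2)]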

Using efficient solutions to the merging problem and the new observation about F-heaps, an O(n(t(m, n) + n log n)) algorithm is derived for finding a maximum weighted matching in general graphs. This settles an open problem posed by Tarjan, in which the weaker bound of O(nm log(n^2/m)) was conjectured.

For Correspondence: H. N. Gabow, Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80309; Z. Galil, Computer Science Department, Columbia University, New York, NY 10027; T. H. Spencer, Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180-3590.

Optimum Lopsided Binary Trees

Sanjiv Kapoor and Edward M. Reingold

Binary search trees with costs alpha and beta, respectively, on the left and right edges (lopsided search trees) are considered. The exact shape, minimum worst-case cost, and minimum average cost of lopsided trees of n internal nodes are determined for nonnegative alpha and beta; the costs are both roughly log_p(n + 1), where p is the unique real number in the interval (1, 2] satisfying 1/p^alpha + 1/p^beta = 1. Search procedures are given that come within a small additive constant of the lower bounds. Almost-optimum algorithms for the lopsided case of unbounded searching are also obtained. Some extensions to nonconstant costs are briefly sketched.

For Correspondence: S. Kapoor, Indian Institute of Technology, Delhi, Hauz Khas, New Delhi 110016, India; E. M. Reingold, Department of Computer Science, University of Illinois at Urbana-Champaign, 1304 West Springfield Avenue, Urbana, IL 61801.
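
As a quick numeric illustration of the cost formula above (ours, assuming costs of at least 1 so the base falls in the stated interval), the base p can be found by bisection:

    import math

    def lopsided_base(alpha, beta, eps=1e-12):
        # Solve 1/p**alpha + 1/p**beta = 1. The left side decreases from 2
        # toward 0 as p grows past 1, so the root is unique; for costs >= 1
        # it lies in (1, 2].
        lo, hi = 1.0 + 1e-9, 2.0
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if mid**-alpha + mid**-beta > 1:
                lo = mid                  # base still too small
            else:
                hi = mid
        return (lo + hi) / 2

    p = lopsided_base(1, 2)               # the golden ratio, about 1.618
    print(math.log(1000 + 1, p))          # rough cost for n = 1000, ~14.4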

Hierarchical Planarity Testing Algorithms

Thomas Lengauer

Using hierarchical definitions, one can describe very large graphs in small space. The blow-up from the length of the hierarchical description to the size of the graph can be as large as exponential. If the efficiency of graph algorithms is measured in terms of the length of the hierarchical description rather than in terms of the graph size, algorithms that do not exploit the hierarchy become hopelessly inefficient. Whether the hierarchy can be exploited to speed up the solution of graph problems depends on the hierarchical graph model. In the literature, hierarchical graph models have been described that allow almost no exploitation of the hierarchy [16]. In this paper, a hierarchical graph model that permits taking advantage of the hierarchy is presented. For this model, algorithms are given that test planarity of a hierarchically described graph in time linear in the length of the hierarchical description.

For Correspondence: Fachbereich 17, Universität Gesamthochschule Paderborn, Postfach 1621, 4790 Paderborn, West Germany.

A Trade-Off between Space and Efficiency for Routing Tables

David Peleg and Eli Upfal

Two conflicting goals play a crucial role in the design of routing schemes for communication networks. A routing scheme should use paths that are as short as possible for routing messages in the network, while keeping the routing information stored in the processors' local memory as succinct as possible. The efficiency of a routing scheme is measured in terms of its stretch factor: the maximum ratio between the length of a route computed by the scheme and that of a shortest path connecting the same pair of vertices.
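
The definition translates directly into code. The following sketch is illustrative only, and its inputs and names are our assumptions:

    def stretch_factor(route_len, dist):
        """route_len, dist: dicts mapping (u, v) pairs to the scheme's route
        length and the true shortest-path distance, respectively."""
        return max(route_len[p] / dist[p] for p in dist)

    dist      = {("a", "b"): 2, ("a", "c"): 3}
    route_len = {("a", "b"): 2, ("a", "c"): 5}    # the a-c route detours
    print(stretch_factor(route_len, dist))        # 5/3, about 1.67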

Most previous work has concentrated on finding good routing schemes (with a small fixed stretch factor) for special classes of network topologies. In this paper the problem for general networks is studied, and the entire range of possible stretch factors is examined. The results exhibit a trade-off between the efficiency of a routing scheme and its space requirements. Almost tight upper and lower bounds for this trade-off are presented. Specifically, it is proved that any routing scheme for general n-vertex networks that achieves a stretch factor k ≥ 1 must use a total of Ω(n^(1+1/(2k+4))) bits of routing information in the networks. This lower bound is complemented by a family A(k) of hierarchical routing schemes (for every k ≥ 1) for unit-cost general networks, which guarantee a stretch factor of O(k), require storing a total of O(k^3 · n^(1+1/k) · log n) bits of routing information in the network, name the vertices with O(log^2 n)-bit names, and use O(log n)-bit headers.

For Correspondence: Department of Applied Mathematics, The Weizmann Institute, Rehovot 76100 Israel.

Simple Constant-Time Consensus Protocols in Realistic Failure Models

Benny Chor, Michael Merritt, and David B. Shmoys

Using simple protocols, it is shown how to achieve consensus in constant expected time, within a variety of fail-stop and omission failure models. Significantly, the strongest models considered are completely asynchronous. All of the results are based on distributively flipping a coin, which is usable by a significant majority of the processors. Finally, a nearly matching lower bound is also given for randomized protocols for consensus.
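
The flavor of the coin-flipping idea can be shown in a toy simulation. The sketch below is our cartoon only: it is synchronous and failure-free, with a perfect coin visible to everyone, and should not be mistaken for the paper's protocols, which tolerate asynchrony and processor failures.

    import random

    def consensus(values, seed=1):
        rng = random.Random(seed)
        n, rounds = len(values), 0
        while len(set(values)) > 1:
            rounds += 1
            counts = {v: values.count(v) for v in set(values)}
            majority = max(counts, key=counts.get)
            coin = rng.choice([0, 1])     # one coin value seen by everyone
            keep = counts[majority] > n // 2
            # With a strict majority, all adopt it; otherwise the shared
            # coin breaks the tie, so agreement arrives in O(1) rounds.
            values = [majority if keep else coin for _ in values]
        return values[0], rounds

    print(consensus([0, 1, 0, 1, 1]))     # (1, 1): strict majority wins at once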

For Correspondence: B. Chor, Technion, Haifa, Israel; M. Merritt, AT&T Bell Laboratories, Murray Hill, NJ 07974; D.B. Shmoys, Room 2-376, Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139.

Acyclic Fork-Join Queuing Networks

Francois Baccelli, William A. Massey, and Don Towsley

In this paper the class of acyclic fork-join queuing networks that arise in various applications, including parallel processing and flexible manufacturing, is studied. In such queuing networks, a fork describes the simultaneous creation of several new customers, which are sent to different queues. The corresponding join occurs when the services of all these new customers are completed. The evolution equations that govern the behavior of such networks are derived. From this, the stability conditions are obtained and upper and lower bounds on the network response times are developed. These bounds are based on various stochastic ordering principles and on the notion of association of random variables.
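
A small Monte Carlo sketch (ours, not the paper's analysis, with illustrative rates) shows the fork-join mechanics: each arrival forks into two parallel single-server queues, and the join completes when the slower sibling finishes. The maximum in the last line is where association-based bounds enter.

    import random

    def fork_join_mean_response(n, lam=1.0, mu=2.0, seed=0):
        rng = random.Random(seed)
        t, free = 0.0, [0.0, 0.0]         # arrival clock; when each server frees up
        total = 0.0
        for _ in range(n):
            t += rng.expovariate(lam)     # next customer arrives and forks in two
            finish = []
            for i in range(2):
                start = max(t, free[i])   # queue if this sibling's server is busy
                free[i] = start + rng.expovariate(mu)
                finish.append(free[i])
            total += max(finish) - t      # the join waits for the slower sibling
        return total / n

    print(fork_join_mean_response(100000))   # mean response time, about 1.4 here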

For Correspondence: F. Baccelli, INRIA, Sophia-Antipolis, 06565 Valbonne, France; W.A. Massey, AT&T Bell Laboratories, Murray Hill, NJ 07974; D. Towsley, University of Massachusetts, Amherst, MA 01003.

Optimal Bounds for Decision Problems on the CRCW PRAM

Paul Beame and Johan Hastad

Optimal Ω(log n/log log n) lower bounds on the time for CRCW PRAMs with polynomially bounded numbers of processors or memory cells to compute parity and a number of related problems are proven. A strict time hierarchy of explicit Boolean functions of n bits on such machines that holds up to O(log n/log log n) time is also exhibited. That is, for every time bound T within this range a function is exhibited that can be easily computed using polynomial resources in time T but requires more than polynomial resources to be computed in time T - 1. Finally, it is shown that almost all Boolean functions of n bits require log n - log log n + Ω(1) time when the number of processors is at most polynomial in n. The bounds place no restrictions on the uniformity of the algorithms or on the instruction sets of the machines.

For Correspondence: P. Beame, Computer Science Department, FR-35, University of Washington, Seattle, Washington 98195; J. Hastad, Royal Institute of Technology, Stockholm, S-100-44, Sweden.

Computing Surveys June 1989

Control Strategies for Two-Player Games

Bruce Abramson

Computer games have been around for almost as long as computers. Most of these games, however, have been designed in a rather ad hoc manner because many of their basic components have never been adequately defined. In this paper some deficiencies in the standard model of computer games, the minimax model, are pointed out and the issues that a general theory must address are outlined. Most of the discussion is done in the context of control strategies, or sets of criteria for move selection. A survey of control strategies brings together results from two fields: implementations of real games and theoretical predictions derived on simplified game trees. The interplay between these results suggests, as the basis for a formal theory, a series of open problems that have arisen during the course of both analytic experimentation and practical experience.

For Correspondence: Computer Science Department, University of Southern California, Los Angeles, CA 90089.
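
For reference, the minimax model under discussion reduces to a few lines of code. This toy version is ours and omits the alpha-beta pruning, depth cutoffs, and refinements that the survey examines; the game tree is given as nested lists with leaves as static evaluation scores.

    def minimax(node, maximizing=True):
        if isinstance(node, (int, float)):   # leaf: static evaluation
            return node
        children = (minimax(c, not maximizing) for c in node)
        return max(children) if maximizing else min(children)

    # Max to move; each sublist is a min node over its leaf evaluations.
    tree = [[3, 12], [2, 4], [1, 8]]
    print(minimax(tree))                     # 3: the best guaranteed outcome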

Explanation-Based Learning: A Survey of Programs and Perspectives

Thomas Ellman

Explanation-based learning (EBL) is a technique by which an intelligent system can learn by observing examples. EBL systems are characterized by the ability to create justified generalizations from single training instances. They are also distinguished by their reliance on background knowledge of the domain under study. Although EBL is usually viewed as a method for performing generalization, it can be viewed in other ways as well. In particular, EBL can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization, and analogy.

This paper provides a general introduction to the field of explanation-based learning. Considerable emphasis is placed on showing how EBL combines the four learning tasks mentioned above. The paper begins with a presentation of an intuitive example of the EBL technique. Subsequently, EBL is placed in its historical context and the relation between EBL and other areas of machine learning is described. The major part of this paper is a survey of selected EBL programs, which have been chosen to show how EBL manifests each of the four learning tasks. Attempts to formalize the EBL technique are also briefly discussed. The paper concludes with a discussion of the limitations of EBL and the major open questions in the field.

For Correspondence: Columbia University, Department of Computer Science, New York, NY 10027.

Applications of Combinatorial Designs in Computer Science

Charles J. Colbourn and Paul C. Van Oorschot

The theory of combinatorial designs has been used in widely different areas of computation concerned with the design and analysis of both algorithms and hardware. Combinatorial designs capture a subtle balancing property that is inherent in many difficult problems and hence can provide a sophisticated tool for addressing these problems. The role of combinatorial designs in solving many problems that are basic to the field of computing is explored in this paper. Case studies of many applications of designs to computation are given; these constitute a first survey, which provides a representative sample of uses of designs. More importantly, they suggest paradigms in which designs can be used profitably in algorithm design and analysis.
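
A standard small example, chosen by us rather than drawn from the paper, shows the balancing property directly: the Fano plane, a 2-(7, 3, 1) design in which every pair of points occurs together in exactly one block.

    from itertools import combinations

    # The seven lines of the Fano plane over points 0..6.
    blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
              {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

    for pair in combinations(range(7), 2):
        # Each unordered pair of points must lie in exactly one block.
        assert sum(set(pair) <= b for b in blocks) == 1

    print("every pair of points lies in exactly one of the 7 blocks")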

For Correspondence: Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.

ACM Transactions on Computer Systems August 1989

Preserving and Using Context Information in Interprocess Communication

Larry L. Peterson, Nick C. Buchholz, and Richard D. Schlichting

When processes in a network communicate, the messages they exchange define a partial ordering of externally visible events. While the significance of this partial order in distributed computing is well understood, it has not been made an explicit part of the communication substrate upon which distributed programs are implemented. This paper describes a new interprocess communication mechanism, called Psync, that explicitly encodes this partial ordering with each message. The paper shows how Psync can be efficiently implemented on an unreliable communications network, and it demonstrates how conversations serve as an elegant foundation for ordering messages exchanged in a distributed computation and for recovering from processor failures.
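
The encoding idea can be sketched in a few lines. The following is our illustration of the principle, not the Psync interface: each message carries as context the identifiers of the messages its sender had already seen, which suffices for any receiver to reconstruct the partial order.

    class Conversation:
        def __init__(self):
            self.context = {}             # message id -> ids it depends on
            self.seen = set()

        def send(self, msg_id):
            self.context[msg_id] = set(self.seen)   # attach current context
            self.seen.add(msg_id)

        def receive(self, msg_id, ctx):
            self.context[msg_id] = set(ctx)
            self.seen.add(msg_id)

        def precedes(self, a, b):         # did message a happen before b?
            ctx = self.context[b]
            return a in ctx or any(self.precedes(a, m) for m in ctx)

    alice, bob = Conversation(), Conversation()
    alice.send("m1")
    bob.receive("m1", alice.context["m1"])
    bob.send("m2")                        # m2's context now contains m1
    print(bob.precedes("m1", "m2"))       # True: m1 -> m2 in the partial order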

For Correspondence: Department of Computer Science, University of Arizona, Tucson, AZ 85721.

Integrating Security in a Large Distributed System

M. Satyanarayanan

Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.
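
One plausible reading of such a combined scheme, offered purely as our illustration rather than Andrew's actual rules, is that a request must pass both the access-list check and the mode-bit check:

    # Hypothetical data and names; not Andrew's implementation.
    ACL = {("alice", "/proj"): {"read", "write"},
           ("bob",   "/proj"): {"read"}}

    def allowed(user, directory, mode_bits, right):
        bit = {"read": 0o4, "write": 0o2}[right]
        acl_ok  = right in ACL.get((user, directory), set())
        mode_ok = bool(mode_bits & bit)    # checks the "other" bits, for simplicity
        return acl_ok and mode_ok          # both mechanisms must grant the right

    print(allowed("alice", "/proj", 0o4, "write"))   # False: mode bits deny
    print(allowed("bob",   "/proj", 0o4, "read"))    # True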

For Correspondence: School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3890.

Verified Data Transfer Protocols with Variable Flow Control

A. Udaya Shankar

We present and verify a sliding window protocol which uses modulo-N sequence numbers to achieve reliable flow-controlled data transfer between a producer and a consumer connected by unreliable channels. The consumer's data needs are represented by a receive window whose size can vary with time. The producer entity sends segments of data words that lie within the consumer's receive window. The consumer entity sends acknowledgment, selective acknowledgment, and selective reject messages that inform the producer entity of the current receive window size, the data word next expected, and the reception (or lack of reception) of out-of-sequence data segments. Our protocol is, therefore, a proper extension of existing transport and data link protocol standards such as TCP, ISO TP, HDLC, ADCCP, and so forth.

We consider two types of unreliable channels. The first type, referred to as transport channels, can lose, duplicate, and reorder messages to an arbitrary extent, but impose an upper bound on message lifetimes (which can be very large, e.g., days). The second type, referred to as data link channels, can only lose messages. For both types of channels, we obtain the minimum value of N that ensures safe operation without imposing any constraints on the retransmissions of messages or on the data segment sizes. Thus, any retransmission or acknowledgment policy that optimizes the protocol's performance can be used. For transport channels, this value of N is a function of the maximum message transmission rate, the maximum message lifetime, and the maximum receive window size. For data link channels, this value of N is a function only of the maximum receive window size. We verify progress under three different liveness assumptions: retransmissions initiated by both entities, only by the producer entity, and only by the consumer entity. The protocol also satisfies a convenient noninterference safety property between the acknowledgment, selective acknowledgment, and selective reject messages.
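
For intuition, the textbook special case of this result, selective repeat over a loss-only channel with a fixed receive window of size W, requires N ≥ 2W; the sketch below (ours, not the paper's) shows the modulo-N window membership test that underlies such bounds.

    def in_window(seq, base, size, N):
        # A sequence number falls in the receive window iff its circular
        # distance from the window base is less than the window size.
        return (seq - base) % N < size

    N, W = 8, 4                           # N = 2W, the safe textbook choice
    base = 6                              # window covers 6, 7, 0, 1 (mod 8)
    print([s for s in range(N) if in_window(s, base, W, N)])   # [0, 1, 6, 7]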

The protocols are specified as event-driven systems and verified hierarchically in two major stages. First, we verify that correct flow-controlled data transfer results if the sequence numbers in the channels satisfy certain correct interpretation bounds, irrespective of the types of errors that the channels may have. Second, for both transport and data link channels, we verify that the correct interpretation bounds are enforced by the corresponding minimum values of N. For the verification of the transport channel case, we use a system model with continuous measures of time in which real-time constraints can be formally specified and verified.

For Correspondence: Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742.
COPYRIGHT 1989 Association for Computing Machinery, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 1989 Gale, Cengage Learning. All rights reserved.

Publication: Communications of the ACM
Date: Oct 1, 1989