
Abstracts from other ACM publications.

ACM Transactions on Database Systems December 1989

Imprecise Schema: A Rationale for Relations with Embedded Subrelations

Howard M. Dreizen and Shi-Kuo Chang

Exceptional conditions are anomalous data which meet the intent of a schema but not the schema definition, represent a small proportion of the database extension, and may become known only after the schema is in use. Admission of exceptional conditions is argued to suggest a representation that locally stretches the schema definition by use of relations with embedded subrelations. Attempted normalization of these relations to 1NF does not yield the static schema typically associated with such transformations. A class of relations, termed Exceptional Condition Nested Form (ECNF), is defined which allows the necessary representation of exceptional conditions while containing sufficient restrictions to prevent arbitrary and chaotic inclusion of embedded subrelations. Queries on a subset of exceptional conditions, the exceptional constraints, are provided an interpretation via an algorithm that transforms ECNF relations into 1NF relations containing two types of null values. Extensions of relational algebraic operators, suitable for interactive query navigation, are defined for use with ECNF relations containing all forms of exceptional conditions.

For Correspondence: H.M. Dreizen, EPIX, Inc., 310 Anthony Trail, Northbrook, IL 60062; S.K. Chang, Computer Science Department, Univ. of Pittsburgh, Pittsburgh, PA 15260.

Integrity = Validity + Completeness

Amihai Motro

Database integrity has two complementary components: validity, which guarantees that all false information is excluded from the database, and completeness, which guarantees that all true information is included in the database. This article describes a uniform model of integrity for relational databases that considers both validity and completeness. To a large degree, this model subsumes the prevailing model of integrity (i.e., integrity constraints). One of the features of the new model is the determination of the integrity of answers issued by the database system in response to user queries. To users, answers that are accompanied by such detailed certifications of their integrity are more meaningful. First, the model is defined and discussed. Then, a specific mechanism is described that implements this model. With this mechanism, the determination of the integrity of an answer is a process analogous to the determination of the answer itself.

For Correspondence: Computer Science Department, University of Southern California, University Park, Los Angeles, CA 90089-0782.

Using Semantic Knowledge of Transactions to Increase Concurrency

Abdel Aziz Farrag and M. Tamer Ozsu

When the only information available about transactions is syntactic information, serializability is the main correctness criterion for concurrency control. Serializability requires that the execution of each transaction must appear to every other transaction as a single atomic step (i.e., the execution of the transaction cannot be interrupted by other transactions). Many researchers, however, have realized that this requirement is unnecessarily strong for many applications and can significantly increase transaction response time. To overcome this problem, a new approach for controlling concurrency that exploits the semantic information available about transactions to allow controlled nonserializable interleavings has recently been proposed. This approach is useful when the cost of producing only serializable interleavings is unacceptably high. The main drawback of the approach is the extra overhead incurred by utilizing the semantic information. We examine this new approach in this paper and discuss its strengths and weaknesses. We introduce a new formalization for the concurrency control problem when semantic information is available about the transactions. This semantic information takes the form of transaction types, transaction steps, and transaction breakpoints. We define a new class of "safe" schedules called relatively consistent (RC) schedules. This class contains serializable as well as nonserializable schedules. We prove that the execution of an RC schedule cannot violate consistency and propose a new concurrency control mechanism that produces only RC schedules. Our mechanism assumes fewer restrictions on the interleavings among transactions than previously introduced semantic-based mechanisms.

For Correspondence: A.A. Farrag, Department of Mathematics, Statistics, and Computing Science, Dalhousie University, Halifax, Nova Scotia, Canada B3H 3J5; M.T. Ozsu, Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2H1.

Query Processing Techniques in the Summary-Table-by-Example Database Query Language

Gultekin Ozsoyoglu, Victor Matos, and Z. Meral Ozsoyoglu

Summary-Table-by-Example (STBE) is a graphical language suitable for statistical database applications. STBE queries have a hierarchical subquery structure and manipulate summary tables and relations with set-valued attributes.

The hierarchical arrangement of STBE queries naturally implies a tuple-by-tuple subquery evaluation strategy (similar to the nested loops join implementation technique) which may not be the best query processing strategy. In this paper we discuss the query processing techniques used in STBE. We first convert an STBE query into an "extended" relational algebra (ERA) expression. Two transformations are introduced to remove the hierarchical arrangement of subqueries so that query optimization is possible. To solve the "empty partition" problem of aggregate function evaluation, directional join (one-sided outer-join) is utilized. We give the algebraic properties of the ERA operators to obtain an "improved" ERA expression. Finally, we briefly discuss the generation of alternative implementations of a given ERA expression.

STBE is implemented in a prototype statistical database management system. We discuss the STBE-related features of the implemented system.
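
To make the "empty partition" problem mentioned above concrete, here is a small Python illustration with invented data (STBE itself is graphical, and this is not its query syntax): an inner-join-style aggregate drops groups with no matching tuples, while a directional (one-sided outer) join preserves them.

```python
# Invented toy data: categories play the role of the preserved (outer) side.
categories = ["math", "physics", "history"]
enrollments = [("math", "ann"), ("math", "bob"), ("physics", "eve")]

# Inner-join-style aggregation: the empty "history" partition vanishes.
inner = {c: sum(1 for c2, _ in enrollments if c2 == c)
         for c in categories if any(c2 == c for c2, _ in enrollments)}

# Directional (one-sided outer) join: empty partitions survive with count 0.
outer = {c: sum(1 for c2, _ in enrollments if c2 == c) for c in categories}

print(inner)  # {'math': 2, 'physics': 1}
print(outer)  # {'math': 2, 'physics': 1, 'history': 0}
```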

For Correspondence: Department of Computer Engineering and Science and Center for Automation and Intelligent Systems, Case Institute of Technology, Case Western Reserve University, Cleveland, OH 44106.

On the Effect of Join Operations on Relation Sizes

Daniele Gardy and Claude Puech

We propose a generating function approach to the problem of evaluating the sizes of derived relations in a relational database framework. We present a model of relations and show how to use it to deduce probabilistic estimations of derived relation sizes. These are found to asymptotically follow normal distributions under a variety of assumptions.
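
For orientation, one classical special case (an illustration only, not the authors' generating-function model, which covers more general derived relations): if R and S join on an attribute whose values are drawn independently and uniformly from a domain of size d, the expected size of the equijoin is

$$E[\,|R \bowtie S|\,] = \frac{|R|\,|S|}{d}.$$

The paper's results show that, under a variety of such assumptions, the actual sizes concentrate around estimates of this kind and are asymptotically normally distributed.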

For Correspondence: D. Gardy, Laboratoire de Recherche en Informatique, CNRS UA, Bat. 490, Universite de Paris-Sud, 91405 Orsay, Cedex, France; C. Puech, Laboratoire d'Informatique de l'Ecole Normale Superieure, Ecole Normale Superieure, 45 Rue d'Ulm, 75230 Paris Cedex 05, France.

ACM Transactions on Programming Languages and Systems January 1990

A Mechanism for Environment Integration

Geoffrey Clemm and Leon Osterweil

This paper describes research associated with the development and evaluation of Odin, an environment integration system based on the idea that tools should be integrated around a centralized store of persistent software objects. The paper describes this idea in detail and then presents the Odin architecture, which features such notions as the typing of software objects, composing tools out of modular tool fragments, optimizing the storage and rederivation of software objects, and isolating tool interconnectivity information in a single centralized object. The paper then describes some projects that have used Odin to integrate tools on a large scale. Finally, it discusses the significance of this work and the conclusions that can be drawn about superior software environment architectures.

For Correspondence: G. Clemm, Evolutionary Software, 55 Commonwealth Road, Watertown, MA 02172; L. Osterweil, Department of Information and Computer Science, University of California at Irvine, Irvine, CA 92717.

Interprocedural Slicing Using Dependence Graphs

Susan Horwitz, Thomas Reps, and David Binkley

The notion of a program slice, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point p and a variable x; the slice consists of all statements of the program that might affect the value of x at point p. This paper concerns the problem of interprocedural slicing: generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a system dependence graph, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point p and an arbitrary variable, a slice must be taken with respect to a variable that is defined or used at p.)

The chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent transitive dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.
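
As a minimal illustration of the underlying slicing idea (intraprocedural only; the paper's system dependence graphs add the call, parameter-linkage, and transitive interprocedural edges discussed above, and slice with a two-phase traversal): a backward slice is backward reachability over dependence edges. The program and its dependence edges below are invented.

```python
def backward_slice(deps, criterion):
    """deps maps each statement label to the labels it depends on
    (data or control dependences); the slice is everything reachable
    backwards from the slicing criterion."""
    seen, stack = set(), [criterion]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(deps.get(s, ()))
    return seen

# Invented 4-statement program: 1: x = 0;  2: y = 1;  3: x = x + 1;  4: print(x)
deps = {3: [1], 4: [3]}
print(sorted(backward_slice(deps, 4)))  # [1, 3, 4] -- statement 2 is sliced away
```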

For Correspondence: Computer Sciences Department, University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI 53706.

Production Trees: A Compact Representation of Parsed Programs

Vance E. Waddle

Abstract syntax trees were devised as a compact alternative to parse trees, because parse trees are known to require excessive amounts of storage to represent parsed programs. However, the savings that abstract syntax trees actually achieve have never been precisely described because the necessary analysis has been missing. Without it, one can only measure particular cases that may not adequately represent all the possible behaviors.

We introduce a data structure, production trees, that is more compact than either abstract syntax trees or parse trees. Further, we develop the necessary analysis to characterize the storage requirements of parse trees, abstract syntax trees, and production trees and relate the size of all three to the size of the program's text. The analysis yields the parameters needed to characterize these storage behaviors over their entire range. We flesh out the analysis by measuring these parameters for a sample of "C" programs. For these programs, production trees were from 1/15 to 1/23 the size of the corresponding parse tree, 1/2.7 the size of a (minimal) abstract syntax tree, and averaged only 2.83 times the size of the program text.

For Correspondence: IBM T.J. Watson Research Center, P.O.B. 218, Yorktown Heights, NY 10598.

A Modular Technique for the Design of Efficient Distributed Leader Finding Algorithms

E. Korach, S. Kutten, and S. Moran

A general, modular technique for designing efficient leader finding algorithms in distributed, asynchronous networks is developed. This technique reduces the problem of efficient leader finding to a simpler problem of efficient serial traversing of the corresponding network. The message complexity of the resulting leader finding algorithms is bounded by (f(n) + n)(log_2 k + 1) [(f(m) + n)(log_2 k + 1)], where n is the number of nodes in the network [m is the number of edges in the network], k is the number of nodes that start the algorithm, and f(n) [f(m)] is the message complexity of traversing the nodes [edges] of the network. The time complexity of these algorithms may be as large as their message complexity. This technique does not require that the FIFO discipline is obeyed by the links. The local memory needed for each node, besides the memory needed for the traversal algorithm, is logarithmic in the maximal identity of a node in the network. This result achieves in a unified way the best-known upper bounds on the message complexity of leader finding algorithms for circular, complete, and general networks. It is also shown to be applicable to other classes of networks, and in some cases the message complexity of the resulting algorithms is better by a constant factor than that of previously known algorithms.

For Correspondence: Department of Computer Science, Technion, Israel Institute of Technology, Haifa, Israel 32000.

A Distributed Deadlock Detection Algorithm for CSP-Like Communication

Shing-Tsaan Huang

An algorithm for detecting deadlocks in distributed systems with CSP-like communication is proposed. Unlike previous works, the proposed algorithm avoids periodically sending deadlock-detecting messages by the processes and requires no local storage for the processes whose size is predetermined by the number of processes in the system. The algorithm is proven to have the following properties: (0) it never detects false deadlocks; (1) it has only one process in a knot report the deadlock; and (2) it detects every true deadlock in finite time.

For Correspondence: Institute of Computer Science, National Tsing-Hua University, Hsinchu, Taiwan (30043), R.O.C.

A Correctness Proof for Combinator Reduction with Cycles

William M. Farmer, John D. Ramsdell, and Ronald J. Watro

Turner popularized a technique of Wadsworth in which a cyclic graph rewriting rule is used to implement reduction of the fixed point combinator Y. We examine the theoretical foundation of this approach. Previous work has concentrated on proving that graph methods are, in a certain sense, sound and complete implementations of term methods. This work is inapplicable to the cyclic Y rule, which is unsound in this sense since graph normal forms can exist without corresponding term normal forms. We define and prove the correctness of combinator head reduction using the cyclic Y rule; the correctness of normal reduction is an immediate consequence. Our proof avoids the use of infinite trees to explain cyclic graphs. Instead, we show how to consider reduction with cycles as an optimization of reduction without cycles.
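
The contrast between the two rules can be sketched in Python (an analogy only: Turner's implementation rewrites combinator graphs in a reduction machine, not language-level closures; the functional `fac` below is an invented example). The term-style rule Y f -> f (Y f) rebuilds a redex on every unwinding, while the cyclic rule ties the result back into itself, giving one shared node.

```python
def y_term(f):
    # Term-style rule: Y f -> f (Y f); a fresh unwinding is built on every use.
    return f(lambda: y_term(f))

def y_graph(f):
    # Cyclic graph rule: Y f -> a where a = f(a); the thunk closes over the
    # very result it is passed to, so the structure contains a cycle.
    def thunk():
        return r
    r = f(thunk)
    return r

# A functional whose fixed point is factorial; rec is a thunk for the recursion.
fac = lambda rec: (lambda n: 1 if n == 0 else n * rec()(n - 1))

print(y_term(fac)(5), y_graph(fac)(5))  # 120 120
```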

For Correspondence: The MITRE Corporation, Burlington Road, Bedford, MA 01730.

Journal of the Association for Computing Machinery October 1989

Learnability and the Vapnik-Chervonenkis Dimension

Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth

Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
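
The combinatorial definition is easy to check by brute force on a finite domain (a hedged sketch; the paper's setting is concept classes over E^n, where the dimension is established analytically rather than by enumeration, and `max_d` below is an arbitrary search cutoff).

```python
from itertools import combinations

def shatters(points, concepts):
    """True iff the concepts realize every possible labeling of the points."""
    labelings = {tuple(c(p) for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

def vc_dimension(domain, concepts, max_d=4):
    d = 0
    for k in range(1, max_d + 1):
        if any(shatters(s, concepts) for s in combinations(domain, k)):
            d = k
    return d

# Threshold concepts {x : x >= t} on a small discrete domain have VC dimension 1:
# any single point is shattered, but no pair, since labeling the smaller point
# "in" and the larger point "out" is unrealizable.
thresholds = [(lambda t: (lambda x: x >= t))(t) for t in range(9)]
print(vc_dimension(range(8), thresholds))  # 1
```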

For Correspondence: A. Blumer, Department of Mathematics and Computer Science, Tufts University, Medford, MA 02155; A. Ehrenfeucht, Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80302; D. Haussler and M.K. Warmuth, Department of Computer and Information Sciences, University of California at Santa Cruz, Santa Cruz, CA 95064.

P-Uniform Circuit Complexity

Eric W. Allender

Much complexity-theoretic work on parallelism has focused on the class NC, which is defined in terms of logspace-uniform circuits. Yet P-uniform circuit complexity is in some ways a more natural setting for studying feasible parallelism. In this paper, P-uniform NC (PUNC) is characterized in terms of space-bounded AuxPDAs and alternating Turing Machines with bounded access to the input. The notions of general-purpose and special-purpose computation are considered, and a general-purpose parallel computer for PUNC is presented. It is also shown that NC = PUNC if all tally languages in P are in NC; this implies that the NC = PUNC question and the NC = P question are both instances of the ASPACE(S(n)) = ASPACE,TIME(S(n), S(n)^O(1)) question. As a corollary, it follows that NC = PUNC implies PSPACE = DTIME(2^(n^O(1))).

For Correspondence: Department of Computer Science, Rutgers University, Hill Center, New Brunswick, NJ 08903.

Distributed Bisimulations

Ilaria Castellani and Matthew Hennessy

A new equivalence between concurrent processes is proposed. It generalizes the well-known bisimulation equivalence to take into account the distributed nature of processes. The result is a noninterleaving semantic theory; concurrent processes are differentiated from processes that are nondeterministic but sequential. The new equivalence, together with its observational version, is investigated for a subset of the language CCS, and various algebraic characterizations are obtained.

For Correspondence: I. Castellani, INRIA, Ecole des Mines, Sophia-Antipolis, 06565 Valbonne Cedex, France; M. Hennessy, Computer Science, Mathematics and Physics Building, University of Sussex, Falmer, Brighton, Sussex BN1 9QH, England.

Finding Minimum-Cost Circulations by Canceling Negative Cycles

Andrew V. Goldberg and Robert E. Tarjan

A classical algorithm for finding a minimum-cost circulation consists of repeatedly finding a residual cycle of negative cost and canceling it by pushing enough flow around the cycle to saturate an arc. We show that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm. This gives a very simple strongly polynomial algorithm that uses no scaling. A variant of the algorithm that uses dynamic trees runs in O(nm(log n) min{log(nC), m log n}) time on a network of n vertices, m arcs, and arc costs of maximum absolute value C. This bound is comparable to those of the fastest previously known algorithms.
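
For concreteness, the classical cycle-canceling loop is sketched below in Python (integer capacities and costs assumed; names invented). It finds negative residual cycles with Bellman-Ford and cancels them; the paper's contribution concerns the judicious choice of cycle, e.g., canceling minimum-mean cycles, which this generic sketch does not implement.

```python
def find_negative_cycle(n, res):
    """Bellman-Ford from zero potentials; returns a list of residual-arc
    indices forming a negative-cost cycle, or None if none exists."""
    dist, pred = [0.0] * n, [-1] * n
    for _ in range(n):
        x = -1
        for j, (u, v, cap, cost) in enumerate(res):
            if cap > 0 and dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                pred[v], x = j, v
        if x == -1:
            return None          # a full pass with no relaxation: no cycle
    for _ in range(n):           # relaxed on the n-th pass: walk onto the cycle
        x = res[pred[x]][0]
    cycle, v = [], x
    while True:
        cycle.append(pred[v])
        v = res[pred[v]][0]
        if v == x:
            return cycle

def min_cost_circulation(n, arcs):
    """arcs: (u, v, capacity, cost) with integer data; returns per-arc flow."""
    res = []
    for u, v, cap, cost in arcs:
        res += [[u, v, cap, cost], [v, u, 0, -cost]]   # arc plus reverse arc
    while True:
        cycle = find_negative_cycle(n, res)
        if cycle is None:
            break
        delta = min(res[j][2] for j in cycle)          # bottleneck residual cap
        for j in cycle:
            res[j][2] -= delta
            res[j ^ 1][2] += delta                     # pairs sit at 2k, 2k+1
    return [res[2 * k + 1][2] for k in range(len(arcs))]
```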

For Correspondence: A.V. Goldberg, Department of Computer Science, Stanford University, Stanford, CA 94305; R.E. Tarjan, Department of Computer Science, Princeton University, Princeton, NJ 08544.

Passes, Sweeps, and Visits in Attribute Grammars

Joost Engelfriet and Gilberto File

Theoretical results are presented on multi-pass (both left-to-right and alternating), multi-sweep, and multi-visit attribute grammars. For each of these, a pure type and a simple type are distinguished: The pure attribute grammars are defined by nondeterministic attribute evaluators, and the simple ones by the corresponding (usual) deterministic evaluators. The time complexity of deciding membership in these classes of attribute grammars is studied. In general, this is harder for the pure classes than for the simple ones, for which it is either polynomial or NP-complete. The expressive power of the eight classes is compared by studying the translations they can compute. It is shown that sweeps are more powerful than passes, and visits are more powerful than sweeps.

For Correspondence: J. Engelfriet, Department of Computer Science, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands; G. File, Facolta di Matematica, Universita di Padova, Via Belzoni 7, 35100 Padova, Italy.

On the Design of Some Systolic Algorithms

Jan L.A. Van de Snepscheut and Johan B. Swenker

The design of systolic algorithms is discussed, that is, algorithms that may efficiently be executed by a synchronous array of cells that perform local communications only. Systolic algorithms are designed through techniques developed in the context of sequential programming. Heuristics are given that guide the programmer in the design of a variety of efficient solutions.

For Correspondence: Department of Mathematics and Computing Science, Groningen University, The Netherlands.

A Uniform Approach toward Handling Atomic and Structured Information in the Nested Relational Database Model

Marc Gyssens, Jan Paredaens, and Dirk Van Gucht

The algebras and query languages for nested relations defined thus far do not allow us to "flatten" a relation scheme by disregarding the internal representation of data. In real life, however, the degree to which the structure of certain information, such as addresses, phone numbers, etc., is taken into account depends on the particular application and may even vary in time. Therefore, an algebra is proposed that does allow us to simplify relations by disregarding the internal structure of a certain class of information. This algebra is based on a careful manipulation of attribute names. Furthermore, the key operator in this algebra, called "copying," allows us to deal with various other common queries in a very uniform manner, provided these queries are interpreted as operations on classes of semantically equivalent relations rather than individual relations. Finally, it is shown that the proposed algebra is complete in the sense of Bancilhon and Paredaens.

For Correspondence: M. Gyssens and J. Paredaens, Department of Mathematics and Computer Science, University of Antwerp (UIA), Universiteitsplein 1, B-2610 Antwerp, Belgium; D. Van Gucht, Computer Science Department, Indiana University, Lindley Hall 101, Bloomington, IN 47405-4101.

A Transaction-Based Approach to Relational Database Specification

Serge Abiteboul and Victor Vianu

An operational approach to database specification is proposed and investigated. Valid database states are described as the states resulting from the application of admissible transactions, specified by a transactional schema. The approach is similar in spirit to the modeling of behavior by methods and encapsulation in object-oriented systems. The transactions considered are line programs consisting of insertions, deletions, and modifications, using simple selection conditions. The results concern basic properties of transactional schemas, as well as the connection with traditional constraint schemas. In particular, the expressive power of transactional schemas is characterized. Although it is shown that transaction-based specification and constraint-based specification are incomparable, constraints of practical interest that have corresponding transactional schemas are identified. The preservation of constraints by transactions is also studied.

For Correspondence: S. Abiteboul, I.N.R.I.A., Domaine de Rocquencourt, 78153 Le Chesnay, France; V. Vianu, CSE Department, MC 014, University of California at San Diego, La Jolla, CA 92093.

The Periodic Balanced Sorting Network

Martin Dowd, Yehoshua Perl, Larry Rudolph, and Michael Saks

A periodic sorting network consists of a sequence of identical blocks. In this paper, the periodic balanced sorting network, which consists of log n blocks, is introduced. Each block, called a balanced merging block, merges elements on the even input lines with those on the odd input lines.

The periodic balanced sorting network sorts n items in O((log n)^2) time using (n/2)(log n)^2 comparators. Although these bounds are comparable to those of many existing sorting networks, the periodic structure enables a hardware implementation consisting of only one block, with the output of the block recycled back as input until the output is sorted. An implementation of our network on the shuffle-exchange interconnection model, in which the directions of the comparators are all identical and fixed, is also presented.
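
A hedged reading of the construction in Python (assuming n is a power of two): each block performs log n stages of mirrored compare-exchanges, first across the whole array, then within each half, and so on down to adjacent pairs; log n such identical blocks sort the input.

```python
def balanced_block(a):
    # One block: log n stages of mirrored compare-exchanges, on spans of
    # size n, n/2, ..., 2; the smaller element goes to the lower-numbered line.
    n, size = len(a), len(a)
    while size > 1:
        for start in range(0, n, size):
            for i in range(size // 2):
                lo, hi = start + i, start + size - 1 - i
                if a[lo] > a[hi]:
                    a[lo], a[hi] = a[hi], a[lo]
        size //= 2
    return a

def periodic_balanced_sort(a):
    # log n identical blocks suffice (the paper's main structural result).
    for _ in range(max(1, len(a).bit_length() - 1)):
        balanced_block(a)
    return a

# 0/1 principle spot-check for n = 8: every zero/one input comes out sorted.
for m in range(256):
    bits = [(m >> i) & 1 for i in range(8)]
    assert periodic_balanced_sort(bits[:]) == sorted(bits)
```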

For Correspondence: M. Dowd, Department of Computer Science, Rutgers University, New Brunswick, NJ 08903; Y. Perl, Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102; L. Rudolph, Department of Computer Science, Hebrew University, Jerusalem, Israel; M. Saks, Department of Computer Science and Electrical Engineering, University of California at San Diego, La Jolla, CA.

Spacefilling Curves and the Planar Travelling Salesman Problem

Loren K. Platzman and John J. Bartholdi, III

To construct a short tour through points in the plane, the points are sequenced as they appear along a spacefilling curve. This heuristic consists essentially of sorting, so it is easily coded and requires only O(N) memory and O(N log N) operations. Its performance is competitive with that of other fast methods.
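
A sketch of the heuristic using a Hilbert curve (the paper analyzes its own spacefilling curve, so this is a stand-in; the grid resolution `order` and the coordinate scaling are illustrative choices).

```python
def hilbert_index(order, x, y):
    """Position of integer grid point (x, y), 0 <= x, y < 2**order, along a
    Hilbert curve (standard bit-manipulation conversion)."""
    d, s = 0, 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant into place
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def spacefilling_tour(points, order=10):
    """Visit points in order of their position along the curve."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    side = 2 ** order - 1
    g = lambda v, lo, hi: 0 if hi == lo else int((v - lo) / (hi - lo) * side)
    return sorted(points, key=lambda p: hilbert_index(order, g(p[0], x0, x1),
                                                      g(p[1], y0, y1)))
```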

For Correspondence: School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332.

Using Temporal Hierarchies to Efficiently Maintain Large Temporal Databases

Thomas Dean

Many real-world applications involve the management of large amounts of time-dependent information. Temporal database systems maintain this information in order to support various sorts of inference (e.g., answering questions involving propositions that are true over some intervals and false over others). For any given proposition, there are typically many different occasions on which that proposition becomes true and persists for some length of time. In this paper, these occasions are referred to as time tokens. Many routine database operations must search through the database for time tokens satisfying certain temporal constraints. To expedite these operations, this paper describes a set of techniques for organizing temporal information by exploiting the local and global structure inherent in a wide class of temporal reasoning problems. The global structure of time is exemplified in conventions for partitioning time according to the calendar and the clock. This global structure is used to partition the set of time tokens to facilitate retrieval. The local structure of time is exemplified in the causal relationships between events and the dependencies between planned activities. This local structure is used as part of a strategy for reducing the computation required during constraint propagation. The organizational techniques described in this paper are quite general, and have been used to support a variety of powerful inference mechanisms. Integrating these techniques into an existing temporal database system has increased, by an order of magnitude or more in most applications, the number of time tokens that can be efficiently handled.
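
The global-structure idea can be caricatured in a few lines (a hypothetical index, not Dean's implementation; the class and method names are invented): partitioning tokens by calendar day means a query about time t inspects one small bucket rather than every token.

```python
from collections import defaultdict
from datetime import datetime, timedelta

class CalendarTokenIndex:
    """Hypothetical calendar-partitioned store of time tokens."""
    def __init__(self):
        self.by_day = defaultdict(list)          # date -> [(token, start, end)]

    def add(self, token, start, end):
        d = start.date()
        while d <= end.date():                   # file under each day it overlaps
            self.by_day[d].append((token, start, end))
            d += timedelta(days=1)

    def true_at(self, t):
        return [tok for tok, s, e in self.by_day[t.date()] if s <= t <= e]

idx = CalendarTokenIndex()
idx.add("door-open", datetime(1989, 5, 1, 9), datetime(1989, 5, 1, 17))
print(idx.true_at(datetime(1989, 5, 1, 12)))     # ['door-open']
```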

For Correspondence: Department of Computer Science, Brown University, Providence, RI 02912.

Journal of the Association for Computing Machinery January 1990

Asymptotic Expansions for Large Closed Queuing Networks

Charles Knessl and Charles Tier

In this paper, a new asymptotic method is developed for analyzing closed BCMP queuing networks with a single class chain) consisting of a large number of customers, a single infinite server queue, and a large number of single server queues with fixed (state-independent) service rates. Asymptotic approximations are computed for the normalization constant (partition function) starting directly from a recursion relation of Buzen. The approach of the authors employs the ray method of geometrical optics and the method of matched asymptotic expansions. The method is applicable when the servers have nearly equal relative utilizations or can be divided into classes with nearly equal relative utilizations. Numerical comparisons are given that illustrate the accuracy of the asymptotic approximations.
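
For reference, Buzen's recursion itself is easy to compute exactly (a sketch for M fixed-rate single-server queues only; the networks in the paper also contain an infinite-server queue, and the point of the paper is the asymptotic regime where exact convolution becomes unwieldy).

```python
def buzen_normalization(rho, N):
    """Buzen's convolution g(n, m) = g(n, m-1) + rho[m] * g(n-1, m), in place.
    rho[i] is the relative utilization of queue i; returns [G(0), ..., G(N)]."""
    G = [1.0] + [0.0] * N        # g(n, 0) is 1 for n = 0 and 0 otherwise
    for r in rho:
        for n in range(1, N + 1):
            G[n] += r * G[n - 1]
    return G

G = buzen_normalization([0.5, 0.4, 0.3], N=20)
print(G[19] / G[20])             # system throughput identity X(N) = G(N-1)/G(N)
```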

For Correspondence: Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, M/C 249, Box 4348, Chicago, IL 60680.

Minimal and Complete Word Unification

Joxan Jaffar

The fundamental satisfiability problem for word equations has been solved recently by Makanin. However, this algorithm is purely a decision algorithm. The main result of this paper solves the complementary problem of generating the set of all solutions. Specifically, the algorithm in this paper generates, given a word equation, a minimal and complete set of unifiers. It stops if this set is finite.

For Correspondence: IBM Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598.

Completeness of Rewrite Rules and Rewrite Strategies for FP

Joseph Y. Halpern, John H. Williams, and Edward L. Wimmers

This paper treats languages whose operational semantics is given by a set of rewrite rules. For such languages, it is important to be able to determine that there are enough rules to be able to compute the correct meaning of all expressions, but not so many that the system of rules is inconsistent. A formal framework is developed in which to give a precise treatment of these completeness and soundness issues, which are then investigated in the context of an extended version of the functional programming language FP. The rewrite rules of FP are shown to be sound and complete with respect to three different notions of completeness. The latter half of the paper considers rewrite strategies. In order to implement a language based on rewrite rules, it does not suffice to know that there are "enough" rules in the language; a good strategy for determining the order in which to apply them is also needed. But what is "good"? Corresponding to each notion of completeness, there is a notion of a good rewrite strategy. These notions of goodness are examined and characterized, and examples of a number of natural good strategies are given. Although these results are presented in the context of FP, the techniques (some of which are nontrivial extensions of techniques first used in the context of the lambda-calculus) should apply well beyond the realm of FP rewriting systems.

For Correspondence: IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099.

Nondeterministic Polynomial-Time Computations and Models of Arithmetic

Attila Mate

A semantic, or model-theoretic, approach is proposed to study the problems P =? NP and NP =? co-NP. This approach seems to avoid the difficulties that recursion-theoretic approaches appear to face in view of the result of Baker et al. on relativizations of the P =? NP question; moreover, semantical methods are often simpler and more powerful than syntactical ones. The connection between the existence of certain partial extensions of nonstandard models of arithmetic and the question NP =? co-NP is discussed. Several problems are stated about nonstandard models, and a possible link between the Davis-Matijasevic-Putnam-Robinson theorem on Diophantine sets and the NP =? co-NP question is mentioned.

For Correspondence: Department of Mathematics, Brooklyn College of The City University of New York, Brooklyn, NY 11210.

Some Computational Aspects of Circumscription

Phokion G. Kolaitis and Christos H. Papadimitriou

The effects of circumscribing first-order formulas are explored from a computational standpoint. First, extending work of V. Lifschitz, it is shown that the circumscription of any existential first-order formula is equivalent to a first-order formula. After this, it is established that a set of universal Horn clauses has a first-order circumscription if and only if it is bounded (when considered as a logic program); thus it is undecidable to tell whether such formulas have first-order circumscription. Finally, it is shown that there are first-order formulas whose circumscription has a coNP-complete model-checking problem.

For Correspondence: P. G. Kolaitis, Computer and Information Sciences, University of California, Santa Cruz, Santa Cruz, CA 95064; C. H. Papadimitriou, Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093-0114.

Polynomial-Time Implication Problems for Unary Inclusion Dependencies

Stavros S. Cosmadakis, Paris C. Kanellakis, and Moshe Y. Vardi

Unary inclusion dependencies are database constraints expressing subset relationships. The decidability of implication for these dependencies, together with embedded implicational dependencies such as functional dependencies, is investigated. As shown by Casanova et al., the unrestricted and finite implication problems are different for the class of functional and unary inclusion dependencies; also, for this class and for any fixed k, finite implication has no k-ary complete axiomatization. For both of these problems, complete axiomatizations and polynomial-time decision procedures are provided: linear time for unrestricted implication and cubic time for finite implication. It follows that functional and unary inclusion dependencies form a semantically natural class of first-order sentences with equality, which, although not finitely controllable, is efficiently solvable and docile. Generalizing from these results, it is shown that the interaction between functional and inclusion dependencies characterizes: (1) unrestricted implication of unary inclusion and all embedded implicational dependencies; (2) finite implication of unary inclusion and all full implicational dependencies; (3) finite implication of unary inclusion and all embedded tuple-generating dependencies. As a direct consequence of this analysis, most of the applications of dependency implication are extended, within polynomial time, to database design problems involving unary inclusion dependencies. Such examples are tests for lossless joins and tests for complementarity of projective views.

For Correspondence: S. S. Cosmadakis, IBM Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598; P. C. Kanellakis, Department of Computer Science, Brown University, P.O. Box 1910, Providence, RI 02912; M. Y. Vardi, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099.

ACM Transactions on Graphics October 1989

Blending Algebraic Surfaces

Joe Warren

A new definition of geometric continuity for implicitly defined surfaces is introduced. Under this definition, it is shown that algebraic blending surfaces (surfaces that smoothly join two or more surfaces) have a very specific form. In particular, any polynomial whose zero set blends the zero sets of several other polynomials is always expressible as a simple combination of these polynomials. Using this result, new methods for blending several algebraic surfaces simultaneously are derived.

For Correspondence: Department of Computer Science, Rice University, Houston, TX 77251.

The Displacement Method for Implicit Blending Surfaces in Solid Models

Alyn P. Rockwood

To date, methods that blend solids, that is, B-rep or CSG models, with implicit functions require successive composition of the blending functions to handle an arbitrary solid model. The shape of the resulting surfaces depends upon the algebraic distances defined by these functions. To achieve meaningful shapes, previous methods have relied on blending functions that have a pseudo-Euclidean distance measure. These methods are abstracted, resulting in some general observations. Unfortunately, the functions used can exhibit unwanted discontinuities. A new method, the displacement form of blending, embeds the zero surface of the blending functions in a form for which algebraic distance is C^1 continuous in the entire domain of definition.

Characteristics of the displacement form are demonstrated using the superelliptic blending functions. Intuitive and mathematical underpinnings are provided.

For Correspondence: Silicon Graphics Computer Systems, 2011 N. Shoreline Blvd., Mountain View, CA 94039-7311.

On Local Implicit Approximation and Its Applications

Jung Hong Chuang and Christoph M. Hoffmann

A method is proposed for computing an implicit approximant at a point to a parametric curve or surface. The method works for both polynomially and rationally parameterized curves and surfaces and achieves an order of contact that can be prescribed. In the case of nonsingular curve points, the approximant must be irreducible, but in the surface case additional safeguards are incorporated into the algorithm to ensure irreducibility. The method also yields meaningful results at most singularities. In principle, the method is capable of exact implicitization and has a theoretical relationship with certain resultant-based elimination methods.
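
A textbook example of the exact (global) case, not taken from the paper: the rational parameterization

$$x(t) = \frac{1 - t^2}{1 + t^2}, \qquad y(t) = \frac{2t}{1 + t^2}$$

satisfies $x^2 + y^2 - 1 = 0$ identically, so the unit circle is its exact implicitization. The method discussed here instead produces a local implicit approximant whose contact order at a chosen point can be prescribed, which can avoid the degree growth that exact implicitization of higher-degree surfaces entails.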

For Correspondence: Department of Computer Science, Purdue University, West Lafayette, IN 47907.

Automatic Parameterization of Rational Curves and Surfaces IV: Algebraic Space Curves

Shreeram S. Abhyankar and Chanderjit L. Bajaj

For an irreducible algebraic space curve C that is implicitly defined as the intersection of two algebraic surfaces, f(x, y, z) = 0 and g(x, y, z) = 0, there always exists a birational correspondence between the points of C and the points of an irreducible plane curve P, whose genus is the same as that of C. Thus C is rational iff the genus of P is zero. Given an irreducible space curve C = (f ∩ g), with f and g not tangent along C, we present a method of obtaining a projected irreducible plane curve P together with birational maps between the points of P and C. Together with [4], this method yields an algorithm to compute the genus of C, and if the genus is zero, the rational parametric equations for C. As a byproduct, this method also yields the implicit and parametric equations of a rational surface S containing the space curve C.

The birational mappings of implicitly defined space curves find numerous applications in geometric modeling and computer graphics since they provide an efficient way of manipulating curves in space by processing curves in the plane. Additionally, having rational surfaces containing C yields a simple way of generating related families of rational space curves.

For Correspondence: S. S. Abhyankar, Department of Mathematics, Purdue University, West Lafayette, IN 47907; C. L. Bajaj, Department of Computer Science, Purdue University, West Lafayette, IN 47907.

Rational Continuity: Parametric, Geometric, and Frenet Frame Continuity of Rational Curves

Michael E. Hohmeyer and Brian A. Barsky

The parametric, geometric, or Frenet frame continuity of a rational curve has often been ensured by requiring the homogeneous polynomial curve associated with the rational curve to possess either parametric, geometric, or Frenet frame continuity, respectively. In this paper, we show that this approach is overly restrictive and derive the constraints on the associated homogeneous curve that are both necessary and sufficient to ensure that the rational curve is either parametrically, geometrically, or Frenet frame continuous.

For Correspondence: Computer Science Division, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA 94720.

A Generalized Ball Curve and its Recursive Algorithm

H. B. Said

The use of Bernstein polynomials as the basis functions in Bezier's UNISURF is well known. These basis functions possess the shape-preserving properties that are required in designing free-form curves and surfaces. These curves and surfaces are computed efficiently using the de Casteljau Algorithm. Ball uses a similar approach in defining cubic curves and bicubic surfaces in his CONSURF program. The basis functions employed are slightly different from the Bernstein polynomials. However, they also possess the same shape-preserving properties. A generalization of these cubic basis functions of Ball, such that higher order curves and surfaces can be defined, and a recursive algorithm for generating the generalized curve are presented. The algorithm could be extended to generate a generalized surface in much the same way that the de Casteljau Algorithm could be used to generate a Bezier surface.
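
For comparison, the de Casteljau recursion that the paper's algorithm parallels for the Ball basis is a cascade of linear interpolations (the standard algorithm for Bezier curves; the generalized Ball recursion itself differs and is defined in the paper).

```python
def de_casteljau(control, t):
    """Evaluate a Bezier curve at parameter t by repeatedly interpolating
    between adjacent control points until one point remains."""
    pts = list(control)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Cubic example: the curve interpolates its endpoints.
print(de_casteljau([(0, 0), (0, 1), (1, 1), (1, 0)], 0.5))  # (0.5, 0.75)
```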

For Correspondence: School of Mathematical and Computer Sciences, Universiti Sains Malaysia, 11800 Penang, Malaysia.

ACM Transactions on Graphics January 1990

Extending the Radiosity Method to Include Specularly Reflecting and Translucent Materials

Holly E. Rushmeier and Kenneth E. Torrance

An extension of the radiosity method is presented that rigorously accounts for the presence of a small number of specularly reflecting surfaces in an otherwise diffuse scene, and for the presence of a small number of specular or ideal diffuse transmitters. The relationship between the extended method and earlier radiosity and ray-tracing methods is outlined. It is shown that all three methods are based on the same general equation of radiative transfer. A simple superposition of the earlier radiosity and ray-tracing methods in order to account for specular behavior is shown to be physically inconsistent, as the methods are based on different assumptions. Specular behavior is correctly included in the present method. The extended radiosity method and example images are presented.
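
For context, the classical diffuse radiosity formulation that the extension builds on balances each patch's radiosity against its emission plus reflected contributions from all other patches, with $\rho_i$ the diffuse reflectance and $F_{ij}$ the form factor between patches i and j:

$$B_i = E_i + \rho_i \sum_{j=1}^{N} F_{ij} B_j, \qquad i = 1, \ldots, N.$$

The extended method, roughly speaking, augments this energy balance with terms for the specular and translucent surfaces, derived from the same general equation of radiative transfer.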

For Correspondence: H. E. Rushmeier, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332; K. E. Torrance, Sibley School of Mechanical and Aerospace Engineering, Upson Hall, Cornell University, Ithaca, NY 14853.

Performing Geometric Transformations by Program Transformation

Robin A. Nicholl and Tina M. Nicholl

Problems in geometry often possess symmetry properties that may be exploited in order to develop solutions. Algorithms based on these symmetry properties will frequently use geometric transformations to transform one case into another (symmetric) case. One advantage of this approach is that the algorithm avoids enumeration of cases and thus is shorter and generally easier to read. One disadvantage is that some additional execution time is required to perform these transformations. We describe how simple program equivalences may be used as program transformations to eliminate this additional execution time from programs that use geometric transformations. This approach has been used to develop an efficient implementation of a new algorithm for the two-dimensional line-clipping problem.

For Correspondence: Department of Computer Science, The University of Western Ontario, London, Canada N6A 5B7.

Knot Insertion for Beta-Spline Curves and Surfaces

Barry Joe

Discrete Beta-splines arise when a Beta-spline curve is subdivided; that is, extra knots are inserted so that the curve is expressed in terms of a larger number of control vertices and Beta-splines. Their properties and an algorithm for their computation are given in "Discrete Beta-Splines" by Joe (Computer Graphics, vol. 21, pp. 137-144). We prove a stronger version of one of these properties, from which a new algorithm for computing discrete Beta-splines is obtained. This algorithm can also be used to compute discrete B-splines. We give a comparison of operation counts for this algorithm versus other algorithms, and for two methods to compute the new control vertices of Beta-spline and B-spline curves and surfaces.

For Correspondence: Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2H1.

Simulation of Simplicity: A Technique to Cope with Degenerate Cases in Geometric Algorithms

Herbert Edelsbrunner and Ernst Peter Mucke

This paper describes a general-purpose programming technique, called Simulation of Simplicity, that can be used to cope with degenerate input data for geometric algorithms. It relieves the programmer from the task of providing a consistent treatment for every single special case that can occur. The programs that use the technique tend to be considerably smaller and more robust than those that do not use it. We believe that this technique will become a standard tool in writing geometric software.

For Correspondence: Department of Computer Science, University of Illinois at Urbana-Champaign, 1304 West Springfield Ave., Urbana, IL 61801.

The Implementation of an Algorithm to Find the Convex Hull of a Set of Three-Dimensional Points

A. M. Day

A detailed description of the implementation of a three-dimensional convex hull algorithm is given. The problems experienced in the production and testing of a correct and robust implementation of a geometric algorithm are discussed. Attention is paid to those issues that are often brushed over in the theoretical descriptions but cause errors in a real computation. These include degeneracies such as coplanar points, floating-point errors, and other special, but not necessarily degenerate, cases.

For Correspondence: School of Information Systems, University of East Anglia, Norwich NR4 7TJ, England.

ACM Transactions on Computer Systems February 1990

Performance of Firefly RPC

Michael D. Schroeder and Michael Burrows

In this paper we report on the performance of the remote procedure call (RPC) implementation for the Firefly multiprocessor and analyze the implementation to account precisely for all measured latency. From the analysis and measurements, we estimate how much faster RPC could be if certain improvements were made. The elapsed time for an intermachine call to a remote procedure that accepts no arguments and produces no results is 2.66 ms. The elapsed time for an RPC that has a single 1440-byte result (the maximum result that will fit in a single packet) is 6.35 ms. Maximum intermachine throughput of application program data using RPC is 4.65 Mbits/s, achieved with four threads making parallel RPCs that return the maximum-size result that fits in a single RPC result packet. CPU utilization at maximum throughput is about 1.2 CPU seconds per second on the calling machine and a little less on the server. These measurements are for RPCs from user space on one machine to user space on another, using the installed system and a 10 Mbit/s Ethernet. The RPC packet exchange protocol is built on IP/UDP, and the times include calculating and verifying UDP checksums. The Fireflies used in the tests had 5 MicroVAX II processors and a DEQNA Ethernet controller.

For Correspondence: Digital Equipment Corporation, Systems Research Center, 130 Lytton Avenue, Palo Alto, CA 94301.

A Logic of Authentication

Michael Burrows, Martin Abadi, and Roger Needham

Authentication protocols are the basis of security in many distributed systems, and it is therefore essential to ensure that these protocols function correctly. Unfortunately, their design has been extremely error prone. Most of the protocols found in the literature contain redundancies or security flaws. A simple logic has allowed us to describe the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication. We have been able to explain a variety of authentication protocols formally, to discover subtleties and errors in them, and to suggest improvements. In this paper we present the logic and then give the results of our analysis of four published protocols, chosen either because of their practical importance or because they serve to illustrate our method.
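
To give the flavor of the logic, here is one representative postulate, the message-meaning rule for shared keys: if P believes that K is a good key for communicating with Q, and P sees a message X encrypted under K, then P believes that Q once said X. In the notation of the paper:

$$\frac{P \mid\equiv\; Q \stackrel{K}{\leftrightarrow} P, \quad P \triangleleft \{X\}_K}{P \mid\equiv\; Q \mid\sim X}$$

Protocol analyses idealize each message into such formulas and then apply the postulates to the parties' initial beliefs.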

For Correspondence: M. Burrows and M. Abadi, Systems Research Center, Digital Equipment Corporation, 130 Lytton Avenue, Palo Alto, CA 94301; and R. Needham, University of Cambridge Computer Laboratory, Corn Exchange Street, Cambridge CB2 3QG, U.K.

Lightweight Remote Procedure Call

Brian N. Bershad, Thomas E. Anderson, Edward D. Lazowska, and Henry M. Levy

Lightweight Remote Procedure Call (LRPC) is a communication facility designed and optimized for communication between protection domains on the same machine. In contemporary small-kernel operating systems, existing RPC systems incur an unnecessarily high cost when used for the type of communication that predominates: communication between protection domains on the same machine. This cost leads system designers to coalesce weakly related subsystems into the same protection domain, trading safety for performance. By reducing the overhead of same-machine communication, LRPC encourages both safety and performance. LRPC combines the control transfer and communication model of capability systems with the programming semantics and large-grained protection model of RPC. LRPC achieves a factor-of-three performance improvement over more traditional approaches based on independent threads exchanging messages, reducing the cost of same-machine communication to nearly the lower bound imposed by conventional hardware. LRPC has been integrated into the Taos operating system of the DEC SRC Firefly multiprocessor workstation.

For Correspondence: Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195.

A System for Computer Music Performance

David P. Anderson and Ron Kuivila

A computer music performance system (CMPS) is a computer system connected to input devices (including musical keyboards or other instruments) and to graphic and audio output devices. A human performer generates input events using the input devices. The CMPS responds to these events by computing and performing sequences of output actions whose intended timing is determined algorithmically. Because of the need for accurate timing of output actions, the scheduling requirements of a CMPS differ from those of general-purpose or conventional real-time systems.

This paper describes the scheduling facilities of FORMULA, a CMPS used by many musicians. In addition to providing accurate timing of output action sequences, FORMULA provides other basic functions useful in musical applications: (1) per-process virtual time systems with independent relationships to real time; (2) process-grouping mechanisms and language-level control structures with time-related semantics; and (3) integrated scheduling of tasks (such as compiling and editing) whose real-time constraints are less stringent than those of output action computations.

For Correspondence: D. P. Anderson, Computer Science Division, EECS Dept., University of California, Berkeley, CA 94720; R. Kuivila, Music Department, Wesleyan University, Middletown, CT 06457.

ACM Transactions on Information Systems October 1989

Work at Home for Computer Professionals: Current Attitudes and Future Prospects

Margrethe H. Olson

The subject of this paper is work performed in the home with computer and communications technology, also known as telecommuting. The article reports on two studies of work at home: a quasi-experimental field study of organizational telecommuting pilot programs, and an attitude survey comparing computer professionals who work at home to employees doing similar jobs in traditional office settings. The results of the field study demonstrated that working in the home had little impact on employee performance; however, supervisors were not comfortable with remote workers and preferred their employees to be on site. In the survey, work in the home was related to lower job satisfaction, lower organizational commitment, and higher role conflict. The survey also included computer professionals who worked at home in addition to the regular work day. The author suggests that performing additional unpaid work in the home after regular work hours may be an important trend that merits further investigation. The studies demonstrate that while computer and communications technology have the potential to relax constraints on information work in terms of space and time, today's traditional work environments, corporate culture, and management style limit acceptance of telecommuting as a substitute for office work.

For Correspondence: DMR Group Melbourne, Level 8, 1 Southbank Blvd., Riverside Quay, South Melbourne 3205, Victoria, Australia.

The 3DIS: An Extensible Object-Oriented Information Management Environment

Hamideh Afsarmanesh and Dennis McLeod

The 3-Dimensional Information Space (3DIS) is an extensible object-oriented framework for information management. It is specifically oriented toward supporting the database requirements for data-intensive information system applications in which (1) information objects of various levels of abstraction and modalities must be accommodated, (2) descriptive and structural information (metadata) is rich and dynamic, and (3) users who are not database experts must be able to design, manipulate, and evolve databases. In response to these needs, the 3DIS provides an approach in which data and the descriptive information about data are handled uniformly in an extensible framework. The 3DIS provides a simple, geometric, and formal representation of data which forms a basis for understanding, defining, and manipulating databases. Several prototype implementations based upon the 3DIS have been designed and implemented and are in experimental use.

For Correspondence: H. Afsarmanesh, Computer Science Department, California State University, Dominguez Hills, Carson, CA 90747; hamideh@lyra.usc.edu; D. McLeod, Computer Science Department, University of Southern California, Los Angeles, CA 90089-0782; mcleod@pollux.usc.edu.

C-TODOS: An Automatic Tool for Office System Conceptual Design

B. Pernici, F. Barbic, M. G. Fugini, R. Maiocchi, J. R. Rames, and C. Rolland

Designers of office information systems, which share various features with information systems and software development, need to carefully consider special issues such as document and communication flows, user roles, user interfaces, and available technology.

The ESPRIT project, Automatic Tools for Designing Office Information Systems (TODOS), proposes an integrated environment for office design with tools for requirements collection and analysis, conceptual design, rapid prototyping, and architecture selection.

Conceptual design is a central phase of office system design: It provides correct and complete functional requirements from which the office prototype will be developed and the final architecture chosen. C-TODOS, the conceptual design support tool developed within TODOS, is presented in this paper. The purpose of C-TODOS is to give the designer tools for supporting conceptual modeling activities with the goal of obtaining correct, consistent, and good quality office-functional specifications.

This paper presents C-TODOS within the TODOS development environment and describes the basic features of the tool: the TODOS Conceptual Model, the Specification Database, and the Modeling, Query, and Consistency Checking Modules. The use of C-TODOS, through illustration of the development of a test case, and possible future research are discussed.

For Correspondence: B. Pernici, F. Barbic, and R. Maiocchi, Politecnico di Milano, Dipartimento di Elettronica, Piazza L. da Vinci, 32, I-20133, Milan, Italy; Telefax (39) 02-23993587; M. G. Fugini, Politecnico di Milano, Milan, Italy, and Universita di Brescia, Brescia, Italy; J. R. Rames, Thom'6 Telesystemes, Paris, France; C. Rolland, Universite de Paris I, Paris, France.