# Abstracts from other ACM publications.

ACM Transactions on Programming Languages and Systems

April 1989

Incremental Dynamic Semantics for Language-Based Programming Environments

Gail E. Kaiser

Attribute grammars are a formal notation for expressing the static semantics of programming languages--those properties that can be derived from inspection of the program text. Attribute grammars have become popular as a mechanism for generating language-based programming environments that incrementally perform symbol resolution, type checking, code generation, and derivation of other static semantic properties as the program is modified. However, attribute grammars are not suitable for expressing dynamic semantics--those properties that reflect the history of program execution and/or user interactions with the programming environment. This paper presents action equations, an extension of attribute grammars suitable for specifying the static and the dynamic semantics of programming languages. It describes how action equations can be used to generate language-based programming environments that incrementally derive static and dynamic properties as the user modifies and debugs the program.

For Correspondence: Columbia University, Department of Computer Science, New York, NY 10027.

Efficient High-Level Iteration with Accumulators

Robert D. Cameron

Accumulators are proposed as a new type of high-level iteration construct for imperative languages. Accumulators are user-programmed mechanisms for successively combining a sequence of values into a single result value. The accumulated result can either be a simple numeric value such as the sum of a series or a data structure such as a list. Accumulators naturally complement constructs that allow iteration through user-programmed sequences of values, such as the iterators of CLU and the generators of Alphard. A practical design for high-level iteration is illustrated by way of an extension of Modula-2 called Modula Plus. The extension incorporates both a redesigned iterator mechanism and the accumulator design. Several applications are illustrated, including both numeric and data-structure accumulation. It is shown that the design supports efficient iteration both because it is amenable to implementation via in-line coding and because it allows high-level iteration concepts to be implemented as encapsulations of efficient low-level manipulations.
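
The abstract does not reproduce any Modula Plus syntax, so the following is only a hypothetical Python rendering of the accumulator idea (all names are illustrative): a user-programmed object that successively folds a sequence of values into a single result, complementing an iterated sequence of values.

```python
class Accumulator:
    """Combine successive values into one result via a user-supplied step."""
    def __init__(self, step, initial):
        self.step = step              # how to fold the next value in
        self.value = initial          # the accumulated result so far

    def add(self, item):
        self.value = self.step(self.value, item)

def accumulate(values, acc):
    # Drive the accumulator from any iterated sequence of values.
    for v in values:
        acc.add(v)
    return acc.value

# Numeric accumulation: the sum of a series.
total = accumulate(range(1, 6), Accumulator(lambda a, x: a + x, 0))

# Data-structure accumulation: building a list.
listed = accumulate("abc", Accumulator(lambda a, x: a + [x], []))
```

In the paper's design the accumulator, like the iterator, is compiled in-line, which is what makes the high-level construct cheap; the sketch above shows only the semantics, not that implementation strategy.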

For Correspondence: School of Computing Science, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada.

Designing Families of Data Types Using Exemplars

Wilf R. LaLonde

Designing data types in isolation is fundamentally different from designing them for integration into communities of data types, especially when inheritance is a fundamental issue. Moreover, we can distinguish between the design of families--integrated types that are variations of each other--and more general communities where totally different but cohesive collections of types support specific applications (e.g., a compiler). We are concerned with the design of integrated families of data types as opposed to individual data types; that is, with the issues that arise when the focus is intermediate between the design of individual data types and more general communities of data types. We argue that design at this level is not adequately served by systems providing only inheritance hierarchies and that systems which additionally provide a coupled subtype specification hierarchy are still not adequate. We propose a system that provides an unlimited number of uncoupled specification hierarchies and illustrate it with three: a subtype hierarchy, a specialization/generalization hierarchy, and a like hierarchy. We also resurrect a relatively unknown Smalltalk design methodology that we call programming-by-exemplars and argue that it is an important addition to a designer's grab bag of techniques. The methodology is used to show that the subtype hierarchy must be decoupled from the inheritance hierarchy, something that other researchers have also suggested. However, we do so in the context of exemplar-based systems to additionally show that they can already support the extensions required without modification and that they lead to a better separation between users and implementers, since classes and exemplars can be related in more flexible ways. We also suggest that class-based systems need the notion of private types if they are to surmount their current limitations. Our points are made in the guise of designing a family of List data types. Among these is a new variety of lists, not previously published: prefix-sharing lists. We also argue that there is a need for familial classes to serve as an intermediary between users and the members of a family.

For Correspondence: School of Computer Science, Carleton University, Ottawa, Ontario K1S 5B6, Canada.

Local Atomicity Properties: Modular Concurrency Control for Abstract Data Types

William E. Weihl

Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.

For Correspondence: MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139.

ECCS and LIPS: Two Languages for OSI Systems Specification and Verification

Vincenza Carchiolo, Antonella Di Stefano, Alberto Faro, and Giuseppe Pappalardo

An issue of current interest in the Open Systems Interconnection (OSI) field is the choice of a language well suited to specification and verification. For this purpose, two languages based on Milner's communication calculi are proposed, respectively intended for the specification of asynchronous and synchronous OSI systems. A formal verification method, relying upon the algebraic foundations of the two languages, is introduced and illustrated by means of examples based on nontrivial protocols and services.

For Correspondence: V. Carchiolo, A. Di Stefano, and A. Faro, Istituto di Informatica e Telecomunicazioni, Facoltà di Ingegneria, Università di Catania, Viale A. Doria 6, 95125 Catania, Italy; G. Pappalardo, Computing Laboratory, The University of Newcastle upon Tyne, Newcastle upon Tyne NE1 7RU, UK.

Uniform Self-Stabilizing Rings

James E. Burns and Jan Pachl

A self-stabilizing system has the property that, no matter how it is perturbed, it eventually returns to a legitimate configuration. Dijkstra originally introduced the self-stabilization problem and gave several solutions for a ring of processors in his 1974 Communications of the ACM paper. His solutions use a distinguished processor in the ring, which effectively acts as a controlling element to drive the system toward stability. Dijkstra has observed that a distinguished processor is essential if the number of processors in the ring is composite. We show, by presenting a protocol and proving its correctness, that there is a self-stabilizing system with no distinguished processor if the size of the ring is prime. The basic protocol uses Θ(n²) states in each processor, where n is the size of the ring. We modify the basic protocol to obtain one that uses Θ(n²/ln n) states.
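
The paper's prime-size protocol is not reproduced in the abstract. As background, Dijkstra's original K-state protocol with a distinguished processor (processor 0 below), which the abstract contrasts against, can be simulated in a few lines; this sketch is illustrative, not the authors' construction.

```python
import random

def simulate(n=7, K=7, steps=500, seed=1):
    # Dijkstra's K-state token ring (K >= n): processor 0 is the
    # distinguished one.  A processor holding a "privilege" may move.
    rng = random.Random(seed)
    S = [rng.randrange(K) for _ in range(n)]          # arbitrary (perturbed) start

    def privileged(i):
        # Processor 0 is privileged when it equals its left neighbor;
        # every other processor is privileged when it differs from its
        # left neighbor (S[-1] is S[n-1], closing the ring).
        return S[i] == S[i - 1] if i == 0 else S[i] != S[i - 1]

    for _ in range(steps):
        movers = [i for i in range(n) if privileged(i)]
        i = rng.choice(movers)                        # daemon picks any mover
        S[i] = (S[i] + 1) % K if i == 0 else S[i - 1]

    return sum(privileged(i) for i in range(n))       # privileges remaining
```

However the ring is initialized, after enough moves exactly one privilege remains and circulates forever, which is the legitimate behavior; the distinguished processor's asymmetric rule is what breaks the symmetry that the paper instead breaks with a prime ring size.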

For Correspondence: J. E. Burns, School of Information and Computer Science, Georgia Institute of Technology, Atlanta, GA 30332-0280; J. Pachl, IBM Research Division, Zurich Research Laboratory, Saumerstrasse 4, 8803 Rüschlikon, Switzerland.

ACM Transactions on Database Systems

June 1989

A Framework for Effective Retrieval

C. T. Yu, W. Meng, and S. Park

The aim of an effective retrieval system is to yield high recall and precision (retrieval effectiveness). The nonbinary independence model, which takes into consideration the number of occurrences of terms in documents, is introduced. It is shown to be optimal under the assumption that terms are independent. It is verified by experiments to yield significant improvement over the binary independence model. The nonbinary model is extended to normalized vectors and is applicable to more general queries.

Various ways to alleviate the consequences of the term independence assumption are discussed. Estimation of parameters required for the nonbinary independence model is provided, taking into consideration that a term may have different meanings.

For Correspondence: Department of Electrical Engineering and Computer Science, University of Illinois at Chicago, Chicago, IL 60680.

NFQL: The Natural Forms Query Language

David W. Embley

A means by which ordinary forms can be exploited to provide a basis for nonprocedural specification of information processing is discussed. The Natural Forms Query Language (NFQL) is defined. In NFQL, data retrieval requests and computation specifications are formulated by sketching ordinary forms to show what data are desired, and update operations are specified by altering data on filled-in forms. The meaning of a form depends on a store of knowledge that includes extended abstract data types for defining elementary data items, a database scheme defined by an entity-relationship model, and a conceptual model of an ordinary form. Based on this store of knowledge, several issues are addressed and resolved in the context of NFQL. These issues include automatic generation of query expressions from weak specifications, the view update problem, power and completeness, and a heuristic approach to resolving computational relationships. A brief status report of an implementation of NFQL is also given.

For Correspondence: Brigham Young University, Provo, UT 84602.

Efficient Optimization of Simple Chase Join Expressions

Paolo Atzeni and Edward P. F. Chan

Simple chase join expressions are relational algebra expressions involving only projection and join operators, defined on the basis of the functional dependencies associated with the database scheme. They are meaningful in the weak instance model, because for certain classes of schemes, including independent schemes, the total projections of the representative instance can be computed by means of unions of simple chase join expressions. We show how unions of simple chase join expressions can be optimized efficiently, without constructing and chasing the corresponding tableaux. We also present efficient algorithms for testing containment and equivalence, and for optimizing individual simple chase join expressions.

For Correspondence: P. Atzeni, IASI-CNR, Viale Manzoni 30, 00185 Rome, Italy; E. P. F. Chan, Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.

File Organization Using Composite Perfect Hashing

M. V. Ramakrishna and Per-Åke Larson

Perfect hashing refers to hashing with no overflows. We propose and analyze a composite perfect hashing scheme for large external files. The scheme guarantees retrieval of any record in a single disk access. Insertions and deletions are simple, and the file size may vary considerably without adversely affecting the performance. A simple variant of the scheme supports efficient range searches in addition to being a completely dynamic file organization scheme. These advantages are achieved at the cost of a small amount of additional internal storage and increased cost of insertions.
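
The composite scheme's internals are not given in the abstract, but its defining property, hashing with no overflows, can be illustrated by a small (hypothetical) sketch: search for a hash-function parameter under which a given key group maps injectively, so any record is found with a single probe, mirroring the single-disk-access guarantee.

```python
def h(seed, key, m):
    # A simple deterministic string hash parameterized by a seed
    # (illustrative only; not the paper's hash family).
    v = seed
    for ch in key:
        v = (v * 31 + ord(ch)) % 1_000_003
    return v % m

def find_perfect_seed(keys, m, max_seed=100_000):
    # "Perfect" means no overflows: find a seed under which the hash is
    # injective on the key set.  The seed is the small piece of extra
    # internal storage; lookups then cost exactly one probe.
    for seed in range(max_seed):
        if len({h(seed, k, m) for k in keys}) == len(keys):
            return seed
    raise ValueError("no perfect seed found in range")
```

The trade-off the abstract describes is visible even here: retrieval is a single direct computation, while an insertion that breaks injectivity forces a new seed search for the affected group.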

For Correspondence: M. V. Ramakrishna, Computer Science Department, Michigan State University, East Lansing, MI 48824-1027; P.-Å. Larson, Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.

Maintaining Availability in Partitioned Replicated Databases

Amr El Abbadi and Sam Toueg

In a replicated database, a data item may have copies residing on several sites. A replica control protocol is necessary to ensure that data items with several copies behave as if they consist of a single copy, as far as users can tell. We describe a new replica control protocol that allows the accessing of data in spite of site failures and network partitioning. This protocol provides the database designer with a large degree of flexibility in deciding the degree of data availability, as well as the cost of accessing data.

For Correspondence: A. El Abbadi, Department of Computer Science, University of California, Santa Barbara, CA 93106; S. Toueg, Department of Computer Science, Cornell University, Ithaca, NY 14853.

ACM Transactions on Mathematical Software

June 1989

Finding All Isolated Solutions to Polynomial Systems Using HOMPACK

Alexander P. Morgan, Andrew J. Sommese, and Layne T. Watson

Although the theory of polynomial continuation has been established for over a decade (following the work of Garcia, Zangwill, and Drexler), it is difficult to solve polynomial systems using continuation in practice. Divergent paths (solutions at infinity), singular solutions, and extreme scaling of coefficients can create catastrophic numerical problems. Further, the large number of paths that typically arise can be discouraging. In this paper we summarize polynomial-solving homotopy continuation and report on the performance of three standard path-tracking algorithms (as implemented in HOMPACK) in solving three physical problems of varying degrees of difficulty. Our purpose is to provide useful information on solving polynomial systems, including specific guidelines for homotopy construction and parameter settings. The m-homogeneous strategy for constructing polynomial homotopies is outlined, along with more traditional approaches. Computational comparisons are included to illustrate and contrast the major HOMPACK options. The conclusions summarize our numerical experience and discuss areas for future research.
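
HOMPACK itself is a substantial FORTRAN package; a toy sketch of the underlying continuation idea on a single univariate equation (everything here is illustrative, assuming only the standard convex homotopy H(x, t) = (1 - t)·g(x) + t·f(x)) shows the predictor-corrector structure of a path tracker:

```python
def track(f, df, g, dg, x0, steps=100):
    # Deform a start system g, whose roots are known, into the target f,
    # stepping the homotopy parameter t from 0 to 1 and re-converging on
    # the solution path with a Newton corrector at each step.
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        H  = lambda u: (1 - t) * g(u) + t * f(u)
        dH = lambda u: (1 - t) * dg(u) + t * df(u)
        for _ in range(5):                 # Newton corrector
            x = x - H(x) / dH(x)
    return x

# Target f(x) = x^2 - 4; start system g(x) = x^2 - 1 with known roots +-1.
f, df = (lambda x: x * x - 4), (lambda x: 2 * x)
g, dg = (lambda x: x * x - 1), (lambda x: 2 * x)
root = track(f, df, g, dg, 1.0)            # follow the path starting at +1
```

Real polynomial systems require complex arithmetic, adaptive step control, and handling of the divergent and singular paths the abstract warns about; those difficulties are exactly what the HOMPACK path-tracking options address.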

For Correspondence: A. P. Morgan, Mathematics Department, General Motors Research Laboratories, Warren, MI 48090-9057; A. J. Sommese, Mathematics Department, University of Notre Dame, Notre Dame, IN 46556; L. T. Watson, Department of Computer Science, Virginia Polytechnic Institute & State University, Blacksburg, VA 24061.

An Algorithm for Generating Interpolatory Quadrature Rules of the Highest Degree of Precision with Preassigned Nodes for General Weight Functions

T. N. L. Patterson

The construction of an algorithm is described for generating interpolatory quadrature rules of the highest degree of precision with arbitrarily preassigned nodes for general constant-signed weight functions. It is of very wide application: to operate, it needs only the definition of the three-term recurrence relation for the orthogonal polynomials associated with the weight function. The algorithm can be used to produce specific individual quadrature rules or sequences of rules by iterative application.

For Correspondence: Department of Applied Mathematics and Theoretical Physics, The Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland.

Table-Driven Implementation of the Exponential Function in IEEE Floating-Point Arithmetic

Ping Tak Peter Tang

Algorithms and implementation details for the exponential function in both single- and double-precision IEEE 754 arithmetic are presented here. With a table of moderate size, the implementations need only working-precision arithmetic and are provably accurate to within 0.54 ulp as long as the final result does not underflow. When the final result suffers gradual underflow, the error is still no worse than 0.77 ulp.
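
Tang's actual table size, argument splitting, and polynomial are not given in the abstract; the following simplified Python sketch (a 32-entry table of 2^(j/32) and a short Taylor polynomial, both illustrative choices) conveys the structure of the table-driven scheme:

```python
import math

LOG2_32 = math.log(2.0) / 32.0
TABLE = [2.0 ** (j / 32.0) for j in range(32)]   # precomputed 2^(j/32)

def table_exp(x):
    # Reduce:  x = (32k + j) * ln2/32 + r,  with |r| <= ln2/64,
    # so that  exp(x) = 2^k * 2^(j/32) * e^r,
    # where 2^(j/32) comes from the table and e^r from a short polynomial.
    n = round(x / LOG2_32)
    k, j = divmod(n, 32)             # j indexes the table; k scales by 2^k
    r = x - n * LOG2_32              # tiny reduced argument
    # Degree-4 polynomial for e^r; on |r| <= 0.011 its truncation error
    # is far below double-precision rounding.
    p = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r / 24.0)))
    return math.ldexp(TABLE[j] * p, k)
```

The point of the design, reflected above, is that the table absorbs most of the range, leaving a reduced interval so narrow that a very short polynomial evaluated in working precision suffices; the paper's contribution is proving the resulting 0.54-ulp bound rigorously.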

For Correspondence: Mathematics and Computer Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439-4801.

ALGORITHM

Dynamic Huffman Coding

Jeffrey Scott Vitter

We present a Pascal implementation of the one-pass algorithm for constructing dynamic Huffman codes that is described and analyzed in a companion paper [3]. The program runs in real time; that is, the processing time for each letter of the message is proportional to the length of its codeword. The number of bits used to encode a message of t letters is less than t bits more than that used by the well-known two-pass algorithm. This is best possible for any one-pass Huffman scheme. In practice, it uses fewer bits than all other Huffman schemes. The algorithm has applications in file compression and network transmission.
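
Vitter's one-pass algorithm is given in the companion paper and its Pascal code in this one; for contrast, the "well-known two-pass algorithm" the abstract measures against can be sketched briefly (this is the standard static construction, not Vitter's code):

```python
import heapq
from collections import Counter

def huffman_code(message):
    # Pass 1: count letter frequencies.  Pass 2 (implicit here) would
    # encode the message with the resulting optimal prefix code.  Vitter's
    # one-pass dynamic algorithm avoids the counting pass and the need to
    # transmit the code table, at a cost of less than t extra bits.
    freq = Counter(message)
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:             # repeatedly merge the two lightest trees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (left, right)))
        i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: descend both branches
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes
```

The dynamic algorithm maintains an equivalent optimal tree incrementally as each letter arrives, rebalancing in time proportional to the letter's codeword length, which is what makes it real-time.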

For Correspondence: Department of Computer Science, Brown University, Providence, R.I. 02912.

ALGORITHM

FORTRAN Codes for Estimating the One-Norm of a Real or Complex Matrix, with Applications to Condition Estimation

Nicholas J. Higham

FORTRAN 77 codes SONEST and CONEST are presented for estimating the 1-norm (or the ∞-norm) of a real or complex matrix, respectively. The codes are of wide applicability in condition estimation, since explicit access to the matrix A is not required; instead, matrix-vector products Ax and Aᵀx are computed by the calling program via a reverse communication interface. The algorithms are based on a convex optimization method for estimating the 1-norm of a real matrix devised by Hager [Condition estimates, SIAM J. Sci. Stat. Comput. 5 (1984), 311-316]. We derive new results concerning the behavior of Hager's method, extend it to complex matrices, and make several algorithmic modifications in order to improve reliability and efficiency.
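
Hager's cited method for the real case is compact enough to sketch; the Python below is an illustrative reduction, not the SONEST/CONEST code, but it uses only matrix-vector products with A and Aᵀ, mirroring the reverse-communication design:

```python
def one_norm_est(A, itmax=5):
    # Hager's estimator: maximize ||Ax||_1 over the unit 1-norm ball by
    # moving along subgradients; the estimate is a lower bound on ||A||_1
    # that is usually exact or very close.
    n = len(A)
    matvec  = lambda v: [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]
    matvecT = lambda v: [sum(A[i][k] * v[i] for i in range(n)) for k in range(n)]
    x = [1.0 / n] * n                      # start at the uniform vector
    est = 0.0
    for _ in range(itmax):                 # a few iterations almost always suffice
        y = matvec(x)                      # product with A
        est = sum(abs(t) for t in y)       # current estimate of ||A||_1
        xi = [1.0 if t >= 0 else -1.0 for t in y]
        z = matvecT(xi)                    # product with A^T (the subgradient)
        j = max(range(n), key=lambda k: abs(z[k]))
        if abs(z[j]) <= sum(z[k] * x[k] for k in range(n)):
            break                          # local maximum reached
        x = [0.0] * n
        x[j] = 1.0                         # jump to the most promising unit vector
    return est
```

In condition estimation A is typically an inverse applied via a factorization, which is exactly why the codes ask the caller to supply the products rather than the matrix.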

For Correspondence: Department of Mathematics, University of Manchester, Manchester M13 9PL, England.


Title Annotation: programming languages and systems, database systems, and mathematical software

Author: Kaiser, Gail E.; Cameron, Robert D.; LaLonde, Wilf R.; Weihl, William E.; Burns, James E.; Yu, C. T.

Publication: Communications of the ACM

Article Type: technical

Date: Sep 1, 1989

Words: 2642
