Abstracts from other ACM publications.
ACM Computing Surveys
Programming Languages for Distributed Computing Systems
When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper.
We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages.
How to Write Parallel Programs: A Guide to the Perplexed
We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers, an 18-node shared-memory multiprocessor, and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented.
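The paradigms above can be made concrete with a small sketch. The following Python fragment (an illustration only, not from the book; C-Linda's tuple-space operations are approximated here by a thread-safe queue) shows agenda parallelism over a distributed data structure: a master fills a shared "bag of tasks" and identical workers repeatedly withdraw tasks until the bag is empty.

```python
# Agenda parallelism sketch: a shared queue stands in for a Linda
# tuple space acting as the "bag of tasks".
import threading
import queue

def run_agenda(tasks, worker_fn, n_workers=4):
    bag = queue.Queue()          # distributed data structure: the task bag
    results = queue.Queue()      # second shared structure for results
    for t in tasks:
        bag.put(t)

    def worker():
        while True:
            try:
                t = bag.get_nowait()    # withdraw a task from the bag
            except queue.Empty:
                return                  # bag drained: worker retires
            results.put(worker_fn(t))   # deposit the result

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)  # completion order is nondeterministic, so sort

squares = run_agenda(range(10), lambda x: x * x)
```

Because the workers are interchangeable and draw from one shared structure, the program's result is independent of how many workers run or in what order they grab tasks, which is the essence of the agenda style.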
We focus on the paradigms, not on machine architecture or programming languages. The programming methods we discuss are useful on many kinds of parallel machines, and each can be expressed in several different parallel programming languages. Our programming discussion and the examples use the parallel language C-Linda for several reasons: the main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.
Conception, Evolution, and Application of Functional Programming Languages
The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas -- such as parallelism, nondeterminism, input/output, and state-oriented computations -- are examined with the goal of predicting the future development and application of functional languages.
The Family of Concurrent Logic Programming Languages
Concurrent logic languages are high-level programming languages for parallel and distributed systems that offer a wide range of both known and novel concurrent programming techniques. Being logic programming languages, they preserve many advantages of the abstract logic programming model, including the logical reading of programs and computations, the convenience of representing data structures with logical terms and manipulating them using unification, and the amenability to metaprogramming. Operationally, their model of computation consists of a dynamic set of concurrent processes, communicating by instantiating shared logical variables, synchronizing by waiting for variables to be instantiated, and making nondeterministic choices, possibly based on the availability of values of variables.
This paper surveys the family of concurrent logic programming languages within a uniform operational framework. It demonstrates the expressive power of even the simplest language in the family and investigates how varying the basic synchronization and control constructs affects the expressiveness and efficiency of the resulting languages.
In addition, the paper reports on techniques for sequential and parallel implementation of languages in this family, mentions their applications to date, and relates these languages to the abstract logic programming model, to the programming language PROLOG, and to other concurrent computational models and programming languages.
ACM Transactions on Graphics
A Geometric Characterization of Parametric Cubic Curves
In this paper, we analyze planar parametric cubic curves to determine conditions for loops, cusps, or inflection points. By expressing the curve to be analyzed as a linear combination of control points, it can be transformed such that three of the control points are mapped to specific locations on the plane. We call this image curve the canonical curve. Affine maps do not affect inflection points, cusps, or loops, so the analysis can be applied to the canonical curve instead of the original one. Since the first three points are fixed, the canonical curve is completely characterized by the position of its fourth point. The analysis therefore reduces to observing which region of the canonical plane the fourth point occupies. We demonstrate that for all parametric cubics expressed in this form, the boundaries of these regions are conics and straight lines. Special cases include Bezier curves, B-splines, and Beta-splines. Such a characterization forms the basis for an easy and efficient solution to this problem.
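The transformation to canonical position is straightforward to carry out numerically. The sketch below is an illustration, not the paper's code, and the particular canonical positions (0,0), (0,1), (1,1) for the first three control points are an assumption made for this example: it computes the unique affine map sending b0, b1, b2 to those positions and returns the image of b3, whose location in the canonical plane then determines the curve's character.

```python
def canonical_fourth_point(b0, b1, b2, b3):
    """Map control points by the affine transform taking b0 -> (0,0),
    b1 -> (0,1), b2 -> (1,1), and return the image of b3."""
    ux, uy = b1[0] - b0[0], b1[1] - b0[1]   # u = b1 - b0, must map to (0,1)
    vx, vy = b2[0] - b0[0], b2[1] - b0[1]   # v = b2 - b0, must map to (1,1)
    det = ux * vy - uy * vx                  # nonzero iff b0, b1, b2 not collinear
    # inverse of the matrix with columns u, v
    inv = [[vy / det, -vx / det], [-uy / det, ux / det]]
    # linear part M = [target columns (0,1), (1,1)] * inv
    m00, m01 = inv[1][0], inv[1][1]
    m10, m11 = inv[0][0] + inv[1][0], inv[0][1] + inv[1][1]
    # translation chosen so that b0 maps to the origin
    tx = -(m00 * b0[0] + m01 * b0[1])
    ty = -(m10 * b0[0] + m11 * b0[1])
    return (m00 * b3[0] + m01 * b3[1] + tx,
            m10 * b3[0] + m11 * b3[1] + ty)
```

Since the map is affine, applying any affine transformation to all four control points leaves the canonical fourth point (and hence the loop/cusp/inflection classification) unchanged.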
Blending Parametric Surfaces
A blending surface is a surface that smoothly connects two given surfaces along two arbitrary curves, one on each surface. This is particularly useful in the modeling operations of filleting a sharp edge between joining surfaces or connecting disjoint surfaces. In this paper we derive a new formulation for representing surfaces that are blends of parametric surfaces. The formulation has the advantage over the traditional rational polynomial approach that there are no gaps in point and normal values between the blending surface and the base surfaces. Shape control parameters that control the cross-sectional shape of the blending surface are also available. In addition, the base surfaces are not restricted to any particular type of surface representation as long as they are parametrically defined and have a well-defined and continuous normal vector at each point. The scheme is extensible to higher orders of geometric continuity, although we concentrate on G(1).
Automatic Parsing of Degenerate Quadric-Surface Intersections
In general, two quadric surfaces intersect in a nonsingular quartic space curve. Under special circumstances, however, this intersection may "degenerate" into a quartic with a double point, or a composite of lines, conics, and twisted cubics whose degrees, counted over the complex projective domain, sum to four. Such degenerate forms are important since they occur with surprising frequency in practice and, unlike the generic case, they admit rational parameterizations. Invoking concepts from classical algebraic geometry, we formulate the condition for a degenerate intersection in terms of the vanishing of a polynomial expression in the quadric coefficients. When this is satisfied, we apply a multivariate polynomial factorization algorithm to the projecting cone of the intersection curve. Factors of this cone which correspond to intersection components "at infinity" may be removed a priori. A careful examination of the remaining cone factors then facilitates the identification and parameterization of the various real, affine intersection elements that may arise: isolated points, lines, conics, cubics, and singular quartics. The procedure is essentially automatic (avoiding the tedium of case-by-case analyses), encompasses the full range of quadric forms, and is amenable to implementation in exact (symbolic) arithmetic.
A Multisided Generalization of Bezier Surfaces
In this paper we introduce a class of surface patch representations, called S-patches, that unify and generalize triangular and tensor product Bezier surfaces by allowing patches to be defined over any convex polygonal domain; hence, S-patches may have any number of boundary curves. Other properties of S-patches are geometrically meaningful control points, separate control over positions and derivatives along boundary curves, and a geometric construction algorithm based on de Casteljau's algorithm. Of special interest are the regular S-patches, that is, S-patches defined on regular domain polygons. Also presented is an algorithm for smoothly joining together these surfaces with [C.sup.k] continuity.
A Round Trip to B-Splines via de Casteljau
B-splines are constructed from Bezier curves solely using de Casteljau's construction. Divided differences are not used, nor is Mansfield's recurrence formula presupposed. Yet it is shown how to differentiate, subdivide, and evaluate a B-spline. These results are derived from the corresponding techniques for Bezier curves.
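De Casteljau's construction is short enough to state directly. The Python sketch below (an illustration, not the paper's derivation) evaluates a Bezier curve by repeated linear interpolation; the intermediate points it records along the way are exactly the control points of the two halves produced by subdividing the curve at t.

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation of the control values.

    Returns (value, left, right), where `left` and `right` are the
    control values of the two subdivided halves of the curve at t.
    """
    left, right = [points[0]], [points[-1]]
    while len(points) > 1:
        # one de Casteljau step: interpolate each adjacent pair
        points = [(1 - t) * a + t * b for a, b in zip(points, points[1:])]
        left.append(points[0])     # first point of each level -> left half
        right.append(points[-1])   # last point of each level -> right half
    return points[0], left, right[::-1]

# Quadratic with control values 0, 0, 1 is B(t) = t^2, so B(0.5) = 0.25
value, left, right = de_casteljau([0.0, 0.0, 1.0], 0.5)
```

The same triangular scheme, applied per coordinate, works for planar or space curves, and subdividing via `left` and `right` is the basis for the B-spline results the abstract describes.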
ACM Transactions on Mathematical Software
The Influence of Relaxed Supernode Partitions on the Multifrontal Method
In this paper we present an algorithm for partitioning the nodes of a graph into supernodes, which improves the performance of the multifrontal method for the factorization of large, sparse matrices on vector computers. This new algorithm first partitions the graph into fundamental supernodes. Next, using a specified relaxation parameter, the supernodes are coalesced in a careful manner to create a coarser supernode partition. Using this coarser partition in the factorization generally introduces logically zero entries into the factor. This is accompanied by a decrease in the amount of sparse vector computations and data movement and an increase in the number of dense vector operations. The amount of storage required for the factor is generally increased by a small amount. On a collection of moderately sized 3-D structures matrices, speedups of 3 to 20 percent on the Cray X-MP are observed over the fundamental supernode partition, which allows no logically zero entries in the factor. Using this relaxed supernode partition, the multifrontal method now factorizes the extremely sparse electric power matrices faster than the general sparse algorithm. In addition, there is potential for considerably reducing the communication requirements for an implementation of the multifrontal method on a local memory multiprocessor.
The Multifrontal Method and Paging in Sparse Cholesky
In this paper, we show that the multifrontal method can have significant advantage over the conventional sparse column-Cholesky scheme on a paged virtual memory system. A more than tenfold reduction in paging activities can be achieved, which saves as much as 20 percent in factorization time. We also introduce a hybrid sparse factorization method, which uses a mixture of column-Cholesky and submatrix-Cholesky operations. By switching to the use of frontal matrices from column-Cholesky operations at appropriate columns, we demonstrate that the proposed hybrid scheme has an advantage over the sparse column-Cholesky method in reducing paging activities and over the multifrontal method in its adaptability to the amount of available working storage.
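For readers unfamiliar with the column-Cholesky operations the two abstracts above contrast with the multifrontal method, here is a dense left-looking column-Cholesky sketch in Python (an illustration only; the papers' concern is the sparse case, where most of these entries are zero and the access pattern drives paging behavior). It computes one column of the factor at a time via the classic "cmod" and "cdiv" steps.

```python
import math

def column_cholesky(A):
    """Dense left-looking column-Cholesky: return lower-triangular L
    with A = L * L^T, computing one column of L at a time.
    Assumes A is symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # cmod(j, k): subtract the contribution of every prior column k
        col = [A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
               for i in range(j, n)]
        # cdiv(j): scale column j by the square root of its diagonal
        d = math.sqrt(col[0])
        for i in range(j, n):
            L[i][j] = col[i - j] / d
    return L

L = column_cholesky([[4.0, 2.0], [2.0, 5.0]])  # factor of a small SPD matrix
```

The multifrontal method reorganizes exactly this arithmetic into dense frontal matrices processed along an elimination tree, which is what improves vectorization and, as the second paper shows, paging locality.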
A Comparison of Adaptive Refinement Techniques for Elliptic Problems
Adaptive refinement has proved to be a useful tool for reducing the size of the linear system of equations obtained by discretizing partial differential equations. We consider techniques for the adaptive refinement of triangulations used with the finite element method with piecewise linear functions. Several such techniques that differ mainly in the method for dividing triangles and the method for indicating which triangles have the largest error have been developed. We describe four methods for dividing triangles and eight methods for indicating errors. Angle bounds for the triangle division methods are compared. All combinations of triangle divisions and error indicators are compared in a numerical experiment using a population of eight test problems with a variety of difficulties (peaks, boundary layers, singularities, etc.). The comparison is based on the L-infinity norm of the error versus the number of vertices. It is found that all of the methods produce asymptotically optimal grids and that the number of vertices needed for a given error rarely differs by more than a factor of two.
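One of the simplest triangle division methods of the kind surveyed is longest-edge bisection, sketched below in Python (an illustration, not the paper's code): split a triangle at the midpoint of its longest edge, a standard way of keeping the minimum angle bounded away from zero under repeated refinement.

```python
def bisect_longest_edge(tri):
    """Split a triangle, given as three (x, y) vertices, at the midpoint
    of its longest edge; return the two child triangles."""
    def sq_len(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # index of the vertex opposite the longest edge
    i = max(range(3),
            key=lambda i: sq_len(tri[(i + 1) % 3], tri[(i + 2) % 3]))
    a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
    m = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)  # midpoint of longest edge
    return (a, b, m), (a, m, c)

t1, t2 = bisect_longest_edge(((0, 0), (1, 0), (0, 1)))
```

An adaptive solver would apply such a division only to the triangles flagged by an error indicator, which is precisely the pairing of division method and indicator that the paper's experiments compare.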
ODRPACK: Software for Weighted Orthogonal Distance Regression
In this paper, we describe ODRPACK, a software package for the weighted orthogonal distance regression problem. This software is an implementation of an algorithm for finding the parameters that minimize the sum of the squared weighted orthogonal distances from a set of observations to a curve or surface determined by the parameters. It can also be used to solve the ordinary nonlinear least squares problem. The weighted orthogonal distance regression procedure has applications in curve and surface fitting and in measurement error models in statistics. The algorithm implemented is an efficient and stable trust region (Levenberg-Marquardt) procedure that exploits the structure of the problem so that the computational cost per iteration is equal to that for the same type of algorithm applied to the ordinary nonlinear least squares problem. The package allows a general weighting scheme, provides for finite difference derivatives, and contains extensive error checking and report generating facilities.
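ODRPACK handles the general weighted nonlinear case; for intuition about what "orthogonal distance" buys, the unweighted linear special case has a closed form. The sketch below (an illustration, not part of the package) fits y = b*x + a by minimizing the sum of squared perpendicular distances, i.e. the first principal axis of the centered data, rather than the vertical distances of ordinary least squares.

```python
import math

def orthogonal_line_fit(xs, ys):
    """Fit y = b*x + a minimizing squared orthogonal distances
    (unweighted linear orthogonal distance regression).
    Assumes the data are not perfectly axis-aligned (sxy != 0)."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xm) ** 2 for x in xs)
    syy = sum((y - ym) ** 2 for y in ys)
    sxy = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    # slope of the first principal axis of the centered scatter
    b = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return b, ym - b * xm  # slope, intercept

b, a = orthogonal_line_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

Unlike ordinary least squares, this fit treats errors in x and y symmetrically, which is why orthogonal distance regression suits the measurement error models the abstract mentions.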
[C.sup.1] Surface Interpolation
A method of bivariate interpolation and smooth surface fitting is developed for rapidly varying z values given at points irregularly distributed in the x-y plane. The surface is constructed by means of [C.sup.1] triangular interpolants defined on a triangulation of the convex hull of the point set. The needed partial derivative values are estimated by a new method based on a minimization criterion making use of a tension parameter. This method, which is shown to be efficient and accurate, gives the user an interactive tool to control the behavior of the interpolating surface and to dampen unwanted oscillations near steep gradients. An algorithm implementing the proposed method is described.
Indefinite Integration with Validation
We present an overview of two approaches to validated one-dimensional indefinite integration. The first approach is to find an inclusion of the integrand, then integrate this inclusion to obtain an inclusion of the indefinite integral. Inclusions for the integrand may be obtained from Taylor polynomials, Tschebyscheff polynomials, or other approximating forms which have a known error term. The second approach finds an inclusion of the indefinite integral directly as a linear combination of function evaluations plus an interval-valued error term. This requires a self-validating form of a quadrature formula such as Gaussian quadrature. In either approach, composite formulae improve the accuracy of the inclusion.
The result of the validated indefinite integration is an algorithm which may be represented as a character string, a subroutine in a high-level programming language such as Pascal-SC or Fortran, or as a collection of data. An example is given showing the application of validated indefinite integration in constructing a validated inclusion of the error function, erf(x).
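The first approach above can be illustrated with the simplest possible inclusion (a sketch, not the paper's algorithm): on each subinterval, enclose the range of the integrand by its endpoint values, assuming the integrand is monotone on each subinterval, then sum the per-interval bounds. The result is a composite interval guaranteed, under that assumption, to contain the true integral, with width shrinking as the number of subintervals grows.

```python
def enclose_integral(f, a, b, n):
    """Return an enclosure [lo, hi] of the integral of f over [a, b]
    using n subintervals. Assumes f is monotone on each subinterval,
    so its range there is bracketed by the two endpoint values."""
    h = (b - a) / n
    lo = hi = 0.0
    for i in range(n):
        fl, fr = f(a + i * h), f(a + (i + 1) * h)
        lo += min(fl, fr) * h   # lower bound on this subinterval's area
        hi += max(fl, fr) * h   # upper bound on this subinterval's area
    return lo, hi

lo, hi = enclose_integral(lambda x: x * x, 0.0, 1.0, 64)  # true value 1/3
```

A genuinely validated implementation would replace the endpoint bracketing with interval evaluation of a Taylor or Tschebyscheff form, as the abstract describes, and use directed rounding so that the bounds remain rigorous in floating point.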
BTPEC: Sampling from the Binomial Distribution
The FORTRAN implementation of an exact, uniformly fast algorithm for generating binomial random variables is presented. The algorithm is numerically stable and is faster than other published algorithms. The code uses only standard FORTRAN statements and is portable to most computers; it has been tested on the IBM 370, 3033, 4381, DEC VAX 11/780, SUN 3/50, CDC 6500-6600, ENCORE Multimax, and Apple Macintosh Plus. A driver program is also included.
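As a point of contrast with the published algorithm (this sketch is an illustration, not BTPEC itself), the textbook inversion method that fast binomial generators are measured against can be written in a few lines: walk the CDF until it exceeds a uniform variate. Its expected running time grows with n*p, which is exactly the regime where uniformly fast algorithms pay off.

```python
def binomial_inversion(n, p, u):
    """Map a uniform variate u in [0, 1) to a Binomial(n, p) variate by
    sequential search of the CDF, using the pmf recurrence
    P(x) = P(x-1) * (n - x + 1)/x * p/(1 - p)."""
    pmf = (1.0 - p) ** n       # P(X = 0)
    x, cdf = 0, pmf
    while u >= cdf and x < n:
        x += 1
        pmf *= (n - x + 1) / x * p / (1.0 - p)
        cdf += pmf
    return x

# With n = 2, p = 0.5: P(0) = 0.25, P(1) = 0.5, P(2) = 0.25
samples = [binomial_inversion(2, 0.5, u) for u in (0.1, 0.5, 0.9)]
```

Because the loop runs on average about n*p + 1 times, inversion is fine for small means but degrades for large ones, motivating constant-expected-time generators such as the one in this paper.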