
Toward a complete transformational toolkit for compilers.

PIM is an equational logic designed to function as a "transformational toolkit" for compilers and other programming tools that analyze and manipulate imperative languages. It has been applied to such problems as program slicing, symbolic evaluation, conditional constant propagation, and dependence analysis. PIM consists of the untyped lambda calculus extended with an algebraic data type that characterizes the behavior of lazy stores and generalized conditionals. A graph form of PIM terms is by design closely related to several intermediate representations commonly used in optimizing compilers. In this article, we show that PIM's core algebraic component, [PIM.sub.t], possesses a complete equational axiomatization (under the assumption of certain reasonable restrictions on term formation). This has the practical consequence of guaranteeing that every semantics-preserving transformation on a program representable in [PIM.sub.t] can be derived by application of [PIM.sub.t] rules. We systematically derive the complete [PIM.sub.t] logic as the culmination of a sequence of increasingly powerful equational systems starting from a straightforward "interpreter" for closed [PIM.sub.t] terms. This work is an intermediate step in a larger program to develop a set of well-founded tools for manipulation of imperative programs by compilers and other systems that perform program analysis.

Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors -- code generation; compilers; optimization; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages -- algebraic approaches to semantics

General Terms: Languages, Performance, Theory

Additional Key Words and Phrases: Compiler intermediate representation, completion, imperative language, partial evaluation, program transformation, term-rewriting


1.1 Overview

Many intermediate representations (IRs) have been proposed as the basis for algorithms used in optimizing compilers for imperative languages. Typically, such IRs are intended to distill a source program down to certain essential semantic components, either to provide a framework for program transformations, to serve as a starting point for analysis algorithms, or to render some program property self-evident from the IR's structure. In studying the body of work that describes or exploits such IRs, it is often difficult to clearly differentiate the role of the IR from the role of the algorithm that uses it, to contrast the expressive or analytic capabilities of one IR with another, or to understand the relationship between a program's semantics and the IR manipulations that (implicitly) reason about it.

This article is an effort to contribute to a better understanding of these issues by presenting a formal account of properties of an equational logic called PIM [Field 1992]. PIM consists of a term language and an associated set of equations which together are designed to function as a "transformational toolkit" for compilers and other programming tools that analyze and manipulate Algol-class imperative languages. PIM is a combination of the untyped lambda calculus and an algebraic data type, [PIM.sub.t], that characterizes the behavior of lazy stores [Cartwright and Felleisen 1989] and generalized conditionals. [PIM.sub.t] is parameterized in such a way that a variety of base data types (e.g., integers, booleans, pairs, lists, streams) and "memory models" (e.g., byte addressed or "symbolically" addressed by variable name) may be accommodated in a manner that is orthogonal to the remainder of PIM. Lambda terms are used to model procedural constructs and loops, but are not needed to model other types of control flow. Extended with appropriate memory models and operations on base types, we believe that the constructs of PIM are sufficient to model the principal dynamic semantic elements of Algol-class languages.

PIM is intended to represent a family of related IRs, each a class of graph representations of PIM terms that is useful for a different application. Several members of this family are by design closely related to existing IRs, particularly those based on the Program Dependence Graph (PDG) or Static Single Assignment (SSA) form [Ballance et al. 1990; Click 1995; Cytron et al. 1991; Ferrante et al. 1987; Weise et al. 1994; Yang et al. 1990]. Our long-term goal is to show that a large class of program analysis algorithms can be recast into a combination of semantics-preserving transformations and abstract interpretations on PIM graphs.

As a first step toward that goal, we restrict our attention in this article to studying the formal properties of various equational systems for [PIM.sub.t]. Our principal result is exhibiting a complete equational axiomatization of [PIM.sub.t]'s semantics, under the assumption of certain reasonable restrictions on term formation. Formally, we show that there exists an [Omega]-complete equational axiomatization of [PIM.sub.t]'s final algebra semantics. We systematically derive the complete logic for [PIM.sub.t] as the culmination of a sequence of increasingly powerful equational systems starting from a straightforward "interpreter" for [PIM.sub.t]. We also derive several confluent and terminating rewriting systems by orienting instances of rules from the equational systems. Such systems can be used not only as partial decision procedures for [PIM.sub.t], but also as algorithms for building various classes of IRs based on different [PIM.sub.t] normal forms.

Obtaining a positive answer to the completeness question, albeit restricted to the first-order core of PIM, gives us some confidence that our transformational toolkit has an adequate supply of tools. In Field [1992], it was shown that many aspects of the construction and manipulation of compiler intermediate representations could be expressed by partially evaluating PIM graphs using rewriting rules formed from oriented instances of PIM equations. Until now, however, we could not be certain that all the equations required to manipulate [PIM.sub.t] terms were present (with or without restrictions on term formation). Obtaining a complete equational axiomatization for [PIM.sub.t] is also an important prerequisite to designing a decision procedure.

We are aware of three other completeness results for logics for imperative languages, all of which, like our result, are restricted to first-order subsystems: Hoare et al. [1987] present (implicitly) a completeness result for an equational logic that PIM resembles in some respects. However, their language cannot represent abstract (i.e., unknown) stores or addresses, a property we feel is important for representing transformations and analyses commonly required in optimizing compilers. For example, computations on addresses are needed for representing pointer manipulations; abstract stores are needed, e.g., to reason about calls to procedures whose effect on the store is unknown. Mason and Talcott [1989] show that their logic for reasoning about equivalence in a Lisp-like language is complete. Unlike PIM, their logic is a sequent calculus. In addition, their logic axiomatizes a strict store semantics whose properties differ from PIM's lazy store. Finally, Selke [1989b] sketches a completeness result similar to that of Mason and Talcott for a sequent calculus for reasoning about PDGs. As with the language of Hoare et al., the only unknowns about which Selke's system can reason are base values.

1.2 Organization

The remainder of this article proceeds as follows. In Section 2, we discuss PIM's relation to other work. In Section 3, we introduce the system [PIM.sup.0.sub.t], which axiomatizes the operational behavior of PIM's first-order core. [PIM.sup.0.sub.t] has the property that orienting its equations from left to right yields an interpreter for closed programs producing observable "base" values as results. We also discuss the behavior of [PIM.sub.t]'s functions using examples written in a simple programming language and focus particularly on describing PIM's lazy-store semantics. Section 4 presents several detailed examples of equivalence reasoning in PIM and gives an overview of the relative power of various equation systems we develop in the sequel. In Section 5, we briefly discuss the higher-order system derived from embedding [PIM.sub.t] in a lambda calculus. While the higher-order system is not the focus of this article, this section gives a feel for the use of PIM in representing real programming languages. Section 6 discusses the relationship between [Omega]-complete equational systems and partial evaluation. In Section 7, we present some basic definitions and illustrate the process by which we develop a complete logic using an example involving a stack data type. Next, in Section 8, we develop an enrichment of [PIM.sup.0.sub.t] called [PIM.sup.+.sub.t] that characterizes [PIM.sub.t]'s final-algebra semantics, i.e., one that equates terms that behave the same in all contexts that generate observable values. Such a semantics is natural for developing semantics-preserving transformations on program fragments. In Section 9, we produce our main result: an [Omega]-complete axiomatization of [PIM.sup.+.sub.t], called [PIM.sup.=.sub.t]. As before, [PIM.sup.=.sub.t] is obtained simply by enriching [PIM.sup.+.sub.t] with additional equations.
The resulting system equates all open [PIM.sub.t] programs that behave the same under every substitution for their free variables and thus yields the complete transformational toolkit we seek. In Section 10, we show how rewriting systems formed by orienting subsystems of [PIM.sup.=.sub.t] can be used to prove program equivalences and produce normal forms with various interesting structural properties. We then develop several rewriting systems based on [PIM.sup.=.sub.t] and prove that they are confluent and terminating. Finally, Section 11 covers possible extensions and open problems.


There has been considerable previous work on calculi and logics of program equivalence. In addition, the program analysis literature contains numerous accounts of intermediate representations and algorithms intended to accommodate efficient and accurate analysis of imperative programs. Drawing to a considerable extent on ideas from both areas, PIM was designed to bridge the gap between the latter area's practical aims and the former's logical rigor. In this section, we review previous work and its relation to PIM.

A graph form of PIM is by design closely related to intermediate representations used in optimizing compilers, such as the PDG [Ferrante et al. 1987], SSA form [Cytron et al. 1991], GSA form [Ballance et al. 1990], the PRG [Yang et al. 1990], the VDG [Weise et al. 1994], and the representation of Click [1995]. PIM was inspired in particular by the work of Cartwright and Felleisen [1989], who give a foundational account of PDG construction in a denotational (i.e., model-theoretic) setting. Our goal is to provide an operational realization of an extended version of Cartwright and Felleisen's semantics, one that can be used not only for reasoning about equivalences, but also for carrying out partial evaluation and implementing certain classes of program analysis.(1)

Selke [1989a] and Ramalingam and Reps [1989] (the latter building on earlier work of Horwitz et al. [1988]) provide semantics for variants of PDGs. Selke's operational semantics combines graph rewriting(2) and propagation of assigned values through graph edges. Ramalingam and Reps give a mathematical semantics for PRGs in terms of mutually recursive equations on streams of values. Both approaches view a PDG as a program in itself, which is then evaluated via the semantics. By contrast, PIM can be used both to derive various PDG-like IRs (by transforming a fairly conventional store-based translation of the program) and to evaluate the resulting IR.

For structured programs, most of the nontrivial steps required to translate a program to the PIM analogue of one of the IRs mentioned above can be carried out as source-to-source transformations in PIM itself, once an initial PIM graph has been constructed from the program using a simple syntax-directed translation. For example, when constructing the PIM equivalent of GSA form [Ballance et al. 1990], various equational transformations on PIM graphs correspond to computation of reaching definitions, building of data and control dependence edges, placement of so-called [Phi] nodes, and determination of "gating" predicates.

For unstructured programs, the PIM analogue of a traditional IR may be constructed either by first restructuring the program (e.g., using a method such as that of Ammarguellat [1992]) or by using a continuation-passing transformation (e.g., one similar to that used by Weise et al. [1994]) in which unstructured transfers of control such as gotos or breaks are treated as function calls. The translation process is then completed by application of the same transformations used for structured programs. We prefer the restructuring approach to the continuation-passing transformation, since it obviates the need to use higher-order "interprocedural" methods when analyzing first-order programs.

There has been considerable work on calculi for modeling imperative features in applicative languages [Felleisen and Friedman 1989; Felleisen and Hieb 1992; Odersky et al. 1993; Swarup et al. 1991]. While the details of each of these approaches differ considerably, their imperative features are inextricably bound with their applicative constructs. Felleisen and Friedman [1989], Felleisen and Hieb [1992], and Swarup et al. [1991] use lambda terms and [Beta]-reduction to sequence assignments; Odersky et al. [1993] use lambda terms in a "monadic" style to perform computations on dereferenced mutable variables. By contrast, PIM does not rely on the use of lambda expressions or monads to sequence assignments. This permits the use of stronger axioms for reasoning about store-specific sequencing, and it makes it possible to study the properties of the imperative features of the language completely independently from the functional ones.

PIM is an equational logic, rather than, for example, a sequent calculus such as that of Mason and Talcott [1989] or Selke [1989a; 1989b], the latter of which, like PIM, is based on the work of Cartwright and Felleisen [1989]. While sequent calculi are natural for carrying out proofs of program equivalence (especially those performed by hand), equational logics have the useful property that every intermediate state of a proof is itself a program. Equational logics are also particularly amenable to mechanical implementation, using such techniques as term graph rewriting [Barendregt et al. 1987], Knuth-Bendix completion, and (equational) unification or narrowing. The latter two properties make it possible to use PIM not only to prove equivalences, but also to model the "standard" operational semantics of a language (using a terminating and confluent rewriting system on ground terms) or to serve as a "semantics of partial evaluation" (by augmenting the operational semantics by oriented instances of the full logic).

Yang et al. [1989] present an algorithm that uses structural properties of the PRG IR to determine equivalence among parts of two variants of the same program which are to be combined by a program integration tool. However, even for the restricted class of programs that can be modeled by [PIM.sub.t], not all valid equivalences among programs represented by PRGs can be proved by structural equivalence.

The logics of Hoare et al. [1987] (which PIM resembles most closely in many respects) and Boehm [1985] manipulate languages that are syntactically similar to "real" programming languages, in the sense that operations that affect the store (i.e., side-effects) are intermixed in the term structure with pure operations on values. By contrast, PIM contains an explicitly threaded store whose operations are distinguished syntactically from operations on values. While this separation of concerns renders PIM wholly unsuitable as a programming language, we believe that this characteristic makes it easier to represent languages in which expressions with and without side-effects are intermixed in complicated ways (e.g., C). This also means that it is usually straightforward to extend PIM with new operations and axioms on base values, or change the "memory model" used to represent addresses, since we need have no concern about how these operations are interrelated. Finally, PIM differs from Hoare et al. [1987] and Boehm [1985] in allowing for computed addresses (which arise from pointers and arrays) and providing operations on stores (e.g., store concatenation), the latter of which, while not present in most real programming languages, are extremely useful for modeling various natural transformations on IRs.

For further details on the correspondence between PIM and traditional IRs, see Field [1992]. For an example of a practical application of PIM, see Field et al. [1995], which describes a novel algorithm for program slicing.


3.1 [Mu] C

Consider the program fragments [P.sub.1]-[P.sub.5] depicted in Figure 1. They are written in a C language subset that we will call [Mu] C, whose complete syntax is given later in Figure 25 on page 678. [Mu] C has standard C syntax and semantics, with the following exceptions:

-- Metavariables, such as ?P or ?X, are used to represent unknown values. Such a variable may be thought of as a simple form of program input (where each occurrence of a metavariable represents the same input value) or as an (immutable) formal parameter of a function.

-- All data in [Mu] C are assumed to be integers or pointers (to integer variables or to other pointers).

-- It is assumed that no address arithmetic is used.(3)


To make [Mu] C examples such as those in Figure 1 easier to follow, we will add type "declarations" within comment delimiters for the variables and metavariables used therein; however, these declarations are not part of the syntax of [Mu] C. We will use "const" in the declarations of metavariables to emphasize that their values are immutable.

3.2 PIM Terms and Graphs

A directed term graph [Barendregt et al. 1987] form [S.sub.P1] of the PIM representation of [P.sub.1] is depicted in Figure 2. [S.sub.P1] is generated by a simple syntax-directed translation, details of which are given in Appendix A. We will discuss [S.sub.P1] in detail in Section 3.4.


A term graph may be viewed as a conventional tree-structured term by traversing it from its root and replacing all shared subgraphs by separate copies of their term representations. Cycles are also admissible in term graphs, and they correspond to infinite terms [Kennaway et al. 1994]. Such cycles arise naturally in conjunction with loops and recursive functions. Shared PIM subgraphs are constructed systematically as a consequence of the translation process. In addition, when term-rewriting is extended in a natural way to term graphs [Barendregt et al. 1987], subgraph sharing can be created as a side-effect of the rewriting process.

The properties of the equational systems we consider in this article are completely independent of whether a PIM term is represented by a tree or a graph. However, when used in implementations, graphs are preferred since the graph representation of a program is invariably much more compact than the tree representation (consider, for example, the size of the tree form of graph [S.sub.P1] in Figure 2). It is also frequently easier to discern the correspondence between related [Mu] C constructs and PIM structures when graphs are used in examples; this correspondence will be emphasized by depicting parent nodes in PIM terms below their children. The "upside down" graph orientation also makes the similarity between PIM graphs and traditional IRs more apparent.
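The graph-to-tree expansion described above can be made concrete with a minimal Python sketch. The dictionary encoding, node names, and operator spellings below are our own illustrative assumptions, not PIM's actual graph representation:

```python
# Minimal sketch: a term graph maps node ids to (operator, children);
# tree expansion replaces shared subgraphs with separate copies.
# The encoding here is illustrative only, not PIM's.

def to_tree(graph, node):
    """Expand an acyclic term graph into a nested-tuple tree."""
    op, children = graph[node]
    return (op,) + tuple(to_tree(graph, c) for c in children)

# A store composition whose two occurrences of substore "s1" are shared:
g = {
    "root": ("o_s", ["s1", "s1"]),
    "s1":   ("cell", ["a", "v"]),
    "a":    ("addr(x)", []),
    "v":    ("17", []),
}
tree = to_tree(g, "root")
# In the tree form, the shared node "s1" appears as two identical copies.
```

Note that a cyclic graph would make `to_tree` recurse without bound, mirroring the correspondence between graph cycles and infinite terms noted above.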

3.3 [PIM.sub.t]: Core PIM

In this article, we focus on the first-order core subsystem of PIM, denoted by [PIM.sub.t]. The full version of PIM, discussed in Field [1992] and slightly revised in Field et al. [1995], augments [PIM.sub.t] with lambda expressions, an induction rule, and certain additional higher-order merge distribution rules that in the more general form given in Field et al. [1995] propagate conditional "contexts" inside expressions computing base values or addresses. PIM's higher-order constructs allow loops and procedures to be modeled in a straightforward way; we will briefly review these applications in Section 5. In addition, Field et al. [1995] show how specialized instances of PIM's induction rule can be used to facilitate loop-related transformations required to compute various kinds of program slices.

Without the higher-order extensions, [PIM.sub.t] is not Turing-complete. However, we believe that the constructs in [PIM.sub.t] alone are sufficient to model the principal control- and data-flow aspects of loop- and function-free programs in Algol-class languages. As we shall see, these constructs have a nontrivial equational axiomatization. Understanding their properties is a prerequisite to studying the properties of higher-order variants.

The signature of [PIM.sub.t] terms is given in Figure 3; we will describe the intended interpretation of each of the function symbols in the signature in Section 3.4 (this signature differs slightly from the corresponding signature in Field [1992]; the differences principally relate to a simplification in the structure of merge expressions). The sort structure of terms restricts the form of addresses and predicates in such a way that neither may be the result of an arbitrary PIM computation. Although our completeness result depends on this restriction, the equations in the complete equational system [PIM.sup.=.sub.t] remain valid even when the term formation restrictions are dropped.(4) This means, for example, that we can still use our system to reason about situations in which addresses or predicates are generated by arbitrary computations (or situations in which the values of such expressions are entirely unknown), even though there may be additional valid equations (arising as a consequence of the structure of those computations) that we will not necessarily be able to prove.


Figure 4 depicts the equations(5) of the system [PIM.sup.0.sub.t]. [PIM.sup.0.sub.t] is intended to function as an operational semantics for [PIM.sub.t], in the sense that when its equations are oriented from left to right, they form a rewriting system that is confluent on ground terms of sort V, the sort of observable "base" values. [PIM.sup.0.sub.t] also serves to define the initial algebra semantics for [PIM.sub.t].


PIM can be viewed as a parameterized data type with formal sorts V and A. These sorts are intended to be instantiated as appropriate to model the data manipulated by a given programming language. Figure 5 depicts the signature of a small set of functions and auxiliary sorts used to actualize the parameter sorts of PIM to model the integer data manipulated by [Mu]C programs. In this instantiation of PIM, we allow A and B to be subsets of V. However, our completeness results hold only under the more restrictive assumption that these sets are mutually disjoint.

From the point of view of the formal results presented in the sequel, the additional functions in Figure 5 are treated as uninterpreted "inert" constructors. In practice, of course, sufficient additional equations would be added to axiomatize the properties of the additional language-specific datatypes (e.g., pairs, lists, records, or streams). In addition, the representation of addresses would have to be extended to allow for such constructs as heap-allocated data (which require a function that generates new addresses(6)) and arrays (which require that the address of each array element be distinguished from the address of other elements and the address of the base of the array).


The [PIM.sub.t] signature given in Figure 3 and the equation systems we will present in the sequel are sufficiently simple in structure to be implementable with minimal difficulty by many existing equational rewriting systems and theorem provers (e.g., ASF+SDF [van Deursen et al. 1996] and TIP [Fraus 1994]). Other than the possible need to convert infix operators to prefix form, the only other requirements that [PIM.sub.t] imposes are the ability to encode an infinite set of addresses and to implement Eqs. (A1) and (A2). This can easily be done using a finite number of auxiliary symbols and equations, e.g., by encoding addresses using strings over a fixed alphabet.

3.4 PIM's Parts

In this section, we outline the behavior of PIM's functions and the equations of [PIM.sup.0.sub.t] using program [P.sub.1] and its PIM translation, [S.sub.P1], depicted in Figure 2.

3.4.1 Stores. The graph [S.sub.P1] is a PIM store structure,(7) an abstract representation of memory. [S.sub.P1] is constructed from the sequential composition of substores corresponding to the statements comprising [P.sub.1]. The operator "[o.sub.s]" is used to concatenate two stores. The subgraphs reachable from the boxes labeled [S.sub.1]-[S.sub.4] in [S.sub.P1] correspond to the four assignment statements in [P.sub.1].

The simplest form of store is a cell such as

[S.sub.1] [equivalent] {addr(p) [right arrow] [meta(P)]}.

A store cell of this form associates an address expression (here addr(p)) with a value (here meta(P), the translation of the [Mu]C metavariable ?P). The operator "[.]" encapsulates the value being stored in a merge structure; we will discuss such structures in greater detail below. Constant addresses such as addr(p) represent ordinary variables. More generally, address expressions may be used when addresses are computed, e.g., in pointer-valued expressions. [[Phi].sub.s] is used to denote the null store. Equations (L1) and (L2) of [PIM.sup.0.sub.t] indicate that null stores disappear when composed with other stores. Equation (L3) indicates that the store composition operator is associative.

Stores may be guarded, i.e., executed conditionally. The subgraph labeled [S.sub.6] in Figure 2 is such a store and corresponds to the if statement in [P.sub.1]. The guard expression denoted by [V.sub.1] corresponds to the if's predicate expression. Consistent with standard C semantics, the guard [V.sub.1] tests whether the value of the variable p is nonzero. When guarded by the true predicate, T, a store structure evaluates to itself. If a store structure is guarded by the false predicate, F, it evaluates to the null store structure. These behaviors are axiomatized by Eqs. (L5) and (L6).
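The behavior of Eqs. (L1)-(L6) can be sketched on a toy encoding. Below, a store is a Python list of cells (guard, address, value); this encoding, and the function names, are our own illustrative assumptions, not the paper's:

```python
# Toy model of Eqs. (L1)-(L6). A store is a list of cells
# (guard, address, value); True/False guards are known predicates,
# anything else is symbolic. This encoding is ours, not the paper's.

NULL_STORE = []   # plays the role of the null store; (L1), (L2) hold trivially

def compose(s1, s2):
    """Store concatenation o_s; associativity (L3) is list-append associativity."""
    return s1 + s2

def guard(p, store):
    """Guarded store p >> s: (L5) gives T >> s = s; (L6) gives F >> s = null."""
    if p is True:
        return store
    if p is False:
        return NULL_STORE
    # Symbolic predicate: push the guard onto each cell (cf. rule (S5)).
    return [((p, g) if g is not True else p, a, v) for (g, a, v) in store]

s = compose(guard(True, [(True, "x", 17)]),
            guard(False, [(True, "x", 42)]))
# The F-guarded assignment evaluates to the null store; only x := 17 remains.
```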

3.4.2 Lazy Store Retrieval. A reference to the value of a program variable is modeled in PIM in the usual manner by retrieving the variable's value from the store using the variable's address as a lookup key. However, the details of the retrieval operation are somewhat unusual, since PIM axiomatizes a variant of Cartwright and Felleisen's [1989] lazy store semantics.

In a traditional (strict) store semantics, the address to which an assignment is made and the value being assigned are evaluated at the point of assignment. Cartwright and Felleisen observed that it is possible to define an alternative semantics in which the evaluation of an assignment's address, its value, or both are deferred until a value in the store is retrieved by a reference to an address.

Cartwright and Felleisen's aim in defining the notion of a lazy store was to provide a (denotational) semantic foundation for program manipulations performed in building Program Dependence Graphs (PDGs) [Ferrante et al. 1987]. However, they also observed that the termination behavior of lazy stores differs from that of strict stores: an assignment that stores a divergent value which is never subsequently retrieved does not cause nontermination in a lazy store semantics, whereas the program would diverge in a strict store semantics. The possible introduction of "gratuitous termination" under a lazy store semantics seems to be of little concern in practice, however, since it is similar to other commonly used transformations that also have the potential to remove sources of divergence (e.g., loop-invariant code hoisting followed by elimination of "empty" loops).
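The termination difference between strict and lazy stores can be illustrated with a toy Python model in which assigned values are thunks (zero-argument functions); the encoding is ours, chosen only to make laziness visible:

```python
# Toy illustration of the lazy-store observation: storing a divergent
# value is harmless if that value is never retrieved. Values are modeled
# as thunks; the encoding is our own, not the paper's.

def diverge():
    raise RuntimeError("simulated nontermination")

# Under a strict semantics the first assignment's value would be evaluated
# (and diverge) at the point of assignment; here it is stored unevaluated.
store = [("x", diverge), ("x", lambda: 17)]

def lookup(store, a):
    """Force only the most recently stored thunk for address a."""
    for addr, thunk in reversed(store):
        if addr == a:
            return thunk()   # earlier thunks are never forced
    return "?"

assert lookup(store, "x") == 17   # the divergent thunk is never called
```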

PIM's operational axiomatization of lazy stores decomposes store retrieval into two distinct steps: dereferencing (using the "@" operator) and selection (using the "!" operator). We see both operators in expressions of the form

(s @ a)!

in [S.sub.P1], where each such expression corresponds to a reference to the value of some variable in [P.sub.1].

An expression of the form s @ a produces an ordered list of all the values that are stored in s at address a prior to dereferencing. This retrieval behavior is codified by Eqs. (S1)-(S4), (A1), and (A2). The list of values produced by the dereferencing operation is termed a merge structure, since values stored at the same address are merged together.
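Continuing the toy encoding of stores as lists of (guard, address, value) cells (our own assumption, not the paper's notation), dereferencing can be sketched as:

```python
# Sketch of dereferencing (s @ a): collect, in store order, every cell
# at address a, yielding an ordered "merge structure" of guarded values.
# The list encoding is our own illustrative assumption.

def deref(store, a):
    """s @ a: the ordered list of (guard, value) pairs stored at address a."""
    return [(g, v) for (g, addr, v) in store if addr == a]

# Two definitions of x and one of y; both definitions of x reach a use of x:
store = [(True, "x", 17), (True, "y", 18), (True, "x", 99)]
m = deref(store, "x")   # oldest assignment first, most recent last
```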

3.4.3 Merge Structures: Sets of Reaching Definitions. The merge structures created by store dereferencing are best thought of as representations of sets of definitions (i.e., assignments) that reach a particular use of some address. For a given use of an address, the set of reaching definitions contains all those assigned values that could be retrieved at the point of use in some valid execution. Due to the existence of conditionals and computed addresses, the set of reaching definitions for a given use of an address cannot always be narrowed to a singleton in an open program (i.e., a program containing metavariables). A merge structure orders the reaching definition set it represents from "least recent" assignment to "most recent." When applied to a closed merge structure m, the selection operator "!" yields the rightmost element in the list represented by m, i.e., the most recently assigned value.

The simplest nonempty form of merge structure is a merge cell. The boxes labeled [M.sub.1], [M.sub.2], [M.sub.3], [M.sub.4], and [M.sub.6] in Figure 2 are all merge cells. As with store structures, nontrivial merge structures may be built by prepending guard expressions or by composing merge substructures using the merge composition operator, [o.sub.m]. [[Phi].sub.m] denotes the null merge structure. Some of the characteristics of merge structures are shared by store structures, as indicated by the "polymorphic" Eqs. (L1)-(L6). In the sequel, we will therefore often drop subscripts distinguishing related store and merge constructs when no confusion will arise. Nontrivial merge structures can be used to model various forms of conditional expressions; however, they arise most frequently as results of PIM simplifications of other types of expressions.

When the selection operator "!" is applied to a nonempty merge structure m, m must first be evaluated until it has the form

m' [o.sub.m] [v],

i.e., one in which an unguarded cell is rightmost. At this point, the entire expression m! evaluates to v. This behavior is axiomatized by Eqs. (M2) and (M3). Equation (M4) states that attempting to apply the selection operator to a null merge structure yields the special error value "?".
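The selection behavior of Eqs. (M2)-(M4) can be sketched on the same toy merge encoding of (guard, value) pairs; both the encoding and the error handling for open merges are our own illustrative choices:

```python
# Sketch of the selection operator '!' following Eqs. (M2)-(M4): scan
# right-to-left for the most recent cell whose guard is definitely true;
# a null merge structure yields the error value "?". Encoding is ours.

def select(merge):
    for g, v in reversed(merge):
        if g is True:        # rightmost unguarded cell: (M3)
            return v
        if g is not False:   # symbolic guard: the merge is not closed
            raise ValueError("cannot select from an open merge structure")
        # g is False: a dead definition, keep scanning left
    return "?"               # null merge structure: (M4)

assert select([(True, 17), (False, 99)]) == 17   # F-guarded cell is skipped
assert select([]) == "?"
```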


4.1 Example: Manipulating Lazy Stores

To get a feel for equational manipulations in PIM, consider the programs [R.sub.1] and [R.sub.2] depicted in Figure 6. In both examples, we will concentrate on the PIM representation of the final reference to the variable x. In the case of [R.sub.1], the PIM representation of the value of x (produced by the translation in Appendix A) is given by the expression

(([S.sub.1] [o.sub.s] [S.sub.2]) [o.sub.s] [S.sub.3]) @ addr(x)!


[S.sub.1] = F [much greater than.sub.s] {addr(x) [right arrow] [17]}

[S.sub.2] = T [much greater than.sub.s] {addr(y) [right arrow] [18]}

[S.sub.3] = T [much greater than.sub.s] {addr(x) [right arrow] [17]}.

For clarity, we have eliminated several null stores introduced by the translation process and have simplified the representation of the integers 0 and 1 used in the conditional statements to their boolean PIM normal forms, F and T, respectively.


It should be easy to see that PIM stores [S.sub.1], [S.sub.2], and [S.sub.3] represent the correspondingly labeled statements in [R.sub.1]. The PIM representation of the final reference to x first dereferences the composite store using address addr(x), then uses the selection operator "!" as discussed above to extract the most recent reaching definition of x.

Using the equations of [PIM.sup.0.sub.t] in Figure 4, we can simplify the portion of [R.sub.1] corresponding to the reference to x as follows (the specific equations of [PIM.sup.0.sub.t] used at each step are indicated at the right):


Note how the lazy store semantics implemented by operators "@" and "!" retrieves the value of the last assignment to x without simplifying any of the first two assignment statements at all.
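The evaluation just described can be mimicked by a small Python sketch in which a closed store is flattened to a list of (guard, address, value) cells; this encoding is assumed for illustration only:

```python
# Sketch of closed-store dereferencing: a store is a list of
# (guard, address, value) cells ordered least recent to most recent.
# "@" keeps only the cells for the requested address, producing a merge
# structure; "!" selects the most recent cell whose guard holds.

def deref(store, addr):
    """The "@" operator, restricted to closed stores with concrete addresses."""
    return [(guard, value) for guard, cell_addr, value in store
            if cell_addr == addr]

def select(m):
    """The "!" operator: rightmost value with a true guard, else "?"."""
    for guard, value in reversed(m):
        if guard:
            return value
    return "?"

# The composite store of R1: S1 o_s S2 o_s S3, with guards F, T, T.
R1_store = [(False, "x", 17), (True, "y", 18), (True, "x", 17)]

# (((S1 o_s S2) o_s S3) @ addr(x))! simplifies to 17; the cell for y
# is filtered out without its contents ever being examined, mirroring
# the lazy store semantics.
assert select(deref(R1_store, "x")) == 17
```

Note how dereferencing by address discards the assignment to y outright; no work is spent simplifying cells that cannot reach the use.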

While the notion of a lazy store semantics might seem novel but otherwise inconsequential in the case of [R.sub.1], [R.sub.2] (Figure 6) illustrates its considerable utility in simplifying open programs. The PIM representation of the final value of x in [R.sub.2] is given by the expression


As with [R.sub.1], we can use the rules of PIM to simplify the initial expression representing the final value of x. However, since the PIM term representing [R.sub.2] contains free variables (representing the values of metavariables ?P and ?Q), the simplification process is somewhat more involved, requiring additional rules not needed for simplifying closed programs. The additional rules are found in Figure 13 and are part of the [Omega]-complete system [PIM.sup.=.sub.t] we derive in Section 9.

The simplification of the PIM term representing [R.sub.2] proceeds as shown in Figure 7. We see in the step labeled (S5) that the predicates are "pushed inside" the store cells. This enables the store to be simplified in subsequent steps without requiring that the predicates themselves be simplified. In the next four steps (using (S1), (A1), (A2), (L5), (L6), and (L2)), the definitions reaching the final use of x are effectively "gathered together." We can observe from the term in the equation labeled (L2) that two assignments of 17 reach the use of x and that the two assignments are evaluated under different conditions. However, since the predicates implementing the conditions are logically complementary, we can conclude that 17 is always assigned to x. This fact is inferred in the next three steps using rules (L11), (B13), (B15), and (L5). These rules effectively convert the set of two reaching definitions into a single composite reaching definition. This single reaching definition is finally "selected" in the last step using rule (M3).


The collective effect of the rules implementing PIM's lazy store semantics is to enable addresses and predicates (which guard various substores) to be manipulated "orthogonally." This behavior is vital for enabling the final value of x to be simplified to a constant without requiring that the predicates themselves be simplified.
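The following Python sketch illustrates this orthogonality on the reaching-definition set of [R.sub.2]'s final use of x; guards are left as uninterpreted strings, and the purely syntactic complementarity test is an assumption of the sketch:

```python
# Sketch: when the reaching definitions of a use all carry the same
# value and their guards are exhaustive, the use folds to that value
# without the guards themselves being simplified -- the net effect of
# rules (L11), (B13), (B15), (L5), and (M3) in the R2 derivation.
# Guards are uninterpreted strings; complementarity is decided purely
# syntactically for the pair (p, not p), an assumption of this sketch.

def complementary(p, q):
    return q == f"not {p}" or p == f"not {q}"

def fold_constant_use(reaching_defs):
    """reaching_defs: list of (guard, value) pairs for one use."""
    values = {v for _, v in reaching_defs}
    if len(values) == 1 and len(reaching_defs) == 2:
        g1, g2 = reaching_defs[0][0], reaching_defs[1][0]
        if complementary(g1, g2):
            return values.pop()   # the guards cover all cases
    return None                   # cannot fold at this level

# Two definitions of x, both assigning 17, under p and not p:
assert fold_constant_use([("p", 17), ("not p", 17)]) == 17
# Different values: no folding without further reasoning.
assert fold_constant_use([("p", 17), ("not p", 42)]) is None
```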

Returning to program [P.sub.1] (Figure 1), we can reason about its PIM representation [S.sub.P1] (Figure 2) in a manner similar to that of [R.sub.2]. [S.sub.P1] can be simplified to the store structure [S'.sub.P1] depicted in Figure 8. The merge structure denoted by M' in [S'.sub.P1] represents all the assignments made to variable x in [P.sub.1] and can be read roughly as "if the variable to which this expression is bound is ever used, the resulting value will be that of the metavariable ?P if ?P is nonzero (i.e., "true" in C semantics); otherwise the resulting value will be ?P." [S'.sub.P1] is very similar to several compiler IRs; we will revisit this point in more detail in Section 10.1. It should be easy to see that M' can be further simplified to the expression [meta(P)], although the resulting graph does not correspond as closely to a compiler IR as does [S'.sub.P1].

It is important to note that PIM's axiomatization of lazy stores is independent of whether a lazy (i.e., "outermost") strategy is used to rewrite PIM expressions. Indeed, such a strategy was not used in the simplification of [R.sub.2] above. It is, however, desirable to use a lazy strategy and graph reduction techniques in rewriting implementations of PIM to ensure that unnecessary rewriting steps are avoided.

4.2 The Relative Power of PIM Subsystems

In order for PIM to make good on its claim to being a "program logic," there must be some semantics about which the logic can reason. The equations of [PIM.sup.0.sub.t] in Figure 4 serve this function for any closed expression of sort V. We can generate an interpreter from these equations simply by orienting the equations in Figure 4 from left to right, then applying them until a normal form is reached. (It is easily seen that the system is terminating, i.e., noetherian, and yields unique normal forms on closed terms.)


An interpreter generated from [PIM.sup.0.sub.t] could have been used to mechanically simplify the expression representing the final value of x in program [R.sub.1] of Figure 6 to 17.

For another example, consider program [P.sub.2] in Figure 1. Its PIM representation, [S.sub.p2], is the same as [S.sub.P1], except that ?P and ?X are replaced with 0 and 1, respectively. The value of the variable x in the final store produced by evaluation of [S.sub.p2], i.e., the final value of x after executing [P.sub.2], is represented by the expression

([S.sub.p2] @ addr(x))!

and evaluates to the constant 0. The interpreter generated by [PIM.sup.0.sub.t] can be used to evaluate the final value of any of [P.sub.2]'s variables in a similar manner.

Consider now the program [P.sub.4] of Figure 1. Although it should be clear that [P.sub.4] behaves the same as [P.sub.2], the equations of [PIM.sup.0.sub.t] are insufficient to equate the PIM translations of the two programs. We will require a more powerful system than [PIM.sup.0.sub.t] to axiomatize a final-algebra semantics, in which all behaviorally equivalent closed terms (such as those representing [P.sub.2] and [P.sub.4]) are equated. [PIM.sup.+.sub.t], the equational axiomatization of the [PIM.sup.0.sub.t] final-algebra semantics, will be the subject of Section 8.

Finally, consider program [P.sub.5] of Figure 1. Although it is behaviorally equivalent to both [P.sub.1] and [P.sub.3], one cannot deduce this fact using [PIM.sup.+.sub.t] alone. Intuitively, this is due to the fact that [P.sub.1], [P.sub.3], and [P.sub.5] are all open programs. To equate these terms, as well as to prove all other valid equations on open terms, we will need the [Omega]-complete system [PIM.sup.=.sub.t], which will be developed in Section 9.

In Section 10, we will present several confluent and terminating subsystems of [PIM.sup.=.sub.t]. In addition to serving as partial decision procedures for equivalence, these systems and variants thereof can also be used to yield normal forms (i.e., terms or graphs to which no further rewriting rules are applicable) that function in a manner similar to that of a traditional compiler IR and to implement partial evaluation for optimization purposes.


A complete exposition of the higher-order aspects of PIM is outside the scope of this article. However, to see how [PIM.sub.t] can be embedded in a higher-order framework, consider the [Mu]C program L depicted in Figure 9.

/* int i; */

{ i = 10;

while (i) i -= 1;

}

Figure 9. A simple program containing a loop.

L contains a simple loop and an initialization. The general scheme for [Mu]C while loop translation is given by rule ([Stmt.sub.6]) in Figure 26 of Appendix A. This scheme embeds a [PIM.sub.t] expression representing the loop body in a recursive lambda expression. Recursion is expressed formally using a Y combinator.(8) In practice, it is advantageous to substitute a self-referential looping graph representation for an explicit Y combinator [Peyton Jones 1987]. Note that ([Stmt.sub.6]) must account for the possibility that the loop predicate has side-effects; this results in a representation that is somewhat more complicated than it would be otherwise.

Figure 10 depicts the PIM graph [S.sub.L] that results from translating program L. The store labeled [S.sub.1] represents the loop itself and is modeled by a lambda expression that takes an incoming store [Chi]s and produces a store [S.sub.2] representing all the store updates that occur during loop iteration. [S.sub.2] is guarded by the loop's termination predicate, [V.sub.1]. The predicate has no side-effects; the translation rule ([Stmt.sub.6]) accounts for this fact by generating a null substore that is omitted in Figure 10 for clarity. [S.sub.2] is the composition of a store [S.sub.3] representing the update of i in the current iteration and a store [S.sub.4] representing updates in all subsequent iterations. [S.sub.4] contains a recursive reference to the loop's lambda expression, which is applied to a store [S.sub.5] representing the incoming store for the next iteration. [S.sub.5] thus consists of the current incoming store [Chi]s composed with [S.sub.3], the store update produced by the current iteration. Loop iteration is initiated the first time around by applying [S.sub.1] to [S.sub.0], a store representing the initialization of program variable i.
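The recursive structure of [S.sub.1] can be rendered operationally in the following Python sketch, with stores modeled as dictionaries (later updates overriding earlier ones) and the Y-combinator recursion expressed as ordinary recursion; the encoding is illustrative, not the translation scheme of ([Stmt.sub.6]) itself:

```python
def loop_store(chi):
    """S1: given an incoming store chi, return the updates the loop
    makes. The termination predicate V1 (i != 0) guards S2."""
    if chi["i"] == 0:               # V1 false: contribute the null store
        return {}
    s3 = {"i": chi["i"] - 1}        # S3: this iteration's update of i
    s5 = {**chi, **s3}              # S5 = chi o_s S3: next incoming store
    s4 = loop_store(s5)             # S4: updates of all later iterations
    return {**s3, **s4}             # S2 = S3 o_s S4

s0 = {"i": 10}                      # S0: initialization of i
final = {**s0, **loop_store(s0)}    # S_L = S0 o_s S1(S0)
assert final["i"] == 0
```

Dictionary merge with `{**a, **b}` lets later cells shadow earlier ones, playing the role of [o.sub.s] for this fully concrete store.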


Procedures can be represented in a manner similar to loops. As with loops, local storage within a procedure invocation can be represented simply by introducing a store expression representing the side-effects of the procedure within the lambda expression representing the procedure.

In order to accommodate various common loop simplifications, some form of inductive reasoning is required. Field et al. [1995] provides several induction rules that implement various classes of useful loop transformations. These rules are derived from a more general induction rule given in Field [1992], which in turn was inspired by work of Wand and Wang [1990].

While we have found that augmenting lambda terms with [PIM.sub.t] terms allows a natural operational characterization of a variety of higher-order program constructs, more work remains to be done to study the formal properties of higher-order variants of PIM. In Section 11, we outline some possible future directions.


Since many valid program transformations do not result from the application of evaluation rules alone, an operational semantics does not form an adequate basis for program optimization and transformation. For instance, consider the equation

if (p) then e else e = e.

Some version of this equation is valid in most programming languages (at least if we assume p terminates), yet transforming an instance of the left-hand side in a program to the right-hand side cannot usually be justified simply by applying an evaluation rule.

The open equations (equations containing variables, such as the one above) valid in a language are not in general equationally derivable from a specification of the language's semantics, but require stronger rules of inference (such as structural induction) for their proofs. It is possible, however, to trade rules of inference for equations. That is, by adding finitely many equations (possibly involving auxiliary sorts and functions) to the specification, the full equational theory of the language may be brought within the scope of equational reasoning [Bergstra and Heering 1994]. Such an enriched specification is called [Omega]-complete. In a strict sense, [Omega]-completeness can be achieved only if the equational theory in question is recursively enumerable, but this is to be expected and of no concern for [PIM.sub.t] although it will probably become a problem for the full higher-order version of PIM.

Furthermore, equational reasoning (of which term rewriting is a special case) applies equally to closed and open terms. The latter corresponds to transformation of incomplete programs with the variables in the input term representing missing data or code. Evaluation of incomplete programs is usually called partial evaluation [Ershov 1982; Jones et al. 1993]. In the present context, we are not concerned with binding-time analysis or self-application, but, following Heering [1986], simply assert that

partial evaluation = rewriting of open terms with respect to the intended semantics.

While partial evaluation has often been advocated as a means for optimizing programs [Berlin and Weise 1990; Chambers and Ungar 1989; Ershov 1982; Ershov and Ostrovsky 1987; Nirkhe and Pugh 1992], we know of few results that precisely characterize the transformational capabilities that a particular partial evaluator implements. In our setting, finding an [Omega]-complete specification amounts to showing that one's partial evaluator has all the rules it needs at its disposal; it will thus be our goal in the sequel to find an [Omega]-complete axiomatization for [PIM.sub.t].
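The slogan can be illustrated by a toy rewriter over open terms; the tuple encoding of terms and the single open rule are invented for this sketch:

```python
# Minimal sketch of "partial evaluation = rewriting of open terms":
# terms are nested tuples, variables are bare strings, and the single
# open rule  if(p, e, e) -> e  is applied bottom-up. The term language
# and rule are invented for illustration.

def simplify(term):
    if not isinstance(term, tuple):
        return term                          # variable or constant
    term = tuple(simplify(t) for t in term)  # rewrite subterms first
    if term[0] == "if" and len(term) == 4 and term[2] == term[3]:
        return term[2]                       # if p then e else e = e
    return term

# Open term: p is unknown (missing data), yet the term simplifies.
open_term = ("if", "p", ("add", "x", 1), ("add", "x", 1))
assert simplify(open_term) == ("add", "x", 1)
```

The rule fires even though p is a free variable, which is exactly what distinguishes rewriting of open terms from ordinary evaluation.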


In this section we give a brief summary of some basic facts of algebraic specification theory which are essential to an understanding of what follows. Good surveys are Meinke and Tucker [1992], Meseguer and Goguen [1985], and Wirsing [1990]. We assume some familiarity on the part of the reader with equational logic and initial-algebra semantics.

7.1 [Omega]-completeness

Definition 7.1.1. An algebraic specification S = ([Sigma], E) with nonvoid many-sorted signature [Sigma], set of equations E, and initial algebra I(S) is [Omega]-complete if every open [Sigma]-equation [t.sub.1] = [t.sub.2] valid in I(S) is provable from E by equational reasoning.

According to the definition, all equations valid in the initial algebra of an [Omega]-complete specification may be deduced using only equational reasoning. No structural induction is needed. Trading structural induction for equational reasoning from an enriched equational base has two potential advantages:

-- An existing rewrite implementation of equational logic can be used, although an A-, AC-, or some other E-rewriting capability may be needed depending on the specification involved.

-- Rewriting may have a sense of direction lacking in structural induction. It may perform useful simplifications of terms without having been given an explicit inductive proof goal beforehand.

One way of proving [Omega]-completeness of a specification is to show that every congruence class modulo E has a representative in canonical form (not necessarily a normal form produced by a rewrite system) such that two distinct canonical forms [t.sub.1] and [t.sub.2] can always be instantiated to ground terms [sigma]([t.sub.1]) and [sigma]([t.sub.2]) that cannot be proved equal from E. Another way is to show by induction on the length (in some sense) of equations that equations valid in I(S) are provable from E. We use both methods in this article. Additional information about proof techniques for [Omega]-completeness as well as examples of their application can be found in Heering [1986], Lazrek et al. [1990], and Bergstra and Heering [1994].

7.2 Final Algebra Semantics

Final algebra semantics does not make a distinction among elements that have the same observable behavior. We need the following definitions:

Definition 7.2.1. Let [Sigma] be a many-sorted signature and let S, T [element of] sorts([Sigma]). A [Sigma]-context of type S [right arrow] T is an open term of sort T containing a single occurrence of a variable ?? of sort S and no other variables.

The instantiation C(?? := t) of a [Sigma]-context C of type S [right arrow] T with a [Sigma]-term t of sort S will be abbreviated to C(t). If t is a ground term, C(t) is a ground term as well. If t is a [Sigma]-context of type S' [right arrow] S, then C(t) is a [Sigma]-context of type S' [right arrow] T.

Definition 7.2.2. Let S = ([Sigma], E) be an algebraic specification with nonvoid many-sorted signature [Sigma], set of equations E, and initial algebra I(S). Let O [subset or equal to] sorts([Sigma]). The final algebra [F.sub.O](S) is the quotient of I(S) by the congruence [equivalent.sub.O] defined as follows:

(1) For [t.sub.1], [t.sub.2] ground terms of sort S [element of] O: [t.sub.1] [equivalent.sub.O] [t.sub.2] if and only if [t.sub.1] = [t.sub.2] is valid in I(S).

(2) For [t.sub.1], [t.sub.2] ground terms of sort S not in O: [t.sub.1] [equivalent.sub.O] [t.sub.2] if and only if C([t.sub.1]) = C([t.sub.2]) is valid in I(S) for every [Sigma]-context C of type S [right arrow] T with T [element of] O.
Item (2) says that terms of nonobservable sorts (sorts not in O) that have the same behavior with respect to the observable sorts (sorts in O) correspond to the same element of [F.sub.O](S). It is easy to check that [equivalent.sub.O] is a congruence.

Definition 7.2.2 corresponds to the case M = I(S) of N(M) as defined in Meseguer and Goguen [1985, p. 488]. Observable sorts are called visible in Meseguer and Goguen [1985] and primitive in Wirsing [1990, Sec. 5.4].

7.3 Steps Toward a Completeness Result

Our completeness result will require two basic technical steps:

(1) Finding an initial algebra specification of the final model [F.sub.v]([PIM.sup.0.sub.t]). [F.sub.v]([PIM.sup.0.sub.t]) is the quotient of the initial algebra I([PIM.sup.0.sub.t]) by behavioral equivalence with respect to the observable sort V of base values. We add an equational definition of the behavioral equivalence to [PIM.sup.0.sub.t], resulting in an initial algebra specification of [F.sub.v]([PIM.sup.0.sub.t]).

(2) Making the specification obtained in step (1) [Omega]-complete to improve its ability to cope with program transformation and partial evaluation.

We illustrate these steps for a simple data type before attacking [PIM.sup.0.sub.t] itself.

7.4 A Simple Example

We perform steps (1) and (2) (Section 7.3) for the specification of a stack data type shown in Figure 11.


Step (1). Consider the final model [F.sub.v] (STACK) in the sense of Definition 7.2.2 with O = {V}. We give an initial algebra specification of it [Bergstra and Tucker 1995; Heering 1985]. The normalized contexts of type S [right arrow] V are
[C.sub.n](??) = top([pop.sup.n-1](??)) (n [greater than or equal to] 1).


The constant [a.sub.1] is special in view of Equation (St1) (Figure 11). By Eqs. (St1)-(St4)

[C.sub.n] (push ([a.sub.1], 0)) = [C.sub.n] (0)

for all n [greater than or equal to] 1. This means push ([a.sub.1], 0) and 0 have the same V-observable behavior, and the equation

push ([a.sub.1], 0) = 0

which is not valid in I(STACK), holds in [F.sub.v] (STACK). The rewrite system obtained by interpreting the equations of [STACK.sup.+] = STACK + (1) as left-to-right rewrite rules is easily seen to be confluent and terminating.(9) The corresponding ground normal forms of sort S are
push([a.sub.i1], push([a.sub.i2], ..., push([a.sub.ip], 0) ...)) (p [greater than or equal to] 0; [a.sub.ip] [is not equal to] [a.sub.1] when p [is greater than or equal to] 1). (2)


Equation (1) happens to be the only additional equation we need, and [STACK.sup.+] is the initial algebra specification we are looking for.
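The following Python sketch illustrates the observational reasoning behind Equation (1); since Figure 11 is not reproduced in this excerpt, the behavior of top and pop on the empty stack 0 (yielding [a.sub.1] and 0, respectively) is an assumption of the sketch:

```python
# Sketch of V-observable stack behavior. Assumed (not shown in this
# excerpt): top(0) = a1 and pop(0) = 0 for the empty stack 0, which is
# what makes the constant a1 "special".

EMPTY = ()

def push(v, s):
    return (v,) + s

def top(s):
    return s[0] if s else "a1"    # assumed: top of the empty stack is a1

def pop(s):
    return s[1:] if s else EMPTY  # assumed: pop of the empty stack is 0

def C(n, s):
    """The normalized context C_n = top(pop^(n-1)(.))."""
    for _ in range(n - 1):
        s = pop(s)
    return top(s)

# push(a1, 0) and 0 agree under every observation C_n, witnessing the
# final-algebra equation push(a1, 0) = 0:
assert all(C(n, push("a1", EMPTY)) == C(n, EMPTY) for n in range(1, 6))
# push(a2, 0) and 0 are distinguished already by C_1 = top:
assert C(1, push("a2", EMPTY)) != C(1, EMPTY)
```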

Proof. It is sufficient to check that two distinct terms in normal form (2) are observationally distinct. Let
[t.sub.1] = push([a.sub.i1], ..., push([a.sub.ip], 0) ...) and [t.sub.2] = push([a.sub.j1], ..., push([a.sub.jq], 0) ...)


be two distinct normal forms, i.e., p [is not equal to] q or [a.sub.ik] [is not equal to] [a.sub.jk] for some k [is greater than or equal to] 1. In the first case (p [is greater than] q, say), [t.sub.1] and [t.sub.2] are distinguished by the context [C.sub.p] = top([pop.sup.p-1](??)), since
[C.sub.p]([t.sub.1]) = [a.sub.ip] [is not equal to] [a.sub.1] = [C.sub.p]([t.sub.2]).


In the second case they are distinguished by [C.sub.k], since
[C.sub.k]([t.sub.1]) = [a.sub.ik] [is not equal to] [a.sub.jk] = [C.sub.k]([t.sub.2]).


Step (2). We give an [Omega]-complete enrichment [STACK.sup.=] of [STACK.sup.+]. The equation

push(top(s),pop(s)) = s

is easily seen to be valid in I([STACK.sup.+]) by verifying it for terms in normal form (2). It is not equationally derivable from [STACK.sup.+] since no rule of the corresponding rewrite system applies to it. Again, it happens to be the only additional equation we need. We show that [STACK.sup.=] = [STACK.sup.+] + (3) is [Omega]-complete.

Proof. Normal forms of sort V are (1) constants [a.sub.1], ..., [a.sub.N] (N [is greater than] 1); (2) variables v, ...; and (3) terms [C.sub.n](t) = top([pop.sup.n-1](t)) with t a variable of sort S. We have to check all combinations of normal forms. These are easily seen to be different in I([STACK.sup.=]). For instance, the normal forms [C.sub.1](s) and [C.sub.2](s) are distinguished by the substitution s = push([a.sub.2], 0). Note the importance of N [is greater than] 1.

Normal forms of sort S are (1) ground normal forms (2); (2) variables s, ...; (3) terms [pop.sup.n](t) with t a variable of sort S and n [is greater than or equal to] 1; (4) terms push([a.sub.i], t) with t a normal form of sort S containing at least one variable; (5) terms push([t.sub.1], [t.sub.2]) with [t.sub.1] a variable of sort V and [t.sub.2] a normal form of sort S; and (6) terms push([C.sub.n]([t.sub.1]), [t.sub.2]) with [t.sub.1] a variable of sort S and [t.sub.2] a normal form of sort S with [t.sub.2] [is not equal to] [pop.sup.n]([t.sub.1]) in view of Equation (3).

As before, we have to check all combinations of normal forms. The only nontrivial cases are (4)-(6). Let the length of an equation be the number of nonlogical symbols in it. For instance, Equation (3) has length 6. We proceed by induction on the length of equations: the equations of length [is less than or equal to] 6 valid in I([STACK.sup.+]) are provable from Eqs. (St1)-(St4), Equation (1), and Equation (3). Assume the valid equations of length [is less than] n are provable. Let push([t.sub.1], [t.sub.2]) = [t.sub.3] be a valid equation of length n. Then top([t.sub.3]) = [t.sub.1] and pop([t.sub.3]) = [t.sub.2] are valid equations of length [is less than] n and hence are provable by assumption. Hence, push([t.sub.1], [t.sub.2]) = [t.sub.3] itself is provable, since push([t.sub.1], [t.sub.2]) = push(top([t.sub.3]), pop([t.sub.3])) = [t.sub.3] by Equation (3).


We give an initial algebra specification [PIM.sup.+.sub.t] of the final model [F.sub.v]([PIM.sup.0.sub.t]). [PIM.sup.0.sub.t] is shown in Figures 3 and 4. The additional equations of [PIM.sup.+.sub.t] are shown in Figure 12.(10)



Proof. We prove (M2'). The proof of (S8) is similar. The normalized contexts of type M [right arrow] V are


(M2) is rendered superfluous by (M2'). Let [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. We have


Proof. We show that two distinct ground normal forms are observationally distinct. (1) Ground normal forms of sort M are


[0.sub.m] and [?] are distinguished by the context ([[c.sub.1]] [o.sub.m] ??)!, the others by ??!. (2) Ground normal forms of sort S are


[M.sub.j] in normal form (4), [M.sub.j] [is not equal to] [0.sub.m] in view of (S2)).

Two distinct normal forms of sort S can be distinguished with respect to M by a suitable store dereference of the form ?? @ [Alpha.sub.k] for some k. Hence, they can be distinguished with respect to V according to (1).

(3) Sorts A and B are not affected. Any identification of elements of these sorts would immediately lead to collapse of the base values.

We note that repeating the above step for [PIM.sup.+.sub.t] does not yield further equations, since [F.sub.v]([PIM.sup.+.sub.t]) = I([PIM.sup.+.sub.t]).


We give an [Omega]-complete enrichment [PIM.sup.=.sub.t] of [PIM.sup.+.sub.t]. The additional equations of [PIM.sup.=.sub.t] are shown in Figure 13. As before, [Rho] in Eqs. (Ln) is assumed to be one of m or s. The reader will have no difficulty verifying the validity of the additional equations of [PIM.sup.=.sub.t] in the initial algebra I([PIM.sup.+.sub.t]) by structural induction. Somewhat unexpectedly, their automatic inductive verification succeeded only with considerable difficulty [Naidich and Dinesh 1996].


The [Omega]-completeness proof uses both proof methods mentioned in Section 7.1. It basically proceeds by considering increasingly complex open terms and their canonical forms. The latter are determined up to some explicitly given set of equations and are considered distinct only if they are not equal modulo these equations. The fact that two distinct canonical forms can be instantiated to ground terms that cannot be proved equal from [PIM.sup.=.sub.t] is not explicitly shown in each case, but has been verified, and the reader will have no difficulty repeating this.

We successively consider the following terms:

-- boolean terms without ??;
-- boolean terms with ??;
-- open merge structures with ?? but without @ or !;
-- unrestricted open merge structures;
-- open terms of sort V;
-- open store structures without @ or ?? and without variables of sort S;
-- open store structures with @ and ?? and with variables of sort S, but without variables of sort A;
-- unrestricted open store structures.

In two cases (boolean terms with ?? and unrestricted open store structures) the proof is not based on canonical forms, but proceeds by induction on the number of different address variables in an equation (its "length"). The same method (with a different definition of length) is used in Section 7.4 to show the [Omega]-completeness of [STACK.sup.=]. In the case of boolean terms with ?? we also obtained a canonical form (included below), but we failed to obtain one for unrestricted open store structures. It would certainly be preferable to have one, but the [Omega]-completeness proof does not depend on it. The proof would only get a more constructive character.

In the following we concentrate on the various canonical forms. The (rather lengthy) proofs that they can actually be reached by equational reasoning from [PIM.sup.=.sub.t] as well as the proofs of the two inductive cases are available in Appendix B.

9.1 Boolean Terms without ??

The only booleans are T and F. Suitable canonical forms are the well-known disjunctive normal forms without nonessential variables (variables whose value does not matter). See, for example, Bergstra and Heering [1994, Theorem 3.1].

9.2 Boolean Terms with ??

These require Eqs. (A3)-(A6) in addition to (A1)-(A2). (A5) and (A6) are substitution laws. (S9) and (S9) are similar laws for guarded store and merge structures which will be needed later on. The transitivity of ?? is given by the equation


which is an immediate consequence of (A5) or (A6) in conjunction with (B11). Note that the number of address constants [a.sub.i] is infinite. Otherwise an equation [V.sup.K.sub.i=1](a ?? [a.sub.i]) = T would have been needed.

A suitable canonical form is the disjunctive normal form without nonessential variables mentioned before with the additional condition that the corresponding multiset of address constants and variables is minimal with respect to the multiset extension of the strict partial ordering


A multiset gets smaller in the extended ordering by replacing an element in it by arbitrarily many (possibly 0) elements which are less in the original ordering [Klop 1992, p. 38]. The canonical form is determined up to symmetry of ?? and up to associativity and commutativity of [disjunction] and [conjunction] as before.
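The one-step version of this multiset ordering can be sketched in Python; the integer base ordering stands in for the ordering on address constants and variables, which is an assumption of the sketch:

```python
from collections import Counter

# One-step multiset ordering as described above: N is bigger than M
# if M results from N by replacing one element of N with any number
# (possibly zero) of elements that are strictly smaller under the base
# ordering `less`. Plain integer < stands in for the base ordering.

def one_step_smaller(m, n, less=lambda a, b: a < b):
    m, n = Counter(m), Counter(n)
    removed = n - m          # elements of N not accounted for in M
    added = m - n            # elements of M not coming from N
    if sum(removed.values()) != 1:
        return False         # exactly one element of N is replaced
    (x,) = removed.elements()
    return all(less(y, x) for y in added.elements())

assert one_step_smaller([1, 1, 2], [3, 2])    # 3 replaced by 1, 1
assert one_step_smaller([2], [2, 5])          # 5 replaced by nothing
assert not one_step_smaller([4, 2], [3, 2])   # 4 is not below 3
```

Iterating this step yields the full multiset extension; a multiset that is minimal under it admits no such replacement.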

9.3 Open Merge Structures with ?? but without @ or !

These are similar to the if-expressions treated in Heering [1986, Sec. 3.3], but there are some additional complications. First, we have




Equation (7) is a generalization of (M2'). Unfortunately, the even more general equation


is not valid for p = T and [m.sub.2] = [0.sub.m]. Instead we have the weaker analogue




This affects the canonical forms of subterms involving variables of sort M, making them somewhat more complicated than would otherwise be the case.

Equation (L10) has the equivalent conditional form


since, assuming [p.sub.1] [conjunction] [p.sub.2] = F, we have


Equation (9) is often more readily applicable than (L10).

A suitable canonical form (OMCF) for open merge structures without @ or ! is shown in Figure 14.





(i) [P.sub.i] in boolean canonical form [is not equal to] F, [P.sub.i] ?? [P.sub.j] = F (i [is not equal to] j)
(ii) [V.sub.i] a variable or constant of sort V, [V.sub.i] [is not equal to] [V.sub.j] (i [is not equal to] j)
(iii) [Q.sub.i] in boolean canonical form [is not equal to] F, [Q.sub.i] ?? [Q.sub.j] = F (i [is not equal to] j)
(iv) [M.sub.i] an open merge structure [m.sub.i1] [o.sub.m] [m.sub.i2] [o.sub.m] ... consisting of [is greater than or equal to] 1 different variables [m.sub.i1], [m.sub.i2], ... of sort M, and [M.sub.i] [is not equal to] [M.sub.j] (i [is not equal to] j).

Fig. 14 (OMCF). Canonical form for open merge structures without @ or !. The canonical form is determined up to associativity and commutativity of ?? and ??, symmetry of ??, and associativity and conditional commutativity of [o.sub.m] (Eqs. (L3) and (9) with p = m).

9.4 Unrestricted Open Merge Structures

A suitable canonical form (OMCFgen), very similar to (OMCF), is shown in Figure 15.





(i) As for canonical form (OMCF)
(ii) As for canonical form (OMCF)
(iii) As for canonical form (OMCF)
(iv) [M.sub.i] an open merge structure consisting of [is greater than or equal to] 1 different variables, which may be either ordinary variables of sort M or compound variables s @ A, and [M.sub.i] [is not equal to] [M.sub.j] (i [is not equal to] j)
(v) The corresponding multiset of address constants and variables is minimal with respect to the multiset extension of the ordering (6).

Fig. 15. (OMCFgen). Canonical form for unrestricted open merge structures. Apart from requirements (iv) and (v), it is the same as canonical form (OMCF).

9.5 Open Terms of Sort V

A suitable canonical form (OVCF) is shown in Figure 16.

M! with M in canonical form (OMCFgen) without subterm P ?? m [?].

Figure 16. (OVCF). Canonical form for open terms of sort V.

9.6 Open Store Structures without @ or ?? and without Variables of Sort S

We first note the following immediate consequences of (S8).


Equation (S7) is a conditional commutative law.(11) Equation (10) is similar, but with an appropriate guard rather than a condition.

A suitable canonical form (OSCF) is shown in Figure 17. An important special case (OSCFsimple) is shown in Figure 18.





(i) [A.sub.i] a constant or variable of sort A
(ii) [M.sub.i] a merge structure without ?? in canonical form (OMCF) [is not equal to] [Theta.sub.m]
(iii) [Pi.sub.i] the canonical form [is not equal to] F of

with [is not equal to]([A.sub.i] ?? [A.sub.k]) denoting one of [A.sub.i] ?? [A.sub.k]
(iv) [Pi.sub.i] ?? [Pi.sub.j] = F ([A.sub.i] = [A.sub.j] modulo (S9), i [is not equal to] j)
(v) ?? [Pi.sub.j] = T (1 [is less than or equal to] i [is less than or equal to] n), [A.sub.j] = [A.sub.i] modulo (S9)
(vi) The corresponding multiset of address constants and variables is minimal with respect to the ordering (6).

Fig. 17. (OSCF). Canonical form for open store structures without @ or ?? and without variables of sort S. The canonical form is determined up to associativity of [o.sub.s] (Equation (L3) with p = s). Furthermore, as a consequence of requirements (iii) and (iv) at least one of the conditional commutative laws (9), (S7), (10) applies to any pair of adjacent store cells, and the canonical form is unconditionally commutative. Unlike the original term, the canonical form is not ?? -free.



(i) All [Alpha.sub.ij] different in view of requirement (iv) of canonical form (OSCF)
(ii) As requirement (ii) of canonical form (OSCF).

Fig. 18. (OSCF simple). Special case of (OSCF) if all addresses are known.

9.7 Open Store Structures with @ and ?? and with Variables of Sort S, but without Variables of Sort A

The main equation we need is (S12). Note that in case of a finite number K of address constants [Alpha.sub.i], the stronger equation [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] would hold.

Since there are no address variables, any occurrences of ?? can be eliminated by (A1)-(A2) and the extension (OSCFvar) of the simple canonical form (OSCFsimple) shown in Figure 19 applies.


[Sigma] [o.sub.s] ((T ?? s {[Alpha.sub.i1] [right arrow] [M.sub.1]}) [o.sub.s] ... [o.sub.s] (T ?? s {[Alpha.sub.in] [right arrow] [M.sub.n]}))

with
(i) [Sigma] in canonical form (OMCF) with k = 0, or rather its equivalent for sort S
(ii) the rightmost part in canonical form (OSCFsimple), but with merge structure components [M.sub.i] in canonical form (OMCFgen) [is not equal to] [Theta.sub.m] rather than (OMCF)
(iii) p ?? q = F for any p ?? s (... [o.sub.s] s) in [Sigma] and q ?? m ((s @ [Alpha.sub.ij]) [o.sub.m] ...) in [M.sub.j].

Fig. 19. (OSCFvar). Canonical form for open store structures with @ and ?? and with variables of sort S, but without variables of sort A.

9.8 Main Theorem

Let [PIM.sup.=.sub.t] = [PIM.sup.+.sub.t] + the equations of Figure 13. In view of the foregoing, we have the following proposition.

PROPOSITION 9.8.1. [PIM.sup.=.sub.t] is [Omega]-complete.

PROOF. Provided in Appendix B.


10.1 Rewriting PIM Graphs

By orienting equation instances of [PIM.sup.=.sub.t] and implementing the resulting rules on graphs, we obtain a term-graph-rewriting system [Barendregt et al. 1987]. Such systems can be designed to produce normal forms with a variety of interesting properties. For example, the graph [S'.sub.p.1] depicted in Figure 8 is obtained by first normalizing the graph [S.sub.p.1] (Figure 2) with respect to the system [PIM.sup.[right arrow].sub.t] developed in Section 10.2, then using instances of Equation (S8) of [PIM.sup.+.sub.t] to permute addresses with respect to a fixed ordering. [S'.sub.p.1] is the normal form of the PIM representations of both [P.sub.1] (i.e., [S.sub.p.1]) and [P.sub.3] (Figure 1); therefore, it is immediate that they are behaviorally equivalent.
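The address-permutation step can be pictured as sorting independent store cells into a fixed order, which is sound only when the addresses are provably distinct. A minimal Python sketch, under the simplifying assumption that a store is just a list of (address, value) cells with pairwise-distinct constant addresses (a hypothetical representation, not PIM syntax):

```python
# Sketch: canonicalize a store of update cells {a -> v} by sorting on the
# address. This corresponds to permuting cells with respect to a fixed
# ordering, and is sound only under the side condition that all addresses
# are provably distinct constants.
def canonicalize_store(cells):
    # cells: list of (address, value) pairs with pairwise-distinct addresses
    addresses = [a for a, _ in cells]
    assert len(set(addresses)) == len(addresses), "addresses must be distinct"
    return sorted(cells, key=lambda cell: cell[0])

s1 = [("y", 2), ("x", 1)]   # e.g., the effects of: y := 2; x := 1
s2 = [("x", 1), ("y", 2)]   # e.g., the effects of: x := 1; y := 2
assert canonicalize_store(s1) == canonicalize_store(s2)
```

After canonicalization, the two stores are structurally identical, so the equivalence of the two assignment sequences becomes immediate from the representation.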

The normalization process can be used not only to discover equivalences not apparent from the initial PIM representations, but also to "build" useful graph-based compiler IRs as a side-effect [Field 1992]. For example, the composition operator in the subgraph M' of [S'.sub.p.1] is very similar to an instance of the [Lambda] node of GSA form [Ballance et al. 1990]. If we ignore the guard, we can also interpret the composition operator in M' as an SSA form [Phi] node [Cytron et al. 1991]. The principal difference between these IRs and the class of normal forms exemplified by [S'.sub.p.1] is that variable uses are linked directly to the expressions that define their value, even when, for example, a chain of copying assignments intervenes (VDGs [Weise et al. 1994] also have this property).

Figure 20 depicts the graph [S".sub.p.1], a further simplification of [S'.sub.p.1]. [S".sub.p.1] is also a simplified form of the PIM translations of programs [P.sub.3] and [P.sub.5] in Figure 1. As with [S'.sub.p.1], [S".sub.p.1] is produced by a rewriting system, namely, [PIM.sup.[right arrow].sub.t] augmented with an oriented instance of [PIM.sup.=.sub.t] Equation (L11), followed as before by address permutation using (S8). Depending on the application, it may be more appropriate to use systems that produce normal forms similar to compiler IRs, such as [S'.sub.p.1], rather than simplifying further to forms such as [S".sub.p.1].


Consider finally the [Mu]C programs depicted in Figure 21. All of these programs are behaviorally equivalent; this fact may be deduced by inspection of the normal-form graph [S'.sub.p.6] shown in Figure 22 (produced by augmenting the system used to produce [S'.sub.p.1] with an oriented instance of Equation (L11)). We know of no intermediate representation in the compiler literature in which the representations of [P.sub.6]-[P.sub.9] would be the same.

In general, the design of a PIM-based rewriting system will be governed by the tradeoffs between properties desired of normal forms and the time complexity of normalization strategies that yield those normal forms. As several PIM operators are commutative and associative, one cannot rely entirely on structural properties of normalized graphs to detect all valid program equivalences. However, rewriting systems are an important stepping-stone to more powerful decision procedures and allow structural identity to be used to detect many more equivalences than would be possible otherwise.

In the next section, we obtain confluent and terminating rewriting systems from [PIM.sup.=.sub.t]. While such systems constitute partial decision procedures for term equivalence, they are also good starting points for designing (possibly nonconfluent) systems that derive terms or graphs with useful structural properties (e.g., in which variable definitions are linked directly to uses). The development of confluent and terminating systems can be mechanized to a certain extent by Knuth-Bendix completion [Dershowitz and Jouannaud 1990]. However, producing systems that yield graphs with particular structural properties requires a more ad hoc process of introducing (or deleting) rules derived from [PIM.sup.=.sub.t] until the desired property is achieved.
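The basic mechanism behind such systems, reducing a term by oriented rules until no rule applies, can be sketched in a few lines of Python. This is an illustrative stand-in using a tiny set of boolean rules, not the [PIM.sup.[right arrow].sub.t] rules themselves; for a confluent and terminating system, the normal form reached is unique regardless of the order in which rules fire.

```python
# Sketch: exhaustive innermost rewriting with a few oriented boolean rules.
# Terms are tuples like ("and", p, q); "T"/"F" are the truth constants.
RULES = [
    (("and", "T"), lambda p: p),      # T /\ p -> p
    (("and", "F"), lambda p: "F"),    # F /\ p -> F
    (("or",  "T"), lambda p: "T"),    # T \/ p -> T
    (("or",  "F"), lambda p: p),      # F \/ p -> p
]

def rewrite(term):
    if not isinstance(term, tuple):
        return term                   # variables and constants are normal
    op, l, r = term
    l, r = rewrite(l), rewrite(r)     # normalize subterms first (innermost)
    for (rop, rl), action in RULES:
        if op == rop and l == rl:
            return rewrite(action(r))
    return (op, l, r)

assert rewrite(("and", "T", ("or", "F", "x"))) == "x"
```

Confluence guarantees that equal terms reach the same normal form, which is what lets structural identity of normalized graphs stand in for semantic equivalence.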

10.2 Confluent Subsystems of [PIM.sup.=.sub.t]

Our goal in this section will be to derive the strongest possible confluent and terminating subsystems of [PIM.sup.=.sub.t]. Much of the work described in this section was carried out with the assistance of the TIP inductive theorem-proving system [Fraus 1994], which was used to perform Knuth-Bendix completion and to aid in inductive verification of equations incorporated in [PIM.sup.=.sub.t]. While TIP was able to inductively verify many [PIM.sup.=.sub.t] equations, it was unable to successfully use (S8) as a lemma during semiautomatic proofs (other theorem provers available to us had similar limitations), and the full mechanical verification succeeded only with considerable difficulty [Naidich and Dinesh 1996]. Here, we indicate progress in converting part of [PIM.sup.=.sub.t] into a term-rewriting system. We first consider the completion of [PIM.sup.0.sub.t], then treat the additional equations of [PIM.sup.+.sub.t], and finally those of [PIM.sup.=.sub.t].

Knuth-Bendix Completion of [PIM.sup.0.sub.t].. The rewriting system obtained by interpreting the equations of [PIM.sup.0.sub.t] as left-to-right rewriting rules and with AC-declarations for [conjunction] and [disjunction] is confluent and terminating with the addition of the rule


which originates from a critical pair generated from the rules (S1) and (S2). [PIM.sup.0.sub.t] is ground confluent even without this rule. Left-associative orientation of (L3) (i.e., left-to-right orientation of the equation) is significant for [PIM.sup.0.sub.t], as a right-associative orientation of (L3) in conjunction with rule (M2) causes the completion procedure to add an infinite number of rules:


Knuth-Bendix Completion of [PIM.sup.+.sub.t].. When (M2') is substituted for (M2), the orientation of (L3) becomes irrelevant, since the context in which the pattern m [o.sub.m] [v] could be matched is now immaterial. Using a left-associative orientation of (L3) for the merge case, the completion procedure adds only the rule (MA0). We note that this is a special case of (L4) below.

Adding (S8) is, however, a difficult problem, since the equation is (conditionally) commutative. We therefore proceed by first splitting (S8) into (S6) and (S7) (see Section 9.6). (S7) is difficult to orient, but (S6) has an obvious orientation and is in acceptable form for mechanical analyzers. After running TIP's completion procedure on the system with (S6) and (M2'), we see immediately that the critical pairs that result from (S6) and (S4), using (S1), give rise to a special case of (L7) for [Rho] = m. Unfortunately, both (S6) and (L7) are left-nonlinear rules (when oriented left-to-right). Obtaining a left-linear completion is often preferable to a left-nonlinear one, since

-- a left-linear system admits an efficient implementation, without the need for equality tests during matching;

-- when a left-linear system is embedded in the untyped lambda calculus (as is necessary to extend PIM to arbitrary source programs), it is straightforward to show that the combined system remains confluent [Muller 1992].

We therefore consider left-nonlinear equations separately and proceed for the moment without (S6) and (L7).
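The cost of left-nonlinearity mentioned above shows up concretely in the matcher: a repeated pattern variable forces an equality test between the subterms it binds. A minimal sketch, assuming terms are tuples and strings beginning with "?" are pattern variables (a hypothetical representation):

```python
# Sketch: first-order matching. A left-nonlinear pattern such as f(x, x)
# requires an equality test between subterms at the repeated variable,
# which left-linear patterns avoid entirely.
def match(pattern, term, env=None):
    env = {} if env is None else env
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in env:                              # nonlinear occurrence:
            return env if env[pattern] == term else None  # equality test!
        env[pattern] = term
        return env
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

# left-nonlinear pattern f(x, x): matches only when both arguments are equal
assert match(("f", "?x", "?x"), ("f", "a", "a")) == {"?x": "a"}
assert match(("f", "?x", "?x"), ("f", "a", "b")) is None
```

On shared graphs the equality test can sometimes be reduced to a pointer comparison, but in general it may require a full structural traversal, which is why left-linear systems admit the more efficient implementation.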

Adding the boolean equations (B7), (B18), and (B19), along with the oriented versions of Eqs. (L4) and (L8), results in a confluent and terminating system. (L8) requires us to use a multiset ordering, due to the permutability of the guards. To also accommodate (L3), we use the generalized recursive path ordering with status, which in TIP is called the "multiset ordering based on the lexicographic ordering."

Adding (M7) or (M8) requires that (L3) be oriented in the right-associative direction. This is caused by the generation of the rule (MA2), which is similar to (M2) (see the completion of [PIM.sup.0.sub.t] above) but with a pattern [?] in the context []! appearing on the left. Also, adding (M7) and (M8) generates the rules (MA1) to (MA5). (MA1) and (MA5) are due to the right-associative orientation of (L3). The resulting system [PIM.sup.[right arrow].sub.t] is shown in Figure 23. [PIM.sup.[right arrow].sub.t] is confluent, terminating, and left-linear.


Enhancing the Rewriting Systems. Further enrichments to [PIM.sup.[right arrow].sub.t] seem to require left-nonlinear rules in order to achieve confluence. Adding (L7), we require the additional rules (MB1)-(MB4) shown in Figure 24. If we then add (S6), we need the rule (SB1), also shown in Figure 24. Adding all the rules in Figure 24 to those of [PIM.sup.[right arrow].sub.t], we get the system [PIM.sup.'[right arrow].sub.t].


If we enrich [PIM.sup.[right arrow].sub.t] with Eqs. (B10), (B14), and (B16), oriented left-to-right, the completion procedure of the LP system [Garland and Guttag 1991] adds the absorption law

p [disjunction] (p [conjunction] p1) [right arrow] p. (BA1)

Finally, both [PIM.sup.[right arrow].sub.t] and [PIM.sup.'[right arrow].sub.t] produce normal forms modulo associativity and commutativity of [conjunction] and [disjunction], i.e., with respect to (B8), (B9), (B12), and (B13). Note that [PIM.sup.[right arrow].sub.t] does not require rewriting modulo associativity and commutativity, since it can be enhanced with the symmetric variants of the rules (B3)-(B6) and the two associativity rules for [conjunction] and [disjunction] (see Table I).
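Comparing normal forms "modulo AC" can itself be implemented by a canonicalization pass: flatten nested applications of an AC operator and sort the resulting argument list. A minimal sketch under the same hypothetical tuple representation used above (not the representation used by TIP or LP):

```python
# Sketch: compare terms modulo associativity and commutativity of an
# operator by flattening nested same-operator applications (associativity)
# and sorting the argument list (commutativity).
def ac_normalize(term, ac_ops=("and", "or")):
    if not isinstance(term, tuple):
        return term
    op, *args = term
    args = [ac_normalize(a, ac_ops) for a in args]
    if op in ac_ops:
        flat = []
        for a in args:                      # associativity: flatten nesting
            if isinstance(a, tuple) and a[0] == op:
                flat.extend(a[1:])
            else:
                flat.append(a)
        args = sorted(flat, key=repr)       # commutativity: sort arguments
    return (op, *args)

# (p /\ q) /\ r and r /\ (q /\ p) agree modulo AC
t1 = ("and", ("and", "p", "q"), "r")
t2 = ("and", "r", ("and", "q", "p"))
assert ac_normalize(t1) == ac_normalize(t2)
```

Two terms rewrite-equal modulo (B8), (B9), (B12), and (B13) then have identical canonical forms, so a plain structural comparison suffices after this pass.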


Problematic Equations. Attempts to obtain further enriched confluent and terminating rewriting systems have been unsuccessful thus far. Adding both (B16) and (B17) results in a nonterminating system. Even adding one of them causes problems for the TIP system. The left-nonlinear rules resulting from (B10), (B11), (B14), and (B15) cause problems with TIP's treatment of AC declarations. Unlike TIP, LP was able to add (B10), (B14), and (B16) to [PIM.sup.[right arrow].sub.t]. (A4), (A5), (A6), (S9), and (S10) are good candidates to be put in the set of "modulo" equations, but we are not aware of any available Knuth-Bendix completion system that allows this. (S12) and the general form of (S8) cannot be ordered properly and thus lead to nonterminating term-rewriting systems. (L9), (L10), and (L11) lead to left-nonlinear rules, which again cause problems for completion modulo AC. Despite these difficulties, we conjecture that larger confluent subsystems of [PIM.sup.=.sub.t] exist, particularly if we consider confluence modulo associativity, idempotence, identity, and commutativity. Finding such systems is left as future work. One approach might be to incorporate Hsiang and Dershowitz's [1983] confluent [Omega]-complete specification of the booleans, since the well-known disjunctive and conjunctive boolean normal forms are not produced by any rewriting system. We have not been more successful here, again due to the interference of the left-nonlinear rules for the booleans with the other left-nonlinear rules of PIM.


Although we have treated the algebraic core of PIM quite extensively in this article, more work remains to be done to study the higher-order versions of PIM touched on in Section 5. In particular, we would like to develop this work along the following lines:

Provide an Operational Semantics for Higher-Order PIM. This would proceed by defining a reduction strategy for [PIM.sub.t] + the [Beta] rule, then proving a standardization theorem. We expect such a result to be routine, given similar results by other authors [Felleisen and Hieb 1992; Odersky et al. 1993], the lack of overlap in the structure of pure lambda terms and [PIM.sub.t] terms, and the fact that an oriented version of [PIM.sup.0.sub.t] is sequential [Boudol 1985]. Confluence follows trivially from the result of Muller [1992].

Study the Effect of Nontermination on [PIM.sup.=.sub.t]. It is not difficult to show that in the presence of nontermination, the additional equations introduced by [PIM.sup.=.sub.t] are unsound in a technical sense, since they may cause a nonterminating term to be equated with a terminating one. However, we expect to be able to show that equations between any two terms are valid provided both of their standard reductions terminate. Such a result would provide a weak form of soundness consistent with the way most compiler transformations are implemented in practice. Alternatively, one could attempt to completely reconcile formal semantics and practice by reformulating certain equations of [PIM.sup.=.sub.t] in conditional form, predicating their consequents on proving nontermination for appropriate subterms.

Consider Typed Higher-Order Variants of PIM. Due to its simplicity and computational power, the untyped lambda calculus is attractive for encoding a variety of higher-order program constructs. However, the absence of typing makes it more difficult to define strong inference rules for commonly occurring higher-order constructs. It would therefore be useful to investigate typed higher-order versions of PIM. PIM's stores pose a challenge to defining appropriate type systems, since (like real machine memory) they may hold a variety of values of different types -- including, e.g., functional values.

Study Induction Principles. While we believe that the induction rules defined in Field [1992] and Field et al. [1995] are a good starting point for reasoning about loops and recursion, other forms of induction may also be useful. Rules for which inductive premises can be established mechanically, e.g., by fixpoint iteration, are especially attractive.

Obtain Completeness Results for Restricted Higher-Order Systems. Given the lack of an [Omega]-complete axiomatization for the pure untyped lambda calculus [Plotkin 1974], it appears unlikely that a completeness result for a logic axiomatizing the operational properties of [PIM.sub.t] + the [Beta] rule could be obtained. However, one might nonetheless attempt to formulate completeness results for restricted systems, e.g., for typed subsystems, or systems that eschew lambda terms in favor of more restricted forms of recursion or iteration.

In addition to further study of the properties of higher-order variants of PIM, we would also like to extend our results in the following directions:

-- Using the canonical forms discussed in this article to develop a decision procedure for [PIM.sub.t]. Obtaining a canonical form for unrestricted open store structures would be a useful complement to this.

-- Obtaining completeness results for variants of [PIM.sub.t], including versions in which booleans and addresses are not treated as distinct from other base values, variants incorporating the merge distribution rules, as used for addresses in Field [1992] and generalized in Field et al. [1995], and extensions with nontrivial value operations.

-- Constructing confluent and/or terminating rewriting subsystems of [PIM.sup.=.sub.t] stronger than [PIM.sup.'[right arrow].sub.t].



A.1 [Mu]C Programs to PIM Terms

The syntax of [Mu]C is given in Figure 25. A formal description of the translation of [Mu]C programs to PIM terms is given in Figures 26 and 27. The translation is written in the style of Natural Semantics [Kahn 1987] and adheres very closely to standard C semantics, e.g., integers are used to represent boolean values.


The translation uses several different sequent forms corresponding to the principal [Mu]C syntactic components. These sequent forms are as follows:


Each of the sequents above takes a [Mu]C construct c and an incoming PIM store s, and each yields a PIM term or a pair(12) of PIM terms, depending on the nature of the [Mu]C construct being translated. Pure expressions (those having no side-effects) and impure expressions are distinguished in the translation process; subscripts p and i are used to denote the two types. Details of the sequent types are as follows:

-- Sequents with =>Stmt are used to translate statements. Sequents with =>Pgm are used to translate entire programs. Both yield a PIM store term u corresponding to the cumulative effect of updates to the store made by the statement or program.

-- Sequents with =>[Exp.sub.p] are used to translate pure expressions computing ordinary values. Sequents with =>[LValue.sub.p] are used to translate pure expressions computing L-values (addresses). An instance of the former yields a PIM base value term v corresponding to the expression's value; an instance of the latter yields a PIM address term a corresponding to the expression's L-value.

-- Sequents with =>[Exp.sub.i] are used to translate impure expressions computing ordinary values. Sequents with =>[Lvalue.sub.i] are used to translate impure expressions computing L-values. An instance of the former yields a pair <v, u> consisting of the PIM base value term v corresponding to the expression's value and the PIM store term u corresponding to the expression's side-effects. An instance of the latter yields a pair <a, u> consisting of the PIM address term a corresponding to the expression's L-value and the store term u corresponding to the expression's side-effects.

-- Sequents with [??.sub.Exp] or [??.sub.LValue] are used to translate arbitrary ordinary or L-valued expressions. They yield pairs of the form <v, u> or <a, u>, respectively. Rules using these sequents simply choose the appropriate pure or impure sequents, depending on the type of construct being translated.

As an example of how the translation process works, consider rule ([E.sub.u5]) in Figure 27. This rule may be read as follows. Given [Mu]C expression [Exp.sub.1] + [Exp.sub.2] and incoming PIM store s, first translate [Exp.sub.1] to the pair <[v.sub.1], [u.sub.1]> using initial store s. Term [v.sub.1] represents the value of [Exp.sub.1], and term [u.sub.1] represents the side-effects occurring in [Exp.sub.1]. Next, translate [Exp.sub.2] in an initial store given by the composition of store s and store [u.sub.1], yielding pair <[v.sub.2], [u.sub.2]>. This means that any side-effect occurring in [Exp.sub.1] is accounted for in the store used to translate [Exp.sub.2], thus effectively encoding a left-to-right order of evaluation. Finally, the PIM term corresponding to the entire expression [Exp.sub.1] + [Exp.sub.2] is given by the pair <intSum([v.sub.1], [v.sub.2]), [u.sub.1] [o.sub.s] [u.sub.2]>. The term intSum([v.sub.1], [v.sub.2]) corresponds to the sum of [v.sub.1] and [v.sub.2], and the term [u.sub.1] [o.sub.s] [u.sub.2] is the PIM store corresponding to the cumulative side-effects occurring in both [Exp.sub.1] and [Exp.sub.2].
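The store threading in rule ([E.sub.u5]) can be sketched directly in Python. This is a toy evaluator, not the Natural Semantics translation itself: dicts stand in for PIM stores, the `translate` function and the expression constructors are hypothetical names, and only a few forms are handled.

```python
# Sketch of the store threading in rule (E_u5): translating e1 + e2 passes
# the store extended with e1's side-effects into the translation of e2,
# encoding a left-to-right order of evaluation.
def translate(exp, store):
    kind = exp[0]
    if kind == "lit":                            # constant: no side-effects
        return exp[1], {}
    if kind == "var":                            # read an address from the store
        return store.get(exp[1], 0), {}
    if kind == "assign":                         # x = e, yielding e's value
        _, name, rhs = exp
        v, u = translate(rhs, store)
        return v, {**u, name: v}
    if kind == "add":                            # the (E_u5) case
        _, e1, e2 = exp
        v1, u1 = translate(e1, store)
        v2, u2 = translate(e2, {**store, **u1})  # thread e1's effects into e2
        return v1 + v2, {**u1, **u2}             # cumulative side-effects
    raise ValueError(kind)

# (x = 1) + x: left-to-right order makes the assignment visible in x
value, effects = translate(("add", ("assign", "x", ("lit", 1)), ("var", "x")), {})
assert (value, effects) == (2, {"x": 1})
```

The final pair (value, effects) mirrors the <v, u> pairs yielded by the impure-expression sequents.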

A.2 PIM Terms to PIM Graphs

The translation given in Figures 26 and 27 takes a [Mu]C program and produces a PIM term. To get the PIM graphs used in the examples in the main body of the article, we simply adopt the convention that any term bound to a variable used in a translation rule is shared if that variable appears more than once in the rule. For example, in the case of rule ([E.sub.u5]), the incoming store s appears twice in the rule's antecedent. If the term bound to s is used in the translation of both [Exp.sub.1] and [Exp.sub.2] (which would happen, for example, if both [Exp.sub.1] and [Exp.sub.2] were identifiers, causing rule ([E.sub.p2]) to be applicable to each), then the term bound to s may be shared.

In almost all cases where multiple instances of the same variable appear, the variable represents a PIM store. This should not be surprising, since the store must be "threaded" by the translation rules to every expression that could possibly use it -- note, for example, the extensive sharing of substores that occurs in the PIM graph [S.sub.p.1] of Figure 2. However, multiple instances of other kinds of variables also appear in the rules, e.g., in rules ([S.sub.5]) and ([E.sub.u4]).

Although shared subgraphs arise most naturally from the structure of the rules used in the translation, it is often also useful to share identical subgraphs generated "serendipitously" during the translation of unrelated parts of the [Mu]C program. Such sharing is often referred to as value numbering in the program optimization literature, and as hash consing in the functional and symbolic computation literature. However, unlike many IRs used in program optimization, it is always semantically valid to share identical PIM subgraphs, regardless of whether they represent statements or expressions, and regardless of the context in which they are used.
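Hash consing can be sketched with a small constructor that memoizes every node it builds, so structurally identical subterms become physically shared by construction. The `HashCons` class below is a hypothetical illustration, not part of any PIM implementation:

```python
# Sketch of hash consing: the node constructor consults a table keyed on the
# operator and (already hash-consed) children, so identical subterms are
# built exactly once and shared as a single object.
class HashCons:
    def __init__(self):
        self._table = {}

    def node(self, op, *children):
        key = (op,) + children
        if key not in self._table:
            self._table[key] = key      # the tuple itself is the shared node
        return self._table[key]

h = HashCons()
x = h.node("var", "x")
a = h.node("add", x, h.node("lit", "1"))
b = h.node("add", x, h.node("lit", "1"))
assert a is b        # the two occurrences share one graph node
```

Because sharing identical PIM subgraphs is always semantically valid, such a constructor can be applied uniformly, whereas IRs that distinguish statement and expression contexts must restrict where nodes may be merged.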



Appendix B is available only online in a separate document. You should be able to obtain the online-only appendix from the citation page for this article.

Alternative instructions on how to obtain online-only appendices are given on the back inside cover of current issues of ACM TOPLAS or on the ACM TOPLAS web page.


We are grateful to G. Ramalingam for his assistance in implementing various versions of PIM, as well as for valuable technical suggestions. We also wish to thank the referees and associate editor for suggestions that helped to improve the presentation of the article. Without the help of Dimitri Naidich the mechanical verification of [PIM.sup.=.sub.t] would not have succeeded.

(1) In this article, we do not actually use Cartwright and Felleisen's denotational model in our completeness result, preferring instead a final algebra model that makes a more direct link between a language's operational semantics and its logic of program equivalence.

(2) Selke's semantics uses an ad hoc graph rewriting formalism, rather than term graph rewriting [Barendregt et al. 1987].

(3) Although PIM admits arbitrary computation on addresses in general, the completeness results in this article assume that addresses are not themselves "results" of programs.

(4) If address or predicate expressions may contain nonterminating computations, there are a number of semantic issues beyond the scope of this article that must be addressed. In brief, we take the position (usually adopted implicitly by optimizing compilers) that equations remain valid as long as they equate terms that behave the same in the absence of nontermination.

(5) The gaps in the equation numbers used for [PIM.sup.0.sub.t], as well as those occurring in other systems discussed in the sequel, are present to ensure compatibility with the equation numbers used in Field [1992].

(6) It is straightforward to design an "allocation function" that generates new address names simply by encoding some form of counter. However, the equational semantics that result are somewhat unsatisfactory, in that certain simple properties of heap-allocated addresses (e.g., inequality of addresses generated by two consecutive calls to the allocator) must be derived from the properties of the counter used in the allocation function. A more abstract and natural encoding of heap-allocated data remains an open problem.
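The counter encoding mentioned in this footnote can be sketched in a few lines; the names below are illustrative only, and the point is precisely the footnote's complaint: the distinctness of consecutively allocated addresses is a fact about counter arithmetic, not a primitive property of the allocator.

```python
# Sketch of a counter-based "allocation function": fresh address names are
# generated by threading a counter through the program, so inequality of two
# consecutive allocations must be derived from properties of the counter.
def alloc(counter):
    # returns a fresh address name and the successor counter state
    return f"addr_{counter}", counter + 1

a1, c = alloc(0)
a2, c = alloc(c)
assert a1 != a2      # follows only because counter + 1 != counter
```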

(7) For clarity, Figure 2 does not depict certain null stores created by the translation process; this elision will be irrelevant in the sequel.

(8) For procedures, Y has type ((S [right arrow] S) [right arrow] (S [right arrow] S)) [right arrow] (S [right arrow] S), and Yf = f(Yf) for any lambda expression f : (S [right arrow] S) [right arrow] (S [right arrow] S).

(9) Such rewrite systems are usually called complete or canonical. To avoid undue overloading of both adjectives in this article we prefer the more cumbersome terminology.

(10) Equation (M2') in Figure 12 is a special case of (M6) in the preliminary version of [PIM.sup.=.sub.t] given in Field [1992, Fig. 6]. (S8) subsumes (S6) and (S7) in the preliminary version.

(11) (S6) and (S7) correspond to (S6) and (S7) in the preliminary version of [PIM.sup.=.sub.t] given in Field [1992, Figure 6].

(12) The pair constructor <., .> is an auxiliary symbol used only during the translation process; it is not itself a function symbol of PIM.


Ammarguellat, Z. 1992. A control-flow normalization algorithm and its complexity. IEEE Trans. Softw. Eng. 18, 3 (Mar.), 237-251.

Ballance, R. A., MacCabe, A. B., and Ottenstein, K. J. 1990. The program dependence Web: A representation supporting control-, data-, and demand-driven interpretation of imperative languages. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, New York, 257-271.

Barendregt, H., van Eekelen, M., Glauert, J., Kennaway, J., Plasmeijer, M., and Sleep, M. 1987. Term graph rewriting. In Proceedings of the PARLE Conference. Vol. 2: Parallel Languages. Lecture Notes in Computer Science, vol. 259. Springer-Verlag, Berlin, 141-158.

Bergstra, J. A. and Heering, J. 1994. Which data types have [Omega]-complete initial algebra specifications? Theoret. Comput. Sci. 124, 149-168.

Bergstra, J. A. and Tucker, J. V. 1995. The data type variety of stack algebras. Ann. Pure Applied Logic 73, 1 (May), 11-36.

Berlin, A. and Weise, D. 1990. Compiling scientific code using partial evaluation. IEEE Comput. 23, 12 (Dec.), 25-36.

Boehm, H.-J. 1985. Side effects and aliasing can have simple axiomatic descriptions. ACM Trans. Program. Lang. Syst. 7, 4 (Oct.), 637-655.

Boudol, G. 1985. Computational semantics of term rewriting systems. In Algebraic Methods in Semantics, M. Nivat and J. C. Reynolds, Eds. Cambridge University Press, Cambridge, U.K., 160-236.

Cartwright, R. and Felleisen, M. 1989. The semantics of program dependence. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, New York, 13-27.

Chambers, C. and Ungar, D. 1989. Customization: Optimizing compiler technology for Self, a dynamically-typed object-oriented programming language. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, New York, 146-160.

Click, C. 1995. Global code motion, global value numbering. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM SIGPLAN Not. 30, 6. ACM, New York, 246-257.

Cytron, R., Ferrante, J., Rosen, B. K., Wegman, M. N., and Zadeck, F. K. 1991. Efficiently computing static single assignment form and the control dependence graph. ACM Trans. Program. Lang. Syst. 13, 4 (Oct.), 451-490.

Dershowitz, N. and Jouannaud, J.-P. 1990. Rewrite systems. In Handbook of Theoretical Computer Science. Vol. B, Formal Models and Semantics, J. van Leeuwen, Ed. Elsevier/The MIT Press, Amsterdam, 243-320.

Ershov, A. P. 1982. Mixed computation: Potential applications and problems for study. Theoret. Comput. Sci. 18, 41-67.

Ershov, A. P. and Ostrovski, B. N. 1987. Controlled mixed computation and its application to systematic development of language-oriented parsers. In Program Specification and Transformation, L. G. L. T. Meertens, Ed. North-Holland, Amsterdam, 31-48.

Felleisen, M. and Friedman, D. P. 1989. A syntactic theory of sequential state. Theoret. Comput. Sci. 69, 243-287.

Felleisen, M. and Hieb, R. 1992. The revised report on the syntactic theories of sequential control and state. Theoret. Comput. Sci. 103, 235-271.

Ferrante, J., Ottenstein, K. J., and Warren, J. D. 1987. The program dependence graph and its use in optimization. ACM Trans. Program. Lang. Syst. 9, 3 (July), 319-349.

Field, J. 1992. A simple rewriting semantics for realistic imperative programs and its application to program analysis. In Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation. ACM, New York, 98-107. Also available as Yale Univ. Technical Report YALEU/DCS/RR-909.

Field, J., Ramalingam, G., and Tip, F. 1995. Parametric program slicing. In Proceedings of the 22nd ACM Symposium on Principles of Programming Languages. ACM, New York, 379-392.

Fraus, U. 1994. Inductive theorem proving for algebraic specifications -- TIP system user's manual. Tech. Rep. MIP 9401, Univ. of Passau. The TIP system is available at

Garland, S. and Guttag, J. 1991. A Guide to LP, The Larch Prover. Tech. Rep. 82, DEC Systems Research Center, Palo Alto, Calif. December.

Heering, J. 1985. Variaties op het thema `stack'. Tech. Rep. CS-N8502, CWI, Amsterdam. In Dutch.

Heering, J. 1986. Partial evaluation and [Omega]-completeness of algebraic specifications. Theoret. Comput. Sci. 43, 149-167.

Hoare, C., Hayes, I., Jifeng, H., Morgan, C., Roscoe, A., Sanders, J., Sorensen, I., Spivey, J., and Sufrin, B. 1987. Laws of programming. Commun. ACM 30, 8 (Aug.), 672-686. Corrigenda, CACM 30, 9, p. 770.

Horwitz, S., Prins, J., and Reps, T. 1988. On the adequacy of program dependence graphs for representing programs. In Proceedings of the 15th ACM Symposium on Principles of Programming Languages. ACM, New York, 146-157.

Hsiang, J. and Dershowitz, N. 1983. Rewrite methods for clausal and non-clausal theorem proving. In Automata, Languages and Programming (10th ICALP), J. Diaz, Ed. Lecture Notes in Computer Science, vol. 154. Springer-Verlag, Berlin, 331-346.

Jones, N. D., Gomard, C. K., and Sestoft, P. 1993. Partial Evaluation and Automatic Program Generation. Prentice-Hall, Englewood Cliffs, N.J.

Kahn, G. 1987. Natural semantics. In 4th Annual Symposium on Theoretical Aspects of Computer Science. Lecture Notes in Computer Science, vol. 247, Springer-Verlag, Berlin, 22-39.

Kennaway, J. R., Klop, J. W., Sleep, M. R., and de Vries, F. J. 1994. On the adequacy of graph rewriting for simulating term rewriting. ACM Trans. Program. Lang. Syst. 16, 3, 493-523.

Klop, J. W. 1992. Term rewriting systems. In Handbook of Logic in Computer Science. Vol. 2, Background: Computational Structures, S. Abramsky, D. Gabbay, and T. Maibaum, Eds. Oxford University Press, New York, 1-116.

Lazrek, A., Lescanne, P., and Thiel, J.-J. 1990. Tools for proving inductive equalities, relative completeness, and [Omega]-completeness. Inf. Comput. 84, 47-70.

Mason, I. A. and Talcott, C. 1989. Axiomatizing operational equivalence in the presence of side effects. In Proceedings of the 4th IEEE Symposium on Logic in Computer Science. IEEE, New York, 284-293.

Meinke, K. and Tucker, J. V. 1992. Universal algebra. In Handbook of Logic in Computer Science. Vol. 1, Background: Mathematical Structures, S. Abramsky, D. M. Gabbay, and T. S. E. Maibaum, Eds. Oxford University Press, New York, 189-411.

Meseguer, J. and Goguen, J. A. 1985. Initiality, induction and computability. In Algebraic Methods in Semantics, M. Nivat and J. C. Reynolds, Eds. Cambridge University Press, Cambridge, U.K., 459-541.

Muller, F. 1992. Confluence of the lambda calculus with left-linear algebraic rewriting. Inf. Process. Lett. 41, 293-299. Amended proof available from Inge Bethke (

Naidich, D. and Dinesh, T. B. 1996. An automated induction method for verification of Pim -- a transformational toolkit for compilers. Tech. Rep. CS-R9630, CWI, Amsterdam, The Netherlands.

Nirkhe, V. and Pugh, W. 1992. Partial evaluation of high-level imperative programming languages with applications in hard real-time systems. In Proceedings of the 19th ACM Symposium on Principles of Programming Languages. ACM, New York, 269-280.

Odersky, M., Rabin, D., and Hudak, P. 1993. Call by name, assignment, and the lambda calculus. In Proceedings of the 20th ACM Symposium on Principles of Programming Languages. ACM, New York, 43-56.

Peyton Jones, S. L. 1987. The Implementation of Functional Programming Languages. Prentice Hall International, Englewood Cliffs, N.J.

Plotkin, G. D. 1974. The [Lambda]-calculus is [Omega]-incomplete. J. Symbol. Logic 39, 2 (June), 313-317.

Ramalingam, G. and Reps, T. 1989. Semantics of program representation graphs. Tech. Rep. 900, Computer Sciences Dept., Univ. of Wisconsin, Madison, Wisc. December.

Selke, R. P. 1989a. A rewriting semantics for program dependence graphs. In Proceedings of the 16th ACM Symposium on Principles of Programming Languages. ACM, New York, 12-24.

Selke, R. P. 1989b. Transforming program dependence graphs. Tech. Rep. TR90-131, Dept. of Computer Science, Rice Univ., Houston, Tex. July.

Swarup, V., Reddy, U. S., and Ireland, E. 1991. Assignments for applicative languages. In Proceedings of the 5th ACM Conference on Functional Programming Languages and Computer Architecture. Lecture Notes in Computer Science, vol. 523. Springer-Verlag, Berlin, 192-214.

van Deursen, A., Heering, J., and Klint, P., Eds. 1996. Language Prototyping: An Algebraic Specification Approach. AMAST Series in Computing. World Scientific, Singapore.

Wand, M. and Wang, Z.-Y. 1990. Conditional lambda-theories and the verification of static properties of programs. In Proceedings of the 5th IEEE Symposium on Logic in Computer Science. IEEE, New York.

Weise, D., Crew, R. F., Ernst, M., and Steensgaard, B. 1994. Value dependence graphs: Representation without taxation. In Proceedings of the 21st ACM Symposium on Principles of Programming Languages. ACM, New York, 297-310.

Wirsing, M. 1990. Algebraic specification. In Handbook of Theoretical Computer Science. Vol. B, Formal Models and Semantics, J. van Leeuwen, Ed. Elsevier/The MIT Press, Amsterdam, 675-788.

Yang, W., Horwitz, S., and Reps, T. 1989. Detecting program components with equivalent behaviors. Tech. Rep. 840, Univ. of Wisconsin, Madison, Wisc. April.

Yang, W., Horwitz, S., and Reps, T. 1990. A program integration algorithm that accommodates semantics-preserving transformations. In Proceedings of the 4th ACM SIGSOFT Symposium on Software Development Environments. ACM, New York, 133-143.
COPYRIGHT 1997 Association for Computing Machinery, Inc.
Author: Bergstra, J.A.; Dinesh, T.B.; Field, J.; Heering, J.
Publication: ACM Transactions on Programming Languages & Systems
Date: Sep 1, 1997