
Estimating the non-environmental consequences of greenhouse gas reductions is harder than you think.

I. BACKGROUND: THE PROBLEM

As governments, economists, and citizens struggle with the variety of difficult issues surrounding the Kyoto Protocol, the problem of estimating the non-environmental consequences(1) of meaningful greenhouse-gas (GHG) reductions remains unresolved. Leaving aside the predictably pessimistic estimates produced for political purposes on behalf of segments of the fossil fuel industry, top-down modelers have begun generating runs purporting to show what the effects of implementation of the Kyoto provisions might be (see, e.g., Energy Modeling Forum, 1998; Energy Information Administration, 1998). At the same time, new bottom-up studies have attempted to answer the same question with different methodology - examining the technologically attainable emissions reductions that appear profitable at current fuel prices or that would be profitable given small increases in such prices.(2) The gap between the two approaches has not been bridged. The top-down studies typically show multibillion dollar GDP reductions (though small in percentage terms) accompanying GHG reductions of the magnitude called for by Kyoto, while the bottom-up studies suggest that very substantial reductions in emissions approaching or even exceeding the Kyoto target could be achieved at close to zero cost (or even at a gain to the economy).(3)

Each approach has well-known problems. It should be noted first that the framing of climate policy largely in terms of economic costs and benefits is to some degree an artifact of an "economism" that permeates the debate. It is certainly possible to imagine other powerful justifications for climate protection. Ethical principles might be invoked (e.g., versions of "sustainable development" requiring that future generations inherit a natural world no less environmentally benign than it is at present). Once it is recognized that risk management is at the center of the climate protection problem, even the assignment of rough monetary values (and probabilities) to the various possible outcomes is quite difficult. There is no way to recover meaningful market-based "willingness to pay" data for planetary risks that fall outside historical experience, let alone outside the narrow range of marginal changes for which current prices can be used to measure marginal utilities. Given the opportunity to make an informed choice, many (most? almost all?) people would be highly reluctant to conduct a one-time, irreversible experiment with the planet that, if the outcome is unfavorable, would have disastrous consequences for their children and grandchildren. Even granting the premise that it is the dollar costs and benefits that matter most, the gap between the results of the top-down and bottom-up methodologies is very troubling. Such intellectual gridlock can be symptomatic of some sort of extrascientific agenda, as might be the case if the modelers' differences were being driven by politics or ideology. No doubt politics and ideology do affect the climate debate, but there is another possibility. The divergence of the estimates may signal a deeper scientific problem that confounds efforts to estimate the economic effects of climate policies. A persistent set of anomalies of this type may be a signal that Kuhnian "normal science" (Kuhn, 1962) is no longer adequate to make progress.

Of course, the top-down studies are known to be quite sensitive to their specifications and assumptions (Repetto and Austin, 1997). The ordinary course of research should lead to a sharpening of such estimates, because some of the variation in the models' outcomes can be traced to differences that are, ultimately, empirical. For example, a recent computable general equilibrium (CGE) model with endogenous technological progress shows that alternative combinations of a carbon tax and R&D subsidies (including subsidies specific to energy efficiency R&D) can have a greater or lesser effect on conventionally measured GDP, depending on the existing degree of inefficiency in R&D markets (Schneider and Goulder, 1997; Goulder and Schneider, forthcoming). At least in principle, it should be possible to estimate the magnitude of the market imperfections, with a consequent improvement in the predictive performance of the model.

The top-down models embody more than particular sets of parameter estimates, however. They all have been built within a particular theory of economic organizations. The models uniformly are based on an underlying structural commitment to some form of the "optimization assumption," in which economic agents (whether firms or individuals) successfully maximize their objective functions subject to constraints. In such a framework, the increased provision of any desirable good (such as environmental quality) can only be purchased at the expense of other desirable goods (such as ordinary economic output).

For example, the Goulder and Schneider (forthcoming) model expresses production in each of its sectors (four energy and materials intermediate goods sectors and final goods sectors for capital, R&D services, and consumer goods) as the activity of representative firms. These firms maximize the present value of net profits subject to standard constant elasticity of substitution (CES) production function constraints. The dynamic optimization solution incorporates perfect-foresight expectations on the part of firms as well as intertemporal utility maximization by consumers. This model is state-of-the-art largely because it treats induced technological change endogenously, but it is not atypical in its specification of the production process. Widely used models such as DICE (Nordhaus, 1994) and Global 2100 (Manne and Richels, 1992) also represent the creation of output by means of neoclassical production functions. Optimization of some sort of social welfare function (or the utility of a representative agent) subject to a neoclassical production function only pushes the assumption of maximization by firms into the background - the production function concept itself embodies an intrinsic trade-off between potential uses of the inputs for creating valuable outputs. There would be no "function" relating output to inputs if the level of output that could be obtained from a given bundle of inputs were indeterminate, or alternatively, if the same output could be produced with different amounts of pollutant emissions.

On the other hand, the bottom-up studies also entail a theory of the firm. Even if it is not spelled out, these models require some explanation for why firms do not take advantage of the profitable energy-saving investments available to them. There is a large literature on the barriers that allegedly prevent firms from adopting technological innovations that would increase shareholder value, but this literature does not present a unified account of the origin and persistence of those barriers. Specific examples of barriers are easy to point out (Lovins and Lovins, 1997), but bottom-up analysts are hard-pressed to explain why alert managers would not soon overcome them. A variety of explanations have been put forth for this "efficiency paradox." A large segment of this literature is focused on energy-saving investments, but that is not the only area in which questions could be asked about why profit opportunities are passed up. The answers that have been given in the literature are many and varied, including principal-agent problems, split or dysfunctional incentives, managerial myopia, and institutional constraints, but there has been no unification or definitive testing of the alternative explanations.(4)

Yet by thinking about the firm in a different way, it is possible to see that the efficiency paradox may not be a paradox at all, but instead a manifestation of the normal state of affairs. Suppose that full optimization by firms and other organizations is not automatic and not just difficult, but impossible in practice. Then it would not be surprising that profit opportunities are sometimes (or even frequently) missed. Market pressures might screen out the worst performers and the rewards of profitability would still provide positive incentives for managers, but it would no longer be a safe theoretical presumption that the firm's behavior can be modeled by profit maximization subject to a well-defined production function. The extent to which observed behavior matched the optimization ideal would be an empirical question - if indeed it were even possible for any outside analyst to determine what the optimum might be.

Economists since Herbert Simon have understood that there are limitations on the kinds of calculations firms (and other economic agents) can carry out. "Bounded rationality" is shorthand for describing these limits (Simon, 1979; Conlisk, 1996). Even before Simon's work in the 1950s, the "socialist calculation" debate that began in the 1920s and 1930s had raised the question of the informational and computational requirements for arriving at an efficient allocation of resources economywide. This debate still is alive (Adaman and Devine, 1996). However, there is nothing intrinsic in the Austrian "impossibility" argument that limits it to economywide calculations. Although it is certainly true that there are no successful examples of centrally planned economies achieving anything close to allocative efficiency over the long run,(5) the large private-sector firm exhibits all the features of localized knowledge and difficulty of coordination that the Austrians identified as preventing efficient central planning.(6)

Both before and after Simon's initial papers, developments in the formal theory of algorithmic computation delineated bounds on what can theoretically and practically be accomplished by step-by-step calculation.(7) Some of these limitations are independent of any particular hardware or software implementations (brains, computers, bureaucracies), while in other cases, the time and cost that would be required to carry out the calculations needed for optimization are prohibitive given existing equipment. It is not controversial that individuals and organizations employ rules of thumb, heuristics, and approximation methods to carry out their daily tasks. What has been less clear is that theories acknowledging the limits to computational capabilities would "feel" different from theories based on full maximization. If models of firms, other organizations, and economic agents were required, as a theoretical prerequisite, to carry out their calculations algorithmically, the economic significance of the limits to algorithmic computation would have to be addressed.

There is a long-standing tradition in economics that regardless of what firms or agents actually do, it suffices for theoretical purposes to pretend that they act "as if" they are optimizing an objective function.(8) This "as if" fiction may be usefully applied in some circumstances, but it is far from innocuous. One colorful illustration is that dogs catching frisbees act "as if" they are solving the differential equations governing the frisbees' flight.(9) Dogs certainly can catch frisbees, but it does not follow that the differential equation-solving model of dogs' behavior is a good one. It is manifest that in other settings (e.g., with a different input format) a dog could not solve differential equations, even approximately. In order to have a fully applicable theory of the dog's behavior (a theory that would have predictive power under a variety of circumstances), it would be necessary to understand the mental processes governing the dog's behavior. Different abstractions of these processes might be useful in different settings, but unless a model represents the actual processes correctly, there can be no guarantee that the model will work well or at all.

Characterizing disequilibrium dynamics or developing models that can make useful long-term predictions of genuinely complicated economic phenomena requires moving beyond the "as if" expedient. In doing so, it is likely that description of the actual internal functioning of the firm will not match the optimization paradigm very well. If the attempt is made to describe the firm's behavior as a set of algorithmic processes, the mathematical theory of computation offers reasons for serious skepticism about the possibility of optimization. In the end, the magnitude and importance of deviations from full optimization must be open to empirical estimation and testing. It may be that in difficult cases such as determining the macroeconomic costs of GHG reductions, further progress requires models that capture more of the reality of firms' actual decision processes than do the models currently in use. A necessary starting point is to examine the implications for economic modeling of what is known about algorithmic computation.

Section II presents a brief review of the literature on algorithmic computability and complexity as those concepts have been applied in the economic literature to date. This section is primarily a survey of the kinds of results that have been obtained by a diverse set of investigators. Readers who are interested mainly in the potential policy implications of this work may wish to skip directly to section III. However, to make a judgment about the importance and relevance to policy of the kinds of findings collected in section III, some familiarity with this literature is necessary.

II. COMPUTATIONAL LIMITS IN ECONOMIC MODELS

Algorithmic computation is subject to two potential limitations. The first is the possibility that a particular problem is not effectively computable. If a problem falls into this class, there is no algorithm at all that can solve it within the mathematical system expressing the model. The second possibility is that any algorithm capable of solving the problem may take too long to be of practical use. This can happen either because the problem happens to be too big for existing hardware and software (including human brains) or, alternatively, because the time and resources required to solve the problem grow very rapidly as the size of the problem grows. Modern computer science has developed operational definitions of terms such as "size," "too big," and "grow very rapidly," but the point is not that economics can be reduced to a subfield of computer science. Instead, what needs to be recognized is that "bounded rationality" can be quantified, and that limits on what firms can calculate may have an influence on their ability to optimize and other aspects of their real-world behavior.

A. Some Problems in Economics Are Not Effectively Computable

The theory of effective computability, or recursive function theory, emerged in the 1930s in an effort to characterize rigorously what was meant by the intuitive concept of an "algorithm." Perhaps the most famous model of computation, the one proposed by Alan Turing in 1936-37, was based on an analysis of how a human being could carry out computations mechanically using pencil and paper. Over the succeeding years, a number of proposals have been made for alternative mathematical formalizations of the idea of computation (Cutland, 1980). All of the models of effective computability that have been proposed to date have given rise to the same class of effectively computable functions. This unexpected and remarkable result has led to widespread belief in the Church-Turing thesis (sometimes called Church's thesis) that "[t]he intuitively and informally defined class of effectively computable . . . functions coincides exactly with the class of [computable functions that have been modeled formally]" (Cutland, 1980). The Church-Turing thesis is a conjecture, not a theorem, although the evidence in its favor is quite persuasive. Copeland (1996) summarizes this evidence as follows:

(1) Every effectively calculable function that has been investigated . . . has turned out to be computable by Turing machine. (2) All known methods or operations for obtaining new effectively calculable functions from given effectively calculable functions are paralleled by methods for constructing new Turing machines from given Turing machines. (3) All attempts to give an exact analysis of the intuitive notion of an effectively calculable function have turned out to be equivalent in the sense that each analysis offered has been proved to pick out the same class of functions, namely those that are computable by Turing machine. Because of the diversity of the various analyses the latter is generally considered strong evidence.

Acknowledging that there are deep issues involved,(10) standard practice in computability theory is to proceed on the assumption that the Church-Turing thesis is true, and to presume that results derived for Turing machines (or one of the other equivalent models of computation) apply to any realistic model of computation.
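
For readers who have not encountered the formalism, the following short Python sketch simulates a single-tape Turing machine. It is purely illustrative: the example machine, which simply flips the bits of a binary string and then halts, is an invented toy rather than anything drawn from the literature cited here.

def run_turing_machine(transitions, tape, start_state, halt_state, blank="_"):
    """Run a single-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move),
                 where move is "L" or "R".
    """
    tape = list(tape)
    head, state, steps = 0, start_state, 0
    while state != halt_state:
        symbol = tape[head] if 0 <= head < len(tape) else blank
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine jams
        state, write, move = transitions[(state, symbol)]
        # Grow the tape as needed (the idealized tape is unbounded).
        if head >= len(tape):
            tape.append(blank)
        if head < 0:
            tape.insert(0, blank)
            head = 0
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(tape).strip(blank), steps

# Invented example machine: flip every bit, then halt on the first blank cell.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip, "100110", "q0", "halt"))  # -> ('011001', 7)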

As the notion of effective computability was being developed, the shattering discoveries were made that there are functions and numbers that are not effectively computable, as well as mathematical problems that are undecidable. These impossibility results, which include Gödel's incompleteness theorem (1931; see Nagel and Newman, 1958) and Turing's demonstration of the existence of noncomputable numbers (1936-37; see Chaitin, 1993), demonstrate intrinsic limitations to algorithmic computational procedures. These limitations have implications for economics if one insists that specification of the method of computation is a necessary component of the modeling of economic agents and organizations.

To be sure, impossibility results of this type can only be properties of mathematical models of economic activity. But that does not weaken the strength of the argument or allow these results to be disregarded. Models are scientifically indispensable, and specification of the algorithmic processes by which economic actors arrive at their decisions is neither a more nor a less stringent requirement than what is entailed in other model representations of economic behavior (such as constrained optimization). The standard neoclassical representation of the firm, with profits dependent on a production function transforming input and output quantities and prices into net income, is, if anything, a more sweeping abstraction. If insistence that models embody computational routines describing what firms actually must do to carry out their tasks leads to problems whose solutions are not effectively computable or that are intractably complex, it should be a signal that these limits must be respected in our efforts to model economic behavior realistically.

Limits on effective computability are known to exist in economic models. Spear (1989) has shown that in learning environments in which the agents do not have perfect information about the state of nature, "a version of Gödel's incompleteness theorem applicable to the theory of computable functions yields the conclusion that rational expectations equilibria cannot be learned. . . . When agents have incomplete information (i.e., an incomplete signal about the current state), [learning rational expectations is not possible] since agents are required to infer not a function, but a correspondence" (pp. 889, 892).(11) Now, it is obvious that any firm wishing to make investment decisions based on the net present value criterion (or any other forward-looking rule) will have to form expectations about future prices. In the energy efficiency case, the present cost of the investment has to be weighed against the discounted value of the future energy savings, which depend on future sales, energy prices, and discount rates. The actual course of prices over time depends on the actions taken by the other agents in the economy and its future states: hence, learning the "rational expectations" needed to make optimal decisions about investment in energy-efficient technologies is an instance in which Spear's result applies because it is impossible for firms to have full knowledge of all the factors that will influence prices in the future.
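
As a purely illustrative sketch of the spreadsheet-style comparison just described, the following Python fragment computes the NPV of a hypothetical efficiency investment under several assumed hurdle rates. All of the figures are invented; the point is only that the sign of the answer hinges on expectations about future energy prices and on the discount rate the firm applies.

def npv(initial_cost, annual_savings, discount_rate, years):
    """Net present value of an upfront cost followed by a stream of savings."""
    return -initial_cost + sum(
        annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical retrofit: $100,000 upfront, $18,000 per year in savings, 10-year life.
print(f"NPV at a 10% hurdle rate: ${npv(100_000, 18_000, 0.10, 10):,.0f}")

# The same project looks very different under other forecasts of future energy
# prices (and hence savings) or other hurdle rates, which is the expectations
# problem discussed in the text.
for rate in (0.06, 0.10, 0.15, 0.25):
    print(f"  hurdle rate {rate:.0%}: NPV = ${npv(100_000, 18_000, rate, 10):,.0f}")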

Others have arrived at similar results. Rust (1997) examines literature showing that a number of standard economic problems(12) are not effectively computable. For example, Rabin (1957) showed that there are games whose equilibrium strategies are not effectively computable, and a similar result was established by Binmore (1987). As Rust remarks,

It isn't clear whether these results [of Rabin and Binmore] have anything to say about the narrower and more relevant issue of whether rational behavior is possible in practical contexts since they rely on nonconstructive arguments to establish the existence of games whose equilibria are not effectively computable. The relevant question is whether equilibria are effectively computable for the games economic agents actually play. (p. 6, emphasis in the original)

However, he goes on to cite unpublished work by Nachbar (1993) and a series of papers by Lewis (1985a,b, 1986, 1992a,b) showing that the solutions of standard models in game theory and general equilibrium are indeed not effectively computable.

Nachbar and Zame (1996) prove that

for a large class of discounted repeated games (including the repeated Prisoner's Dilemma) there exist strategies implementable by a Turing machine for which no best response is implementable by a Turing machine. . . . [I]n general, there will exist computable strategies admitting no computable best responses, and the problem of choosing a best response (even when computable best responses do exist) does not have a computable solution. (pp. 103-104)

They point out that their results

apply only to exact best responses, and not to approximate best responses. . . . Thus, the computability problems we identify do not necessarily have important consequences for payoffs. However, these computability problems do have important consequences for paths of play, because the path of play specified by an ε-equilibrium may be very different from that specified by an exact equilibrium. In a sense, strategies implementable by Turing machines are intrinsically vulnerable to certain types of "trembles" which can cause observed play to depart substantially from what one would predict if players truly optimized. (p. 105)
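
To give a concrete sense of what "strategies implementable by a Turing machine" means, the following Python sketch plays a repeated Prisoner's Dilemma between two simple algorithmic strategies, tit-for-tat and always-defect. The payoff matrix and the strategies are standard textbook devices, not constructions from Nachbar and Zame.

PAYOFFS = {  # (my move, opponent's move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained mutual cooperation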

Lewis (1985a) makes an even stronger assertion in his paper showing that demand correspondences are not effectively computable (or "computationally viable"):

It is obvious that any choice function . . . that is not at least computationally viable is in a very strong sense economically irrational. Unless the computation is trivial, i.e. involves no computation at all, and thus is both complete and accurate without the use of any Turing machine, the choices prescribed by a computationally non-viable choice function can only be implemented by computational procedures that do one of two things: either (a) the computation does not halt and fails to converge, or (b) the computation halts at a non-optimal choice. In case (a) the costs of computation procedures that do not converge must surely prohibit their use, and in case (b) whatever purpose there can be to a mathematical theory of representable choice functions is surely contradicted in such circumstances. (pp. 45-46)

In a later paper, Lewis (1992b) shows a similar failure of Walrasian equilibrium and N-person noncooperative games to be effectively computable. These papers are only the tip of the iceberg; Velupillai (1997) in "an attempt . . . to resurrect the pioneering work of Michael Rabin" cites a "growing literature on applying recursion theoretic concepts to traditional issues in game theory" and arrives at "the melancholy demonstration that very simple effectivized games, even when determined and playable, can contain intrinsic undecidabilities and intractable complexities" (p. 957). He gives examples ranging from a form of the parlor game NIM to a standard barriers-to-entry problem in industrial organization.(13)

Problems that are not effectively computable are, as far as we know, beyond the reach of any physical device or human organization to solve, regardless of the resources available. The existence of such problems at the heart of economic theory ought to give pause; the examples cited above are not artificially constructed perverse cases, but (as with rational expectations, Walrasian general equilibrium, etc.) are very much part of the working equipment of policy modelers.(14) I will show next that even where effective computability is not an issue because finite algorithms capable of solving the problems are known, the complexity of the calculations may still provide a significant obstacle to the achievement of optimized outcomes.

B. Computational Complexity in Economics

Even when it is possible to compute the answer to a problem, it may not be practical. Computational complexity has to do with the failure of algorithmic methods to reach a solution within a reasonable time (or memory space) bound. Standard introductions to computational complexity theory(15) include Garey and Johnson (1979) and Papadimitriou (1995), but an outline of some of the basic concepts may be useful for those not already familiar with them. The notion of the time complexity of an algorithm is grounded in the nuts and bolts of real-world computation. The time complexity of a problem depends on the number of steps required by an algorithm to carry out the computation as a function of the "size" of the problem. The size of a problem is some measure of its dimensionality, such as the number of people making up an organization or the number of parameters needed to describe an optimization problem fully. If the number of steps needed to solve the problem grows exponentially (or faster) as a function of the size of the problem, then the computational burden will eventually overwhelm any physical computer.
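
Some back-of-the-envelope arithmetic, sketched below in Python with an assumed machine speed of 10^9 elementary steps per second (a purely illustrative figure), shows why exponential growth in the number of steps eventually overwhelms any physical computer.

# Back-of-the-envelope arithmetic: at an assumed 10^9 elementary steps per second,
# a polynomial step count (n^3) stays manageable while an exponential one (2^n)
# quickly exceeds any feasible computing budget.
STEPS_PER_SECOND = 1e9  # assumed machine speed, purely illustrative
SECONDS_PER_YEAR = 3.15e7

for n in (10, 30, 50, 70):
    poly_seconds = n ** 3 / STEPS_PER_SECOND
    expo_seconds = 2 ** n / STEPS_PER_SECOND
    print(f"n = {n:>2}:  n^3 takes {poly_seconds:.1e} s,  "
          f"2^n takes {expo_seconds:.1e} s  ({expo_seconds / SECONDS_PER_YEAR:.1e} years)")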

If an algorithm exists that can always solve a problem in a number of steps that is a polynomial function of the size of the problem, the problem is considered to be "tractable" and belongs to the class P of problems that "can be solved in polynomial time." For example, linear programming is known to be in P (although this wasn't proven until 1979 - see Garey and Johnson's 1991 update of their 1979 introductory text). Of course, some problems in P might still be beyond the reach of existing hardware and software, but existence of a polynomial time algorithm is reassuring, if nothing else. However, there are a large number of problems for which no guaranteed polynomial time solution algorithm is known. An example is the famous Traveling Salesman Problem (TSP) in which one wants to know whether there is a path of length less than B that enables the salesman to visit each of n cities exactly once, returning at the end to the starting point. (The distances between each pair of cities are known.) In the case of the TSP, it is easy to verify whether any proposed tour meets the requirement; one only needs to add up the distances between the successive cities on the proposed route and see if the total is less than B. The difficulty of the TSP arises from the very large number of potential tours [which, if the starting city is given, equals the number of permutations of the other (n - 1) cities taken (n-1) at a time]. Problems for which it is possible to verify proposed solutions in polynomial time are described as belonging to the class NP.
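
A small Python sketch of the TSP, using an invented five-city distance matrix, illustrates the asymmetry just described: verifying a proposed tour against the bound B takes a single pass over the tour, while exhaustive search must examine (n - 1)! candidate tours.

from itertools import permutations
from math import factorial

# Hypothetical symmetric distance matrix for 5 cities (city 0 is the start).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Length of a closed tour starting and ending at city 0."""
    path = [0] + list(tour) + [0]
    return sum(DIST[a][b] for a, b in zip(path, path[1:]))

def verify(tour, bound):
    """Polynomial-time check: does this proposed tour beat the bound B?"""
    return tour_length(tour) < bound

# Verification is a single pass over the proposed tour...
print(verify((1, 3, 2, 4), bound=30), tour_length((1, 3, 2, 4)))

# ...but finding the best tour by brute force means examining (n - 1)! candidates.
best = min(permutations(range(1, 5)), key=tour_length)
print("best tour:", best, "length:", tour_length(best))
print("tours to examine for n = 5, 10, 20 cities:",
      [factorial(n - 1) for n in (5, 10, 20)])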

A major open question in theoretical computer science is whether P is a proper subset of NP, i.e., whether P ⊂ NP. Quite a number of problems are known to belong to NP that have no known polynomial-time algorithm capable of solving them. In addition to the TSP, these include the Knapsack Problem (in which each of a finite set of objects has a "size" and a "value," and the problem is to pick a subset of the objects whose total size is less than a given bound but whose total value is greater than a target value), as well as a large number of problems in graph theory, network design, set partitioning, and computer programming. The "hardest" NP problems form an equivalence class(16) known as the "NP-complete" problems. If a polynomial time algorithm capable of solving any one of the problems known to be NP-complete could be found, it would follow (because of the nature of the equivalence class formed by the NP-complete problems) that all the NP-complete problems could also be solved in polynomial time. The inability of mathematicians to find a polynomial time algorithmic solution to even a single one of these problems provides support for the conjecture that NP includes many problems harder (in the sense of computational complexity) than those in P. This conjecture is so widely believed that papers dealing with complexity issues routinely prove their results contingent on the presumption that P ⊂ NP is true.
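
The Knapsack Problem can be illustrated in the same spirit. The following Python sketch, with invented item sizes and values, solves a small instance both by enumerating all 2^n subsets and by the standard dynamic-programming recursion; the latter runs in time proportional to n times the capacity W, which is "pseudo-polynomial" (polynomial in the magnitude of W rather than in the length of its description) and so does not contradict the problem's NP-completeness.

from itertools import combinations

sizes  = [3, 4, 5, 8, 9]      # invented item sizes
values = [4, 5, 7, 11, 13]    # invented item values
capacity = 16

def best_by_enumeration():
    best = 0
    items = range(len(sizes))
    for r in range(len(sizes) + 1):
        for subset in combinations(items, r):          # 2^n subsets in total
            if sum(sizes[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

def best_by_dynamic_programming():
    table = [0] * (capacity + 1)                       # best value attainable at each capacity
    for size, value in zip(sizes, values):
        for c in range(capacity, size - 1, -1):
            table[c] = max(table[c], table[c - size] + value)
    return table[capacity]

print(best_by_enumeration(), best_by_dynamic_programming())  # the two methods should agree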

NP-completeness should not be thought of as the only possible measure of computational complexity, however. On the one hand, a problem may require more than polynomial time to solve only in worst-case situations; it may be that in the "average" case or most cases, a known algorithm can solve the problem in polynomial time. Alternatively, good approximations to some NP-complete problems (including the TSP) can often be found in polynomial time. It is also the case that some NP-complete problems are hard to solve only for a range of parameters characterizing the problems. These problems exhibit "phase transitions" as the parameter(s) move from values for which the problems are easy to solve to values for which they are hard (Cheeseman et al., 1991; Hayes, 1997). On the other hand, there are complexity classes encompassing problems that are harder than those that are NP-complete. Indeed, there is an entire complexity class hierarchy in which the NP-complete problems are near the bottom. It is an open question whether this hierarchy collapses into a small number of equivalence classes: "In the absence of a proof that P ≠ NP, there is no hope of proving that the polynomial 'hierarchy' is indeed a hierarchy of classes each properly containing the next (although once again, we strongly believe that it is)" (Papadimitriou, 1995, p. 427).

A number of fundamental economic problems have been shown to belong to the computationally intractable classes. The problems of finding the core, the kernel, the nucleolus, the ε-core, and the bargaining set for games that are not convex are all NP-complete (Deng and Papadimitriou, 1994). Blondel and Tsitsiklis (1997) have shown that several basic linear control design problems are NP-hard, and Papadimitriou (1985) has proved that a classical type of decision problem ("games against nature") is in the complexity category PSPACE-complete, which is higher in the complexity hierarchy than the NP-complete class. These results are important at more than one level, because, as Papadimitriou (1995) puts it,

complexity often tells us much more about a problem, than just how hard it is to solve. Sometimes it is more useful to look at a complexity result as an allegory about how conceptually difficult the underlying application area may be. After all, if algorithms are often the direct product of mathematical structure, computational complexity must be the manifestation of lack of structure, or mathematical nastiness. From this point of view . . . playing two-person games is more complex than solving optimization problems, counting combinatorial structures and computing the permanent of a matrix is somewhere in between, decision-making under uncertainty and interactive protocols both are as powerful as games, while succinct input representations make things even harder. (p. 409, emphasis in the original)

It should be noted that complexity is not limited to discrete problems - Papadimitriou and Tsitsiklis (1986) have demonstrated that both the discrete and continuous versions of certain problems in decentralized decision making (the team decision problem is their particular example) are computationally intractable (see also Ko, 1991). Nor are the formal computer science complexity classes the only way of describing the difficulty of economic problems. Page (1996) has recently proposed two measures of difficulty that distinguish between the difficulty of solving a problem in parallel and in sequence, with applications to the theory of organizational structures.

Rust's (1997) overview cited earlier also summarizes work showing that a large number of economic problems are intractable in the sense of being subject to the curse of dimensionality. Citing Traub et al. (1988), Rust lists multivariate integration, optimization, finding zeros of nonlinear functions, finding Brouwer and Kakutani fixed points, solution of partial differential equations and Fredholm integral equations, and the problem of finding fixed points to certain classes of contraction mappings (which includes the problem of solving infinite horizon continuous-state dynamic programming problems) as all being subject to worst-case complexity problems. Rust (1997) states:

It appears to be a relatively straightforward exercise to translate these general complexity bounds into corresponding complexity bounds for economic problems: utility maximization requires solution of constrained optimization problems, maximizing expected utility requires calculation of multivariate integrals, finding Walrasian equilibria requires calculations of zeros to an aggregate excess demand function or finding a Brouwer fixed point, calculation of rational expectations equilibria involve solutions to Fredholm integral equations, computation of option values involve solutions of [partial differential equations], solutions to infinite horizon dynamic programming problems involve computation of contraction fixed points, and calculation of Nash equilibria reduce to the calculation of a fixed point to a correspondence. However to date formal complexity bounds have been established only for a relatively small number of economic problems such as social planning (Friedman and Oren, 1995), Walrasian equilibrium (Papadimitriou, 1994) and dynamic programming (Chow and Tsitsiklis, 1989). However it is likely that this list will quickly grow and formal proofs will soon be available showing that the majority of economic problems are intractable. (pp. 9-10)

Rust goes on to argue, however, that there are a number of strategies or circumstances that can transform worst-case intractable economic problems into tractable ones. These include hardware and software solutions (the time required to solve an intractable problem is a function of its dimensionality, so even if this function is nonpolynomial, it may still be possible for real computers or agents to solve an actual problem of limited size); exploitation of special structure (some restricted problems may be tractable even though the unrestricted problem is not);(17) decomposition (this works if the larger, worst-case intractable problem can be split into smaller, tractable problems, the solution of each of which does not depend on the solution of the others); randomization (this technique is used, e.g., in Monte Carlo integration in which the approximation of sufficient closeness need be found only with a probability, not a certainty); and utilization of knowledge capital (as when a sufficiently close "guess" makes solution of an otherwise intractable problem feasible). He conjectures that "decentralization" (which is difficult to define rigorously) is "computationally efficient" and that this accounts for the undoubted success of many economic institutions such as double auction markets.
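
The "randomization" escape route can be illustrated with a small Python sketch of Monte Carlo integration in ten dimensions. The integrand is an invented example chosen so that the exact answer is known; the point is only that the sampling error shrinks like 1/√N regardless of the dimension, whereas a tensor-product grid would require a number of function evaluations growing exponentially with the dimension.

import random

# A grid with 20 points per axis in 10 dimensions would need 20**10 (about 10^13)
# evaluations; the Monte Carlo error depends on the number of samples, not the dimension.

def integrand(x):
    # Invented example: the product of (1 + x_i^2) over the coordinates of x.
    prod = 1.0
    for xi in x:
        prod *= 1.0 + xi * xi
    return prod

def monte_carlo_integral(dim, samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += integrand([rng.random() for _ in range(dim)])
    return total / samples

dim = 10
estimate = monte_carlo_integral(dim, samples=100_000)
exact = (4.0 / 3.0) ** dim   # each coordinate contributes the integral of (1 + x^2) on [0, 1], which is 4/3
print(f"Monte Carlo estimate: {estimate:.4f}   exact value: {exact:.4f}")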

This gives only the flavor of some of the main ideas in Rust's important paper. He makes it clear that he is not a pessimist regarding the ability of economics to understand a wide range of real-world situations. However, his bottom-line point is that determination of the limits imposed by computational complexity is, fundamentally, an empirical issue. If this is so, the other side of the coin of Rust's optimism is that there may indeed be cases in which the "computational 'solution' that nature and the economy have 'discovered'" (p. 38) may itself be only a heuristic approximation to optimality. If it is, then the kind of realistic theory that would enable us to "take our models seriously for forecasting and policy analysis" (p. 38) would be one reflecting the actual computations, not an idealization that "solves" the problem by assuming it away.

III. WHAT DO FIRMS REALLY DO?

To illustrate and make more concrete the possibilities discussed above, consider just one of the ways a firm's activity can be represented as a computation - specifically as the combinatorial optimization problem of picking the best organizational form. Firms (and other economic organizations, such as nonprofits or government agencies that provide goods and services) are, first and foremost, collections of discrete individuals. The individuals communicate with one another in various ways; in large firms, it is not possible for every individual to be directly in touch with every other one in all matters - some communications must be mediated by other members of the firm. The communications may be two-way (as in face-to-face contact) or one-way (as when a policy is handed down via memorandum or e-mail). It has recently become possible (through message boards and electronic discussion groups) to connect everyone in an organization for some purposes. This suggests that any complete representation of the activities of a firm (i.e., one that involves no discarding of essential information) must include specification of the communications network(s) linking the firm's members. Therefore, even if the numerical computations made by individuals making up the firm can all be carried out in polynomial time, it does not follow that the organizational maximization problem is computationally tractable.(18)

The fact that the firm intrinsically is an aggregate of separate individuals carrying out their own individual computations means that the firm cannot realistically be thought to operate as a unitary entity with a mind and will of its own; instead, the "decisions" made by the firm constitute a form of collective action. This, in turn, implies that there must be some procedure or set of procedures by which the firm processes information and through which the individual agents interact with each other in order to reach a decision. The rules of procedure may be simple (e.g., the firm decides to do something when the CEO issues an order) or complicated (there may be layers of review before an action is ultimately taken by the board of directors or a top management committee). Different procedures may apply in different cases, and some decisions may be decentralized in the sense that no one above a particular level in the firm's hierarchy ever sees the input to the decision.

The critical point is that the procedures are contingent on the network structure of the firm. The nature and functioning of the procedures are constrained by the possible channels of communication (and command) linking the members of the organization. The network structure is not something that is determined exogenously, so if any of the tasks involves maximization of an objective (shareholder value, say), an essential element of the maximization problem is selection of an optimal network structure. Now it is known that the number of possible network structures rises very rapidly with the number of individuals in the firm (Graicunas, 1933; Wilson, 1985). The problem of finding the optimal structure for carrying out any particular task (or any combination of tasks) according to the specified set of procedures is one of combinatorial optimization and, as such, is likely to be computationally complex. Explicit models showing how organizations can carry out simple, stylized tasks according to well-defined procedures have been given in the literature.(19) If the combinatorial optimization problem of finding the best structure for some task is NP-complete or harder, then the firm cannot be presumed to be able to reach a maximum of its objective function. Levinthal (1997) has shown that adaptation of organizational structures can be represented as an instance of Kauffman's (1993) NK model, and Weinberger (1996) has demonstrated that in some cases, finding the globally optimal structure for the NK model is NP-complete. DeCanio et al. (1998) have developed evidence from simulation studies showing that even in models in which highly stylized firms carry out simple economic tasks, it is unlikely that optimal organizational structures can be found in polynomial time.(20)
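
The combinatorial explosion can be made concrete with a trivial calculation. If each ordered pair of members of an n-person firm either is or is not linked by a direct channel of communication (one illustrative convention for counting structures, not the only one), there are 2^(n(n-1)) candidate network structures. The Python lines below show how quickly enumeration becomes hopeless.

import math

# Each ordered pair of members either has a direct communication link or not,
# so an n-member firm admits 2**(n*(n-1)) candidate directed network structures
# (self-links excluded).
for n in (3, 5, 10, 20, 50):
    exponent = n * (n - 1)
    digits = int(exponent * math.log10(2)) + 1
    print(f"{n:>3} members: 2^{exponent} structures (a number with about {digits} digits)")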

It is instructive to see how the organizational structure question arises in the limited context of investment in energy efficiency. At a superficial level, this problem would seem to involve nothing more than calculating on a spreadsheet whether the net present values (NPVs) of investments in particular technologies are positive. But description of the firm's actual behavior would require (at least) specification of (1) who is assigned the job of collecting information on costs and cash flows that goes into the NPV calculation; (2) how forecasts of future costs and savings are obtained; (3) how information on the NPV of different investment possibilities is reported to other members in the firm (in particular, to other levels of management); (4) the decision rule by which some subset of managers decides whether to go ahead with projects or not (which may involve some sort of voting, some method of ranking projects, or some way of comparing the information received from different divisions); and (5) a "chain of command" by which decisions are implemented. Several of these steps entail the network structure of the firm; the last step involves, in addition to specification of channels of communication, a determination of who is authorized to give directives that are then carried out by other members of the organization. Thus, even in the simple case of sifting through potential projects and investing in those with positive NPV, the design of networks of communication, reporting, and command is an intrinsic part of the firm's optimization problem.
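
A toy Python sketch of steps (1)-(5) may help fix ideas. Every name, number, and rule in it is invented; it is meant only to show that the reporting and approval procedure, and not just the NPV arithmetic, has to be specified before the firm's behavior is determined.

# Toy sketch only: all names, figures, and decision rules below are invented.
# Division analysts estimate project NPVs (steps 1-3), and a committee ranks the
# reports and funds positive-NPV projects until a capital budget is exhausted (steps 4-5).

def npv(cost, annual_saving, rate, years):
    return -cost + sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

# Steps 1-2: each division's candidate projects, as (name, upfront cost, annual saving).
divisions = {
    "plant_A": [("lighting retrofit", 40_000, 9_000), ("boiler upgrade", 150_000, 24_000)],
    "plant_B": [("motor replacement", 25_000, 6_500), ("HVAC controls", 60_000, 8_000)],
}
costs = {name: cost for projects in divisions.values() for name, cost, _ in projects}

# Step 3: divisions report (division, project, NPV) triples up the hierarchy,
# using an assumed corporate hurdle rate of 12% and a 10-year horizon.
reports = [(division, name, npv(cost, saving, rate=0.12, years=10))
           for division, projects in divisions.items()
           for name, cost, saving in projects]

# Steps 4-5: the committee ranks the reports and approves positive-NPV projects
# in order until its (invented) capital budget runs out.
budget, approved = 100_000, []
for division, name, value in sorted(reports, key=lambda r: r[2], reverse=True):
    if value > 0 and costs[name] <= budget:
        approved.append((division, name, round(value)))
        budget -= costs[name]

print(approved)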

It should be stressed again that the actual degree of computational intractability of the tasks faced by firms is an empirical question. Some of the decisions made by firms (and other economic agents) are of the "no-brainer" variety. It does not take a lot of calculation for managers and employees alike to realize that a firm selling a standardized commodity in a competitive market has very little discretion in its pricing strategy. Similarly, if everyone were truly convinced that continued emission of GHGs at business-as-usual rates would lead to climate disasters and a large number of preventable deaths, and if this understanding were socially expressed through passage of a carbon tax or establishment of an appropriate regulatory regime, then the decision making of individual firms regarding their carbon emissions would be simplified. When the scientific evidence of the causes and consequences of stratospheric ozone depletion reached a sufficient level of acceptance, the political and diplomatic difficulties of reaching an agreement to reduce the emissions of ozone-depleting substances were overcome and firms were able to focus on finding alternatives.(21)

IV. CONCLUSIONS AND IMPLICATIONS

A. Modeling

The results reviewed here point to a need to replace the optimization assumption as the default in modeling the behavior of firms. But what sort of representation is to replace it? The aim of theorizing is to provide a foundation for scientific investigation and a source of testable hypotheses, not tautological generalizations that are immune to falsification. That, in turn, suggests two criteria for elements of a reconstructed theory of the firm: (1) specification of the firm's internal processes as algorithmic computations, and (2) adoption of an evolutionary view of economic dynamics.

The first point has already been elaborated upon in some detail in the previous section. The second has only been alluded to briefly in the discussion of the adaptation of firms, but it is critical as well. Whatever the nature of the computational problems faced by firms, we know that real firms function and change over time. The fate of economic organizations depends partly on the intentionality of their members, but also on the ebb and flow of market and other (legal, regulatory, cultural) environmental conditions. Although full optimization along all dimensions is not possible for firms, selection pressure exists nevertheless; no firm that is continually unprofitable can survive (at least without perpetual government subsidies), and firms that are highly profitable have advantages that allow them to expand to occupy more space in their market niches. The metaphors of evolutionary biology are a natural language for describing the process of organic growth and change, and constitute the basis of an "appreciative theory" of economic dynamics that is closer to what economists know and believe than the formalisms of general equilibrium maximization models.(22)

But as Nelson and Winter (1982) understood, the use of evolutionary imagery alone is not enough to create a rigorous and realistic theory of the firm. Darwin and his followers were able to formulate the theory of biological evolution prior to the "modern synthesis" that joined the operation of natural selection and Mendelian genetics, but until that unification was accomplished, there was controversy over the mechanism of evolution (Ridley, 1996). Rediscovery of Mendelian genetics was required to formulate laws governing the heritability of traits, and to explain the variation and rates of mutation within populations. Furthermore, it was not until the latter half of the twentieth century that the molecular-level genetic code for living systems (the DNA double helix) was discovered. Knowledge of molecular biochemistry increases our capacity to understand biological processes and evolutionary change, even though we are still far from an all-encompassing theory of living organisms.

The theory of human organizations has not yet achieved anything close to its own "molecular biochemistry." Representation of the inner life of the firm by models of algorithmic computation could be one part of such a theory, but obviously a great deal of work needs to be done before the full outline of such models can take shape. Unlocking the code of "organizational DNA" may involve insights from numerical optimization (applications of genetic algorithms, simulated annealing, or neural nets), evolutionary game theory (allowing the individuals within firms to interact repeatedly in self-interested ways), the "new institutional economics" (in which property rights and incentives are emphasized as determinants of institutional performance), or agent-based modeling (in which an "emergent order" arises from the purely local interactions of the agents). It would be a great advance in knowledge if we could devise a scientific description of the "genetics" of organizational phenotypes, even if a computable model of how firms change over time remains beyond our reach.
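
By way of illustration only, the following Python sketch applies one of the tools just mentioned, a bare-bones genetic algorithm, to a stylized "organizational genotype" (a bit string of communication links) evaluated on an arbitrary rugged fitness landscape. None of the details is intended as an empirical model of organizational performance; the sketch simply shows the selection-mutation-recombination loop that such an approach would involve.

import random

rng = random.Random(42)
GENOME_LENGTH = 24            # number of potential links in the stylized firm
POP_SIZE, GENERATIONS = 30, 60

# Invented rugged fitness: random weights on links plus pairwise interaction terms.
weights = [rng.uniform(-1, 1) for _ in range(GENOME_LENGTH)]
pairs = [(rng.randrange(GENOME_LENGTH), rng.randrange(GENOME_LENGTH),
          rng.uniform(-1, 1)) for _ in range(40)]

def fitness(genome):
    score = sum(w for g, w in zip(genome, weights) if g)
    score += sum(w for i, j, w in pairs if genome[i] and genome[j])
    return score

def mutate(genome, rate=0.03):
    return [1 - g if rng.random() < rate else g for g in genome]

def crossover(a, b):
    cut = rng.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

population = [[rng.randint(0, 1) for _ in range(GENOME_LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]            # selection pressure
    offspring = [mutate(crossover(rng.choice(survivors), rng.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
print("best structure found:", "".join(map(str, best)), "fitness:", round(fitness(best), 3))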

B. Policy

To return to the policy issue of estimating the non-environmental consequences of GHG reductions, the chief conclusion that can be drawn from the findings discussed here is that estimates based on conventional top-down models should be deemphasized in the debate. These models are built on a shaky foundation, and as a result their quantitative cost estimates offer only a specious kind of precision. The behavior of firms (including their invention and adoption of new technologies having different energy intensities) is not now being modeled in a tight and reliable way. Yet a well-articulated understanding of the actions and responses of firms is an essential prerequisite for reliable top-down model predictions.

This conclusion should not, however, be taken as a counsel of despair. Top-down models can still serve useful purposes (providing a consistent accounting framework, keeping track of the relative sizes of different sectors and the scales of various activities, projecting the consequences of a continuation of historical trends, etc.) as long as too much weight is not placed on their shoulders. The bottom-up models would appear to offer more promise at their current stage of development. These models have the virtue of specifying in detail what some of the technological possibilities are. (Of course, no model can incorporate the consequences of technological advances that have not been conceived. The world-changing technologies of the past century - ubiquitous computers, nuclear weapons, genetic engineering - were literally unimaginable to "futurists" of the past who were trying to visualize life 50 or 100 years ahead.) These technological possibilities can be incorporated into top-down models "off line" to indicate a wider range of scenarios that could be achieved through appropriate policy intervention (Boedecker et al., 1996; Koomey et al., 1998). Where the bottom-up models fall short is in spelling out exactly which kinds of policies could make their putative GHG reductions a reality.

Even if we acknowledge that we do not have particularly good models (yet) of organizational behavior or policy-induced technical change, it is still possible to be informed by the lessons of recent history. Close attention should be paid to the experience with other major environmental protection initiatives that have triggered innovation and reorganization. The most instructive recent examples include the phaseout of ozone-depleting substances under the Montreal Protocol and the massive reductions of SO2 emissions being accomplished under the Clean Air Act Amendments of 1990. In both cases, ex ante model forecasts greatly overestimated the costs of the policy initiatives (Bohi and Burtraw, 1997; Cook, 1996; Hammitt, 1997). Well-designed market-based policies appear to have been able to smooth the transition to a less-polluted regime (Economic Options Committee, 1994; Schmalensee et al., 1998; Stavins, 1998). One of the key elements of both situations was that the policy signal was clear and unambiguous: the reductions in consumption and production of ozone-depleting substances and in SO2 emissions were spelled out in a regulatory timetable. Within that framework, however, the affected firms were able to exercise maximum flexibility (and exhibited a correspondingly high degree of ingenuity) in meeting the targets.

In the case of GHG emissions, a carbon tax or emissions cap could provide a similarly clear signal. Within the context of the societal consensus that would be required to institute such a shift in the direction of policy, what additional features of an effective GHG reduction strategy would be suggested by the more realistic theory of the firm outlined above?

(1) Policy could reduce the complexity of firms' decision problems directly. As in the case of the ozone-depleting chemicals and SO2 emissions, a clear indication that action was necessary would eliminate uncertainty within firms about what strategy to pursue. At the present time (with the Kyoto Protocol unratified, major details of its framework still to be negotiated, and no domestic measures in place to implement required emissions reductions), firms are faced with multiple dilemmas: whether to engage in lobbying or political action to influence the outcome (and if so, what coalition to join); whether to extrapolate energy prices and usage patterns based on historical trends or to plan for a less fossil-fuel-intensive future; whether to anticipate that overseas competitors will enjoy competitive advantages (or suffer from disadvantages) depending on the actions foreign governments take; and what kinds of energy-related R&D to initiate - just to name some of the most difficult issues. An unmistakable global (and national) commitment to reduce emissions would reduce the range of potential activities related to this issue to the focused goal of reducing GHG emissions while maintaining profitability. As most people (including economists) well understand, there are times when it is easier to have fewer choices.

(2) A more realistic theory of the firm opens up increased possibilities for win-win policy approaches. Because no population is ever optimally adapted to a particular environment, it is possible to create selection pressures that reinforce more than one desirable trait. For example, an environmental protection policy that simultaneously encourages innovation and technical progress may be able to enhance organizations' economic performance and at the same time produce environmental gains. The Porter hypothesis (1991; Porter and van der Linde, 1995a,b) suggests that productivity benefits from environmental regulations can be achieved in some circumstances. Change mechanisms capable of nonincremental "jumps" from one local maximum to a higher one are consistent with this evidence, unlike the conventional model that has only a single optimal configuration for a firm.

(3) It may be possible for policy to impinge positively on the decision processes of firms through mechanisms other than the traditional price and regulatory instruments. The success of the U.S. Environmental Protection Agency's and the U.S. Department of Energy's voluntary pollution-prevention programs is instructive in this regard. These programs operate entirely without a coercive component. In Green Lights, Energy Star, 33/50, and similar programs, the government's activity is confined to providing advice, serving as a clearinghouse for information, and facilitating the dissemination of knowledge about best-practice techniques. These activities can be thought of as speeding up the "rate of mutation" within organizations or as fostering the multiplication and exchange of beneficial organizational "genes." The progress made by such programs does not imply that the traditional policy interventions are irrelevant, only that the range of potentially beneficial policies is much broader than that ordinarily contemplated by economists.

No doubt other policy consequences will emerge as firms' algorithmic computations and evolutionary processes become better understood. The point is that a shift in perspective that incorporates into working models the computational limits faced by firms can have a liberating effect on both theorizing and policy design. The optimization assumption may be adequate for some applications, but in analyzing the response of firms to GHG reduction policies, economists may have come up against problems that highlight the inadequacies of that assumption. Markets work and firms are successful because of decentralized decision making, because of the survival of social structures that have performed better than others in complex environments, and because computational intractability is managed through rules of thumb and heuristics. Why not recognize these realities explicitly and take them as the starting point for theory, rather than trying to force theory into the Procrustean bed of optimization? Acknowledgment of the complexity limits real organizations face would be a fertile beginning for new discoveries and breakthroughs in all branches of economics. It would also enable us to think more constructively about the non-environmental consequences of the measures that ultimately will have to be taken to stabilize the atmosphere.

This research was supported by a grant from the U.S. Environmental Protection Agency. The author is grateful to Keyvan Amir-Atefi, Stanley Burris, Sam DeCanio, Catherine Dibble, John Gliedman, Skip Laitner, Glenn Mitchell, Alan Sanstad, Stephen Schneider, William E. Watkins, and three anonymous referees for their comments. The conclusions are those of the author alone.

1. Use of this term, rather than the more conventional "economic costs," is deliberate. Environmental conditions (in particular, the state of the climate) enter directly into individuals' utility functions, and as such are an integral part of the material standard of living. An artificial division between "economic" and "environmental" effects can be misleading, as when environmental externalities are not taken into account in Ramsey-type infinite-horizon optimization models (Brock, 1977; Tol, 1994; Bovenberg and Smulders, 1996; Amano, 1997; Howarth, 1998). With this caveat in mind, it is still useful to ask what the effects of particular GHG reduction policies would be on standard measures of output such as GDP.

2. These studies have been reviewed and summarized by the Union of Concerned Scientists and the Tellus Institute (1998). The most comprehensive recent "bottom-up" analysis for the U.S. is the "5-lab study" conducted by five of the national laboratories (Interlaboratory Working Group, 1997). See also the special issue of Energy Policy edited by Bernow, et al. (1998).

3. There are exceptions to these generalizations. The meta-analysis by Repetto and Austin (1997) shows that the top-down models they surveyed would give positive GDP effects if five key assumptions were "best case": (1) noncarbon backstop fuel available, (2) efficient economic responses, (3) increased energy and product substitution, (4) joint implementation, and (5) revenues recycled efficiently. The Council of Economic Advisers (1998) also recently released a top-down analysis (based on the Battelle Laboratory's Second Generation Model) showing very low costs. Most of the emissions reductions in the Council's analysis were achieved through international trading of emissions allowances with the countries of the former Soviet Union, and with developing countries under yet-to-be-worked-out provisions of the Kyoto Protocol's Clean Development Mechanism.

4. See Koomey (1990), Lovins and Lovins (1991), Ayres (1993), Jaffe and Stavins (1993), DeCanio (1993), and the special issue of Energy Policy edited by Huntington et al. (1994) for an introduction to this literature.

5. An interesting potential short-term counterexample is the allocation of resources during wartime. In the U.S. at the peak of World War II, e.g., roughly 40% of the economy's output was devoted to the war effort (U.S. Bureau of the Census, 1975). The "efficiency" of the allocation of resources is difficult to gauge, but the economy was certainly productive - it overwhelmed the Axis powers in military production - and it must be remembered that the U.S. was still recovering from the Great Depression when rearmament began. Even inefficiently allocated jobs were likely to have been preferable to double-digit unemployment. See Rockoff (1984) on WWII price controls and the efficiency of the wartime American economy, and Lee and Passell (1979) on unemployment during the 1930s.

6. As Adaman and Devine (1996) put it,

The earlier models . . . do not address intra-firm organisational problems, since firms are treated as 'black boxes.' Once input is supplied output is obtained, with no consideration of 'shirking' or the costs of monitoring and contract enforcement. More recent models seek to overcome this naive conceptualisation by recognising that shirking may be partially overcome by supervision and that the 'principal-agent' problems that arise when firms are run by a managerial body rather than by their beneficial owner(s) may be partially dealt with by monitoring and incentive schemes. It is important to note, however, that these problems are structurally the same whether the owner is the Planning Bureau in a market socialist system or the stockholders in a corporate capitalist system. (p. 525, references omitted)

7. Simon seems to have arrived at bounded rationality as a generalization from common sense and empirical research in psychology rather than through reliance on the results of the formal mathematical theory of algorithmic computation.

8. This practice has been controversial for a long time. Early in the 1950s Friedman wrote, ". . . under a wide range of circumstances, individual firms behave as if they were seeking rationally to maximize their expected returns . . . and had full knowledge of the data needed to succeed in this attempt" (1953, p. 21). In a symposium on economic methodology published in the American Economic Review in 1963, the philosopher of science Ernest Nagel noted that "[Friedman] freely admits that as a rule businessmen lack such knowledge and do not perform the intricate calculations required for ascertaining the indicated maximum. Indeed, he declares that 'the apparent immediate determinants of business behavior' could be anything at all; e.g., ingrained habit or a chance influence. He nevertheless claims that these admitted facts do not affect the validity of the hypothesis" (p. 217). In the Discussion section of the same symposium, Simon urges economists to discard the false "as if" assumption and "make the observations necessary to discover and test true propositions. . . . Then let us construct a new market theory on these firmer foundations" (p. 230).

9. The example of dogs, frisbees, and differential equations is a "folk analogy"; it does appear in Lewis (1992a), along with a withering critique of its applicability to the problem of the computability of choice functions.

10. It is not known whether "[w]hatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable" (Copeland, 1996). As a recent textbook on computability puts it, "[S]ince the word algorithm has no general definition separated from a particular language, Church's thesis cannot be proved as a mathematical theorem" (Davis et al., 1994, p. 69).

11. Board (1994) derives a similar result for agents whose computational resources are polynomially bounded in the relevant parameters. He finds that "it is unlikely that agents can learn even those price functions and economies computable by finite state automata [a class of computing devices less sophisticated than Turing machines]. This result casts doubt on the ability of agents to form rational expectations in complex economies" (p. 246).

12. Rust makes the distinction between computations within economic models (such as calculation of the equilibrium of a CGE model) and the computations that are (explicitly or implicitly) carried out by economic agents or organizations. While computations of the first type may exhibit computability or complexity problems, the focus here is on the latter type.

13. It should be noted that Velupillai argues that the consequences of computability constraints are not as severe as suggested by Spear (1989) or Lewis (1985a).

14. Barrow (1998) has a very interesting discussion of whether Gödel-type undecidability results have implications for physics. There does not appear to be a consensus.

15. "Complexity" as used here is distinct from the notion of complexity that is often associated with the Santa Fe Institute. For example, Durlauf (1997) in "What Should Policymakers Know about Economic Complexity?" gives the definition that "a system is said to be complex when it exhibits some type of order as a result of the interactions of many heterogeneous objects" (p. 1). The line of inquiry Durlauf refers to often builds on the existence of positive feedbacks and/or increasing returns. For a survey of"complexity theory" (defined as "the study of non-linear dynamic systems") applied to organizations, see Levy (forthcoming).

16. That is, it can be shown that any NP-complete problem can be transformed into each other NP-complete problem by a polynomial-time procedure.

17. Rust cites Nemirovsky and Yudin (1983), who show that the consumer choice problem is tractable if the utility function is concave.

18. In actuality, many of the everyday problems that individuals in firms must handle can be shown to be NP-complete when appropriately formalized. Garey and Johnson's (1979) extensive list includes problems in network design, storage and retrieval, database management, and sequencing and scheduling.
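
As a purely illustrative aside (not part of the original note), the following sketch shows why such problems defeat brute force. Dividing n tasks between two teams so that the workloads are as even as possible is a version of the PARTITION problem on Garey and Johnson's (1979) list; the task durations below are invented, and the only point is that the number of assignments an exhaustive search must examine doubles with every task added.

    from itertools import product

    durations = [7, 3, 2, 5, 8, 4, 6, 1]      # hypothetical task lengths

    best_gap = None
    checked = 0
    for assignment in product((0, 1), repeat=len(durations)):
        # Team 0 gets the tasks marked 0, team 1 gets the rest.
        checked += 1
        load_a = sum(d for d, team in zip(durations, assignment) if team == 0)
        load_b = sum(durations) - load_a
        gap = abs(load_a - load_b)
        if best_gap is None or gap < best_gap:
            best_gap = gap

    print("assignments examined:", checked)    # 2**8 = 256, doubling with each added task
    print("smallest workload gap:", best_gap)

Eight tasks already require 256 evaluations; forty would require about a trillion (2^40), which is why the individuals who actually do such scheduling rely on rules of thumb rather than exhaustive optimization.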

19. See DeCanio and Watkins (1998) and the references therein. For some tasks and rules of procedure, the optimal structure can be determined, as in Radner (1992, 1993).

20. Papadimitriou (1996) reports on several problems of organizational optimization for which decision making is computationally intractable. For example, regarding the problem of solving a linear program in a hierarchical organization in which the decision makers have their own objective functions, he observes, "Needless to say, if the hierarchy, as well as the linear program, is part of the input, then the problem becomes PSPACE-complete" (p. 563).

21. This is not to say that serious problems of leadership, free-riding, and international burden-sharing did not remain. See Benedick (1991) and Victor et al. (1998).

22. The starting point for any review (not to be attempted here) of what has been accomplished by evolutionary models is Nelson and Winter (1982; see also Nelson 1995, 1997). The "organizational ecology" approach to studying populations of organizations can be found in sociology (see Hannan and Freeman, 1989, e.g.). It is worth noting that Hayek (1988) argued forcefully that the idea of evolutionary change in the humanities and social sciences predated its acceptance in biology, although of course the specifics of the theories differ across the various fields. In particular, the biological theory of evolution excludes the inheritance of acquired characteristics, whereas the evolution of culture and institutions must be based on the transmission of learned patterns of behavior.

REFERENCES

Adaman, Fikret, and Pat Devine, "The Economic Calculation Debate: Lessons for Socialists," Cambridge Journal of Economics, 20, 1996, 523-537.

Amano, Akihiro, "On Some Integrated Assessment Modeling Debates," paper presented at the IPCC Asia-Pacific Workshop on Integrated Assessment Models, United Nations University, Tokyo, Japan, March 10-12, 1997.

Ayres, Robert U., "On Economic Disequilibrium and Free Lunch," Working Paper 93/45/EPS, Centre for the Management of Environmental Resources, INSEAD, Fontainebleau, France, 1993.

Barrow, John D., Impossibility: The Limits of Science and the Science of Limits, Oxford University Press, Oxford, 1998.

Benedick, Richard Elliot, Ozone Diplomacy: New Directions in Safeguarding the Planet, Harvard University Press, Cambridge, Mass., 1991.

Bernow, S., M. Duckworth, and J. DeCicco, Special Issue: Climate Strategy for the United States: "Bottom-up" Analyses of CO2 Reductions, Costs and Benefits, Energy Policy, 26:5, April 1998.

Binmore, Ken, "Modeling Rational Players, Part I," Economics and Philosophy, 3, 1987, 179-214.

Blondel, Vincent, and John N. Tsitsiklis, "NP-Hardness of Some Linear Control Design Problems," SIAM Journal on Control and Optimization, 35:6, 1997, 2118-2127.

Board, Raymond, "Polynomially Bounded Rationality," Journal of Economic Theory, 63, 1994, 246-270.

Boedecker, Erin, John Cymbalsky, Crawford Honeycutt, Jeffrey Jones, Andy S. Kydes, and Duc Le, "The Potential Impact of Technological Progress on U.S. Energy Markets," Issues in Midterm Analysis and Forecasting, U.S. Department of Energy, Energy Information Administration, Washington, D.C., 1996.

Bohi, Douglas, and Dallas Burtraw, "SO2 Allowance Trading: How Do Expectations and Experience Measure Up?" Electricity Journal, 10:7, August/September 1997, 67-75.

Bovenberg, A. Lans, and Sjak A. Smulders, "Transitional Impacts of Environmental Policy in an Endogenous Growth Model," International Economic Review 37, November 1996, 861-894.

Brock, William, "A Polluted Golden Age," in V. Smith (ed.), Economics of Natural and Environmental Resources, Gordon and Breach, New York, 1977, 441-461.

Chaitin, G. J., "Randomness in Arithmetic and the Decline and Fall of Reductionism in Pure Mathematics," Bulletin of the European Association for Theoretical Computer Science, No. 50, June 1993, 314-328. Reprinted in G. J. Chaitin, The Limits of Mathematics, Springer, Singapore, 1998.

Cheeseman, Peter, Bob Kanefsky, and William M. Taylor, "Where the Really Hard Problems Are," in J. Mylopoulos and R. Reiter (eds.), Proceedings of IJCAI-91, Morgan Kaufmann, San Mateo, Calif., 1991.

Chow, C. S., and J. N. Tsitsiklis, "The Complexity of Dynamic Programming," Journal of Complexity, 5, 1989, 466-488.

Conlisk, John, "Why Bounded Rationality?" Journal of Economic Literature, 34:2, 1996, 669-700.

Cook, Elizabeth (ed.), Ozone Protection in the United States: Elements of Success, World Resources Institute, Washington, D.C., 1996.

Copeland, B. J., "The Church-Turing Thesis," Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/church-turing, 1996.

Council of Economic Advisers, The Kyoto Protocol and the President's Policies to Address Climate Change: Administration Economic Analysis, Washington, D.C., 1998.

Cutland, Nigel, Computability: An Introduction to Recursive Function Theory, Cambridge University Press, Cambridge, 1980.

Davis, Martin D., Ron Sigal, and Elaine J. Weyuker, Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, 2nd ed., Academic Press, San Diego, 1994.

DeCanio, Stephen J., "Barriers within Firms to Energy-Efficient Investments," Energy Policy, 21:9, 1993, 906-914.

DeCanio, Stephen J., and William E. Watkins, "Information Processing and Organizational Structure," Journal of Economic Behavior and Organization, 36, 1998, 275-294.

DeCanio, Stephen J., William E. Watkins, Glenn Mitchell, Keyvan Amir-Atefi, and Catherine Dibble, Complexity in Organizations, unpublished manuscript, University of California, Santa Barbara, 1998.

Deng, Xiaotie, and Christos H. Papadimitriou, "On the Complexity of Cooperative Solution Concepts," Mathematics of Operations Research, 19:2, 1994, 257-266.

Durlauf, Steven N., "What Should Policymakers Know about Economic Complexity?" paper prepared for The Washington Quarterly, http://www.santafe.edu/sfi/publications/Working-Papers/97-10-080.html, 1997.

Economic Options Committee, 1994 Report of the Economic Options Committee, United Nations Environment Programme, Nairobi, 1994.

Energy Information Administration, Impacts of the Kyoto Protocol on U.S. Energy Markets and Economic Activity, Office of Integrated Analysis and Forecasting, U.S. Department of Energy, Washington, D.C., October 1998.

Energy Modeling Forum, "Post-Kyoto EMF Scenarios - Round #2," http://www.stanford.edu/group/EMF/Research.html, June 11, 1998.

Friedman, E. J., and S.S. Oren, "The Complexity of Resource Allocation and Price Mechanisms Under Bounded Rationality," Economic Theory, 6, 1995, 225-250.

Friedman, Milton, Essays in Positive Economics, University of Chicago Press, Chicago, 1953.

Garey, Michael R., and David S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York, 1979 (updated 1991).

Gödel, Kurt, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I," Monatshefte für Mathematik und Physik, 38, 1931, 173-198.

Goulder, Lawrence H., and Stephen H. Schneider, "Induced Technological Change and the Attractiveness of CO2 Abatement Policies," Resource and Energy Economics, forthcoming.

Graicunas, Vytautas Andrius, "Relationship in Organization," Bulletin of the International Management Institute, 7, March 1933, 39-42.

Hammitt, James K., Are the Costs of Proposed Environmental Regulations Overestimated? Evidence from the CFC Phaseout, Center for Risk Analysis and Department of Health Policy and Management, Harvard School of Public Health, Cambridge, Mass., May 1997.

Hannan, Michael T., and John Freeman, Organizational Ecology, Harvard University Press, Cambridge Mass., 1989.

Hayek, Friedrich A. von, The Fatal Conceit: The Errors of Socialism, in W. W. Bartley III (ed.), The Collected Works of F. A. Hayek, vol. 1, University of Chicago Press, Chicago, 1988.

Hayes, Brian, "Can't Get No Satisfaction," American Scientist, March-April 1997, http://www.amsci.org/amsci/issues/comsci97/compsci9703.html.

Howarth, Richard B., "An Overlapping Generations Model of Climate-Economy Interactions," Scandinavian Journal of Economics, 100:3, 1998, 575-591.

Huntington, Hillard, Lee Schipper, and Alan H. Sanstad (eds.), "Markets for Energy Efficiency," Energy Policy, 22:10, 1994 (special issue).

Interlaboratory Working Group, Scenarios of U.S. Carbon Reductions: Potential Impacts of Energy Technologies by 2010 and Beyond, Lawrence Berkeley National Laboratory, Berkeley, Calif. (LBNL-40533), and Oak Ridge National Laboratory, Oak Ridge, Tenn. (ORNL-444), September 1997, http://www.ornl.gov/ORNL/Energy_Eff/CON444.

Jaffe, Adam B., and Robert N. Stavins, "The Energy Paradox and the Diffusion of Conservation Technology," faculty research working paper series R93-23, John F. Kennedy School of Government, Harvard University, Cambridge, Mass., 1993.

Kauffman, Stuart A., The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, New York, 1993.

Ko, Ker-I, Complexity Theory of Real Functions, Birkhäuser, Boston, 1991.

Koomey, Jonathan G., "Energy Efficiency Choices in New Office Buildings: An Investigation of Market Failures and Corrective Policies," Ph.D. Dissertation, Program in Energy and Resources, University of California, Berkeley, 1990.

Koomey, Jonathan G., R. Cooper Richey, Skip Laitner, Robert J. Markel, and Chris Marnay, Technology and Greenhouse Gas Emissions: An Integrated Scenario Analysis Using the LBNL-NEMS Model, Energy Analysis Department, Environmental Energy Technologies Division, Ernest Orlando Lawrence Berkeley National Laboratory, University of California, Berkeley, Calif., and U.S. Environmental Protection Agency, Office of Atmospheric Programs, Washington, D.C., 1998.

Kuhn, Thomas S., The Structure of Scientific Revolutions, The University of Chicago Press, Chicago, 1962.

Lee, Susan Previant, and Peter Passell, A New Economic View of American History, Norton, New York, 1979.

Levinthal, Daniel A., "Adaptation on Rugged Landscapes," Management Science, 43:7, 1997, 934-950.

Levy, David, "Applications and Limitations of Complexity Theory in Organization Theory and Strategy," in Handbook of Strategic Management, 2nd ed., Dekker, New York, forthcoming.

Lewis, A. A., "On Effectively Computable Realizations of Choice Functions," Mathematical Social Sciences, 10, 1985a, 43-80.

-----, "The Minimum Degree of Recursively Representable Choice Functions," Mathematical Social Sciences, 10, 1985b, 179-188.

-----, "Structure and Complexity: The Use of Recursion Theory in the Foundations of Neoclassical Economics and the Theory of Games," unpublished manuscript, Department of Mathematics, Cornell University, Ithaca, N.Y., 1986.

-----, "On Turing Degrees of Waltasian Models and a General Impossibility Result in the Theory of Decision Making," Mathematical Social Sciences, 24, 1992a, 141-171.

-----, "Some Aspects of Effectively Constructive Mathematics that are Relevant to the Foundations of Neoclassical Mathematical Economics and the Theory of Games," Mathematical Social Sciences, 24, 1992b, 209-236.

Lovins, Amory B., and L. Hunter Lovins, "Least-Cost Climatic Stabilization," Annual Review of Energy and the Environment, 16, 1991, 433-531.

-----, Climate: Making Sense and Making Money, The Rocky Mountain Institute, Snowmass, Colo., 1997.

Manne, Alan S., and Richard G. Richels, Buying Greenhouse Insurance: The Economic Costs of Carbon Dioxide Emission Limits, MIT Press, Cambridge, Mass., 1992.

Nachbar, J. H., "On Computability in Infinitely Repeated Discounted Games," unpublished manuscript, Washington University, 1993.

Nachbar, John H., and William R. Zame, "Non-computable Strategies and Discounted Repeated Games," Economic Theory, 8, 1996, 103-122.

Nagel, Ernest, "Assumptions in Economic Theory," American Economic Review, 53:2, 1963, 211-219.

Nagel, Ernest, and James R. Newman, Gödel's Proof, New York University Press, New York, 1958.

Nelson, Richard, "How New Is New Growth Theory?" Challenge, 40:5, 1997, 29-58.

Nelson, Richard R., "Recent Evolutionary Theorizing about Economic Change," Journal of Economic Literature, 33:1, 1995, 48-90.

Nelson, Richard R., and Sidney G. Winter, An Evolutionary Theory of Economic Change, Belknap Press, Cambridge, Mass., 1982.

Nemirovsky, A. S., and D. B. Yudin, Problem Complexity and Method Efficiency in Optimization, Wiley, New York, 1983.

Nordhaus, William D., Managing the Global Commons: The Economics of Climate Change, MIT Press, Cambridge, Mass., 1994.

Page, Scott E., "Two Measures of Difficulty," Economic Theory, 8, 1996, 321-346.

Papadimitriou, Christos H., "On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence," Journal of Computer and Systems Sciences, 48, 1994, 498-532.

-----, Computational Complexity, Addison-Wesley, Reading, Mass., 1995.

-----, "Computational Aspects of Organization Theory (Extended Abstract)," in Josep Diaz and Maria Serna (eds.), Algorithms - ESA '96: Fourth Annual European Symposium, Barcelona, Spain, September 1996 Proceedings, Springer, 1996.

-----, "Games Against Nature," Journal of Computer and System Sciences, 31, 1985, 288-301.

Papadimitriou, Christos H., and John Tsitsiklis, "Intractable Problems in Control Theory," SIAM Journal on Control and Optimization, 24:4, 1986, 639-654.

Porter, Michael E., "America's Green Strategy: Environmental Standards and Competitiveness," Scientific American, 264:4, 1991, 168.

Porter, Michael E., and Claas van der Linde, "Toward a New Conception of the Environment-Competitiveness Relationship," Journal of Economic Perspectives, 9:4, 1995a, 97-118.

-----, "Green and Competitive: Breaking the Stalemate," Harvard Business Review, 73:5, 1995b, 120-134.

Rabin, Michael O., "Effective Computability of Winning Strategies," Annals of Mathematics Studies, 39, 1957, 147-157.

Radner, Roy, "Hierarchy: The Economics of Managing," Journal of Economic Literature, 30, September 1992, 1382-1415.

-----. "The Organization of Decentralized Information Processing," Econometrica, 61:5, 1993, 1109-1146.

Repetto, Robert, and Duncan Austin, The Costs of Climate Protection: A Guide for the Perplexed, World Resources Institute, Washington, D.C., 1997.

Ridley, Mark, Evolution, 2nd ed., Blackwell, Cambridge, Mass., 1996.

Rockoff, Hugh, Drastic Measures: A History of Wage and Price Controls in the United States, Cambridge University Press, Cambridge, 1984.

Rust, John, "Dealing with the Complexity of Economic Calculations," paper for "Fundamental Limits to Knowledge in Economics," Workshop, Santa Fe Institute, August 3, 1996, Santa Fe, revised 1997.

Schmalensee, Richard, Paul L. Joskow, A. Denny Ellerman, Juan Pablo Montero, and Elizabeth M. Bailey, "An Interim Evaluation of Sulfur Dioxide Emissions Trading," Journal of Economic Perspectives, 12:3, 1998, 53-68.

Schneider, Stephen H., and Lawrence H. Goulder, "Achieving Low-Cost Emissions Targets," Nature, 389, 4 September 1997, 13-14.

Simon, Herbert A., "Discussion," American Economic Review, 53:2, 1963, 229-231.

-----, Models of Thought, Yale University Press, New Haven, Conn., 1979.

Spear, Stephen E., "Learning Rational Expectations under Computability Constraints," Econometrica, 57:4, 1989, 889-910.

Stavins, Robert N., "What Can We Learn from the Grand Policy Experiment? Lessons from SO2 Allowance Trading," Journal of Economic Perspectives, 12:3, 1998, 69-88.

Tol, Richard S. J., "The Damage Costs of Climate Change: A Note on Tangibles and Intangibles, Applied to DICE," Energy Policy, 22:5, 1994, 436-438.

Traub, J. F., G. W. Wasilkowski, and H. Wozniakowski, Information-based Complexity, Academic Press, Boston, 1988.

Turing, A. M., "On Computable Numbers, with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, series 2, 42, 1936-37, 230-265.

Union of Concerned Scientists and Tellus Institute, A Small Price to Pay: US Action to Curb Global Warming Is Feasible and Affordable, UCS Publications, Cambridge, Mass., 1998.

U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970, Bicentennial Edition, Part I, Government Printing Office, Washington, D.C., 1975.

Velupillai, K. (Vela), "Expository Notes on Computability and Complexity in (Arithmetical) Games," Journal of Economic Dynamics and Control, 21, 1997, 955-979.

Victor, David G., Kal Raustiala, and Eugene B. Skolnikoff (eds.), The Implementation and Effectiveness of International Environmental Commitments: Theory and Practice, International Institute for Applied Systems Analysis, Laxenburg, Austria, and MIT Press, Cambridge, Mass., 1998.

Weinberger, Edward D., "NP Completeness of Kauffman's N-k Model, a Tuneably Rugged Fitness Landscape," unpublished manuscript, Max-Planck-Institut für biophysikalische Chemie, Göttingen-Nikolausberg, 1996.

Wilson, Robin J., Introduction to Graph Theory, 3rd ed., Longman, New York, 1985.