Designing corporate governance regimes.

"Time is a device that was invented to keep everything from happening at once" (2)


If survival is a measure of success, then the public corporation has to be considered a highly successful institution. Notwithstanding the periodic prophecies of its imminent demise, (3) the public corporation has not only survived, but proven to be a highly effective mechanism for coordinating the intertemporal decisions (4) of thousands of actors--employees, customers, suppliers, lenders, shareholders, board members, and managers. At this atomic level, the public corporation can be seen as a system, and as such bears a resemblance to another system comprising a large number of participants--the market. (5) But unlike fully competitive markets, in which each participant has little power to tilt the scales in its favor, the corporation is all about power--about who has it and the consequences that ensue when it is brought to bear. In fact, ever since Berle and Means (6) underlined the power asymmetry between shareholders and managers, the governance gap produced by this "separation of ownership and control" has been the "problem" that corporate governance has sought to address, at least in the context of public corporations. After each new round of corporate scandals or wholesale market failures, policymakers identify the sources of governance failures and adjust legal rules to make them more robust and reliable, and shareholders and creditors that answer to their own owners promise them greater vigilance and accountability. The two most recent examples of wholesale governance failures, the Enron/WorldCom scandals in 2001 and the financial crisis of 2007-2009, led to myriad blue ribbon panels and congressional hearings and yielded the Sarbanes-Oxley and Dodd-Frank Acts.

A corporate governance regime can be seen as a set of formal and informal rules that help actors coordinate their behavior and anticipate and deal with conflicts produced by differences in their expectations and preferences. While each corporation is a system unto itself, individual corporations cooperate and compete with each other, and thus form a larger system, governed by a broader set of rules. One can further expand the system to which corporate governance applies by including financial intermediaries, which help corporations raise capital and hedge risks, and households, which provide funds to corporations and financial intermediaries and interact with both of them in myriad other contexts. It follows that the first step in designing a governance regime is to specify the system being governed. For example, the governance rules in the Sarbanes-Oxley Act are primarily aimed at individual corporations; those in the Dodd-Frank Act, on the other hand, have a broader scope, given that the Act casts a regulatory net that extends to corporations, financial intermediaries, and households. Congress cast this wider net in order to identify and address system-wide problems--systemic risk--in a timely fashion, a task made difficult by the web of interconnections between intermediaries, corporations, and households.

This paper approaches the problem of corporate governance from an engineering perspective. It argues that one should design and test corporate governance regimes using formal techniques similar to those developed by engineers to design, test, and update complex concurrent systems. (7) These are systems composed of multiple agents or components that interact over time and must synchronize their individual and joint activities to properly share resources, avoid conflicts, and produce the desired system behavior. (8) Corporations are concurrent systems; in fact, computer scientists sometimes refer to business organizations (9) and contract relations (10) when illustrating characteristics of concurrent systems. Engineers have given close attention to the general problem of how to specify and design concurrent systems for the very reason that costly, even catastrophic, system failures are difficult to detect, particularly when system components or actors are able to engage in behavior that cannot be fully observed by others. This can lead to incongruous behavior that increases the costs of undertaking planned joint activity (or forecloses it altogether). (11) As a result, a designer of a concurrent system has to anticipate, as best as possible, how these different components will interact and the extent to which coordination failure can affect the behavior of the system as a whole. (12)

By using a set of "governance rules" to govern the design and testing of concurrent systems, engineers have been able to build large, extremely complex, safety-critical systems that are robust and reliable. As a general matter, a system is robust if it behaves reasonably well even when it encounters unforeseen contingencies. (13) A system's reliability is in turn a function of how well it meets its specifications--i.e., its stated goals or intended behavior. (14) A designer's ability to prove a system's correctness--that it is guaranteed to meet its specifications--can be important where system errors can produce significant harm--e.g., railway crossings, hospital equipment, and nuclear power plants. (15) However, formally proving that a system will meet its specification--i.e., showing correctness--is extremely difficult, except in relatively simple systems. (16) The best that system implementers can usually hope for is to show that the system is reliable, in that its behavior stays within a specified interval of acceptable error.

As we have seen, concurrent systems are highly complex because the interconnections between the various components change over time in an indeterminate fashion. The goals of this paper are to introduce some of the formal tools that engineers use to pierce through the complexity of concurrent systems and address their inherent indeterminacy, and to provide some examples of how these tools can be used to make sense of the complexity of corporations and governance regimes. It is not my intention in this paper to develop a complete theory of the design of governance regimes.

Part I argues that the standard conception of corporate complexity fails to take into account the important role played by the multi-dimensional "social dependence" of corporate participants and the intertemporal nature of their relationships. Part II first describes the general problem of designing corporate governance regimes in light of the inherent complexity of public corporations. It then describes the relevant aspects of the tools used by engineers to design and test concurrent systems. After that, it shows how one can extend the general insights from the concurrency approach to the problem of specifying, designing, and verifying the reliability of corporate governance regimes. Part III develops some additional implications of the approach discussed in Part II. Part IV provides a conclusion.

I. Complexity Matters: Social and Intertemporal Dependence

This Part begins by providing an overview of corporate complexity and then describes two sources of complexity that have not received adequate attention in corporate law theories: the social dependence of corporate actors and the intertemporal nature of their relationships. (17) The last section argues that complexity matters only when decision-makers have limited time and resources to make sense of it; that is, only if they have bounded rationality.

A. Corporate Complexity

What does it mean to say that an organization, a bureaucracy, or a set of legal rules, such as the tax code, is complex? (18) The complexity of an object or system (19) is a function of the number of parts that it has and the ways in which they interact. (20) What does it mean for the components of a system to interact with each other? We will say that two objects interact when the behavior of each influences the other and some coordination is required. Human interactions differ in a variety of ways: they can be consensual or imposed by a third party, and they require different levels of coordination, cooperation, (21) planning, (22) and intersection of intentions. (23) A competitive market, by definition, has many components in the form of buyers and sellers; however, market participants interact along a well-defined interface--in the form of prices--which transfers information about the aggregate behavior of all participants. (24) On the other hand, a system with few parts may be complex if the interface between them is not well-calibrated or involves multiple points of contact--i.e., if it requires a greater amount of communication or coordination. For example, two English-speaking individuals communicate via a more reliable interface than two individuals who do not share the same language and thus have to resort to the much more under-specified syntax and semantics of hand gestures. (25)

Public corporations are highly complex systems, given that they are composed of a large number of actors who, over time, interact with each other in a variety of ways, with incomplete information, and in the shadow of legal rules, contracts, and markets, which are themselves complex. People are also extremely complex along multiple dimensions: their DNA, bodies, and minds, as well as their plans, interests, emotions, and all the other things affecting their behavior. And yet when we interact with each other and with corporations, we seem to manage quite well; moreover, when we engage in these transactions between obviously complex entities, we give relatively little attention to their myriad parts and the way that those parts are brought together to form a whole. (26) Paradoxically, the fact that we can proceed in our daily lives oblivious to much of the complexity around and within us only underlines the importance of thinking about the general problem. There is obvious value to well-worn solutions, whose operation becomes automatic or second-nature; but their very invisibility in our deliberation process can also create costs. For example, people transacting in complex environments often resort to decisional short-cuts; heuristics help economize on deliberation costs, but they can also lead to systematic deviations from full rationality. (27)

B. Social Dependence

Social dependence relations are ubiquitous across settings touched by law, (28) economics, (29) and ethics. (30) For example, Robert Nozick argued that these mutual connections between people's actions provide the "background that gives rise to ethics." (31) At the same time, social dependence within particular legal contexts can be pervasive, multi-dimensional, and thus complex; as a result, when these relations are acknowledged in legal analyses, it is usually as a first step in abstracting away from them to focus on more tractable aspects of problems, or alternatively as part of arguments regarding legal indeterminacy.

I will say that a person X is socially dependent on Y if two conditions are met: (1) X has a goal that can be brought about if someone takes an action; and (2) X cannot carry out that action alone, but Y can. (32) When two individuals have the same or different goals that can only be brought about by the participation of the other person, they are mutually dependent. (33) For example, if Joe's goal is to marry Sally, Joe is socially dependent on Sally saying "I do," and if Sally's goal is to marry Joe, they are in a mutual dependence relationship. (34) Additionally, they cannot marry unless a third party with the proper authority performs the ceremony; that third party will in turn depend on an agent of the state--e.g., the legislature--to grant it the authority to perform the marriage. The legislature depends on others--the voters who elected its members, the judges who interpret the relevant constitutional provisions regarding the legislature's own authority--and so on. There are three important points here. First, a web of socially dependent relationships can be extensive and complex, even in relatively simple transactions such as this one, let alone those within large public corporations. Second, social dependence may arise through voluntary or autonomous acts--e.g., entering into a contract--but it does not have to. A person who has a heart attack in a restaurant is socially dependent on others present to perform CPR. Third, people may voluntarily put themselves in socially dependent relationships only to find out later that terminating them is too costly, as in cases in which transaction-specific investments make a party vulnerable to opportunistic behavior. (35)
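This two-condition definition lends itself to a small executable sketch. The Python fragment below is purely illustrative--the `Actor` class and the `achieves` mapping are my own hypothetical constructs, not part of the definition's formal apparatus--but it captures both conditions and derives mutual dependence from them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Actor:
    name: str
    goals: frozenset     # goals the actor wants brought about
    actions: frozenset   # actions the actor can itself perform

def socially_dependent(x: Actor, y: Actor, achieves: dict) -> bool:
    """X is socially dependent on Y if (1) X has a goal that some action
    would bring about, and (2) X cannot perform that action but Y can."""
    return any(
        goal in achieved and action not in x.actions and action in y.actions
        for goal in x.goals
        for action, achieved in achieves.items()
    )

def mutually_dependent(x: Actor, y: Actor, achieves: dict) -> bool:
    return socially_dependent(x, y, achieves) and socially_dependent(y, x, achieves)
```

On the Joe and Sally example, each needs the other's "I do," so `mutually_dependent` holds for the pair, while an actor with no goals depends on no one.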

Agency relationships are one example of social dependence, given that a principal hires an agent to act on her behalf to achieve some goal, and does so because of a belief that she can maximize her net returns if the agent is the one who takes the required actions. (36) However, the agency approach is concerned with the basic issue of how to realign the incentives of self-interested agents so that they coincide with those of their principals. Social dependence is a more general concept--for example, two individuals can be in a socially dependent situation even if neither of them knows of the existence of the other. (37) By contrast, a necessary condition of an agency relationship is the parties' awareness of the agency roles that they are playing.

Finally, when we govern our bodies, corporations, and societies, we resort to one set of dependence relations to manage a second; this is the case whether we engage in self-governance or delegate the task to lawmakers and markets. This second-order social dependence increases the complexity of the governance task, and thus limits the extent to which a group of actors can resort to a set of governance rules to reduce the risks created by their first-order social dependence. As a result, in order to design effective governance regimes, designers have to address this second-order governance complexity.

C. Social Dependence in Corporations

Corporate participants pool capital, labor, and expertise, and repeatedly interact over time, sharing those resources in order to produce and distribute surpluses that they then divide among themselves. (38) As a result, shareholders, board members, managers, creditors, employees, and other constituencies depend on each other to help realize goals at both the individual and collective levels. A corporate actor may depend on one or more other actors, and do so along a number of dimensions. It follows that a corporation's complexity will depend both on the number of participants and on the nature of these dependencies, including their multidimensionality. Social dependence reveals an interest and willingness of corporate actors to engage in a cooperative enterprise; although they may each have different reasons for putting themselves in such a relation, these possibly divergent interests do intersect in an important way: the actors believe that others within the firm can help. A person who ceases to believe in the value of being part of the firm will eventually exit. Nonetheless, a willingness to cooperate is not a sufficient condition to bring cooperation about; (39) cooperation also requires coordination (40) at the physical, temporal, and epistemic levels. (41) Achieving coordination along these three dimensions is not a straightforward exercise; it is fraught with the potential for failure, which can be costly even when the failure is temporary in nature.

D. Temporal Complexity

Corporations are embedded within a temporal frame. They are abstract entities, brought into existence by the filing of a certificate of incorporation and persisting until they are dissolved, merged, or terminated by the state. Temporal issues come into play whenever the consequences of decisions--in the form of costs and benefits--are experienced at different points in time, as well as in situations in which the order in which events occur affects the overall outcome. As a general matter, a rational person will take an action in the current period if it is intertemporally worthwhile: if, given her belief about how she plans to act in the future, the action maximizes her current and future well-being. (42) In making that determination, she will try to predict two things that may change over time: (1) her preferences and those of others--e.g., other corporate actors; and (2) the environment in which her actions will unfold and payoffs materialize.
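A standard way to make "intertemporally worthwhile" concrete is exponential discounting. The sketch below is a minimal illustration under assumed numbers--the discount factor and the payoff streams are invented, and real actors must also predict how their preferences and environment will change:

```python
def discounted_value(payoffs, delta=0.95):
    """Present value of a stream of per-period payoffs under discount factor delta."""
    return sum(p * delta ** t for t, p in enumerate(payoffs))

def worthwhile(action_payoffs, status_quo_payoffs, delta=0.95):
    """Take the action now only if its discounted stream beats the status quo."""
    return discounted_value(action_payoffs, delta) >= discounted_value(status_quo_payoffs, delta)

# A manager weighing firm-specific training: an up-front cost of 10 this
# period against higher payoffs in the four periods that follow.
train = [-10, 6, 6, 6, 6]
stay = [0, 2, 2, 2, 2]
```

Whether the investment is worthwhile turns on the horizon: a manager who expects to leave the firm after one period bears only the up-front cost, so the same action fails the test.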

This means that corporate actors will be in a series of intertemporal dependent relations, not just with others, but also with their past and future selves. (43) In other words, even when parties do not need to coordinate their actions with others, they still may have to plan the behavior of their temporally extended selves. (44) For example, managers must determine how much general and firm-specific human capital to acquire, which in turn will be influenced by their age, the outside employment options they expect to have, and more generally by how long they expect to be employed by the firm. (45) In the same manner, a shareholder must determine how much effort to expend monitoring managers and when to do so, which in turn will depend on the size of its holdings, how long it expects to own the shares, tax considerations, and the level of diversification of its portfolio. (46)

E. Bounded Rationality

In making a decision, a fully rational actor will use all of the information in its possession that is relevant to that decision--this is true both for computers and for individuals. (47) A computer may fall short of full rationality if, given the speed of its central processing unit, it does not have enough storage, memory, and time to carry out the requisite computations. (48) People too have limited time, memory, and computational power, (49) and thus face the same type of "bounded rationality" constraints as computers. (50) The concept of bounded rationality was introduced by Herbert Simon in the 1950s and subsequently imported into economics (51) and law. (52) From a design perspective, complexity matters only to the extent that a decision-maker faces a bounded rationality constraint that is in fact binding. (53) As we will now see, one way to deal with the design of very complex systems is through a divide-and-conquer technique in which different parts of the system are designed and tested independently and eventually glued together to create the complete system. However, in order to use such a recursive approach, the designer has to understand how the various components interact with each other. One way to reduce the complexity of making sense of these potential interactions is to create clear boundaries between components--for example, by specifying when they are allowed to interact and when they are prohibited from doing so. This is in fact the very approach used to design concurrent systems.
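The divide-and-conquer idea can be sketched as follows; the components and interfaces here are invented for illustration. Each part is built and testable on its own, and the composition confines their interaction to one explicit boundary:

```python
def make_validator():
    # Component 1: checks orders; designed and testable in isolation.
    return lambda order: order.get("qty", 0) > 0

def make_pricer(price_table):
    # Component 2: prices orders; knows nothing about validation.
    return lambda order: price_table[order["item"]] * order["qty"]

def compose(validator, pricer):
    """Glue independently designed components along a single, explicit boundary."""
    def process(order):
        if not validator(order):
            raise ValueError("invalid order")
        return pricer(order)
    return process
```

Because each component interacts with the others only through the `compose` boundary, a bounded designer can reason about each one separately and still predict the behavior of the whole.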

II. A Theory of Concurrent Corporate Governance

Part I described three important sources of corporate complexity: the number of actors involved, their social dependence, and the intertemporal nature of their interactions. Complexity is not an absolute constraint, but one that can be managed by making adjustments along two dimensions: (1) reducing the number of system components; and (2) increasing the efficiency and reliability of the interactions between them by modifying the interface through which they communicate and engage in joint behavior. Of these two, the second one is the more important for the law, given that policymakers often want to add more rules to an already complex system of laws. For example, if one accepts that it would be valuable to simplify a set of rules, such as the tax code or a regulatory framework, it does not follow that one has to reduce the number of rules; instead one can make rules more self-contained, such as by limiting the way that they interact with or depend on other rules in the system. Engineers have given great attention to developing techniques to better describe, understand, and manage complexity so that they can continue to add additional components to already complex systems. (54) Since the bounded computational abilities of humans and machines are analogous, it makes sense to look at the way that engineers have handled the issue. (55) With this in mind, this Part will address the following two questions. If an engineer were hired to specify, design, and implement a corporate governance regime, how would she approach the task? What design methodologies would she use to address the problems raised by the inherent complexity of corporations, discretion of corporate agents, and physical and strategic obstacles to observing their behavior?

A. Managing Complexity

The first step in designing any governance mechanism is to specify: (1) who will be using it--that is, the identity of the governing party and of the governed; (2) the types of actions available to each party, including when those actions are feasible and whether they can be undertaken independently of each other or require the participation of both; (3) the information that each party has available at the time of acting; and (4) the consequences of those actions, including the payoffs received by each party. (56) Given these complexity problems, however, the second task that the governance designer faces is that of identifying the sources of complexity and the tools available for making the actions of the parties more transparent.

1. The Art of Abstraction

One of the principal ways in which engineers deal with complexity is through the use of abstractions. For example, if a system has 1,000 parts, each with their own independent identity, we may just choose to ignore the independence of some of them by embedding them in groups of like types. (57) We then attach a name or identity to each group and forget about their different, but conceptually equivalent members. (58) A user interacting with such a compound object will do so through a limited interface which, if all goes well, provides it with all of the information it needs to use the object. Information about the internal workings of the object is hidden from the user. There is often more than one way of abstracting from the underlying complexity of a system, and the one chosen will usually depend on the identity of the individuals using the system and the nature of their relationship with it. For example, a geneticist, neurologist, and painter will each have a different take on the human body. (59) In other words, the interface between each of them and the bodies with whom they interact will differ: they will observe and communicate with them at different levels of abstraction. (60)
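As a schematic illustration (the class and its methods are hypothetical, not drawn from any particular engineering text), consider how a compound object can expose a narrow interface while hiding its internal parts:

```python
class Portfolio:
    """Users see only deposit() and balance(); the per-holding detail--the
    system's many 'parts'--is internal state they never touch directly."""

    def __init__(self):
        self._holdings = {}  # hidden internal representation

    def deposit(self, asset, amount):
        self._holdings[asset] = self._holdings.get(asset, 0) + amount

    def balance(self):
        # The interface reports an aggregate, abstracting away from the parts.
        return sum(self._holdings.values())
```

A user can rely on `balance()` without ever knowing how holdings are stored, much as the painter need not know the genetics of the body being painted.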

2. Choosing Corporate Abstractions

Thus, one way of reducing corporate complexity is to hide from corporate participants information that is superfluous to the type of governance decisions that they will have to make. But how should one do this? One way is to make certain behavioral assumptions about what motivates the various participants and use these to categorize them; a second way is to adopt the ontology provided by corporate law, dividing participants into shareholders, managers, board members, creditors, and so on. However, the resulting abstractions are not identical. For example, shareholders may have different goals, risk-tolerance, and time horizons, as may board members and managers. There are other ways of categorizing corporate participants, but the key point is that the scheme chosen has positive and normative consequences; at the very least, it will carve out and privilege certain features of the real world for observation and manipulation by laws and contracts. Given that we are concerned with the way that social dependence affects the decision-making process of corporate participants, we will focus on abstractions that are well-suited for dealing with the complexity involved when parties are involved in repeated interactions but are not in constant contact with each other--that is, contexts in which participants come together intermittently to carry out particular corporate actions and must predict the extent to which they will be able to coordinate their behavior when called to do so.

B. Designing Reliable Concurrent Systems

Concurrent systems are composed of socially dependent components or actors. In order to design reliable systems, the designer has to understand how the various components coordinate or synchronize their joint behavior over time. The inherent indeterminacy of concurrent systems makes it difficult to use standard abstraction techniques, which are best suited for contexts in which the nature of the interactions among components remains stable over time. (61) In order to deal with this indeterminacy, an engineer will try to model or simulate the potential courses of action available to components and the extent to which a component will be in a position to join in a joint action with other components when it is called to do so. (62)

1. The Concurrency Problem

The general concurrency problem can be stated as follows. Suppose that there are two socially dependent actors or components, each with a set of actions available to it. Moreover, each of these actions may be appropriate or available at particular points in an actor's action-sequence (63) but foreclosed or impossible at others. (64) More specifically, the actions available to an actor at any one point in time will be a function of the state it is then in; moreover, each time that an actor acts, it moves to a new state, which may have the same or some other actions available (or none). (65) Actors are expected to interact, or participate in joint activities, at various points in time; however, given that the order in which actions occur matters, an actor will be able to participate in a joint activity only if it is at a juncture in its action-sequence at which the action called for in that joint activity is available to it--i.e., if it is in the appropriate state. When actors interact they are able to observe the result of their joint behavior; or, equivalently, they can ascertain the state to which each transitions. However, actors in concurrent systems are not in constant interaction; in fact, it is expected that, in between any two interactions, they will engage in independent or autonomous behavior, which may or may not be observable to others. If their independent behavior is always observable, then there is no concurrency issue; but to the extent that it is not, actors may be unable to participate in required or planned interactions.

To summarize: actors in a concurrent system will each engage in a sequence of actions over time, some of which will be independent and others of which will require the participation of other actors. In order for these joint actions to occur (or to avoid mistakes or inefficient interactions), each actor has to be in a state in which that behavior is available to it. This means that the extent to which an actor can depend on another will be a function of its ability to predict whether that other actor will be able to engage in the required behavior at the appropriate time. That second actor will be completely "transparent" if all of its independent actions are observable, but will become increasingly opaque or indeterminate the more of those actions lie beyond observation.
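The picture just summarized is, in essence, a labelled transition system. The sketch below (the states, actions, and two actors are invented for illustration) shows an actor whose available actions depend on its current state, and a joint action that can occur only when it is available to both participants:

```python
class LTSActor:
    """An actor defined by its states and the actions available in each:
    `transitions` maps state -> {action: next_state}."""

    def __init__(self, transitions, start):
        self.transitions = transitions
        self.state = start

    def available(self):
        # The actions open to the actor are a function of its current state.
        return set(self.transitions.get(self.state, {}))

    def act(self, action):
        if action not in self.available():
            raise ValueError(f"{action!r} unavailable in state {self.state!r}")
        self.state = self.transitions[self.state][action]

def can_synchronize(a, b, joint_action):
    """A joint action can occur only when both actors are in states
    from which that action is available."""
    return joint_action in a.available() and joint_action in b.available()
```

A "sign" event between a seller still drafting and a buyer ready to sign must wait until the seller's autonomous "revise" step, which the buyer may not be able to observe, moves the seller into the right state.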

2. Observation and Synchronization Events

Thus the designer of a concurrent system needs to find some way of dealing with the non-observable behavior of actors. The key breakthrough in the design of these systems came with the realization that in many cases one can simply ignore non-observable behavior, with the critical proviso that it be ignored in a principled, well-thought-out manner. The standard concurrency model has the following properties. First, an actor is completely defined by the set of observable actions that it can engage in, (66) the order in which these can occur, and the types of interactions with others that it is expected to undertake. (67) Moreover, an actor's behavior can be observable to one or more other actors, but may be unobservable to third parties. Second, two actors can observe each other if and only if they engage in a joint activity or interaction--which is usually referred to as a synchronization event. (68) This does not mean that two individuals in a joint event will observe the same thing, experience it the same way, or attach the same meaning to it. (69) However, it does mean that any time that one actor observes another they are by necessity involved in a joint activity, even if the second actor is oblivious to the fact that it is being observed. (70) By requiring that, whenever two actors need to share, disclose, signal, or in any way communicate or transfer information, they do so through a pre-defined interaction or synchronization, the concurrency model brings the observability/non-observability problem to the foreground, forcing system designers to make all design decisions with this general problem in mind. (71) Given that the only way to observe the occurrence of an event is to participate in it, whenever two actors interact, a third party who was not part of that interaction will not be able to observe what transpired between them: as far as third parties are concerned, that interaction can be treated as a unit or black box. (72)

In order to get a better sense of the relationship between internal and external observable behavior and the constraints that may dynamically arise as agents engage in action-sequences, imagine that two chess players are playing a championship match. The game is a concurrent activity that will consist of each player undertaking a sequence of observable moves, terminating when there is a checkmate or the players agree to a draw. Each move, beginning with the first one, will change the state of the game, as well as that of each of the players. The progress of a game can be characterized as a series of transitions from one state to the next, where each move is constrained by all of the moves up to that point and by the rules of the game--e.g., the way that each piece may be moved. A player's observable state--her current position on the board--will not (if properly executed) betray her internal deliberations about the sequences of moves that she expects to make. While some of these internal transitions may be inferable, given that players often adopt well-known move-sequences from previous matches, (73) the great players are the ones who can make others believe that they are in fact engaged in such a sequence, only to eventually reveal that they were actually playing a new variation of a known sequence. (74)
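The black-box point can be illustrated with a toy event log (the log format and participant names are invented): each synchronization event is fully visible to its participants and opaque to everyone else:

```python
def observe(log, observer):
    """Return the event sequence as seen by `observer`: full detail for
    events it participated in, an opaque marker for everything else."""
    return [
        event["detail"] if observer in event["participants"] else "<black box>"
        for event in log
    ]

# A hypothetical sequence of synchronization events among three actors.
match_log = [
    {"participants": {"A", "B"}, "detail": "A plays e4; B replies c5"},
    {"participants": {"B", "C"}, "detail": "B consults C between games"},
]
```

Actor A sees the first event in full but knows only that *something* passed between B and C; from A's vantage point that second interaction is a single unit.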

3. Traces and Specifications

Since the only actions that are relevant for the model are observable ones, an actor can be described by setting forth the set of observable action-sequences that are available to it. The trace of an action-sequence is a record of an agent's observable actions, (75) in the order in which they occur; a trace set is the collection of all action-sequences that are possible. (76) In some cases, it is important to assure that an actor will abstain from certain types of behavior. One can create such a "safety" specification by setting forth a set of action-sequences that are prohibited and then checking to make sure that the observable actions of an actor, as captured by its trace set, do not include one of those sequences. Such specifications do not require an actor to do anything, just to avoid certain bad outcomes. For example, an engineer designing a railway crossing would adopt a safety specification such as "if a train is approaching the crossing, do not allow cars to pass through," which can be met by having the gate lowered when a train is within a certain distance of the crossing. On the other hand, "liveness" specifications require that an actor engage in one or more observable actions; that is, refusing to take such an action at the required time would be deemed an error. (77) The engineer may, for example, want to require that the gate go up after the train passes through so that cars can continue to use the crossing. The set of actions in an environment that an actor is able to refuse is that actor's "refusal set." One way of defining a particular actor is by specifying the set of actions that it is able to take (the trace set) as well as the set of actions that it can refuse to take in all relevant contexts. Two actors can thus be considered equivalent if they have the same trace and refusal sets. (78)
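Safety specifications of this kind are directly checkable against trace sets. The fragment below (the action names and forbidden sequences are invented, in the spirit of the railway-crossing example) deems an actor safe if no possible trace contains a prohibited sub-sequence:

```python
def contains(trace, forbidden):
    """True if `forbidden` occurs as a contiguous sub-sequence of `trace`."""
    n = len(forbidden)
    return any(tuple(trace[i:i + n]) == tuple(forbidden)
               for i in range(len(trace) - n + 1))

def safe(trace_set, forbidden_sequences):
    """A safety specification: no possible trace may contain a forbidden sequence."""
    return not any(contains(t, f) for t in trace_set for f in forbidden_sequences)

# Railway-crossing flavor: a car crossing right after a train draws near,
# before the gate comes down, violates the specification.
forbidden = [("train_near", "car_crosses")]
```

Note that `safe` never requires any action to occur; checking a liveness requirement (e.g., that "gate_up" eventually follows "train_passes") would need a separate test over the same traces.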

4. The Designer's Task

This of course does not mean that we have eliminated the potential indeterminacy that can arise whenever actors engage in non-observable behavior. However, it does bring the transparency problem into clear focus: a designer who wants to make sure that two actors can coordinate their behavior at some point in time will need to assure that: (1) the internal state transitions of each do not affect their ability to engage in that joint event; or (2) the design includes one or more synchronization points along the way to assure that their knowledge of each other's state--i.e., future ability to engage in the joint event--does not diverge too much.

The designer's task is then to identify those contexts in which actors are socially dependent and the types of problems that can arise due to their potential inability to observe each other's behavior. Once that is determined, the designer has to ascertain the extent to which observation-events are needed, the nature of those observations, and their order or timing. The designer cannot simply assert that actors will come to know or should know a particular fact; it has to point to a specific observation event in which that communication can occur. If the designer is unsure that the actors have sufficiently synchronized their beliefs about a relevant fact, then it has to insert additional observation points, or explain why they are not needed.

Finally, let me summarize some of the benefits of the concurrency model. First, it provides a set of tools that greatly simplify the task of describing and reasoning about complex interactions within intertemporal contexts; for example, many concurrent systems are required to stay in constant operation for very long periods of time and to provide real-time responses when they interact with people.

Second, the model makes it easier to specify, design, and implement systems, and to formally determine the places in which unreliable interactions may occur. Third, the key decision to model joint behavior as a single action or event helps reduce the complexity of concurrent systems by indicating the precise points at which the independent actions of two or more autonomous agents are required to intersect. Fourth, by treating all observable behavior in the same manner, it is much easier to compare the expected behavior of systems, given that what is important is not the number of components that are interacting within the system but the observable result that they produce as a group.

C. Concurrent Corporate Governance

How then can we test the reliability of governance structures and compare between different types--e.g., markets, contracts, mandatory or default legal rules? Can we identify design rules that will facilitate the process by which corporate governance goals--whatever they may be--can be translated into robust and reliable governance mechanisms? The approach introduced in this Paper provides a general framework that can be used by corporate actors, lawyers, and lawmakers to compare governance devices and identify potential problems that can arise when they are placed in real-world contexts, particularly given the social dependence and intertemporal complexity constraints described in Part I. However, the goal of this Paper is relatively modest: to provide an overview of how the concurrency model would operate and to show how it can help with the first steps on the road toward governance transparency.

1. Governance Divergences

Shareholders delegate managerial tasks to board members and managers, knowing that they will generally be unable to observe their behavior. Agency problems are due precisely to this general inability of principals to observe the behavior of their agents. The concurrency approach allows one to model corporate governance events as a stream of snapshots across time, much in the same way that a balance sheet captures the financial state of a firm as of a certain date. (79) The reliability of a governance approach will depend, in part, on how often these observation events occur, the nature of the information that is transferred, and who is involved in those observation events. Some types of actions are what we can call experience actions, in that their consequences are not fully apparent until after they occur. (80) We can generalize this notion of experience actions to take into account situations in which the aggregate effect of an actor's actions may not be completely obvious or cognizable until it has taken a sufficiently large number of them. (81) For example, managers make a series of management decisions over time, many of them of relatively small import by themselves. However, when taken together these decisions can have a material impact on the welfare of shareholders.

One principal obstacle to designing effective governance devices is determining the types of contexts in which managers can undertake a sequence of non-observable actions harmful to shareholders--i.e., one that leads to an "unacceptable" divergence from the course of action most likely to maximize returns to shareholders. In order to identify and prevent these sorts of governance divergences, the designer of governance structures needs to be able to determine the number and types of observation or synchronization events that should be used. (82)

2. Concurrent Corporate Governance

The general rationale of the concurrent governance approach is straightforward: instead of trying to find transparency where there is none, policymakers and shareholders should take the opposite tack; they should assume that the actions of managers are completely non-observable, except where managers and shareholders have participated in a well-specified observation event. Under the concurrency approach, a governance mechanism includes one or more governance operations with the following characteristics: (1) an operation involves two or more actors, who engage in a joint activity--an observation or synchronization event; (83) and (2) the only way to observe an occurrence of an event is to actually participate in it. While there may be limitations to the accuracy of an observation, a non-participant is by definition a non-observer, and thus is not involved in the governance exercise. At first glance, this approach may appear artificial or difficult to implement. But drawing a sharp distinction between synchronization/observation events based on participation and all other types of events acts as a commitment device to prevent designers of corporate governance regimes from adopting overly optimistic assumptions about non-observable behavior. Modeling systems solely on the basis of observable events forces designers either to say nothing about non-observable behavior--if the observations are not there--or to revise their original design to allow for additional observation events.
To the extent that complexity is making it difficult for policymakers to correctly determine how the behavior of socially dependent actors is affecting the overall behavior of corporations, the solution is not to try to impose axiomatic behavioral assumptions--e.g., that these actors are fully rational and do not face bounded rationality or informational asymmetry constraints--but to create additional observation points that take into account both the intertemporal nature of corporations and the intricate social dependence of participants.

3. Basic Governance Operations

One way to approach this general problem is to define a limited set of governance operations and ways of combining them, such that all we need to know in order to reason about composite governance devices is how the primitive devices operate individually and the extent to which their operation changes when they are combined. The first type of governance operation is what I have referred to as an observation event, which, as mentioned above, is by definition a joint activity of two or more actors. In the corporate context, observation events take a variety of forms, including securities disclosures and monitoring interactions between shareholders and managers, and between creditors and shareholders. (84) The second type of governance operation is a requirement that a corporate decision be made jointly--i.e., with the participation of two or more corporate actors. For example, if a manager wants to carry out a merger, she will have to engage in a joint event with the board of directors, in which the board will decide whether or not to proceed. If the board agrees, then control to make the decision will be handed to the shareholders. This type of externalized joint decision needs to be contrasted with internalized independent decisions. Many day-to-day corporate decisions are internal decisions of managers, in that those choices will not be directly observable by the board or shareholders, or require their participation. The board and shareholders may, however, engage in subsequent synchronization events with the managers in which they may be able to infer the content of one or more of those internal choices. The third general governance operation is that of parallel composition. (85) This simply allows us to combine two actors into a context in which they will interact. Under the concurrency model, all corporate governance is parallel in nature.
This is because, by definition, the only way to engage in a governance operation is to participate in a synchronization or joint event with one or more actors. An observation event involves the interaction of at least two actors and thus is a type of parallel composition. For example, assume that the audit committee of a board of directors deliberates and makes a decision regarding the annual audit. The members of the committee participate in a joint event, where they engage in a set of reciprocal observations of each other, although some information will be internal to each committee member--i.e., not observable by the others. When the rest of the board reviews the minutes of the audit committee, the board becomes involved in a governance operation; however, it is an operation in which the board can only observe the joint output of the committee, as reported in the minutes and encapsulated in their decision to approve or reject the audit. Suppose that the managers also review the committee's minutes; by doing so, they become part of a governance operation. Finally, the audit committee, board, and managers are involved in a symmetrical parallel governance operation involving each other.
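The primitive operations described above can be sketched in code. The class and function names here are hypothetical, and the sketch deliberately enforces the model's central rule: an event is observable only by its participants, so a non-participant is, by construction, a non-observer.

```python
class Actor:
    def __init__(self, name):
        self.name = name
        self.observed = []        # only events this actor actually participated in

def observation_event(label, participants):
    """An event is observable only by those who take part in it."""
    for actor in participants:
        actor.observed.append(label)
    return label

def joint_decision(label, participants, votes):
    """A decision requiring the participation (and unanimous assent) of all parties."""
    observation_event(label, participants)   # participating is itself an observation
    return all(votes)

# Parallel composition: placing two actors in a shared context of joint events.
manager, board = Actor("manager"), Actor("board")
approved = joint_decision("merger_vote", [manager, board], [True, True])
```

An actor constructed but never passed into an event ends up with an empty `observed` list, which is exactly the model's claim about non-participants.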

4. Parallel Governance

When a person tells another "if you could only see yourself ..." they are basically saying that if the other were able to engage in a detached, objective observation of her behavior, she would, at the very least, begin to see the error of her ways. But encapsulated in that phrase is a second statement: that although her self-governance or self-observation is faulty, she is nonetheless being observed and, in this case, judged. In other words, the very fact of being involved in a joint event restricts the types of behavior that are available to each actor, whether because of the observation event or the requirement that the decision be made jointly. More specifically, a necessary condition for a governance event is the participation of two or more actors, and it is through that interaction--joint observations, joint decisions, and so on--that governance will ultimately occur. The claim is not that a joint event will always be a sufficient condition for effective governance to occur. But the existence of observation and the need to coordinate behavior limit the autonomy of each actor involved. A gatekeeper who is in collusion with a manager who is engaged in illegal behavior is both observing and being observed by the manager, and while this fact may not always be sufficient to constrain their joint misconduct, it can nonetheless be used in designing governance devices. The reason that the gatekeeper and manager can engage in the illegal activity is that the rest of the world cannot observe their internal behavior, their collusion. This suggests that adding an additional external actor to "participate" in the existing activity can further reduce their freedom to engage in misconduct. If the manager has to report to the board of directors, the ability of the board to observe the manager's behavior can constrain both the manager and the gatekeeper, who now must account for the fact that the manager is involved in a joint event with the board that the gatekeeper cannot fully observe. (86)

5. Recursive Governance

If we take the general argument one step further, one can argue that all governance is necessarily self-governance; or, alternatively, that all governance is recursive in nature. This follows from the manner in which I have defined the concept of governance: (1) it requires a joint observation event; (2) this is the same as saying that in order for a person to be involved in governance, she has to be in a parallel composition with one or more actors; and (3) due to the first requirement, a third party who is added to the governance scheme becomes part of that composite group--although it may not be at the same level as other members and thus may be unable to observe some of their behavior. Nonetheless, putting an additional actor in a parallel composition with an existing group will restrict that group's autonomy, since it is exposed to some degree of observation by that new actor.

More specifically, whenever a corporation engages in misconduct it has to be because one or more individuals failed at self-governance, either because the expected benefits of that misconduct exceeded the expected sanctions or because of self-control problems. (87) Suppose that a manager has not exercised proper self-governance and a shareholder has been harmed because of its inability to observe the manager's behavior. To protect itself, the shareholder may impose additional observation events, either involving its own interactions with managers or bringing in a gatekeeper. If it resorts to using a gatekeeper, the shareholder will face a similar agency problem, which may lead it to hire a second gatekeeper to monitor the first one. While this exercise is hierarchical in nature--we are building a governance scheme from the bottom up--it is not a hierarchy based on authority or fiat. (88) Instead, it is one that is held together by mutual synchronizations involving individual members; moreover, it is one in which placing actors in a horizontal or parallel composition can serve as a valuable constraint. In other words, each time that a new actor is added to the composite governance group, the discretion of existing members to engage in non-observable, harmful behavior will be further reduced. This means that the oft-cited agents-watching-agents problem can be addressed by using parallel governance regimes. More generally, one can envision a parallel governance regime in which there are multiple gatekeepers, each of which knows of the existence of the other gatekeepers, but not their identity. Such a regime would make it difficult for a gatekeeper to collude with managers, at least to the extent that it does not know whether all of the other gatekeepers have also agreed to collude with the managers, given that even one honest gatekeeper may discover and expose that gatekeeper's misconduct.
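The multiple-gatekeeper intuition can be stated numerically. If we assume, purely for illustration, that each gatekeeper is independently honest with some fixed probability and that a single honest gatekeeper suffices to expose collusion, then adding gatekeepers drives the chance of undetected misconduct down geometrically:

```python
def prob_exposed(n_gatekeepers, p_honest):
    """Probability that at least one of n independent gatekeepers is honest
    and exposes the collusion: 1 - (probability all are dishonest)."""
    return 1 - (1 - p_honest) ** n_gatekeepers
```

Even with a modest honesty probability per gatekeeper, a handful of mutually anonymous gatekeepers makes successful collusion unlikely, which is the deterrent the text describes.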

Someone might object that this approach is far too general to provide useful guidance in designing governance structures, or that it avoids dealing with difficult issues, such as the general difficulty of observing the actions of managers and board members. It is thus helpful to explain what can and cannot be accomplished. First, the ability to observe the behavior of the governed party is a necessary condition for the operation of any governance mechanism; therefore, drawing a sharp distinction between observable and non-observable behavior will help a designer identify contexts in which governance devices would not work unless one were to include an observation event. Second, corporate participants must coordinate their behavior over time; they thus need a way to predict whether the other parties with whom they need to coordinate will be in a position to do so at the appropriate time.

III. Legal and Corporate Governance Implications of the Concurrency Model

A. The Modular Corporation

How can one use this modular design via abstraction to reduce corporate complexity? Corporate law has some built-in abstraction mechanisms. It requires that at least one person hold voting and residual rights, (89) and that the corporation be managed by or under the control of a board of directors. (90) Additionally, boards can delegate some management tasks to committees (91) and officers. (92) These modularity-enhancing rules encourage a division of labor: officers engage in day-to-day management, the board in general managerial oversight, and shareholders in electing the board and approving extraordinary transactions. While there is general agreement that this governance scheme helps reduce corporate complexity, commentators have used this fact to make additional, albeit conflicting, normative claims as to who should be at the top of the corporate hierarchy--i.e., whose interest should trump all others--shareholders, the board, or other constituencies. (93)

The goal of modular design is to decrease complexity without at the same time introducing other costs into the system. Treating shareholders as a unit only works if they are in fact a unit, in that their preferences and other important characteristics are sufficiently similar, a fact recognized by courts dealing with conflicts between majority and minority shareholders. (94) The same is true with boards of directors: we cannot predict how a board will behave unless we have some sense of the identity of those within it--i.e., whether they are independent directors or officers. Moreover, even if they are all independent directors, they may not all be equally able to engage in oversight of certain matters, such as the audit process. (95) A second characteristic of good modular design is that the different modules be de-coupled, such that they interact only through a pre-determined interface. In other words, the designer needs to specify the ways in which different modules can influence each other's behavior. (96) In corporations, the same person can be a shareholder, board member, and officer, something that is common in close corporations and encouraged in public ones through the use of compensation schemes that give managers stock options.

There are of course important reasons for allowing such violations of modularity, but not surprisingly a large number of problems in corporate law revolve around how to make sense of and judge the behavior of actors who operate in multiple domains. (97) The problem is not that we allow someone to wear different hats, but that we create different roles, separate modules, and then allow someone to straddle them. It is the combination of these two that leads to problems with officers who are also board members, majority shareholders who, acting as directors, fire an employee who also happens to be a minority shareholder, and corporate opportunity cases in which an officer or board member has to decide whether an opportunity is in the company's line of business. The same types of problems arise with gatekeepers such as auditors and lawyers who may be performing more than one task, something recognized in a number of provisions of the Sarbanes-Oxley Act. (98) Of course, this does not mean that it is necessary or prudent to adopt a blanket prohibition of such cross-domain behavior; however, it does suggest that a hidden cost of modular design is the false impression that it may give to an outside observer as to the thought and rigor that went into creating internal cohesion within each module and clear decoupling between them.

B. The Sarbanes--Oxley Act: Parallel Governance Par Excellence

Some provisions of the Sarbanes-Oxley Act that have been criticized for imposing significant financial burdens on corporations--in managerial time and effort and increased legal and accounting fees (99)--can be explained in part as parallel governance schemes. The certification requirements and management's assessments of internal controls (100) were adopted to make it more difficult for senior managers to claim that they were unaware of problems with the company's financial statements, disclosures, and internal control mechanisms. The much criticized section 404 of the Sarbanes-Oxley Act adopts a series of cross-monitoring procedures that act as parallel governance devices. For example, these procedures require that managers make representations regarding the company's internal accounting controls and then require auditors "to attest to, and report on, the assessment made by the management." (101) This, in turn, leads accounting firms to hire lawyers to help prepare these attestations. It may well be that critics are correct that section 404 imposes potentially high compliance costs; however, the rule also increases the number of observation events involving these different actors, and thus imposes the sort of parallel governance that we have been discussing. It is a gatekeeper scheme in which gatekeepers not only engage in their usual gatekeeping activities, but also police each other.

C. Safety and Liveness Specifications in Corporate Law

A shareholder concerned with reducing the agency costs imposed by the board's un-observed behavior can make a list of all possible observable action-sequences available to the board, as captured by their traces. It can then decide which of those action-sequences it wants to allow the board to undertake and which to exclude. The shareholder can create a specification of the board's legitimate behavior by setting forth a set of traces representing the allowable action-sequences. Such a safety specification only identifies the type of observable behavior that is prohibited. An example would be the prohibitions against self-dealing imposed by the duty of loyalty and the due care and informed decision-making requirements under the duty of care. Affirmative disclosure requirements under the securities laws can also be seen as safety specifications. In the fiduciary duty and disclosure cases, the law sets forth a set of triggering events that will exclude certain types of behavior, including doing nothing, but gives varying amounts of leeway on the set of action-sequences that will satisfy the legal requirements. However, in certain cases we want to make sure that an actor takes one or more actions; this can be achieved with a liveness specification. Shareholders may require the board to consider any takeover offers presented to it or to monitor the behavior of managers on an ongoing basis. The recent Delaware decisions, In re Caremark International and Ritter, can both be characterized as liveness specifications, since they impose an affirmative duty on the board to monitor managers on an ongoing basis and not just when they become aware of some act of misconduct. (102) The distinction between safety and liveness specifications may sometimes be blurred, particularly if one characterizes a required behavior as a way of avoiding some bad result.
Nonetheless, the reason to distinguish between them is that there are certain types of bad behavior that the designer of governance rules may want to clearly prohibit, while, in other cases, the goal is to prevent a corporate actor from completely abdicating its duties. For example, the Francis v. United Jersey Bank (103) case stands for the proposition that a board member can violate her fiduciary duties of care and good faith by failing to acquire the rudimentary knowledge needed to discharge her duties as a board member. (104)
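The fiduciary-duty examples above can be restated as trace specifications. The event labels (`self_dealing`, `monitor`) are illustrative stand-ins for the duty-of-loyalty prohibitions and the Caremark-style monitoring duty; the mapping itself is a sketch, not legal doctrine:

```python
FORBIDDEN = {"self_dealing"}      # safety: duty-of-loyalty prohibitions

def satisfies_safety(trace):
    """A board trace is safe if it contains no forbidden action."""
    return not FORBIDDEN.intersection(trace)

def satisfies_liveness(trace, required="monitor"):
    """A Caremark-style duty: the board must affirmatively monitor at some point."""
    return required in trace

board_trace = ["approve_audit", "monitor", "elect_officers"]
```

A trace of pure inaction trivially satisfies the safety check but fails the liveness check, which is precisely why the text insists on keeping the two specification types distinct.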

D. Hierarchies and Markets

Agency theory posits that markets are the source of synchronizations between shareholders and managers and that the non-observable behavior of managers between any two market-synchronizations is not important; what matters is that the markets for corporate control, managers, and the company's products will provide managers with the correct governance incentives. These markets communicate certain things to managers, (105) such as "approval" or "disapproval" of the way they are running a company. But how does a manager figure out what it should or should not do in reaction to these coarse market signals? If managers see that the market price of their company's stock has declined, they may conclude that the decline is due to some outside shock, and not a signal of disapproval. Even if a manager believes that the market is sending it a signal, it still needs to decipher and react to it--is the market saying to take more or fewer risks, to sell some of its assets or make more acquisitions, to focus on long-term growth or on short-term returns? One reason that markets sometimes fail to live up to expectations is that too much is asked of the price signal. (106) A price is a cost-effective, low-complexity signal because it hides all information regarding how the equilibrium price came about. At the same time, however, the information that gives rise to a market price is often useful in other governance contexts; but once it is used to bring about a market price, much of it gets discarded or forgotten--in fact, the weak form of the efficient capital market hypothesis tells us that once prices have incorporated all relevant information about a company, the historical information that is left behind is of no real value to future traders.

IV. Conclusion

This paper has approached the problem of corporate governance from an engineering perspective. It has argued that policymakers and shareholders should design and test corporate governance regimes using formal techniques such as the ones used by engineers to design, test, and update complex concurrent systems. Engineers have given close attention to the general problem of how to specify and design concurrent systems so as to identify potential catastrophic failures before they can materialize. The recent financial crisis showed that financial systems are susceptible to catastrophic failures and, as a result, that corporate governance is important not just at the level of individual corporations, but also at the level of the system in which shareholders, creditors, and households interact via financial intermediaries and corporations. The concurrency approach to corporate governance is easily generalizable to the web of relationships between corporations and financial intermediaries, and thus helps provide a coherent approach for identifying and addressing governance failures at both the micro and macro levels.

One possible objection to the concurrency approach is that it seems to introduce new ways of talking about and characterizing behavior that can be described with our existing vocabulary. For example, economists have given a large amount of thought to the problem of information asymmetries within agency contexts. I have purposely taken a different approach in order to underline the complexity within corporations and how it is affected by the social dependence and intertemporal interactions of corporate actors. A second goal has been to introduce into the law the work of computer scientists in dealing with very similar issues in other types of concurrent systems. It is sometimes valuable to introduce languages that are specially tailored for capturing specific aspects of the world being described and studied. In fact, establishing new languages--domain-specific languages--allows designers to represent a complex problem in different ways, (107) and thus is yet another technique used by engineers to deal with complexity.

While the spontaneous emergence of legal rules and norms no doubt occurs, spontaneity always operates within institutional structures and is thus constrained by them; and while institutions will sometimes emerge at the urging of invisible hands, those hands are only made invisible by the structures used to organize markets and market-like frameworks. In the end, there is no Archimedean point from which spontaneity can operate, and no real way to do away with designers of legal rules, institutions, and markets. Legal commentators therefore need to pay much closer attention to the questions of specifying, designing, and implementing legal rules and contracts. We should view complexity and indeterminacy as challenges that should spur, not hinder, innovation; not as fixed constraints or an upper bound that limits the scope of our enterprise to describing behavior and the bookkeeping tasks associated with tallying what we have observed. At the very least, we should consider ways of improving the design and testing of legal rules. Adopting formal design and testing methods, such as the ones described in this paper, should make it easier to predict how legal rules will operate once implemented and lead to the creation of better metrics for measuring the reliability, fairness, and efficiency of legal interventions.

(1) Charles W. Ehrhardt Professor, Florida State University College of Law.

(2) See Harold Abelson & Gerald Jay Sussman, Structure and Interpretation of Computer Programs 298, n.35 (2nd ed. 1996).

(3) See, e.g., Michael C. Jensen, Eclipse of the Public Corporation, 89 Harv. Bus. Rev. 61 (1989).

(4) Intertemporal decisions are those that have deferred consequences; they involve the general problem of how to choose between outcomes that are distributed over time. See George F. Loewenstein & Drazen Prelec, Preferences for Sequences of Outcomes, in Choices, Values, and Frames 565, 565 (Daniel Kahneman & Amos Tversky eds., 2000); George F. Loewenstein & Richard H. Thaler, Intertemporal Choice, 3 J. Econ. Persp. 181, 181 (1989) (defining intertemporal choices as "decisions in which the timing of costs and benefits are spread over time"). For a general discussion of the various roles played by time in decision-making, see Dan Ariely & Dan Zakay, A Timely Account of the Role of Duration in Decision Making, 108 Acta Psychologica 187 (2001).

(5) A market can be seen as a web of relationships in which actors can cooperate and compete, produce and divide, without knowing the preferences or identities of other participants. See Friedrich A. Hayek, The Use of Knowledge in Society, in Individualism and Economic Order 77, 86 (1948) (arguing that the price system allows individuals to make the right decisions by merely acting on the price, through which "only the most essential information is passed on and passed on only to those concerned").

(6) See Adolf A. Berle, Jr. & Gardiner C. Means, The Modern Corporation and Private Property (1932).

(7) Although there has been sporadic interest in the role of corporate lawyers as "transaction costs engineers," very little attention has been given to studying the actual mechanics of specification and design, and particularly the role played by design innovation in the implementation of governance structures. For the seminal piece on transaction cost engineering, see Ronald J. Gilson, Value Creation by Business Lawyers: Legal Skills and Asset Pricing, 94 Yale L.J. 239 (1984). The role of corporate lawyers is not limited to coming up with mechanisms to reduce transaction costs; it also extends to dealing with non-strategic coordination problems and reducing internal corporate production costs that have nothing to do with standard notions of transaction costs. See Manuel A. Utset, Producing Information: Initial Public Offerings, Production Costs, and the Producing Lawyer, 74 Or. L. Rev. 275 (1995) (arguing that lawyers help reduce both transaction and production costs, which arise outside the standard transactional contexts studied by transaction costs economists).

(8) These formal methods, which we will refer to generally as process algebras, were developed precisely to deal with the problem of designing systems in which the joint activities of actors matter and can lead to implementation errors and indeterminacy. Importantly, the actors modeled include both artificial computer processes and the human beings who interact with them, particularly in contexts in which there is a low tolerance for erroneous results or system deadlocks. See C.A.R. Hoare, Communicating Sequential Processes (2004) (setting forth formal methods used to specify, implement, and test concurrent systems to deal with indeterminacy and system deadlocks); Robin Milner, Communication and Concurrency (1989) (introducing a calculus of communicating systems which models interactions between agents as a series of synchronization activities); Steve Schneider, Concurrent and Real-Time Systems: The CSP Approach (2000) (discussing the theory of real-time concurrent systems).

(9) See Milner, Communication, supra note_, at 11 (referring to companies as networks of departments, which in turn are networks of people).

(10) See, e.g., Roscoe supra note_, at 9 (describing approach to modeling a transaction between a customer and store as either a single sale event or as composed of independent parts, such as "offer, acceptance, money, change").

(11) See Hoare, supra note_ (the goal of process algebra is to provide formal methods that can be used to identify errors in indeterminate concurrent systems that are not open to the standard testing techniques used in systems that execute sequentially).

(12) See Schneider, supra note_, at v (stating that system complexity is due to components that execute in parallel and interact in non-obvious ways, and that one goal of designers is to understand the extent and results of these interactions in order to keep them under control).

(13) Carlo Ghezzi et al., Software Qualities and Principles, in The Computer Science and Engineering Handbook 2278, 2282 (Allen B. Tucker ed., 1997) (defining robustness of systems).

(14) A "specification" is a description of the desired properties or behavior of the system being built. A "design" is a blueprint or abstract plan of a system meeting the specification. Finally, an "implementation" is the actual instantiation of the system. There will often be more than one way to implement a system; the goal is to achieve an implementation that can be verified to meet the specification, and thus behave correctly. One may then choose among different equivalent implementations using other metrics, such as cost-effectiveness, robustness, or amenability to future modifications. See, e.g., Ghezzi et al., supra note__, at 2281-82 (describing the often iterative process from specification to implementation).

(15) This is often done by showing that the specification can be transformed into an implementation by carrying out a series of logical steps or functional transformations of the specification. See Richard Bird, Introduction to Functional Programming Using Haskell (1998) (describing the use of functional composition to transform a specification into a correct implementation, within the pure functional language Haskell).

(16) In fact, software designers know that it is virtually impossible to produce software that can be guaranteed to be correct; nonetheless, computer scientists have developed a number of formal methods to prove the correctness of at least parts of systems in which real-time errors can be costly. See Ghezzi et al., supra note__, at 2282.

(17) It should not be assumed that increasing the complexity of a system is necessarily bad, since complexity can be used to achieve other goals. A person who wants to stop smoking, but has trouble committing to doing so, may prefer a system for acquiring cigarettes that is complex--i.e., that imposes a high immediate deliberation cost--since the same pull of immediate gratification that can lead her to override a long-term preference to stop smoking can cause her to procrastinate in purchasing cigarettes. Complexity, in short, can act as a commitment device to overcome self-control problems. See Manuel A. Utset, Hyperbolic Criminals and Repeated Time-Inconsistent Misconduct, 44 Houston L. Rev. 609, 662-63 (2007) (discussing use of legal rules as "off-the-rack" or default commitment devices).

(18) For an early exposition of the importance of organizational and administrative complexity, see Herbert A. Simon, Administrative Behavior (3d ed. 1976). For a discussion of the complexity of legal rules and contracts, see Louis Kaplow, A Model of the Optimal Complexity of Legal Rules, 11 J. L. Econ. & Org. 150 (1995).

(19) I will use the term "system" in the non-technical sense of a group of related objects that interact with each other in some regular way. While some of the general discussion below regarding systems and recursive governance can be applied to complex or dynamic systems--those whose complexity is due to the manner in which system outputs feed back upon themselves--I am concerned with more basic issues of static and dynamic complexity within organizations.

(20) Herbert Simon defined a complex system as "one made up of a large number of parts that have many interactions," where its complexity will increase whenever, given "the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole." See Herbert A. Simon, The Sciences of the Artificial 183-84, 207 (3d ed. 1996).

(21) There are well-known obstacles to achieving cooperation within large groups. These collective action problems have been widely studied and help explain why shareholder voting is a relatively weak governance device. See Mancur Olson, The Logic of Collective Action (1965).

(22) See Michael E. Bratman, Intention, Plans, and Practical Reason 10-11 (1987) (discussing the problem of decision-making by individuals given bounded rationality, and the role of planning in reducing the bounded rationality constraint).

(23) The concept of "shared intentions" developed by analytic philosophers can be valuably applied to many areas of law, as it helps clarify a number of difficult questions regarding what it means to be part of a group and the relationship between interpersonal and intrapersonal coordination in groups. See Michael E. Bratman, Shared Intention, in Faces of Intention: Selected Essays on Intention and Agency 110-13 (1999) (developing a theory of shared intentions that emphasizes the role of such intentions in intra- and interpersonal coordination, as well as in intragroup bargaining and conflict resolution); John R. Searle, Collective Intentions and Actions, in Consciousness and Language 90 (2002) (developing an account of collective intentions that draws a sharp distinction between individual and collective intentions).

(24) The classic exposition of this view is by Hayek:

The problem which we pretend to solve is how the spontaneous interaction of a number of people, each possessing only bits of knowledge, brings about a state of affairs in which prices correspond to costs, etc. and which could be brought about by deliberate direction only by somebody who possessed the combined knowledge of all those individuals. See Friedrich Hayek, The Use of Knowledge in Society, 35 Am. Econ. Rev. 519, 521 (1945).

(25) The complexity is due in part to the indeterminacy of communicating by pointing and through hand gestures. In fact, early work on coordination problems by the philosopher David Lewis developed out of the problem of indeterminacy of translation introduced by the logician Quine. See Willard Van Orman Quine, Word & Object 29-30 (1960) (describing indeterminacy of translation); David K. Lewis, Convention: A Philosophical Study 24-36 (1969). See also Kenneth Arrow, The Limits of Organization 55-59 (1974) (discussing the importance of communication channels and shared codes, or formal and informal understandings of the way information will be transmitted within organizations).

(26) While evolutionary forces have provided us with senses that automatically transform complex environments into pared-down streams of data, it does not follow that we should necessarily rely on the "evolutionary forces" of competitive markets to make our experience of complex corporate institutions or legal rules manageable. See Samuel Bowles, Microeconomics: Behavior, Institutions, and Evolution 344-49 (2004).

(27) See Amos Tversky & Daniel Kahneman, Judgment Under Uncertainty: Heuristics and Biases, in Judgment Under Uncertainty: Heuristics and Biases 3 (Daniel Kahneman, Paul Slovic, Amos Tversky eds., 1982) (arguing that heuristics have benefits and costs, and can lead to systematic--i.e., non-random--deviations from rational behavior).

(28) See, e.g., John Rawls, A Theory of Justice (1971) (developing a theory of justice based on choices made behind a veil of ignorance, in which actors know that they are in a social context but are unaware of the set of social dependence relations in which they are involved).

(29) See, e.g., Gerard Debreu, Theory of Value: An Axiomatic Analysis of Economic Equilibrium 90-97 (1959) (describing the theory of general equilibrium in economics).

(30) See R. Jay Wallace, Reason and Responsibility, in Normativity and the Will: Selected Papers on Moral Psychology and Practical Reason, 123, 123-24 (2006) (describing expectations of behavior in moral communities as reactive sentiments and judgments).

(31) See Robert Nozick, Invariances 240-41 (2001) (discussing mutual dependence where the actions of individuals are connected in nontrivial ways and require ethical coordination).

(32) See Michael N. Huhns & Larry M. Stephens, Multiagent Systems and Societies of Agents, in Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence 83, 113 (Gerhard Weiss ed., 2000) (discussing the concept of socially dependent autonomous agents) (stating that person X is socially dependent on person Y whenever the following two conditions hold: (1) person X has a goal, G, that can be brought about (completely or in part) if someone takes action A; and (2) person X cannot do A herself but person Y can).

(33) That is, individuals are in a state of mutual dependence whenever: (1) person X has goal G1, which requires action A1, and person Y has goal G2, which requires action A2; and (2) person X cannot take action A1, but person Y can, and person Y cannot take action A2, but person X can; where actions A1 and A2 are necessarily distinct, but goals G1 and G2 can be the same or different goals. Id. Dependence relationships may be symmetric or asymmetric in nature.

(34) Some statements, like saying "I do" or "I promise" within well-specified contexts, are actions in the sense that they produce a result other than merely transferring information, and are referred to as performatives. See J. L. Austin, How to Do Things with Words 7-11 (J.O. Urmson & Marina Sbisa eds., 2d ed. 1975).

(35) See Oliver Williamson, Markets and Hierarchies 52-56 (1975) (discussing bilateral monopolies and opportunistic renegotiations due to transaction-specific investments).

(36) See John W. Pratt & Richard J. Zeckhauser, Principals and Agents: An Overview, in Principals and Agents: The Structure of Business 1, 2 (John W. Pratt & Richard J. Zeckhauser eds., 1985) (stating that agency problem can arise when one individual depends on the actions or behavior of another).

(37) See Chester I. Barnard, The Functions of the Executive 91-92 (1938) (discussing "observational feeling"--decisions in groups "arrived at, and acted upon without having ever been formulated by anybody"). The environment in which actors transact is "objective" in the sense that it exists and affects their behavior whether or not they fully understand, or are even aware of, all the constraints imposed on their actions. For example, we go through most of our lives without giving much thought to the fact that we are constrained by gravity or to the nature of the oxygen that we are breathing in; a person climbing Mt. Everest, on the other hand, will be keenly aware of the effect of both. See Robert Nozick, supra note_ (providing such a definition of "objective").

(38) I will refer to the various corporate constituencies as "corporate actors" or, where the context allows, merely as "participants"; and to board members and officers/managers collectively as "managers," unless the context requires drawing a distinction between the two.

(39) See Simon, Administrative Behavior, supra note--, at 72-73 (stating that "cooperation will usually be ineffective--will not reach its goal, whatever the intentions of the participants--in the absence of coordination"); Barnard, Functions of the Executive, supra note--, at 6 (arguing that "the survival of an organization depends upon the maintenance of an equilibrium of complex character in continuously changing environment ... which calls for adjustments of processes internal to the organization").

(40) See Thomas C. Schelling, The Strategy of Conflict 54-57 (1960) (describing the role of focal points in overcoming coordination problems); David K. Lewis, Convention: A Philosophical Study (1969) (discussing the coordination problem from a philosophical perspective and developing the concept of common knowledge); David Hume, A Treatise of Human Nature 489-90 (L. A. Selby-Bigge ed., Oxford Univ. Press 1978) (classic treatment of the coordination problem and contracting relationships); Drew Fudenberg & Jean Tirole, Game Theory 18-20 (1991) (providing formal treatment of coordination games).

(41) This general problem of epistemic coordination within organizations has received close attention in the transaction cost, agency, and property rights literatures. See Kenneth Arrow, The Limits of Organization 53-59 (1974) (discussing the role of information channels and communication codes within organizations); Oliver Williamson, Markets and Hierarchies 31-33 (1975) (discussing "information impactedness" and its relation to opportunistic behavior); Michael C. Jensen & William H. Meckling, Specific and General Knowledge and Organizational Structure, in Contract Economics 251 (Lars Werin & Hans Wijkander eds., 1992) (discussing agency costs associated with the control of knowledge).

(42) See Ted O'Donoghue & Matthew Rabin, Choice and Procrastination, 116 Q.J. Econ. 121, 128 (2001) (setting up a general model where people act with reasonable beliefs about future actions and choose current actions to maximize preferences in light of those beliefs).

(43) Economists sometimes model an intertemporal decision-maker as: (1) a current self with current preferences, and (2) a series of separate "agents," one for each point in time between current choices and future consequences. The current agent will make choices to maximize her current preferences, but her future selves will control her future behavior. See Ted O'Donoghue & Matthew Rabin, Doing It Now or Later, Am. Econ. Rev., March 1999. See also Derek Parfit, Personal Identity, 80 Phil. Rev. 3, 26-27 (1971) (arguing that individuals discount future payoffs because of changes in identity over time--a diminution of the connection between our present and future selves); Roland Benabou & Jean Tirole, Self-Knowledge and Self-Regulation: An Economic Approach, in 1 The Psychology of Economic Decisions 137, 138 (Isabelle Brocas & Juan D. Carrillo eds., 2003) (arguing that the actors "who usually populate economic models have little doubt about 'who they are': they know their own abilities and basic preferences").

(44) See Michael E. Bratman, Intention, Plans, and Practical Reason 29-30 (1987) (defining "plans" as "mental states involving an appropriate ... commitment to action" and discussing the contingent, reversible nature of plans).

(45) See Bratman, Intention, Plans, supra note--, at 33-34 (discussing how prior intentions and plans provide a background framework used to weigh various options regarding potential actions).

(46) The rise of mutual funds, hedge funds, and securitization and derivatives markets over the last thirty years can all be seen, at least in part, as ways of helping individual shareholders, managers, employees, and debt-holders to resolve their own intertemporal coordination problems.

(47) See Robert Nozick, The Nature of Rationality 64-75 (1993) (describing decision-making procedure of rational actors).

(48) See John von Neumann, The Computer and the Brain (1958) (describing a computer architecture, generally known as the von Neumann computer, whose only necessary components are immediately available storage--memory--and a processing unit to process instructions; other types of storage and peripherals to facilitate input and output are now common, but ultimately not necessary).

(49) See Herbert A. Simon, 1 Models of Thought 3 (1979) (arguing that "human thinking powers are very modest when compared with the complexities of the environments in which human beings live").

(50) See Herbert A. Simon, The Sciences of the Artificial 36 (2d ed. 1981) (describing the boundedly rational decision-maker as "a satisficer, a person who accepts 'good enough' alternatives, not because he prefers less to more, but because he has no choice"); Ariel Rubinstein, Modeling Bounded Rationality 107-120 (1998) (discussing various approaches to modeling bounded rationality within groups).

(51) See Glenn Ellison, Bounded Rationality in Industrial Organization, in Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress, Vol. II 142, 150-52 (Richard Blundell, Whitney K. Newey & Torsten Persson eds., 2006) (summarizing theoretical and empirical literature on learning in complex environments, where parties interact repeatedly); Oliver Williamson, The Economic Institutions of Capitalism (1985) (setting forth options available to contracting parties generally, given bounded rationality, incomplete contracting, and transaction costs).

(52) See Christine Jolls, Cass R. Sunstein & Richard Thaler, A Behavioral Approach to Law and Economics, 50 Stan. L. Rev. 1471, 1477-78 (1998) (discussing bounded rationality and heuristics issues within the legal context).

(53) This means that one approach is to leave the level of complexity alone and instead reduce temporal constraints and/or supplement human computational power with that of a computer. This has become an increasingly attractive possibility given recent improvements in computer memory storage and processor power, developments in peer-to-peer networks, and the proliferation of highly sophisticated financial management systems. Interestingly, in a lecture entitled Will the Corporation Be Managed By Machines?, delivered in the early 1960s, Simon predicted that we would have the technical capability, by 1985, to manage corporations by machines, although humans would continue to play an important role, given that machines would be able to engage in symbolic computation and problem solving but would be constrained by their relative inability to see and move. Herbert A. Simon, The Shape of Automation: For Men and Management 49 (1965).

(54) Computer scientists, for example, have been able to greatly increase the speed and efficiency of microprocessors and system memory, and have developed algorithms whose execution takes less time and space, as well as computer languages that help produce source code that is easier to understand and change. See The Computer Science and Engineering Handbook, supra note--.

(55) This is not to say that legal academics have not focused on identifying areas in which complexity, bounded rationality, and heuristics affect the behavior of actors; however, progress in the second part of the enterprise, that of developing tools and theories to directly and consciously address the complexity problem, has lagged behind.

(56) This is partly the way that one would go about specifying a game for a game theory model, although in the latter one would also specify other factors, such as the strategies available to each party, the equilibrium concept being used, and the extent to which the players have common knowledge about certain aspects of the game. See Drew Fudenberg & Jean Tirole, Game Theory 4-9 (1991).

(57) See Robert Cecil Martin, Designing Object-Oriented C++ Applications: Using the Booch Method 9 (1995) (stating that abstraction involves the "elimination of the irrelevant and the amplification of the essential").

(58) See Abelson & Sussman, supra note --, at 4 (describing abstraction technique by which "compound elements can be named and manipulated as units").

(59) See Milner, supra note --, at 11 (stating that the level of decomposition used depends on the goal of the model: "we do not treat a person as a network of parts when we are interested in companies, though this treatment is essential for the anatomist").

(60) See Simon, supra note --, at 84 (stating that the fact "that many complex systems have a nearly decomposable, hierarchic structure is a major facilitating factor enabling us to understand, describe, and even 'see' such systems and their parts").

(61) The inherent indeterminacy of concurrent systems means that it is not possible to test them in the usual manner--i.e., by executing them repeatedly with inputs chosen to identify deviations from the specification--since the same set of inputs can trigger different interactions and thus lead to different results. In short, testing a system will provide only an incomplete assessment of reliability. See Hoare, supra note--(discussing problems in testing concurrent systems).
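
The point can be illustrated with a small Python sketch (a hypothetical pair of two-step processes sharing a counter; all names are illustrative): enumerating every interleaving of the same two processes on the same input yields several distinct final states, so no finite battery of test runs can pin down "the" result.

```python
# Two "processes" each perform two atomic steps on a shared counter:
# P reads the counter, then writes its value plus 1; Q reads, then
# writes its value plus 2. An interleaving is any ordering of the four
# steps that preserves each process's own internal order.

def run(schedule):
    """Execute one interleaving; return the final counter value."""
    counter = 0
    local = {"P": 0, "Q": 0}
    for proc, op in schedule:
        if op == "read":
            local[proc] = counter      # process takes a private snapshot
        else:                          # "write"
            counter = local[proc] + (1 if proc == "P" else 2)
    return counter

P = [("P", "read"), ("P", "write")]
Q = [("Q", "read"), ("Q", "write")]

def interleavings(a, b):
    """All merges of sequences a and b that preserve internal order."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    return [[a[0]] + rest for rest in interleavings(a[1:], b)] + \
           [[b[0]] + rest for rest in interleavings(a, b[1:])]

# The same "input" (counter starts at 0) yields several final states.
outcomes = {run(s) for s in interleavings(P, Q)}
print(sorted(outcomes))  # → [1, 2, 3]
```

A sequential program would give one answer; here the six possible interleavings produce three, which is why repeated testing can never certify such a system.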

(62) One important part of this exercise is to formally specify how two executions of system components can be compared to determine whether they are equivalent. Once a reliable solution has been found, the designer can use it or substitute an equivalent one that has other properties, such as being more efficient. Efficiency by itself is not a sufficient condition for choosing between two potential approaches, given that neither may meet the specifications for the system. A similar approach can be used in designing legal institutions, where one would first create a specification, find a solution that meets that specification, and compare it to other solutions that are equivalent along the dimensions in the specification but which may differ along other ones, such as transaction costs or general operational efficiency. By separating the task of finding an actual solution from other normative goals, one can reduce the risk that normative constraints will reduce the space of institutional innovation.

(63) A sequence of n objects or components is an ordered set (or n-tuple) where the components are identified by their position in the set. More formally, if [a.sub.1], [a.sub.2] ... [a.sub.n] are the components, then a sequence is an ordered n-tuple, ([a.sub.1], [a.sub.2], ..., [a.sub.n]), where n is the length of that sequence. For example, the following two sequences are not equivalent: A = (a, b, c, c) and B = (b, a, c, c). Moreover, unlike sets, where we are only concerned about the identity of members, in a sequence we care not only about the identity and order of the components, but also about how many times they occur. In other words, under set notation A and B would be the same set: {a, b, c}. See, e.g., Harry R. Lewis & Christos H. Papadimitriou, Elements of the Theory of Computation 10 (2d ed. 1998) (defining sequences).
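
The distinction can be made concrete in a few lines of Python (an illustrative sketch; A and B track the footnote's example, with tuples standing in for sequences):

```python
# Sequences (modeled here as Python tuples) are ordered and count
# multiplicity; sets record only the identity of their members.
A = ("a", "b", "c", "c")
B = ("b", "a", "c", "c")

assert A != B                                 # order matters for sequences
assert A.count("c") == 2                      # repeated components are counted
assert set(A) == set(B) == {"a", "b", "c"}    # as sets, A and B collapse
```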

(64) See Schneider, supra note --, at 85 (stating that an important property of an agent's behavior is the "occurrence of events in the right order, and that events do not occur at inappropriate times").

(65) Computer scientists model sequential behavior as automatons, each of which is defined by a set of states and transitions: (1) a start state; (2) an acceptance state; and (3) a set of transitions between states, such that s1 [right arrow] s2 means that there is an action that will cause a transition from s1 to s2. An automaton is finite if it has a finite set of states, and deterministic if for each state s and transition there is only one state to which s can move. See, e.g., Harry R. Lewis & Christos H. Papadimitriou, Elements of the Theory of Computation 55-57 (2d ed. 1998) (defining finite automata); Abelson & Sussman, supra note --, at 80 (stating that a state-variable can be used to capture the state of an actor at any one point).
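
A minimal Python sketch of such an automaton (the states and actions below are hypothetical, chosen to echo the article's concern with actions occurring in the right order):

```python
# A finite deterministic automaton: a start state, a set of accepting
# states, and a transition map taking (state, action) to at most one
# successor state.

def run_automaton(start, accepting, transitions, actions):
    """Return True if the action sequence drives the automaton from the
    start state to an accepting state."""
    state = start
    for a in actions:
        if (state, a) not in transitions:
            return False        # no transition available: sequence rejected
        state = transitions[(state, a)]
    return state in accepting

# Hypothetical two-step approval process that must occur in order:
# a proposal, then a board vote.
transitions = {
    ("idle", "propose"): "proposed",
    ("proposed", "vote"): "approved",
}

assert run_automaton("idle", {"approved"}, transitions, ["propose", "vote"])
assert not run_automaton("idle", {"approved"}, transitions, ["vote", "propose"])
```

Determinism here is enforced by the map itself: each (state, action) key has exactly one successor.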

(66) More formally, an actor's complete action set is composed of the observable-action set and one or more internal actions available to it but which cannot be observed by others. It is nonetheless possible for other actors to infer the content of internal events by observing that actor's subsequent external behavior. See Robin Milner, Communicating and Mobile Systems: The Pi-Calculus 38, 52-54 (1999) (describing action set of external and internal actions and proposing mechanism that allows one to disregard some occurrences of internal events).

(67) In other words, actors are modeled as independent "black boxes," whose behavior will include independent activity that cannot be observed by others, as well as joint actions or events involving two or more actors. See Robin Milner, Communicating and Mobile Systems: The Pi-Calculus 13 (1999) (describing actors as "black boxes" which can be distinguished only by observing their external behavior).

(68) In other words, a necessary condition for person A to observe an action of person B is for A to interact with B in some specified manner; and by so interacting, B in turn observes A. See Milner, Pi-Calculus, supra note --, at 28 (stating that an agent observes the action of another agent by interacting with it).

(69) See Roscoe, supra note --, at 8 (stating that communications should be thought of as a "transaction or synchronization between two or more processes rather than as necessarily the transmission of data one way"); P. Y. A. Ryan & Steve A. Schneider, Process Algebra and Non-interference 18-24 (manuscript) (describing concurrent systems involving confidential information in which two actors may engage in the same joint-event, but each observes different things); Milner, supra note --, at 37.

(70) In fact, this approach has been used to model interactions in which messages are passed but the sender can remain anonymous, something that is sometimes important when securing networks. See Steve A. Schneider, Anonymity and Security (manuscript).

(71) See Milner, supra note --, at 12 (stating that under this approach "the behavior of a system is exactly what is observable, and to observe a system is exactly to communicate with it"). Examples of synchronization events of this sort include "passing of a baton in a relay race; the delivery of a registered letter; the closure of a contract; becoming married," and more generally, any action involving two or more actors whose behavior intersect temporally in a meaningful way. See Schneider, supra note --, at 29. See also Milner, supra note --, at 36 (describing handshake synchronizations as atomic or indivisible actions of two or more agents, where data may or may not be transferred and in which there is no necessary directionality in the "communication").
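
A rough Python sketch of handshake synchronization (the contract-closing events are hypothetical, and the trace-projection test is a simplification of CSP-style parallel composition, under which two processes must jointly engage in every event they share):

```python
# Two processes run in parallel and must agree on (synchronize on) every
# event in their shared alphabet. A candidate trace is a behavior of the
# composition only if projecting it onto each process's own alphabet
# reproduces that process's event sequence.

def projection(trace, alphabet):
    """Restrict a trace to the events a given process can observe."""
    return [e for e in trace if e in alphabet]

def valid_joint_trace(trace, proc_a, proc_b):
    """True if `trace` is a behavior of proc_a || proc_b."""
    return (projection(trace, set(proc_a)) == proc_a and
            projection(trace, set(proc_b)) == proc_b)

# Hypothetical closure of a contract: each side has a private preparatory
# event, and "sign" is the shared handshake both must engage in at once.
seller = ["draft", "sign"]
buyer = ["review", "sign"]

assert valid_joint_trace(["draft", "review", "sign"], seller, buyer)
assert valid_joint_trace(["review", "draft", "sign"], seller, buyer)
assert not valid_joint_trace(["draft", "sign", "review", "sign"], seller, buyer)
```

The last assertion fails precisely because "sign" is atomic and joint: neither party can perform it twice, or alone, without the other.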

(72) For example, a computer has a clear interface, including a set of keys, communication ports, a screen, and installed programs. When a person interacts with it, the only thing that she can observe is the behavior that is revealed via that interface. The rest of the computer is a black box that prevents her from observing the myriad operations occurring inside--the only way for her to interact with it is through the parts that have been externalized, or made available to her. This limited set of joint activities between a user and a system greatly reduces the complexity of their interactions.

(73) The market for chess matches turns out to be highly informationally efficient: once a trap or variation of any importance has been introduced, it is difficult to use it again in a professional match and expect to exploit an arbitrage opportunity.

(74) In fact, sequences of previously observed actions are given special names to identify their originators; they often also carry the names of the variations that disrupted previous complacency--e.g., the Ruy Lopez and its Berlin Defense, including the Mortimer Trap.

(75) A trace is analogous to a history in a sequential game. See Fudenberg & Tirole, supra note --, at 70-71 (describing sequential games and the role of histories).

(76) See Schneider, supra note --, at 85-86 (describing traces as record of the events that may occur in connection with an agent's action-sequence, as if some observer were "recording all events in the order in which they occur").

(77) See Schneider, supra note --, at 193 (contrasting safety specifications, which require that "nothing bad will happen" with liveness ones, which require that "something good will happen").
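
The safety/liveness contrast can be sketched in Python (the events and property are hypothetical illustrations): a safety violation always shows up at some finite point in a trace, so it can be checked on finite records, whereas a liveness property ("something good eventually happens") can never be refuted by any finite prefix.

```python
# Safety-property check: event `bad` must never occur before event
# `required_first` has occurred. A violation, if any, is exhibited by
# a finite prefix of the trace.

def satisfies_safety(trace, bad, required_first):
    """'Nothing bad happens': `bad` never precedes `required_first`."""
    seen_required = False
    for event in trace:
        if event == required_first:
            seen_required = True
        elif event == bad and not seen_required:
            return False    # finite prefix already exhibits the violation
    return True

# Hypothetical corporate example: no payment before board approval.
assert satisfies_safety(["approve", "pay"], bad="pay", required_first="approve")
assert not satisfies_safety(["pay", "approve"], bad="pay", required_first="approve")
```

By contrast, a liveness requirement such as "payment eventually occurs" is trivially unresolved on every finite trace, which is why it needs different verification machinery.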

(78) If at point t, actor A is placed in context C in which action x is available, there are three sets of behavior that may be observed by a third party. First, whenever A is faced with that choice in context C, it does x. A designer who wants to guarantee this will have to make sure of three things. First, each time that A is in context C, it is in a position to do x--i.e., it is in a state in which x is a possible action. Second, the environment must also be in a position to offer action x to A. This is particularly important when the action in question is a joint-event with another actor. Third, the designer has to be able to guarantee that actor A will in fact take that action--i.e., that it cannot refuse to act. In any context, an actor who is in a stable state--i.e., one in which it is not able to engage in an infinite number of internal transitions--and who is offered action x by the environment will either have the ability to do x and will do it, or will have the ability to refuse to engage in that action. See Schneider, supra note --, at 171-76.

(79) See Abelson & Sussman, supra note --, at--(describing lazy evaluation approach in functional languages in which functions are evaluated only when they are needed, giving impression of timeless execution).

(80) In economics, an "experience good" is a product whose value cannot be completely ascertained until after it is purchased and consumed.

(81) See Ariel Rubinstein, Modeling Bounded Rationality 63-84 (1998) (discussing problem of limited memory in the context of bounded rationality).

(82) For example, concurrency models have been extended to describe markets and auction mechanisms. See, e.g., Julian A. Padget & Russell J. Bradford, A Pi-Calculus Model of A Spanish Fish Market: Preliminary Report, in Lecture Notes in Computer Science 166 (Pablo Noriega & Carles Sierra eds., 1999).

(83) Recall that we are assuming that a person can be modeled as a sequence of selves over time. See supra note --. Therefore, a person can engage in self-governance, involving her current and future selves.

(84) See Jensen & Meckling, supra note --.

(85) One also needs to allow for an operation to capture the sequential observable behavior of actors. Suppose that an actor has the ability to choose an action from an action set. Then she will either make a choice or not; if she does not, this is equivalent to saying that she is engaged in a sequence of non-observable behavior. After the actor makes an external choice, she will transition to a state in which she may have additional choices, and so on. The types of actions available to an actor and the set of states through which it can transition will be included as part of that actor's overall specification. Some economists have modeled contractual incompleteness due to complexity by positing that contracts are abstract computational machines, where the execution of a contract is equivalent to that of the machine. See Luca Anderlini & Leonardo Felli, Incomplete Contracts and Complexity Costs, 46 Theory & Decision 23 (1999) (modeling incomplete contracts as execution in an abstract machine, where complexity is at the level of the contractual provisions being executed); Ariel Rubinstein, Finite Automata Play the Prisoner's Dilemma Game, 39 J. Econ. Theory 83 (1986) (modeling a repeated game as execution of an abstract computational machine, where complexity is at the level of the machine's structure). The difference between those approaches and the one here is that finite deterministic automata are not particularly useful in modeling concurrent systems, since they fail to capture the type of non-deterministic behavior found in these systems. See Milner, supra note --, at--(discussing limitations of the standard automaton approach when modeling concurrent systems).

(86) This sort of recursive decomposition of conspiracies and collusive activity generally is well known to prosecutors and antitrust enforcers.

(87) See Utset, Hyperbolic Criminals, supra note --, at 642-45 (setting forth a model of time-inconsistent misconduct, in which a preference for immediate gratification leads actors to engage in misconduct that from a long-term perspective has negative expected benefits).

(88) Using hierarchical structures for corporate governance comes at a cost. See Paul R. Milgrom, Employment Contracts, Influence Activities, and Efficient Organization Design, 96 J. Pol. Econ. 42 (1988) (discussing influence costs and other types of rent-seeking within hierarchies); Kenneth J. Arrow, The Limits of Organization 68-79 (1974) (discussing the tradeoff between delegation and oversight within organizations).

(89) See Del. Code Ann. tit. 8, [section] 151(b) (1998) (at least one outstanding share must have residual and voting rights).

(90) See Del. Code Ann. tit. 8, [section] 141(a) (1998).

(91) See Del. Code Ann. tit. 8, [section] 141(c)(2) (1998).

(92) See Del. Code Ann. tit. 8, [section] 142 (1998).

(93) See Eugene F. Fama & Michael C. Jensen, Separation of Ownership and Control, 26 J.L. & Econ. 301 (1983) (stating that this division of concerns is valuable and arguing for shareholder primacy); Stephen M. Bainbridge, Director Primacy: The Means and Ends of Corporate Governance, 97 Nw. U. L. Rev. 547, 550 (2003) (arguing that this scheme shows that the board of directors should have final say--i.e., board primacy); Margaret M. Blair & Lynn A. Stout, A Team Production Theory of Corporate Law, 85 Va. L. Rev. 247 (1999) (arguing that the board acts as a mediating hierarch among the interests of the corporation's various constituencies).

(94) See, e.g., Sinclair Oil Corp. v. Levien, 280 A.2d 717, 720 (Del. 1971) (drawing a distinction between actions in which all shareholders share the same interest (payment of a dividend) and those in which the majority shareholder receives a benefit at the minority's expense, and applying the duty of care to the first and the duty of loyalty to the second).

(95) See Sarbanes-Oxley Act of 2002 [section] 301 (requiring a wholly independent audit committee, thereby reducing the ability of managers to increase the immediate costs to the board of directors of challenging financial statements prepared by managers).

(96) See Abelson & Sussman, supra note --, at 117 (arguing that this allows for the creation of a library of modules that can be exported to other systems or modified without having to change all the other modules with which they interact).

(97) See Tamar Frankel, Fiduciary Law, 71 Calif. L. Rev. 795, 811 (1983) (discussing the role of fiduciary law in dealing with conflicts of interest).

(98) See Sarbanes-Oxley Act of 2002 [section] 303 (making it illegal for managers to increase the immediate costs to auditors of refusing to go along with fraudulent financial statements).

(99) See, e.g., Andrew Countryman, Sarbanes-Oxley Mandates Send Corporate Audit Expenses Soaring, Chi. Trib., June 4, 2005, at 1; Sarbanes-Oxley Compliance Costs Exceed Estimates, Fin. Execs. Int'l (Mar. 21, 2005), at --; Dan Roberts, Sarbanes-Oxley Compliance Costs Average $5M, Fin. Times, Nov. 12, 2004, at 1. For a discussion of some of the problems associated with quantifying the true costs of Sarbanes-Oxley, see Carl Bialik, How Much Is It Really Costing to Comply with Sarbanes-Oxley?, Wall St. J., June 16, 2005, at --.

(100) See Sarbanes-Oxley Act of 2002 [section] 302 (general certification requirement); id. [section] 906 (certification for filings that include financial reports; criminal sanctions); id. [section] 404 (management assessment of internal controls).

(101) Sarbanes-Oxley Act of 2002 [section] 404(b), 15 U.S.C. [section] 7262(b).

(102) See In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996) (stating that, in order to discharge the duty of acting in good faith, a director must attempt to assure that the corporation has an adequate information and reporting system, so that the information needed by directors to oversee corporate activities reaches the board); Stone v. Ritter, 2006 Del. LEXIS 597, at *30-31 (Del. Nov. 6, 2006) (holding that, to violate the duty of loyalty based on a lack of good faith, directors must engage in a systematic failure to exercise oversight).

(103) 432 A.2d 814 (N.J. 1981).

(104) Id. (finding that a director has a continuing obligation to acquire the requisite information needed to discharge his or her duties).

(105) See Herbert Simon, Economics, Bounded Rationality and the Cognitive Revolution 27 (1992) (arguing that markets help conserve information-processing resources, allowing participants "to behave rationally with relatively simple computations and on the basis of relatively little information"). Importantly, while Hayek underlined the importance of market prices in encapsulating economic information for market participants, he argued forcefully that economists at the time were not giving the requisite attention to the role played by time in markets. He stated that "since equilibrium is a relationship between actions, and since actions of one person must necessarily take place successively in time, it is obvious that the passage of time is essential to give the concept of equilibrium any meaning." See Friedrich A. Hayek, Economics and Knowledge, in Individualism, supra note --, at 33, 36-37 (in discussing neoclassical economists, he also argued that "economists appear to have been unable to find a place for time in equilibrium analysis and consequently have suggested that equilibrium must be conceived as timeless").

(106) See Manuel A. Utset, Reciprocal Fairness, Strategic Behavior & Venture Survival: A Theory of Venture Capital-Financed Firms, 2002 Wis. L. Rev. 45, 100-04 (discussing over-optimism and self-serving bias in context of entrepreneur-venture capitalist relationships).

(107) See Abelson, supra note --, at 359 (making this argument using the example of circuits, which engineers represent and model using the languages of networks and of systems, each of which picks out different aspects of the problem).
COPYRIGHT 2010 Elias Clark
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2010 Gale, Cengage Learning. All rights reserved.

Article Details
Author:Utset, Manuel A.
Publication:Journal of Applied Economy
Article Type:Company overview
Date:Sep 1, 2010
