
The bias towards zero in aggregate perceptions: an explanation based on rationally calculating individuals.

I. INTRODUCTION

A central theme in the study of economic behavior is individual rationality, or utility-maximizing behavior. Contrary to the accusations of many outside critics, the economic assumption of rationality neither denies that information is costly nor implies that decisions are error-free. Rational expectations allows for ignorance, but insists on the absence of systematic errors in the aggregate. Even if some individuals make large underestimates, their errors are offset by large overestimates made by others. Alternatively, even if people on average underestimate the effect of some variable during one period, they may overestimate its effect at a later date. These assumptions form the basis for the standard conclusion that the government cannot systematically "outsmart" the public.

The theory and application of rational expectations has generated a vast literature over the last couple of decades. It is now well known that the application of rational expectations theory to actual cases may, for instance, be unsuitable when there are systematic errors in the observed variables, when loss functions are not quadratic, or when variables are constrained to take only positive values. While such limitations may well be problematic, these objections have, in our view, not been very damaging in the sense that rational expectations still provides a natural starting point for much economic analysis.

Like rational expectations, this paper takes individual rationality as its central theme. Yet, we reach a radically different conclusion from rational expectations regarding aggregate behavior. We demonstrate that individual errors in identifying the relationships among variables cause a downward bias in the aggregate that would be equivalent to the public underestimating the strengths of the true relationships. Rational expectations has considered the "misestimation" type of error, which can "cancel out" in the aggregate, but with errors in identifying relationships, there exists no similar cancelling out effect; and thus the public appears to ("irrationally") underestimate the strengths of relationships.

The paper is organized as follows. First we set up a simple formal model to examine the "misestimation problem," and then contrast it to the "identification problem." Section III explains why the identification problem can be so important even when a relatively small number of variables are involved. Section IV reviews empirical evidence of bias in the expectations formation literature. As shown in section V, our hypothesis can be used to explain political business cycles in a standard aggregate supply and demand framework.

II. THEORY

In our model individuals acquire knowledge in two steps: they first identify the causal relationship between two (or more) variables and then estimate the strength of that relationship. As in Herbert A. Simon's "satisficing" approach [1959; 1979] and the analysis in Ronald A. Heiner [1988], we assume finite intellectual capabilities. However, our model neither assumes nor implies anything further about "satisficing" or the often related concept of bounded rationality. Instead, our inquiry is strictly limited to the consequences of omitting variables and will not address the more ambitious question of what strategies individuals might adopt to improve learning.

Like Heiner we distinguish between the reliability of making decisions and the issue of obtaining information. Heiner shows that while decision making itself might be flawless when individuals face only a very limited amount of information, it is optimal for them to choose a larger information set and enter into "the imperfect decision zone." He provides several different reasons ("finite channel capacity," "information complexity," and "nonlocal information") for why the advantage of having much information also translates into more decision error. While Heiner's model deals with individuals making correct or incorrect decisions, his underlying reasoning could equally well be used to motivate our analysis of why a portion of the population fails to appreciate certain economic relationships.(1)

There are also similarities to John Haltiwanger and Michael Waldman's [1985; 1989b] work in that we allow for heterogeneity, so that knowledge can vary across individuals.(2) Like Haltiwanger and Waldman, we deal with the consequences of errors made by a fraction of the population. While we examine the perception of causal relationships and argue that on average there should be a systematic bias, Haltiwanger and Waldman ask whether the actions of those making errors drive the aggregate market outcome.(3) Akerlof and Yellen [1985] provide a similar discussion where they show that erroneous decisions can be relatively costless to the agent while simultaneously yielding large economic impacts. Our paper attempts to complement these last three papers by providing a rationale for why agents are likely to make systematic mistakes; their papers have demonstrated why such mistakes can be important for market equilibrium.

Whether or not information is unbiased and costlessly available plays no role in our argument. Although we recognize that rational expectations is often criticized for assuming agents to be unrealistically well informed, our model shall disregard the information problem to focus exclusively on individuals making mistakes in identifying relationships. Little attention has been paid to this issue, though Benjamin Friedman [1979] mentions that misspecification of the underlying model can cause incorrect coefficient estimates. Our point is also about misspecification, but we more boldly propose that one type of misspecification - the omission of variables - will frequently occur and have major consequences.

Rational Expectations and the Misestimation Problem: A Brief Review

As the rational expectations literature has already discussed the misestimation problem extensively, this section only reviews its basic conclusions in the setup used below for discussing the effects of the "identification problem."

Assume that y is a linear function of x, that all individuals (i) have unbiased information on x and y, and that these individuals use a standard regression technique.(4) The resulting individual estimates of the true parameter b are

(1) $E_i(b) = b + e_i$, where $E(e_i) = 0$.

In the aggregate, across k individuals, we obtain the "representative" individual's expectation E(b):

(2) $E(b) \equiv \frac{1}{k}\sum_{i=1}^{k} E_i(b) = b + \frac{1}{k}\sum_{i=1}^{k} e_i$.

When $k \to \infty$, the last term goes to zero, and $E(b) = b$. Thus, facing individual random estimation errors, the standard rational expectations postulate of $E(b)$ being an unbiased estimator of b can be obtained.
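As a numerical illustration of equations (1) and (2), the following sketch averages k individual estimates of the true coefficient; the true value of 0.8 and the noise level are hypothetical choices for illustration, not values from the text:

```python
import random

def aggregate_estimate(b_true, k, noise_sd=0.5, seed=0):
    """Average k individual estimates E_i(b) = b + e_i with E(e_i) = 0."""
    rng = random.Random(seed)
    estimates = [b_true + rng.gauss(0.0, noise_sd) for _ in range(k)]
    return sum(estimates) / k

b = 0.8  # hypothetical true coefficient
# As k grows, the average error term shrinks and E(b) approaches b.
print(aggregate_estimate(b, 10))
print(aggregate_estimate(b, 100_000))
```

The individual errors, some positive and some negative, cancel in the average, mirroring the rational expectations conclusion.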

The Identification Problem and Its Consequences

What is "the identification problem"? As mentioned above, we assume that each individual identifies what variables influence what other variables. In other words, models are formulated first, and then estimated. Rational expectations (and much of the critique against it) has focused exclusively on this second step. In contrast, what we label "the identification problem" is concerned with the failure to appreciate what variables are related.

When the capability to generate knowledge from data (roughly corresponding to the common usage of the word "intelligence") is finite or costly, individuals are not able to identify all existing causal relationships. What relationships are overlooked should vary across individuals as information and intelligence, as well as the pay-off to various types of knowledge, vary. There should also (just as with the estimation problem) be some unexplainable randomness in individuals' perceptions of what variables affect other variables. The crucial argument in this paper is that at least some people fail to identify a true relationship, and therefore never take the next step, which is to estimate the strength of it. Failing to estimate the strength of a relationship is essentially equivalent to estimating it to be zero.

Several types of identification mistakes are imaginable. For example, suppose there are three variables, x, y and z, and the only relationship between them is that x influences y. Mistakes in modeling (random or not) could result in incorrect models such as x = f(y), y = f(z), or z = f(x,y), to mention a few. Unless the correct model, y = f(x), (or possibly y = f(x,z)) is identified as the model to be estimated, a fraction of the population never even attempts to estimate the right relationship between x and y.

To formally show the consequences of the identification-type error, again consider the same true relationship between x and y, and assume that those people who correctly identify y as dependent on x fulfill the rational expectations postulate of equation (2) above. In addition, including n - k individuals (where n is now the total population) who are unaware of this relationship, and who by definition have calculated the equivalent of $E_i(b)$ equal to zero, leads to an average expectation for the entire population of

(3) $E_p(b) \equiv \frac{1}{n}\left[\sum_{i=1}^{k} E_i(b) + \sum_{i=k+1}^{n} E_i(b)\right] = \frac{1}{n}\sum_{i=1}^{k} E_i(b) = b\,\frac{k}{n} + \frac{1}{n}\sum_{i=1}^{k} e_i$.

The expression approaches $b(k/n)$ as $n \to \infty$, which obviously is less than b. Thus, there should be a bias toward zero for the "representative" estimate, due to the inclusion of individuals who are unaware of the relationship.
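Equation (3) can be illustrated the same way. In this sketch (again with hypothetical parameter values), only k of the n individuals identify the relationship; the rest implicitly report a coefficient of zero, so the population average settles near b(k/n) rather than b:

```python
import random

def population_estimate(b_true, k, n, noise_sd=0.5, seed=0):
    """Aggregate expectation E_p(b): k aware individuals estimate b + e_i;
    the n - k unaware individuals implicitly report zero."""
    rng = random.Random(seed)
    aware = [b_true + rng.gauss(0.0, noise_sd) for _ in range(k)]
    return sum(aware) / n  # unaware individuals contribute nothing

b, n = 0.8, 100_000
# With half the population unaware, E_p(b) settles near b * (k/n) = 0.4,
# not at the true value of 0.8.
print(population_estimate(b, n // 2, n))
```

Unlike the misestimation errors above, the zeros all pull in the same direction, so no cancellation occurs.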

In the rational expectations misestimation case (as described above), typically half the population underestimate the coefficient; yet this does not result in any downward bias in the aggregate $E_p(b)$, as an equal number overestimate the coefficient. Can a similar compensating mechanism exist in the case of identifying relationships? No. Although a symmetry reveals itself in the existence of some agents who falsely identify nonexistent relationships (i.e., false positives) alongside the agents who ignore some true relationships (i.e., false negatives), the two do not offset each other in any meaningful sense. Even if we assumed false positives and false negatives to be equally prevalent, there is no compensation in the estimated relationship between x and y from having individuals believe, for instance, that z = f(y); there still exists a downward bias in the coefficient estimate $E_p(b)$. Unless agents who correctly identify the relationship for some strange reason systematically overestimate it (their $E_i(b)$'s being too high), the downward aggregate bias from agents who fail to identify the relationship is not counterbalanced. (Note that an individual either identifies the relationship or does not; it is impossible to make the mistake of "doubly" or "triply" identifying it.)

While the estimated strength of true relationships generally should be biased downward in the aggregate, as seen above, falsely identified relationships in contrast should not be biased in the aggregate. An individual falsely identifying a relationship would - just as for correctly identified relationships - evaluate the strength of the relationship through some unbiased regression technique. As the true coefficient is zero, some people would estimate a positive coefficient, others a negative one, which should balance out in the aggregate. There is no bias "away" from zero, and our previous analysis would thus be of no relevance to falsely identified relationships.(5)

It is also worth noting that even though the aggregate estimate of a true relationship displays a downward bias, our discussion does not assume individuals to be "agnostic" or to underestimate relationships. Only the representative individual acts as if he underestimates the coefficient's strength.

In the aggregate, the results mimic those of systematic misestimation where the coefficient is biased downward. Unless the individual agents can be studied directly, there may be no ready way of telling the two types of errors apart. Any aggregate bias caused by one type of error would be superimposed on the bias caused by the other. Nevertheless, the importance of our argument lies in that the bias created by identification errors is always negative (of varying degrees), whereas no such general presumption can be made about errors due to misestimation.
TABLE I

How the Number of Alternative Models Varies with the Number of Possible Variables

Number of        Number of Potential Causal      Number of
Variables (v)    Relationships (v(v-1))          Alternative Models
      2                    2                                  4
      3                    6                                 64
      4                   12                              4,096
      5                   20                      1.049 million
      6                   30                     over 1 billion

III. HOW THE COMPLEXITY INCREASES RAPIDLY WITH MORE VARIABLES AND THE WAYS THAT AGENTS DEAL WITH IT

Although the problem of identification error is generally applicable, it is likely to be more severe in certain circumstances. In particular, the number of variables necessary to include in the model should be crucial. Even a small increase in the number of variables raises the complexity enormously. For instance, as shown in Table I, a two-variable world would result in four possible model specifications (neither variable affecting the other; both affecting each other; and two models where one influences the other), a three-variable world in sixty-four model specifications, and a six-variable world in an incredible one billion-plus specifications.

The reason why the complexity increases so rapidly is easy to see by applying the mathematical formula for combinations. How many combinations of causal relationships exist in a three-variable world? There could be as many as six causal relationships in such a model, as can easily be ascertained by writing down the variables and drawing arrows between them. A model could include anywhere from zero to six potential relationships, and the possible combinations are numerous: one way of including zero relationships, one way of including all six, six ways of including just one relationship, six ways of including five, fifteen ways of including two, fifteen ways of including four, and twenty ways of including three relationships, for a total of sixty-four models. From elementary statistics, the number of combinations when choosing r relationships out of a potential number of m is $C(m,r) = m!/[r!(m-r)!]$. Hence the first step, as just done for the three-variable case, is to figure out m, the potential number of relationships in a v-variable case. Since each variable can possibly influence each one of the other variables, $m = v(v-1)$. For our calculation of the six-variable case, m thus equaled thirty. Then, using the formula above, we calculated all the different possible combinations and added them together: $\sum_{r=0}^{30} C(30,r) = 2^{30}$, or just over one billion. The resulting number is extremely large, primarily because the C's are large whenever a "middle-sized" model is chosen, since there are so many different ways to choose a middle-sized model; for instance, $C(30,15) = 155{,}117{,}520$.
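The counts in Table I can be checked with a few lines of code: since each ordered pair of distinct variables either carries an influence or not, summing the combinations C(m, r) over all model sizes r simply yields 2^m alternative models.

```python
from math import comb

def model_count(v):
    """Number of alternative causal models over v variables: each ordered
    pair of distinct variables either carries an influence or not, giving
    2 ** (v * (v - 1)) possible specifications."""
    m = v * (v - 1)  # potential directed relationships
    # Summing C(m, r) over all model sizes r reproduces the same total.
    assert sum(comb(m, r) for r in range(m + 1)) == 2 ** m
    return 2 ** m

for v in range(2, 7):
    print(v, model_count(v))   # reproduces the rows of Table I

# "Middle-sized" models dominate the count, e.g. choosing 15 of 30 links:
print(comb(30, 15))            # 155,117,520

# At one second per specification, the six-variable case takes decades:
print(model_count(6) / (3600 * 24 * 365), "years")
```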

Note that even if each version took only one second to consider, it would take over thirty years merely to consider a billion different specifications. A careful consideration of each possible version of a six-variable case is thus impossible. It seems unlikely that even a small minority of the population could do a good job going beyond the three-variable case, so the potential for many individuals to fail to identify relationships is generally great. In addition to the number of variables needed in setting up the correct model, other factors affect the likelihood of making identification mistakes, for instance the pay-off to figuring out the right model, and whether the data needed (in the second stage) to estimate the relationship are accurate and easily available.

Macroeconomics in particular should be prone to identification mistakes, with the consequential downward bias in the representative individual's coefficient estimate. With the many interdependencies among variables, it is hard to avoid a large set of variables. For instance, to predict the future inflation rate, money supply growth must be considered, which depends on the monetary policy objective function, which in turn includes such variables as output growth, unemployment and the inflation rate. The future inflation rate would also depend on money demand growth, which is affected by output and interest rates, to mention a few factors. To make matters more complicated, variables such as GNP show a great deal of autocorrelation over time, which necessitates including past deviations from trend in the model. A host of lags must be brought into the model to improve on predictions.(6) Thus a macroeconomic model can easily become overwhelming in size.

A reflection on what we as economists do when faced with a problem like this may shed some light on how non-economists cope with it. The obvious "solution" to the problem is to focus on only the (one hopes) most important relationships and exclude the rest. In other words, we consciously make identification errors. While omitting variables might or might not bias the coefficients in the small-sized model, it does have one biasing effect: we fail to appreciate whatever other influences are relevant. It may be argued that the exclusion of variables is mostly a convenience, rather than true ignorance, and that as a profession we work piecemeal on a puzzle. As other economists study different sets of variables, we learn their estimates, rather than setting them to zero. Yet, this is only partly true. First, some economic relationships might well not have been discovered, implying that the "representative" economist in effect underestimates them. Such false-negative errors are not offset by the false-positive errors of believing in some incorrect doctrines. Secondly, if there exist mutually exclusive theories, the profession as a whole would underestimate the strength of the true relationship.

To illustrate this second point, assume that there are two schools of thought with equal support in the profession: one believes that x causes y, the other that y causes x, with a one-to-one quantitative relationship for either theory. In the aggregate, the representative economist would estimate the influence to run two ways, with coefficients of .5 on each. If the true relationship indeed runs from x to y, it is underestimated by a factor of one-half. Similarly, if the true relationship runs from y to x, it is underestimated by a factor of one-half. Thus, no matter which school of thought is correct, the "representative" coefficient estimate of the true model is biased downwards.(7)
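The averaging in the two-school example can be checked with a toy calculation; the unit coefficient and the school labels are of course stylized:

```python
# Stylized two-school example: half the profession believes x -> y with a
# one-to-one effect, the other half believes y -> x (the paper's
# illustrative coefficients).
school_a = {"x_to_y": 1.0, "y_to_x": 0.0}
school_b = {"x_to_y": 0.0, "y_to_x": 1.0}

# The "representative" economist averages the two views.
representative = {k: (school_a[k] + school_b[k]) / 2 for k in school_a}
print(representative)  # {'x_to_y': 0.5, 'y_to_x': 0.5}
# Whichever direction is true, its coefficient is estimated at half
# the true value of 1.0.
```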

Returning to the original question of how the general public reacts when faced with an overwhelming modeling question, it seems probable that individuals make mistakes similar to those of economists, but on a much grander scale, as the payoffs to figuring out a good model are lower and the costs of doing so are higher.(8) As discussed above, the rational expectations hypothesis about aggregate expectations being unbiased could be far from the truth, depending on the severity of the identification errors made. For instance, if a large part of the population is unaware that the money supply affects prices, an increase in the money supply, no matter how well publicized, might not have much effect on price expectations. This contradicts the orthogonality postulate of rational expectations: the prediction errors would not be orthogonal to the money supply variable, so the forecast could be improved by incorporating money supply as an explanatory variable. Further, different groups of actors may have different degrees of knowledge, which has important public choice implications, as seen in the following sections.
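The orthogonality violation can be sketched numerically. Under the hypothetical assumption that true inflation equals b times money growth and only a share of agents incorporates money growth into its forecast, the aggregate forecast errors remain correlated with money growth:

```python
import random

def forecast_error_slope(k_share, n=10_000, b=1.0, seed=1):
    """Regress aggregate forecast errors on money growth m, assuming a
    (hypothetical) true model pi = b * m in which only a share k_share of
    agents uses m when forecasting; the rest implicitly forecast zero."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        m = rng.gauss(0.0, 1.0)        # money growth draw
        pi = b * m                     # realized inflation
        forecast = k_share * b * m     # aggregate expectation E_p(pi)
        err = pi - forecast
        num += m * err
        den += m * m
    return num / den                   # slope = b * (1 - k_share)

print(forecast_error_slope(1.0))  # ~0: errors orthogonal to m
print(forecast_error_slope(0.6))  # ~0.4: forecast improvable using m
```

A nonzero slope means an observer could improve the aggregate forecast simply by adding money growth as a regressor, exactly the violation described above.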

IV. DIRECT EMPIRICAL EVIDENCE ON EXPECTATIONS FORMATION

Empirical studies of expectations formation are consistent with our hypothesis of identification errors. Analyzing households' inflationary expectations formation, Gramlich [1983] found that a one-point rise in M1 raises inflationary expectations by only .3 points. Likewise, De Leeuw and McKelvey's [1984, 109-10] study of the price expectations of businesses concluded that "the BEA [Bureau of Economic Analysis] price expectations data are at least not inconsistent with some direct influence of lagged money supply changes and lagged capacity utilization on price expectations. The size of the coefficients suggests that such direct influence, if it exists, is small." Further, Lowell [1986, 116], citing studies by himself and others on firms' sales forecasts, reported that "expectations are not fully rational in that they do not appropriately incorporate information on seasonality and the rate of growth of the money supply." These three papers found past price changes to be both significant and important in explaining inflationary expectations, and concluded that forecasts violated rational expectations assumptions as the forecasts were biased and inefficient. The results, however, can in our framework be consistent with rational individuals making no systematic errors on an individual level. In simplified terms, there can exist a substantial portion of the population that fails to form a reasonable causal model of inflation and another portion that constructs various models where the coefficient estimates on average are unbiased. In other words, the empirical studies are consistent with individuals being "rational" in the basic economic sense, but not in the rational expectations literature sense.

An important characteristic of these empirical studies is that they involve systematic underestimates of price changes. Note that this does not merely contradict the "strong" form of rational expectations, but the "weak" version as well. It is easy to find evidence that individuals make mistakes in some circumstances, which leads us to reject the "strong" form of rational expectations (where individuals do not make mistakes). However, it is much harder to reject the weak version of rational expectations, where no systematic mistakes are made over time, because while at certain times or in certain places individuals may overestimate the strength of the relationship, presumably there are other places or times when the reverse is true.(9) Thus, in contrast to either version, our theory predicts that if mistakes occur, they will involve, as the evidence indicates, systematic underestimates of the relationship.

V. MACROECONOMIC MODELING

The effects of the identification problem can easily be incorporated into existing macroeconomic modeling. While identification errors could have macroeconomic consequences through an array of mistakes by various actors, we use the example of workers having difficulty in perceiving real wage changes. The problem of confusing nominal and real changes in wages is a common theme in economics, and we wish to show that the question can be understood better through reference to identification errors in a standard aggregate demand-aggregate supply model.

Not all aggregate demand and supply models are formulated precisely the same way, and we have chosen the version described thoroughly in John Beare's [1978] textbook. In his setup, the position of the short-run aggregate supply curve is based on the workers' given perception of the aggregate price level, and any change in this perception causes an explicit shift in the curve.

Since changes in the general price level cannot be perfectly and immediately observed by the workers, the responsiveness of their price perceptions to changes in the actual price level depends partly upon how well workers can use other variables, past or present, to improve their estimates. Since a more severe identification problem translates into a poorer ability to improve on price perceptions, the result will be - as discussed below - a short-run aggregate supply curve that fails to shift "properly" (i.e., does not shift in accordance with standard rational expectations predictions). Systematic business cycles can be generated over time under these premises.

The aggregate demand curve can be written as

(4) $Q^d = Q^d(M/P,\,F)$, with $\partial Q^d/\partial(M/P) > 0$ and $\partial Q^d/\partial F > 0$,

where Q is output, M the money stock, P the actual price level, and F a measure of fiscal policy.(10,11) Likewise, the short-run aggregate supply curve is defined as

(5) $Q^s = Q^s(P/P^e)$, with $\partial Q^s/\partial(P/P^e) > 0$,

where $P^e$ is the price level perceived by the workers. In the labor market nominal wages go up with prices, but less than proportionately. While workers believe that the real wage has increased, producers understand that it has fallen. The result is higher employment and higher output.(12)

For given levels of M, F, and $P^e$, the aggregate demand and supply equations yield solutions for P and Q. Figure 1 illustrates the effect of an expansionary aggregate demand shock, e.g., through a rise in the money stock, M. In the long run, with no new shocks in money or fiscal policy, workers will eventually perceive any new price level, hence $P = P^e$, and Q is given solely by real supply side variables. This implies the classical result where output is solely determined by real factors: the long-run supply curve is vertical and the money supply only affects the price level.

If workers could instantaneously and costlessly be informed about price changes, $P = P^e$ would always hold and the classical solution would always be reached. So far, the aggregate demand and aggregate supply analysis has not proposed anything that is contrary to the rational expectations hypothesis. The "weak" version of rational expectations normally stipulates that deviations from the classical solution can occur, like the one here illustrated by the intersection of aggregate demand and the short-run aggregate supply curve in Figure 1, precisely because $P^e$ does not always equal P. Yet, according to rational expectations theory, such deviations supposedly cannot be very predictable, because workers soon learn to forecast any systematic relationships. For instance, the monetary authorities cannot increase and decrease the money supply (or the rate of increase in it) systematically over time to produce a political business cycle to favor incumbents. Workers would quickly use the money supply variable to improve on their price perception ($P^e$). With $P^e$ being a function of the money supply, $P/P^e$ would be insensitive to monetary policy, with M becoming ineffective as an instrument for altering output. Another argument often put forward to motivate the rational expectations hypothesis is that the use of systematic money supply changes would leave ex post errors in $P/P^e$ that varied systematically with time, and that any such pattern would be recognized.(13) Thus, in forming $P^e$, a cyclical component would be employed to serve as a proxy for the systematic money supply shock. Changes in the money stock thus need not be observed, nor would their impact have to be understood. In such a case, only money supply shocks that did not display a systematic time pattern would be effective. Systematic monetary policy would only result in the aggregate demand curve moving up and down along the vertical long-run aggregate supply curve. Again, political business cycles could not be induced.

Are the preceding rational expectations arguments plausible? We think that they are not, both from a theoretical and an empirical standpoint. First, as discussed in section III, the complexity of formulating models increases extremely quickly with the number of variables involved. It is not plausible that most individuals would try to handle models going beyond a three-variable case. Of those who make the effort to improve on their individual price perception, it is unlikely that the couple of independent variables chosen would always include the money supply or time. After all, there is a large set of other variables affecting the position of both the aggregate demand and supply curves and thus also influencing the price level.(14) Unless money supply shocks have been particularly large, we cannot expect workers to focus on the money stock to the exclusion of other variables. Also, there should be no special presumption that previously made errors in estimating the price level will somehow automatically be included. Workers care about reducing errors in general, and although one's own systematic errors correlated with time may be more easily perceived, at least some individuals may view these errors as small and put what they think are more important relationships in their model instead.(15)

Secondly, the empirical evidence does not support the rational expectations assumption. As discussed in section IV, the articles dealing explicitly with expectations functions found that agents did not fully account for the variables that could help forecast prices, such as the money supply. Even in the case of firms (who should be more informed than workers) dealing with their own sales forecasts, there was a failure to properly incorporate such very obvious information as seasonality.

Returning to the aggregate demand and supply analysis, we can show how the degree of identification error determines the extent to which the short-run aggregate supply curve shifts. We can also illustrate a perpetually repeating political business cycle in this framework, contrary to any version of the rational expectations hypothesis. To keep the example as manageable as possible, we make the following simplifying assumptions. The monetary authority expands the money supply right before each election, held every four years, and it contracts the money supply by the same amount two years later. There is no time trend in prices, labor force, or capital stock. Also, for simplicity we assume that the classical solution with $P = P^e$ holds approximately after two years if workers merely observe prices (i.e., without setting up any model to improve on their price estimate).

In this case, the increase in M would lead to an upward shift in the aggregate demand curve, with output increasing, as illustrated in Figure 2a by the movement from point a to point b. Then, as P and $P^e$ gradually converge, workers perceive the price movements, the short-run aggregate supply curve shifts up, and the intersection converges towards point c. As assumed above, two years after the original increase, the money supply has contracted to its original level with aggregate demand shifting back to $AD_1$, as shown in Figure 2b. Since $P^e$ temporarily remains at the higher price level, $P^e > P$, and output and prices become unusually low as shown by solution d. As $P^e$ gradually adjusts, the short-run aggregate supply curve gradually moves down, and the original intersection, a, in Figure 2b is again reached after two more years.

We can now explicitly model what happens when a portion (k/n) of the workers understand enough not to be "tricked" by the monetary authorities - either by observing M directly and understanding its impact or by previous learning from the time pattern of past prediction errors. Of course, if the portion is 100 percent, there would be no business cycle, and the pattern in money supply changes would only result in the economy moving back and forth between points a and c, which would be the rational expectations prediction.

When (k/n) of the workers correctly predict the price level but the remaining (n - k)/n only gradually adjust [P.sup.e] towards P, the result is a less pronounced business cycle, as illustrated in Figures 3a and 3b. Even in the very short run, the same money supply change no longer shifts the aggregate demand curve along the earlier short-run supply curve, since that curve was premised on [P.sup.e] being held constant. Instead, the aggregate demand curve intersects an aggregate supply curve based on a revised value of [P.sup.e], as seen in Figures 3a and 3b. The new points b and d are closer to the classical solutions, and the magnitude of the political business cycle is smaller because fewer workers (wrongly) believe that their real wages are high right before election time. It is further plausible that the adjustment time to the new equilibrium is shortened.(16) In our theoretical setup a political business cycle can continue forever, as a certain portion of workers keep on making systematic errors in [P.sup.e]. Thus, while the political business cycle theory contradicts the standard rational expectations hypothesis, it is fully consistent with rationally calculating individuals among whom at least some make random errors in identifying relationships.
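The dampening effect of informed workers can be sketched numerically. In the illustration below, the money supply path, the unit price response, and the adaptive adjustment speed are all simplifying assumptions of ours, not part of the model above; the informed fraction always predicts the price level correctly, while the rest adjust [P.sup.e] only gradually:

```python
# Sketch: amplitude of the political business cycle vs. the fraction of
# workers who correctly predict the price level. All functional forms
# and parameters here are illustrative assumptions.

def simulate_cycle(share_informed, periods=16, lam=0.5):
    """Money supply rises one unit before each 'election' (every four
    periods) and falls back two periods later; returns the peak output
    deviation from the natural rate."""
    p_e_uninformed = 0.0            # stale expectation of uninformed workers
    peak = 0.0
    for t in range(periods):
        m = 1.0 if t % 4 in (0, 1) else 0.0   # money-supply path
        p = m                                  # money is neutral in the long run
        avg_p_e = share_informed * p + (1 - share_informed) * p_e_uninformed
        y = p - avg_p_e             # output deviation: workers misread real wages
        peak = max(peak, abs(y))
        p_e_uninformed += lam * (p - p_e_uninformed)   # gradual adjustment
    return peak

for share in (0.0, 0.5, 1.0):
    print(share, simulate_cycle(share))
```

As the share of informed workers rises toward one, the peak deviation shrinks to zero - the rational expectations benchmark in which the economy merely moves between points a and c.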

Since the sole purpose of this paper is to demonstrate how important identification errors can be, we will not discuss the relative importance of different macroeconomic shocks or the slopes of the different curves. The whole aggregate demand and supply framework with any possible accompanying IS-LM analysis allows the reader great latitude to insert the exact equations judged most plausible and illustrate the effects of various shocks. Regardless of the exact parameters and the nature of the shocks, the main point of this discussion has been to illustrate that business cycles can depend on the extent of identification errors.

Parenthetically, the whole aggregate demand and supply analysis could be adjusted to account for inflation, although doing so would involve additional complexity. A Phillips curve-type analysis would provide an alternative framework for our basic argument, and might even be preferable if persistent inflation were to be accounted for.(17)

Does a political business cycle exist? While our argument does not per se imply that it does, we have shown that much weaker assumptions are required to make it theoretically possible. The empirical research on the subject is far from conclusive. Evidence in favor of political business cycles in the United States is provided by Nordhaus [1975; 1989], Michaels [1986], Allen [1986], Allen et al. [1986], Grier [1987], Alesina and Sachs [1988], Davidson et al. [1990], and Findlay [1990], and this is consistent with the identification problem.(18) Keil [1988] provides similar evidence for Britain. On the other side, McCallum [1978] and Richards [1986] reject the hypothesis of a political business cycle.(19) The theoretical debate, going as far back as McCallum [1978] and Nordhaus [1975], has centered on whether people are "rational" - or, as Nordhaus [1989] phrases it, whether citizens are "ultra-rational": fully informed, forward looking, and possessed of perfect memories. By contrast, we have pursued a different line of argument. Whether citizens are fully informed in a sense misses the point, because even if people were fully informed, not all of them would draw the correct inferences. This in turn generates results that in the aggregate appear systematically biased.

VI. PUBLIC CHOICE

A major focus of the public choice literature has been the imperfect knowledge of voters. Those who believe that opportunistic behavior by politicians is pervasive could see possibilities for such shirking arising from our preceding discussion.(20) For example, although some voters may be well informed, politicians with better access to advisers and experts may better know the consequences of certain policies. Such "asymmetric" knowledge across groups is likely to occur, as briefly mentioned above when comparing the public to economists. In the previous section's discussion of the political business cycle, politicians and bureaucrats who "know better" may - to use strong terms - "cynically exploit" the ignorance of the voters in order to gain popularity right at election time. Recall that some workers were assumed to temporarily perceive nominal wage increases as real wage increases, and thus to view the economic situation in more optimistic terms.

The public choice problem is not necessarily dependent on asymmetric information where knowledgeable vote-maximizing politicians deceive the voters. For example, voters could successfully sort into office politicians who intrinsically value the same positions as the voters (Lott [1987]).(21) If so, the result could be "populist" politicians who take the positions of the voters even when this may have foolish consequences. Political candidates who realize the adverse consequences of the "populist" agenda would not be elected.

In the case of political business cycles, the policy is not exactly advertised on a political agenda in front of the voters, and it appears hard to explain without assuming cynical politicians who exploit the ignorance of workers. In contrast, there are many "issue-campaigns" where asymmetrical knowledge accompanied by political cynicism would not be necessary. The less knowledgeable public might equally well be electing less knowledgeable politicians who agree with them.

One example of such an issue may be price and rent controls. As we proposed in an earlier piece (Lott and Fremling [1989]), price and rent controls may be popular because of an asymmetry in the ease with which different consequences are detected: it is relatively easy to observe that controls immediately halt price increases, whereas the more complicated longer-term consequences are more difficult to infer. In part, the identification difficulties arise since controls may even temporarily increase supply and thus temporarily avoid shortages. If firms hold inventories because of higher expected future prices, controls eliminate this return. It can take some time for producers to run down their inventories. During the 1970s, the oil shortages were primarily blamed on the oil companies, with OPEC mentioned as the second most likely cause. Government price controls ranked only a distant third (Lott and Fremling [1989, 296]).(22) In terms of our current discussion, while some voters correctly identify the relationship between controls and shortages, their estimates do not offset the implicit zero estimates of other voters who fail to identify this relationship.

Additional polling evidence lends further support to the claim that different groups of voters face different costs of discerning economic relationships. For instance, in the recent health care debate, while a large number of economists strongly opposed price controls for health care (Wall Street Journal, 14 January 1994, A16) and 71 percent of American economists opposed wage and price controls (Frey et al. [1984, 988-89]), opinion polls indicate that as many as 71 percent of Americans support price controls for health care (Blendon et al. [1993]).(23) If the polls on the 1970s energy crisis and the current health care debate are representative, they are consistent with political support for government programs and regulations depending upon groups of voters facing different learning costs.

Our discussion provides a possible justification for several recent public choice models. As Wittman [1996] correctly points out, many recent papers explicitly or implicitly assume systematically biased mistakes by voters. For example, Grossman and Helpman [1995] assume that the government maximizes the weighted sum of campaign contributions and aggregate welfare. In a rational expectations model one would not assume that large contributions imply lower welfare for voters, since voters would not systematically underestimate the influence of contributions on political behavior. Their model implicitly assumes that voters irrationally support governments choosing nonwealth-maximizing policies. In terms of our discussion, the Grossman and Helpman result is possible as long as some voters do not recognize a relationship between campaign donations and how politicians vote. Given that this relationship is debated even among economists (e.g., Bronars and Lott [1994]), if the relationship in fact exists, Grossman and Helpman's conclusions may well follow.

VII. STANDARD HYPOTHESIS TESTING AND THE REJECTION OF WEAK HYPOTHESES

Perhaps the most blatant example of identification errors occurs in hypothesis testing. This important example has been saved for last because it requires an additional twist to the theory.

As set out in section II, we assume that individuals first construct a model and then estimate the strengths of the identified relationships. To keep the analysis as straightforward as possible, we limited the discussion to these two steps. However, in scientific study, as well as in everyday life, this two-step process is not likely to be a once-and-for-all event, but must be viewed as an ongoing process in which models are formed and estimated repeatedly. The results from step two thus affect future attempts at determining the correct model.

Any step-two problem in estimating the strength of a relationship can lead to mistakes when the model is respecified. If, for instance, the relationship y(x) is estimated to be a weak one, it may be excluded from consideration in future modeling, as retaining knowledge/information is costly. In other words, not only do purely random errors occur in identifying the true model (which was shown to create a downward aggregate bias), but estimation problems can further selectively influence the identification process.

Standard "scientific" hypothesis testing clearly epitomizes this problem. If regression coefficients differ from zero, they will typically be viewed as relevant only if they are found to be "significantly" different from zero at a given preset significance level. The very fact that the null hypothesis (usually) states that the coefficient is zero automatically creates a bias in future modeling. A step-two failure to gather enough support for the true relationship leads to its exclusion in future step-one modeling. The bias in favor of [H.sub.0] means that we are more likely to reject true relationships than accept false ones. A host of usual problems in estimating the coefficient can then potentially feed back and result in the exclusion of a variable from future modeling. For instance, data problems such as large measurement errors or the unavailability of a large data set can cause the standard errors of the estimates to be too large to yield significant results.

If each individual economic actor used standard hypothesis-testing methods, a large number of those who identified the correct relationship y(x) in step one would not find a significant relationship and so would not be included in the group from which the representative coefficient estimate in step two is drawn (see section II for why random errors supposedly cancel out under rational expectations). The actors who failed to obtain a significant coefficient would accept the null hypothesis rather than their coefficient estimates. This group thus views the world the same way as those who never even estimated the relationship in the first place.(24)
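A small Monte Carlo illustrates the mechanism; the normal sampling distribution, the true coefficient of 1.0, and the two-standard-error cutoff below are our illustrative assumptions, not parameters from the text:

```python
import random

# Each actor draws an unbiased estimate of the true coefficient beta but
# keeps it only if it is "significant" (more than two standard errors
# from zero); otherwise the actor accepts the null of zero.
random.seed(1)
beta, se, n = 1.0, 0.6, 100_000

estimates = [random.gauss(beta, se) for _ in range(n)]
kept = [e for e in estimates if abs(e) > 2 * se]

avg_all = sum(kept) / n            # rejected estimates count as zeros
avg_kept = sum(kept) / len(kept)   # keepers are selected for high draws

print(round(avg_all, 2), round(avg_kept, 2))
```

Even though every individual estimate is unbiased, the aggregate perception (avg_all) falls below the true coefficient, while the conditional average among those who retain their estimates (avg_kept) overshoots it - the pattern described in footnote 24.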

As is the case with setting up the correct model in the first place, ordinary economic actors should have less incentive and face higher costs in figuring out the correct coefficient estimate than do professional economists, and therefore have a greater tendency to reject the estimate in favor of zero. Again, there should also be differences between various classes of economic actors, depending on the returns and the costs involved. It might be objected that there is little reason for the public to adhere to strict hypothesis-testing rules. Nevertheless, our argument still holds if some individuals, in a less precise sense, "just can't pin down the effect" or "get too small an effect to bother about" and therefore choose to ignore the y(x) relationship, as this means that they set it at zero instead of using their point estimates for the coefficient.

We are by no means criticizing the current practice of hypothesis testing, since it has a very good reason behind it: we would be "drowned" by regression results from all kinds of studies if somewhat strict criteria were not applied to sort out what is to be considered "interesting enough." Yet, we should simultaneously understand the bias involved and recognize that a similar hypothesis-testing process may well take place when economic actors evaluate causal relationships.

The step-two estimation problem thus accentuates the step-one identification problem described above. It may be very difficult to empirically separate the two effects.

VIII. SOME LIMITATIONS AND OBJECTIONS

Two possible objections to our model ought to be addressed: learning and arbitrage. If prediction errors are analyzed, why would the models, as well as the estimated coefficients, not improve over time, approaching the true model with correct coefficient estimates? Any bias from omitting variables could, accordingly, be eliminated over time.(25,26) While learning over time undoubtedly takes place, one crucial factor is how quickly it occurs. If life spans were infinitely long and the world were stationary, only a few constraints, such as imperfect information and limited brain capacity, would prevent perfect modeling. With finite lives (and costs of transferring knowledge to new generations), as well as a changing world, the prospect of near-perfect knowledge seems slim in our view. After all, as pointed out above, merely counting the various specifications in a six-variable setup would take over thirty years.
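One way to arrive at a figure of that magnitude is as follows; the pairwise-inclusion construction and the one-specification-per-second rate are our assumptions, so this is only a plausible reconstruction of the earlier count:

```python
# Plausible reconstruction of the thirty-year count: with six variables
# there are 6 * 5 = 30 ordered (cause, effect) pairs, each either
# included in or excluded from a causal model.
pairs = 6 * 5
specs = 2 ** pairs                # every subset of pairs is a specification
seconds_per_year = 60 * 60 * 24 * 365
years = specs / seconds_per_year  # at one specification per second
print(specs, round(years, 1))
```

Merely enumerating the roughly one billion specifications, one per second, would take more than three decades.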

In many markets, even large misperceptions by a substantial portion of the population might not matter much economically, since a few well-informed agents can engage in arbitrage.(27) This paper is clearly of no relevance to these cases.(28) However, in such areas as macroeconomics, the possibilities for arbitrage in labor markets are limited because of prohibitions against slavery, and thus the presence of a large number of relatively uneducated workers has the potential for major effects on output. Other areas relevant to macroeconomics can also suffer from lack of arbitrage, as the costs of buying and selling are substantial. For instance, a small business that incorrectly forecasts the price level, and therefore makes less than perfect investment decisions, is not automatically taken over by a firm that makes better price forecasts.(29)

Another area with limited possibilities for arbitrage is public choice. In the political arena arbitrage (i.e., vote-buying) is illegal and hence costly. Therefore, any systematic ignorance on the part of the voters should be reflected in what types of politicians gain office and hence in what policies are put into effect. Thus, in the political business cycle example discussed above, there is no economic incentive for well-informed voters to buy up the votes of those who (due to ignorance) favor politicians who induce political business cycles or other welfare-reducing policies.

IX. CONCLUSIONS

Rational expectations has, in our view, not taken the logical next step of allowing for random errors in identifying relationships. We find that accounting for the failure of some people to identify a relationship creates a downward bias in the aggregate perception of that relationship. The fact that some people make the mistake of identifying relationships where none exist does not in any sense "balance" this error. Rational expectations theory thus has overstated the public's understanding of causal relationships.

Empirical studies rejecting the postulated rational expectations predictions have provided a challenge to the very assumption of individual rationality. By contrast, we show that these empirical results can be explained using a rationally calculating framework, once we allow individuals to make nonsystematic errors in identifying relationships. Our paper complements recent work by Haltiwanger and Waldman [1985; 1989b] and Akerlof and Yellen [1985]. While we point out that rational agents are likely to make systematic mistakes, their work demonstrates that these mistakes can have large effects on the market equilibrium.

1. See also Heiner [1985; 1986].

2. See also Haltiwanger and Waldman [1989a] for a related discussion.

3. See footnote 26 for an analogous discussion.

4. The usual statistical assumptions are made. If x is measured with error, the individuals should be assumed to use inverse regressions. Even if the individuals are not using traditional statistical techniques, unbiased approximations of such techniques (e.g., fitting a straight line) would yield the same conclusions.

5. In this paper we assume identification errors to be random. However, in reality there are often patterns in how mistakes are made, and with any particular pattern there is also a systematic bias. For instance, a common mistake is to confuse the true relationship with its inverse (e.g., x = g(y) instead of y = f(x)). Another occurs when two or more relationships in a causal chain are replaced by one. Both cases are premised on the true relationship not being identified (or misestimated), and the estimated strength therefore depends on the strength of the omitted one.

6. To illustrate this point, consider Cochrane's 1988 article about the random walk for GNP. He finds that GNP growth is positively autocorrelated at short lags, but that there are many small negative autocorrelations at long lags.

7. Ex ante, it may seem that there is no bias, if one views the world as having a 50 percent chance of being an x-influences-y world and a 50 percent chance of being a y-influences-x world. But given the true state of the world rather than given the original perception of the world, the false-positive does not in any meaningful sense counteract the described bias.

8. Whereas the economist's payoff for a better model of forecasting next year's price level may be a published article and a salary raise, the average person's payoff may be limited to a slightly better prediction of his future budget constraint. The costs are lower for the economist, as he has sunk investment in education and better access to computers and data. For empirical evidence on the specific difference between knowledge held by experts and by the public, see our public choice discussion in section VI. An interesting aspect is that individuals might do reasonably well adopting some simpler rule that avoids evaluating causal relationships. For instance, expected future price levels may just be extrapolations of past prices. Also, behavior changes - such as "shopping around" more carefully for goods or for employment - can serve as substitutes for forming better causal models.

9. See, for instance, a related point raised by Seater [1993] where he discusses the empirical evidence regarding the Ricardian Equivalence Theorem.

10. Other exogenous shocks, e.g., from the consumption or investment functions, could also be added separately or be integrated into F.

11. There is a mutatis mutandis assumption about the interest rate here. The aggregate demand curve is not a rectangular hyperbola because different price levels along the curve correspond to different real money balances, with adjustment in the nominal interest rate to maintain equilibrium in the money market. The IS-LM analysis is thus assumed in the construction of the aggregate demand curve and the variations of the nominal interest rate need not explicitly be dealt with in this aggregate demand and supply analysis. Fiscal policy can be said to affect the relationship between output and the price level for a given money supply (M) because fiscal policy affects interest rates, which alters the demand for real money balances.

12. Equilibrium in the labor market is represented by [L.sup.d](W/P) = [L.sup.s](W/[P.sup.e]), where W is the nominal wage, [L.sup.d] the quantity of labor demanded, and [L.sup.s] the quantity supplied. Labor demand is derived from the production function as the marginal product of labor, and labor supply from workers attempting to maximize utility.

13. This is true even if the money supply was not observed well or the consequences of changes in the money supply were not fully known.

14. Changes in the money demand function, fiscal policy, and (as mentioned in footnote 10) changes in the consumption and investment functions affect the aggregate demand curve. The aggregate supply curve can be affected by anything influencing worker preferences or the production function. Examples of other potential factors that can either be treated separately or worked into the aggregate demand and supply framework are seasonality, strikes, and foreign influences via trade or capital markets.

15. Identifying systematic errors may be rather difficult even if we are dealing with only one variable over time. Numerous economists have put in great efforts to try to identify whether GNP follows a "random walk" or whether its movements can be described as deviations around a long-term trend (see also footnote 6). Thus, would systematic errors in one's own predictions just be due to stubborn autocorrelation in the variable under consideration, or would it signify that the theoretical model should include some other variable? These would be nontrivial issues for the worker to resolve.

16. Just as the inclusion of knowledgeable individuals reduces the swing in output, it is plausible that the adjustment time required to approximately reach the classical solution is less. The price expectations of those who merely observe prices adjust gradually as they observe the prices of more and more individual goods changing. Without general inflationary expectations, their adjustment of [P.sup.e] may be expressed as something like:

[P.sup.e.sub.t] = [P.sup.e.sub.t-1] + [Lambda]([P.sub.t-1] - [P.sup.e.sub.t-1]), 0 [less than] [Lambda] [less than] 1.

In such a case, inclusion of k knowledgeable individuals causes prices to adjust more quickly and induces the n - k individuals to change [P.sup.e] more, which further speeds up the adjustment process. In other words, the approximate classical solution where P [approximately equal to] [P.sup.e] for the population as a whole is reached more quickly not only because of the very inclusion of k individuals who always have [P.sup.e] = P, but also through the effect on the speed of adjustment of the other n - k individuals' price expectations.
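The claim can be illustrated with a stylized recursion; the linear rule tying the realized price to average expectations, and the parameter values, are our own assumptions:

```python
# Sketch: periods needed for the price level to come within one percent
# of its classical value P*, for different shares of informed workers.
# Functional forms and parameters are illustrative assumptions.

def periods_to_converge(share_informed, p_star=1.0, lam=0.4, tol=0.01):
    pe_u = 0.0                            # uninformed expectation starts stale
    for t in range(1, 1000):
        avg_pe = share_informed * p_star + (1 - share_informed) * pe_u
        p = 0.5 * p_star + 0.5 * avg_pe   # price partly reflects expectations
        pe_u += lam * (p - pe_u)          # uninformed adjust toward observed p
        if abs(p - p_star) < tol:
            return t
    return None

print(periods_to_converge(0.0), periods_to_converge(0.5))
```

With half the population informed, convergence is markedly faster: the informed expectations are correct from the start, and the resulting price path induces the uninformed to adjust more per period.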

17. It must be emphasized that although our example discusses money supply changes as a cause of the business cycle, the same argument can equally well be applied to other policies. The level of spending and/or taxation (represented by F in the aggregate supply equation) or policies to directly stimulate certain components of private spending could vary cyclically as well. Whether such examples could be important hinges on one's presumption regarding how effective fiscal policy is. In any case, our argument about the shifting of the short-run aggregate supply curve, and how it depends on the degree of identification errors made, is very generally applicable and by no means linked to money supply shocks being of sole importance.

18. Other papers by former policymakers, such as Poole and Meiselman [1986], indicate the widespread acceptance of the notion of political business cycles.

19. Peltzman [1990] provides evidence that the voting market comes "close to full utilization of the available information" and that the marginal voter uses information on policy outcomes, such as income and inflation, in making re-election decisions. Peltzman criticizes most of the existing literature for assuming that voters have amnesia and that they use only the information available immediately prior to the election when deciding how to vote.

20. For a literature survey on whether politicians engage in opportunistic behavior, see Bender and Lott [1996].

21. Again see Bender and Lott [1996] for a survey of this literature.

22. Our discussion is also relevant to recent political debates over imposing price controls on pharmaceuticals. Given the ten- to twelve-year delay between the discovery of new drugs and the completion of the drug review process, the negative effect of controls in terms of reduced future innovation will only be observed by the general public after a long lag.

23. Gramlich's [1983] study of price expectations confirms that households (as opposed to economists) do put faith in wage-price controls as an antidote for inflation.

24. Although the individuals who fail to obtain a significant relationship now have beliefs identical to those of the n - k individuals who fail to identify the relationship, we cannot simply add these individuals to that group and assume the remaining individuals on average have an unbiased estimate. The result is more complicated, because the people who ended up rejecting [H.sub.1] in favor of [H.sub.0] are more likely to be the ones obtaining low coefficient estimates. (If the standard error of the estimate is uncorrelated with the size of the estimate, those with low estimates will more often include the value zero within the confidence interval for the estimate.) Therefore, the remaining individuals, those who believe in y(x), should have estimates that on average are somewhat too high, although it is impossible to generalize about how large this overestimate is, except that it does not counterbalance the effect on the whole aggregate of some individuals rejecting their coefficient estimates in favor of zero.

25. Also, see our macroeconomic modeling discussion on this problem in section V.

26. Cyert and DeGroot [1974] and DeCanio [1979] are examples of where learning causes convergence to rational expectations equilibria. Taylor's [1975] model results in eventual convergence to a rational expectations equilibrium but with an interim period during which predictions behave like adaptive expectations. Various alternative adjustment paths are discussed in Heiner [1989], with dynamics before equilibrium exhibiting adaptive rather than rational expectations. Heiner also points out the case where agents can remain a constant distance from the optimal target, because the optimal target itself does not converge to a stationary, period-one solution.

27. Haltiwanger and Waldman [1985] investigate the ramifications of allowing some agents to process information in a very sophisticated manner while others are much more limited in their capabilities. They find that when there are "congestion effects," such as in their freeway example, the equilibrium tends to be dominated by the sophisticated agents. In contrast, when the world is characterized by synergistic effects, such as in their computer-choice case, the equilibrium tends to be dominated by the naive agents.

28. Mishkin [1981] found that the bond market exhibits rational forecasting behavior, efficiently exploiting available information. This conclusion is not surprising, as well-informed speculators are likely to disproportionately dominate the outcome.

29. Information can obviously also be sold to workers or firms, but given that information is costly it is not efficient for everyone to buy this information nor will it be costless for buyers of this information to determine which sellers are advocating the correct model.

REFERENCES

Akerlof, George A., and Janet L. Yellen. "Can Small Deviations from Rationality Make Significant Differences to Economic Equilibria?" American Economic Review, September 1985, 708-20.

Alesina, Alberto, and Jeffrey Sachs. "Political Parties and the Business Cycle in the U.S. 1948-1984." Journal of Money, Credit, and Banking, February 1988, 63-82.

Allen, Stuart D. "The Federal Reserve and the Electoral Cycle." Journal of Money, Credit, and Banking, February 1986, 88-94.

Allen, Stuart D., Joseph M. Sulock, and William A. Sabo. "The Political Business Cycle: How Significant?" Public Finance Quarterly, January 1986, 107-112.

Beare, John. Macroeconomics: Cycles, Growth, and Policy in a Monetary Economy. New York: Macmillan Publishing, 1978.

Beck, Nathaniel. "The Fed and the Political Business Cycle." Contemporary Policy Issues, April 1991, 25-38.

Bender, Bruce, and John R. Lott, Jr. "Legislator Voting and Shirking: A Critical Review of the Literature." Public Choice, forthcoming 1996.

Blendon, Robert J., Tracey Stelzer Hyams, and John M. Benson. "Bridging the Gap between Expert and Public Views on Health Care Reform." Journal of the American Medical Association, May 19, 1993, 2573-7.

Bronars, Stephen G., and John R. Lott, Jr., "Do Campaign Donations Alter How a Politician Votes?" The Wharton School Working Paper, 1994.

Cochrane, John. "How Big Is the Random Walk in GNP?" Journal of Political Economy, October 1988, 893-920.

Cyert, Richard M., and Morris H. DeGroot. "Rational Expectations and Bayesian Analysis." Journal of Political Economy, May/June 1974, 521-36.

Davidson, Lawrence S., Michele Fratianni, and Jurgen von Hagen. "Testing for Political Business Cycles." Journal of Policy Modeling, Spring 1990, 35-59.

DeCanio, Stephen. "Rational Expectations and Learning from Experience." Quarterly Journal of Economics, February 1979, 47-57.

De Leeuw, Frank, and Michael J. McKelvey. "Price Expectations of Business Firms: Bias in the Short and Long Run." American Economic Review, March 1984, 99-109.

Ellis, Christopher J. "An Equilibrium Politico-Economic Model." Economic Inquiry, July 1989, 521-8.

Ellis, Christopher J., and Mark A. Thoma. "Credibility and Political Business Cycles." Journal of Macroeconomics, Winter 1993, 69-89.

Findlay, David W. "The Political Business Cycle and Republican Administrations: An Empirical Investigation." Public Finance Quarterly, July 1990, 328-38.

Frey, Bruno S., Werner W. Pommerehne, Fredrich Schneider, and Guy Gilbert. "Consensus and Dissension among Economists: An Empirical Inquiry." American Economic Review, December 1984, 986-94.

Friedman, Benjamin M. "Optimal Expectations and the Extreme Information Assumptions of 'Rational Expectations' Models." Journal of Monetary Economics, January 1979, 23-41.

Gramlich, Edward. "Models of Inflation Expectations Formation: A Comparison of Household and Economist Forecasts." Journal of Money, Credit, and Banking, May 1983, 155-73.

Grier, Kevin B. "Presidential Elections and Federal Reserve Policy: An Empirical Test." Southern Economic Journal, October 1987, 475-86.

Grossman, Gene M., and Elhanan Helpman. "The Politics of Free-Trade Agreements." American Economic Review, September 1995, 667-90.

Haltiwanger, John, and Michael Waldman. "Rational Expectations and the Limits of Rationality: An Analysis of Heterogeneity." American Economic Review, June 1985, 326-40.

-----. "Rational Expectations in the Aggregate." Economic Inquiry, October 1989a, 619-36.

-----. "Limited Rationality and Strategic Complements: The Implications for Macroeconomics." Quarterly Journal of Economics, August 1989b, 463-83.

Heiner, Ronald A. "Origin of Predictable Behavior: Further Modeling and Applications." American Economic Review, May 1985, 391-96.

-----. "Imperfect Decisions and the Law: On the Evolution of Legal Precedent and Rules." Journal of Legal Studies, June 1986, 227-61.

-----. "The Necessity of Imperfect Decisions." Journal of Economic Behavior and Organization, 10(1), 1988, 29-55.

-----. "The Origin of Predictable Dynamic Behavior." Journal of Economic Behavior and Organization, 12(2), 1989, 233-57.

Keil, Manfred W. "Is the Political Business Cycle Really Dead?" Southern Economic Journal, July 1988, 86-99.

Lott, John R., Jr. "Political Cheating." Public Choice, March 1987, 169-87.

Lott, John R., Jr., and Gertrud M. Fremling. "Time Dependent Information Costs, Price Controls, and Successive Government Intervention." Journal of Law, Economics, and Organization, Fall 1989, 293-306.

Lovell, Michael C. "Tests of the Rational Expectations Hypothesis." American Economic Review, March 1986, 110-24.

McCallum, B. T. "The Political Business Cycle: An Empirical Test." Southern Economic Journal, September 1978, 504-15.

Michaels, Robert J. "Reinterpreting the Role of Inflation in Politico-Economic Models." Public Choice, 1986, 113-24.

Mishkin, Frederic S. "Are Market Forecasts Rational?" American Economic Review, June 1981, 295-306.

Nordhaus, William D. "The Political Business Cycle." Review of Economic Studies, April 1975, 169-90.

-----. "Alternative Approaches to the Political Business Cycle." Brookings Papers on Economic Activity, no. 2, 1989, 1-49.

Peltzman, Sam. "How Efficient Is the Voting Market?" Journal of Law and Economics, April 1990, 27-63.

Poole, William, and David I. Meiselman. "Monetary Control and the Political Business Cycle/Avoidable Uncertainty and the Effects of Monetary Policy: Why Even Experts Can't Forecast." Cato Journal, Winter 1986, 685-707.

Richards, Daniel J. "Unanticipated Money and the Political Business Cycle." Journal of Money, Credit, and Banking, November 1986, 447-57.

Seater, John J. "Ricardian Equivalence." Journal of Economic Literature, March 1993, 142-90.

Simon, Herbert A. "Theories of Decision-Making in Economics and Behavioral Science." American Economic Review, June 1959, 253-83.

-----. "Rational Decision Making in Business Organizations." American Economic Review, September 1979, 493-513.

Taylor, John B. "Monetary Policy during a Transition to Rational Expectations." Journal of Political Economy, October 1975, 1009-21.

Wittman, Donald. "Democratic Market Failure: Myth or Reality." Roundtable discussion at the American Economic Association Meetings, 1996.

Fremling received her Ph.D. in economics from UCLA. Lott is the John M. Olin Visiting Law and Economics Fellow at the School of Law at the University of Chicago. We would like to thank Dennis Jansen, Michael Waldman, and three anonymous referees from this journal for their very helpful comments.
COPYRIGHT 1996 Western Economic Association International

Author: Fremling, Gertrud M.; Lott, John R., Jr.
Publication: Economic Inquiry
Date: Apr 1, 1996
Words: 10,299
