# Monetary Rewards and Decision Cost in Experimental Economics

This paper provides a theoretical framework and some evidence about
how the size of payoffs affects outcomes in laboratory experiments. We
examine theoretical issues related to the question of how payoffs can
matter, and what the trade-offs with nonmonetary arguments might be in
individual utility functions. Essentially, we accomplish this with an
effort theory of subject behavior.

Experimentalists frequently argue that experimental subjects may have motivations besides monetary gain that impinge upon their decision making, and that experimental results should be interpreted with this caveat in mind.(1) The literature on adaptive and behavioral economic modelling often cites decision-making cost as part of the implicit justification for such models.(2) Conlisk [1988] provides some examples of how "optimizing cost" (we will use the term "decision cost") can be explicitly integrated into modelling problems and suggests a generalization for the class of quadratic loss functions.

Our approach is to formulate the decision cost problem as one of balancing the benefit against the effort cost of reducing "error," the latter defined as the difference between the optimal decision in the absence of decision cost and the agent's actual decision. This normalization has the advantage that the implications of the model can be tested directly against data on the error properties of a wide range of reported experiments. Our approach also attempts to encompass and formalize the argument that decision makers may fail to optimize because of the problem of flat maxima, as in von Winterfeldt and Edwards [1986], or because of low opportunity cost for deviations from the optimal, as in Harrison [1989]. Since standard theory predicts that decision makers will make optimal decisions no matter how gently rounded the payoff function, the theory is misspecified and needs modification. When the theory is properly specified, there should be nothing left to say about opportunity cost or flat maxima; i.e., when the benefit margin is weighed against decision cost, there should be nothing left to forego. This is consistent with the arguments of Harrison and of von Winterfeldt and Edwards.

The theoretical approach we examine is based on a perspective originally suggested by Simon [1956] and operationalized by Siegel [1959]: rational choice theory is a correct first approximation to the analysis of decision behavior, but it is incomplete, and making it more complete requires the guidance of data from experimental designs motivated by this objective. Simon's original thesis was that "To predict how economic man will behave we need to know not only that he is rational, but also how he perceives the world--what alternatives he sees and what consequences he attaches to them" [1956, 271]. Thus there is no denial of human rationality; the issue is in what sense individuals are rational, and how far we can go with abstract objective principles of how "rational" people "should" act.

But if a study of payoff effects is in need of a theoretical foundation, it also requires evidence. If traditional economic models assume that only monetary reward matters, psychologists tend to assume that such rewards do not matter.(3) The evidence reviewed below supports the more commonsense view that rewards matter, and that neither of the polar views--only reward matters, or reward does not matter--is sustainable across the range of experimental economics. There will always be a discrepancy between precise theory and observation, and thus room for theory improvement. Since rational theory postulates motivated decision makers, it follows that varying reward levels is one of the many important tools needed to explore this discrepancy. Our fundamental view is that the experimentalist has as much to learn from experimental subjects about subjective rationality as human decision makers have to learn from the models that we call "rational."

I. MOTIVATION THEORY IN THE PRESENCE OF DECISION COST

In this section we develop a simple theoretical framework to help: (i) improve understanding of the circumstances that might yield predicted optimal decisions or deviations therefrom; and (ii) provide guidance in experimental design and in interpreting observations.

We begin with a statement of rational theory, as derived from the perspective of the theorist/experimentalist. Letting $X$, $W$, $\Theta$ and $Z$ be convex sets, the variables we want to identify are defined below.

1. $x \in X$, the subject's message decision variable, such as price, quantity, bid, forecast, etc. This variable is defined by the experimenter's interpretation of a theory in the context of a particular experimental design and institution.

2. $w \in W$, an environmental variable controlled as a "treatment" by the experimenter, such as commodity value(s), asset endowment, production cost, etc.

3. $\theta \in \Theta$, a random variable with distribution function $F(\theta)$ defined on $\Theta$. The function $F$, chosen by the experimenter, generates the appropriate probabilities in games against nature, or the appropriate uncertainty about other player types when modelling the subject's choice in a Harsanyi game of incomplete information. Thus, in a private value auction, $\theta$ is the uncertain value for each of the $N - 1$ competitors of a given bidder.

4. $\pi(x, w, \theta)$, the outcome function, controlled by the experimenter, denominated in experimental money (tokens, francs, etc.), and based on the motivation assumptions in the theoretical model. The function $\pi$ is assumed to be strictly concave in $x$, so that given $w$ and $F(\theta)$, the experimenter predicts that $x^*$ will be the unique optimal $x$ chosen by the subject.

5. $\lambda$, the scalar payoff transformation rate, controlled by the experimenter, that converts experimental money into the reward medium. We assume that this marginal conversion rate is constant, although there are experiments in which it is not; i.e., in which the conversion rate is some nonlinear increasing function $\lambda(\pi)$ of outcome.

The standard expected utility function, in terms of the above variables, is written

(1) $U(x, w; \lambda, \pi, F) = \int_\Theta u[\lambda\,\pi(x, w, \theta)]\,dF(\theta)$.

The first-order condition for $x^* = \arg\max U$ is

(2) $\lambda\,\phi_1 = 0$, where $\phi_1 = \int_\Theta u_1 \pi_1\,dF(\theta)$,

where a subscript $j$ denotes differentiation with respect to argument $j$. If utility is increasing in reward ($\lambda > 0$), then (2) implies $\phi_1 = 0$, with solution $x^* = x(w, F)$. The function $x(w, F)$ is the source of testable experimental hypotheses concerning the subject's predicted choice, $x^*$. Note that if $u$ is linear, or a power function of $\pi$, then $x^*$ is independent of $\lambda > 0$ and is optimal however small the opportunity cost of deviation, as measured by $\pi_{11}(x^*)$.
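The $\lambda$-invariance of $x^*$ under linear or power utility is easy to check numerically. The following is a minimal sketch in which the quadratic payoff $\pi(x) = 100 - (x - 3)^2$ and the exponent $\rho = 0.5$ are illustrative assumptions, not taken from the paper:

```python
# Numerical check that the optimizer x* of u(lambda * pi(x)) does not move
# with the conversion rate lambda when u is a power function.
# The payoff pi(x) = 100 - (x - 3)**2 is an assumed, strictly concave example.
import numpy as np

def argmax_x(lam, rho=0.5):
    xs = np.linspace(0.0, 6.0, 6001)
    pi = 100.0 - (xs - 3.0) ** 2          # strictly concave outcome function
    u = (lam * pi) ** rho                 # power utility of converted reward
    return xs[np.argmax(u)]

x_low, x_high = argmax_x(lam=0.1), argmax_x(lam=20.0)
# Both locate the same optimum x* = 3, however large or small lambda is.
```

Because a power function is a monotone transformation of $\pi$ for $\pi > 0$, any positive $\lambda$ leaves the argmax untouched; only the steepness of the payoff hill changes, not its location.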

We now examine the same problem from the perspective of the decision maker (subject). To do so we augment the list of variables 1-5 with the following:

6. $y \in X$, the value of the decision variable chosen by the subject, given his/her perception, evaluation, analysis and understanding of the instructions (augmented by experience where there is replication) and of the task to be performed. Outcome is now written $\pi(y, w, \theta)$.

7. $z \in Z$, an unobserved decision process variable controlled consciously, or unconsciously, by the subject in executing the task that results in $y$. One can think of $z$ as the decision cost or effort (concentration, attention, thinking, monitoring, reporting, acting) which the subject applies to the task presented by the experimenter. As with quarks in particle physics, we may have no direct measures of $z$, but we look for traces of its effects on the choice of $y$. If $z$ is recognized as lurking within every subjective model of decision, then we are primed to expect to find its traces; where $z$ is thought to be of substantial importance (as in Siegel's model below and in the general model we propose), we seek to establish this proposition by manipulating the experimental procedures that affect $z$ and thus $y$.

8. Now consider the equation

(3) $y = x^* + \epsilon(z, s)$,

e.g., $\epsilon(z, s) = s\,\xi(z)$, where $\epsilon(z, s)$ is a function, normalized with respect to $x^*$, specifying the effect of $z$ on the subject's choice of $y$. Think of $\xi(z)$ as the subject's production transformation function of effort, $z$, into decision $y$. An unobserved random variable, $s$, describing the "state" of the person at the time of decision, induces randomness on $\epsilon$ conditional on $z$. Observations on the effects of $s$ are obtained by repeated play of the task. More effort is postulated to narrow the distance between the predicted optimal ($x^*$) and actual ($y$) choices, and thereby to increase payoff. $\epsilon$ is naturally interpreted as decision error, and it is random across repeated play choices of $y$ for given $z$.(4) Some hypothesized properties of the error function are suggested in the discussion below.

9. $\mu > 0$, a scalar characteristic of the subject which measures the monetary equivalent of the subjective value of outcome $\pi$, on the assumption that there is self-satisfaction, weak or strong, in achieving any outcome $\pi$. This parameter is assumed to be additive with the reward scalar, $\lambda \ge 0$, and allows the model to account for nonrandom behavior when the salient exogenous reward is $\lambda = 0$.

It will be evident to the reader that any of the variables $x$, $y$, $z$, $w$ might be represented by vectors in place of scalars, but the latter are sufficient for examining the principles we want to address. Also, we omit the subscript $i$ on the appropriate variables and functions, it being understood that the perspective is always that of some particular person, $i$, such as yourself.
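The error technology in (3) can be illustrated with assumed functional forms. Here $\xi(z) = e^{-z}$ and $s \sim N(0, 1)$ are hypothetical choices, used only to show how greater effort shrinks the dispersion of $y$ around $x^*$:

```python
# Sketch of the error technology eps(z, s) = s * xi(z). The form
# xi(z) = exp(-z) and the state distribution s ~ N(0, 1) are illustrative
# assumptions, not specifications taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def decision_error(z, n=100_000):
    s = rng.standard_normal(n)            # unobserved "state" of the person
    return s * np.exp(-z)                 # eps(z, s) = s * xi(z), xi' < 0

var_low_effort = decision_error(z=0.5).var()
var_high_effort = decision_error(z=2.0).var()
# More effort z shrinks the spread of y = x* + eps around the optimum x*.
```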

We can now write the subjective expected utility function using the new variables,

(4) $\psi(y, z, w; \lambda, \mu, \pi, F) = \int_\Theta u[(\mu + \lambda)\,\pi(y, w, \theta),\, z]\,dF(\theta)$,

where $u_2 < 0$ is the marginal decision cost (disutility) of effort, $z$. Substituting from equation (3), the first-order condition for $z^* = \arg\max \psi$ is

(5) $\phi_1 \ge -\phi_2 / [(\mu + \lambda)\,\epsilon_1]$,

where $\phi_1 = \int_\Theta u_1 \pi_1\,dF(\theta)$ and $\phi_2 = \int_\Theta u_2\,dF(\theta)$.

We will examine three cases, each representing a possible solution to (5).

(i) Bounded Rationality Case

When $>$ holds in (5) we have a constrained solution with $z^*$ on the boundary of the set $Z$; e.g., if $Z = [0, \bar z]$ we have $z^* = \bar z$. This "bounded rationality" case can be important: there are physiological and intellectual limitations on human decision making ability, and when these limits are binding the agent's constrained optimal decision is $y = x^* + \epsilon(\bar z, s)$, independent of the reward, $\lambda$. One should think of $\lambda$ as operating on motivation, not on physiological and mental capacity. This case provides one formalization of Simon's concept of bounded rationality in decision making.

Now consider interior solutions, where the equality condition holds in (5). First, note that in contrast with equations (1)-(2), we have in (4)-(5) a well-defined maximum problem when $\lambda = 0$. This is essential in explaining why subject decisions are not just random responses in the absence of salient rewards.

(ii) Pure Decision Error Case

Consider the degenerate case in which marginal decision cost $\phi_2 \equiv 0$ and $\epsilon(z, s) \equiv \epsilon(s)$ in (3) and (4). Under these conditions effort does not enter the criterion function (4), the costless direct decision variable is $y$, and instead of (5) we get the condition $\phi_1 = 0$, which determines $y^* = y(w)$, where $y^* = x^* + \epsilon(s)$. This formulation is the same as in (1) and (2) except that it implements the decision-making hypothesis with an econometric specification of a decision error term (see McElroy [1987] for an examination of error models in production, cost and derived demand equation systems). This is usually recognized, ex post, in the form of the assumption that decision error, $y^* - x^* = \epsilon$, is randomly distributed with mean zero and variance $\sigma^2$; i.e., $\epsilon$ is not biased. Hypothesis testing normally proceeds on this maintained assumption. As we will see in the survey below, the data often do not contradict the condition $E(\epsilon) = 0$: subject choices, $y^*$, are distributed around a mean (or median) that is "close" to $x^*$. But there are exceptions, and at least some of these occur when the Euclidean distance between $x^*$ and the boundary of the set $X$ is at or near zero. In that case the data suggest that $E(\epsilon) \ne 0$. If decision error is random, then $E(\epsilon) = 0$ is incompatible with boundary maxima. So the idea is this: part of the reason why data may be consistent with predictions is that $x^*$ is far enough into the interior of $X$ that random unbiased decision errors cause no difficulty. Errors in the interior may of course be biased too, but they are necessarily biased at boundary optima, where the distribution of $\epsilon$ is asymmetrically truncated.
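The truncation argument can be made concrete with a small simulation. The feasible set $X = [0, 1]$ and the latent error scale below are assumed purely for illustration:

```python
# Illustration of why E(eps) = 0 must fail at a boundary optimum: if X = [0, 1]
# and x* = 1, symmetric latent errors are truncated into the feasible set.
# The normal error scale 0.1 is an assumed value for this sketch.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(0.0, 0.1, 200_000)    # symmetric latent decision errors

x_interior = 0.5
eps_interior = np.clip(x_interior + latent, 0.0, 1.0) - x_interior

x_boundary = 1.0
eps_boundary = np.clip(x_boundary + latent, 0.0, 1.0) - x_boundary

mean_interior = eps_interior.mean()       # approximately zero
mean_boundary = eps_boundary.mean()       # strictly negative: one-sided truncation
```

At the interior optimum almost no mass is clipped, so the observed error mean stays near zero; at the boundary all upward errors are cut off, biasing the mean downward.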

(iii) Dominance Case

Now consider the more general interior maximum defined by (5). In particular, (5) informs us that if equilibrium marginal decision cost, $-\phi_2 / [(\mu + \lambda)\,\epsilon_1]$, goes to zero as $\lambda$ is raised to some level $\bar\lambda$, then we have dominance at the reward level $\bar\lambda$ and higher; i.e., rewards are sufficiently salient to swamp decision cost.(5) Whether this property holds in any particular case, and what level of $\bar\lambda$, if any, is sufficient for dominance, is entirely an empirical question. We have already seen why this property might not hold: the solution value $z^*$ from (5) may be on the boundary of $Z$. Additional physical, mental or sensory effort may not be possible. Thus in a signal detection experiment, once a subject approaches the boundary of his/her auditory capability, little if any additional auditory improvement may be forthcoming by escalating reward payments. Similar considerations may apply for some subjects in almost any task.

The methodological implications of the above analysis are clear. In a new experimental situation, if the experimenter finds that decision error, $\epsilon$, is biased enough to contradict the theory, then the first thing to question is the experimental instructions and procedures. Can they be simplified? If not (the task is inherently difficult), does experience help? These are techniques that may help to reduce decision cost. The second thing to question is the payoff level. Try doubling, tripling, or an n-fold increase in $\lambda$. We do this frequently, and report in Smith and Walker [1992] the effects of five-, ten- and twenty-fold increases in auction experiments. This is not done for realism, since there are both low-stake and high-stake economic decisions in life, and all are of interest. One manipulates payoffs to increase understanding of possible trade-offs between the benefits and costs of optimal decisions, and to explore the depths and limits of objective optimality.

Where our model of the technology of errors is applied to the Nash equilibrium analysis of behavior, we assume that subjects are "boundedly rational" in the sense that their equilibrium choice behavior does not take into account the error properties of their rivals' choices (nor is it a best response to other subjects' actual error-prone choices). That such errors may affect the calculation of Nash "trembling-hand" equilibria has been demonstrated in the path-breaking theoretical work of Selten [1975], provided that the error structure of decision making is common knowledge: "...all the players have the same notions about how their fellow players slip..." (Kreps [1990, 439]). But experimental studies in bargaining, oligopoly and auction markets going back to Fouraker and Siegel [1963] have found that Nash models of single play behavior that assume common (payoff) knowledge actually perform best in repeated games under private (incomplete) information, and depart from such models under common information. Consequently, such models truly exhibit equilibrium behavior in that subjects tend to gravitate to, and remain near, such an equilibrium, but with error. In the examples in section III, we have not found a need to suppose that, from their point of view, subjects are solving for a trembling-hand equilibrium. Subjects are getting it right on average in the interior optimum cases. Thus, the simpler Nash models account for the central tendencies of the data, but not for the error.(6)

II. SOME COMPARATIVE TREATMENT PREDICTIONS WITH ADDITIVE SEPARABILITY

In this section we consider the implications of the case in which $\psi$ in (4) can be written in the additively separable form:

(6) $\psi = \phi(x^* + \epsilon) - C(z, \gamma)$,

where

$\phi(x^* + \epsilon) = \int_\Theta u[(\mu + \lambda)\,\pi(x^* + \epsilon(z, s),\, w,\, \theta)]\,dF(\theta)$.

The function $C(z, \gamma)$ expresses the subjective cost of effort $z$, with shift parameter $\gamma$. In addition to the conditions on $\epsilon(z, s) = s\,\xi(z)$ in (3) we assume $\phi_1 > 0$, $\phi_{11} < 0$, $C_1 > 0$, $C_{11} > 0$, $C_{12} > 0$. Also let $s \in S$ have the distribution function $H(s)$. In Smith and Walker [1992] we apply these assumptions, and test their implications, for first price auction theory.

Now approximate $\phi$ in (6) with the first three terms of its Taylor expansion at the point $x^*$. Then, since $\phi_1(x^*) = 0$, the linear term involving $\epsilon$ vanishes and we are left with

(7) $\phi(x^* + \epsilon) \approx \phi(x^*) + \phi_{11}(x^*)\,\epsilon^2/2$.

Next, substitute from (6) and (7) and define

(8) $\psi(z) = \int_S \psi(s)\,dH(s) = \phi(x^*) + \phi_{11}(x^*)\,\mathrm{var}(s)\,\xi^2(z)/2 - C(z, \gamma)$.

From the subject's perspective, the problem is to choose $z^* = \arg\max \psi(z)$, which is determined by(7)

(9) $\phi_{11}(x^*)\,\mathrm{var}(s)\,\xi(z^*)\,\xi_1(z^*) = C_1(z^*, \gamma)$.

By differentiating the equilibrium condition (9) it is straightforward to sign the following derivatives:

$dz^*/d\lambda > 0$; $dz^*/d\gamma < 0$;

$d\,\mathrm{var}(\epsilon)/d\lambda < 0$; $d\,\mathrm{var}(\epsilon)/d\gamma > 0$.

Increases in payoffs and/or decreases in decision cost are associated with increased decision effort, the observed consequence of which is a reduced variance of decision error. One "treatment" for lowering decision costs is experience: with increased experience decisions become easier and more routine, and we predict a reduction in decision error variance for given payoff levels.
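These comparative statics can be verified numerically under assumed functional forms. In the sketch below, risk neutrality, $\xi(z) = e^{-z}$, $C(z, \gamma) = \gamma z^2$, and $\phi_{11}(x^*) = -(\mu + \lambda)k$ with $k = 1$ are all illustrative assumptions; the model itself asserts none of these forms:

```python
# Numerical comparative statics for the interior effort condition (9).
# Assumed illustrative forms: xi(z) = exp(-z), C(z, gamma) = gamma * z**2,
# risk neutrality, and phi_11(x*) = -(mu + lambda) * k with k = 1.
import numpy as np

def z_star(lam, gamma, mu=0.1, k=1.0, var_s=1.0):
    """Grid-search arg max of psi(z) from (8), up to the constant phi(x*)."""
    z = np.linspace(0.0, 5.0, 5001)
    psi = -(mu + lam) * k * var_s * np.exp(-2.0 * z) / 2.0 - gamma * z ** 2
    return z[np.argmax(psi)]

# dz*/dlambda > 0: higher rewards elicit more decision effort.
z_lo_pay, z_hi_pay = z_star(lam=1.0, gamma=0.5), z_star(lam=10.0, gamma=0.5)
# dz*/dgamma < 0: a costlier task elicits less effort.
z_easy, z_hard = z_star(lam=1.0, gamma=0.1), z_star(lam=1.0, gamma=2.0)
# Observable implication: error variance var(s) * xi(z*)**2 falls with lambda.
var_eps_lo, var_eps_hi = np.exp(-2.0 * z_lo_pay), np.exp(-2.0 * z_hi_pay)
```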

III. EFFECT OF INCENTIVE REWARDS AND OPPORTUNITY COST ON PERFORMANCE IN EXPERIMENTAL ECONOMICS

There is a long experimental literature, going back at least to Siegel [1961] and Siegel and Fouraker [1960], in which monetary payments affecting subject opportunity cost are varied as a treatment variable and their controlled effects on performance are measured. There is also a large experimental literature on choice among risky alternatives by cognitive psychologists. Most of the psychology literature reports the results of experiments conducted without monetary reinforcement, but in which the "subject is instructed to do his best," as in Siegel [1961, 767]. Psychologists defend such hypothetical choice procedures on the grounds that money either does not matter or matters insignificantly, so that monetary rewards are unnecessary. For example, Dawes [1988, 122, 124, 131, 259] cites several examples of decision-making experiments in which the use of monetary rewards yields results "the same" or "nearly" the same as when choices were hypothetical: Slovic, Fischhoff and Lichtenstein [1982], Grether and Plott [1979], Tversky and Kahneman [1983] and Tversky and Edwards [1966]. But some contrary citations in the psychology literature show that monetary incentives do matter. Goodman, Saltzman, Edwards and Krantz conclude, "These data, though far from conclusive, should not enhance the confidence of those who use elicitation methods based on obtaining certainty equivalents of imaginary bets" [1979, 398]; Siegel, Siegel and Andrews state "...we have little confidence in experiments in which the 'payoffs' are points, credits(8) or tokens" [1964, 148]; and Messick and Brayfield [1964], passim, and Kroll, Levy and Rapoport [1988] agree. Thus even in the psychology literature there is evidence that rewards matter.

In the economics literature there is the important study of 240 farmers in India by Binswanger [1980; 1981], comparing hypothetical choice among gambles with choices whose real payoffs ranged to levels exceeding the subjects' monthly incomes. The hypothetical results were not consistent with the motivated measures of risk aversion, and with payoffs varied across three levels, subjects tended to show increased risk aversion at higher payoffs. Similarly, Wolf and Pohlman [1983] compare hypothetical with actual bids of a Treasury bill dealer and find that the dealer's measure of constant relative risk aversion using actual bid data is four times larger than under hypothetical assessment. In a recent study of risk preferences under high monetary incentives in China, Kachelmeier and Shehata [1991] report a significant difference between subject responses under low and very high monetary payoffs, and no difference between hypothetical and low monetary payments; but the usual anomalies long documented in tests of expected utility theory remain.

Several other studies report data in which monetary rewards make a difference in results. Plott and Smith [1978, 142] report results in which marginal trades occur far more frequently with commission incentives than without; Fiorina and Plott [1978] report committee decisions in which both mean deviations from theoretical predictions and standard errors are reduced by escalating reward levels; and Grether [1981] reports individual decision making experiments in which the incidence of "confused" behavior is reduced with monetary rewards, although subjects who appear not to be confused behave about the same with or without monetary rewards.

A dramatic example of how payoff levels can matter is found in Kroll, Levy and Rapoport [1988], who provide experimental tests of the separation theorem and the capital asset pricing model in a computer-controlled portfolio selection task. Two experiments are reported: experiment 1 (thirty subjects) and experiment 2 (twelve subjects). The payoffs in experiment 2 were ten times greater than those in experiment 1, averaging $165 per subject, or about $44 per hour (thirty times the prevailing student hourly wage in Israel). The authors find that performance is significantly improved, relative to the capital asset pricing model, by the tenfold increase in stakes, and suggest that "This finding casts some doubt on the validity of the results of many experiments on decision making which involve trivial amounts of money or no money at all" [1988, 514].

Forsythe et al. [1988] find that results in the dictator game are affected significantly by monetary incentives, and that under no-pay conditions the results in ultimatum games are inconclusive because they fail to be replicable. Doubling payoffs, however, does not affect behavior. With monetary incentives the authors strongly reject the fairness hypothesis.

Finally, an important study by McClelland et al. [1991] directly manipulates foregone expected profit in incentive decision mechanisms, with treatments making the payoff optimum more or less peaked. They find that where the mechanism is "transparently" simple (low decision cost), flat maxima do as well as peaked maxima; but where the mechanism is "opaque," requiring search, the absolute deviation of subjects' bids from the optimum was significantly reduced when the payoff function was more peaked.

A. Decision Making and Decision Cost Under Uncertainty

The study by Tversky and Edwards [1966] is of particular interest, since they found that paying (charging) five cents (as compared with no salient reward) when a subject makes a correct (incorrect) prediction is sufficient to yield outcomes closer to "the optimal" outcome. The task is the standard binary choice prediction experiment: two lights illuminate by an "independent trials" Bernoulli process with fixed probabilities $p$ and $1 - p$, but these probabilities are unknown to the subjects. The standard result, replicated dozens of times without subject monetary reinforcement, but with the exhortation that subjects do their best, is for the average subject to reach a stable asymptote characterized by probability matching. That is, the pooled proportion $\bar x$ of times the more frequent event is predicted satisfies $\bar x = p$. Since the expected proportion of correct predictions is $xp + (1 - x)(1 - p)$ when the more frequent event is chosen with frequency $x$, the "optimal" response is to set $x^* = 1$ (for $p > 1/2$). Tversky and Edwards report pooled total frequencies for 1000 trials above the matching level, both when $p = 0.60$ and when $p = 0.70$; the asymptotic levels (not reported) can be presumed to be somewhat higher. But they conclude, "Though most differences between the treatment groups were in the direction predicted by a normative model, Ss were far indeed from what one would expect on the basis of such a model" [1966, 682]. In passing they conjecture that "A formal model for the obtained data might incorporate a notion such as cost associated with making a decision."
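The payoff arithmetic above is easy to verify directly. With $p = 0.70$, probability matching earns an expected 58 percent correct, versus 70 percent for the optimal rule $x^* = 1$:

```python
# Expected proportion of correct predictions in the binary choice task,
# E[correct] = x*p + (1 - x)*(1 - p), comparing probability matching (x = p)
# with the objective optimum x = 1 for p > 1/2.
def expected_correct(x, p):
    return x * p + (1.0 - x) * (1.0 - p)

p = 0.70
matching = expected_correct(p, p)     # x = p: 0.70*0.70 + 0.30*0.30 = 0.58
optimal = expected_correct(1.0, p)    # x = 1: always predict E1 -> 0.70
```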

In fact, a formal model attempting to do this had been published and tested somewhat earlier by Siegel [1959], Siegel and Goldstein [1959], Siegel [1961], Siegel and Andrews [1962], and Siegel, Siegel and Andrews [1964]. Instead of accepting the standard conclusion that people did not behave rationally and rejecting the utility theory of choice, Siegel elected to explore the possibility that the theory was essentially correct but incomplete. In particular, citing Simon [1956], he argued that one should keep in mind the distinction between objective rationality, as viewed by the experimenter, and subjective rationality, as viewed by the subject given his perceptual and evaluational premises. In effect, Siegel placed himself in the position of a subject faced with hundreds of trials in the binary choice experiment. He postulated that (i) in the absence of monetary reinforcement the only reward would be the satisfaction (dissatisfaction) of a correct (incorrect) prediction, and (ii) the task is incredibly boring, since it involves both cognitive and kinesthetic monotony, so that in this context there was utility in varying one's prediction. A general two-state form of Siegel's model is to write the expected utility function (1) above in the form

(10) $U = u(a_{11})\,px + u(a_{12})\,x(1-p) + u(a_{21})\,p(1-x) + u(a_{22})\,(1-x)(1-p) + b\,x(1-x)$.

Again $p$ is the probability of event $E_1$, $(1-p)$ the probability of event $E_2$, and $x$ is the proportion of trials (the probability for one trial) that the subject chooses $E_1$. The term $u(a_{ij})$ is the utility of outcome $a_{ij}$, where $i$ refers to the prediction (choice) of $E_i$, and $j$ refers to the subsequent occurrence of event $E_j$. Hence $a_{11}$ is the outcome when the subject correctly predicts $E_1$, and $a_{12}$ the outcome when $E_1$ is incorrectly predicted. Now suppose we assume that $u(a_{ij}) = u^0_{ij} + u_m(a_{ij})$, where $u^0_{ij}$ is the utility of the outcome $(i, j)$ in the absence of monetary reward, $a_{ij}$ is the monetary payment (or charge) when $(i, j)$ obtains, and $u_m$ is the utility of money.

It is seen that (10) is simply a special form of (4): one in which the control variable is $F \equiv p \in [0, 1]$, $\theta$ is 1 if $E_1$ occurs and 0 if $E_2$ obtains, "effort" is assumed to be measured directly by $z \equiv x \equiv y \in [0, 1]$, and the utility of outcome is additively separable from the term $b\,x(1-x)$, which Siegel calls the utility of response variability (or the subjective value of relieving monotony). Response variability is measured by $x(1-x)$, a function which has the desirable property that it is maximized at $x = 1/2$, when diversification is largest. The constant $b$ is then the marginal utility of variability. Siegel's particular test model is the special case in which (i) $u(a_{ij}) = u^0 + u_m(a)$ if $i = j$, namely the reward $a$ is paid when the subject's prediction is correct on either event, and the outcome utility $u^0$ for a correct prediction is the same for either event; and (ii) $u(a_{ij}) = u_m(\bar a)$ if $i \ne j$, where $\bar a$ is the reward (a cost if $\bar a < 0$) when the prediction is wrong on either event, and outcome utility is zero any time the prediction is incorrect. Then (10) becomes

$U = [u^0 + u_m(a)][px + (1-p)(1-x)] + u_m(\bar a)[x(1-p) + p(1-x)] + b\,x(1-x)$,

where $u^0 + u_m(a)$ is the marginal utility of a correct prediction, and $u_m(\bar a)$ the marginal utility of an incorrect prediction. Since $U$ is everywhere strictly concave on $[0, 1]$, for an interior maximum of $U$ we want to satisfy

(11) $x^* = \frac{1}{2} + \frac{2p - 1}{2} \cdot \frac{u^0 + u_m(a) - u_m(\bar a)}{b}$.

There are three cases for which Siegel reports data.

Case 1. $a = \bar a = 0$ (no money changes hands, and we normalize $u_m(0) = 0$), the no payoff treatment. Then (11) yields

$x^*_1 = \frac{1}{2} + \frac{2p - 1}{2} \cdot \frac{u^0}{b}$.

This case is particularly interesting because it explains probability matching behavior. If the marginal rate of substitution of variability for a correct prediction is unity, $(u^0/b) = 1$, then $x^*_1 = \frac{1}{2} + \frac{2p - 1}{2} = p$.

Case 2. $a > 0$, $\bar a = 0$, the payoff treatment; i.e., you get paid when you are right and pay nothing when you are wrong. Then from (11),

$x^*_2 = \frac{1}{2} + \frac{2p - 1}{2} \cdot \frac{u^0 + u_m(a)}{b}$.

Case 3. $a > 0$, $\bar a = -a$, the payoff-loss treatment; you receive $a$ cents when you are right and lose $a$ cents when you are wrong. Then

$x^*_3 = \frac{1}{2} + \frac{2p - 1}{2} \cdot \frac{u^0 + u_m(a) - u_m(-a)}{b}$.

Since $u_m(-a) < 0$, by construction we get the testable implication that $x^*_1 < x^*_2 < x^*_3$.
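Maximizing (10) under Siegel's special case gives an interior solution of the form $x^* = \frac12 + \frac{2p-1}{2}\frac{u^0 + u_m(a) - u_m(\bar a)}{b}$, where $a$ is the payment for a correct prediction and $\bar a$ the payment (negative for a loss) for an incorrect one. The numeric utility values in the sketch below ($u^0 = b = 1$, a linear $u_m$) are illustrative assumptions chosen only to exhibit the predicted ordering of the three treatments:

```python
# Siegel's three treatments via the interior solution
# x* = 1/2 + (1/2)*(2p - 1)*(u0 + um_gain - um_loss)/b,
# with illustrative utility values (u0 = b = 1, um linear) assumed for the sketch.
def x_star(p, u0, um_gain, um_loss, b):
    x = 0.5 + 0.5 * (2.0 * p - 1.0) * (u0 + um_gain - um_loss) / b
    return min(x, 1.0)                    # choices cannot exceed the boundary of [0, 1]

p, u0, b = 0.70, 1.0, 1.0
x_no_pay = x_star(p, u0, 0.0, 0.0, b)     # Case 1: matching, x* = p = 0.70
x_pay = x_star(p, u0, 0.5, 0.0, b)        # Case 2: paid when right
x_pay_loss = x_star(p, u0, 0.5, -0.5, b)  # Case 3: paid when right, fined when wrong
```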

Based on data in Siegel et al. [1964], Figure 1 provides histograms of the distribution of subjects' choice frequencies, $\bar x$, in the final (stable-state) block of twenty trials (100 total trials) under each of the reward conditions: no payoff, payoff, payoff-loss. In the payoff condition $a$ = five cents, and in the payoff-loss condition subjects gained or lost five cents ($a = -\bar a$ = five cents). As predicted by the Siegel model there is an observed increase in the pooled mean choice proportion, $\bar x$, with increasing payoff motivation. We also compute from Siegel et al. [1964] the root mean square (decision) error, $S$, in Figure 1, and note that it declines monotonically with increasing motivation. Subject predictions not only shift toward the objective optimal choice, $x^* = 1$, with increasing rewards; the variability of choices also decreases, and under the highest motivation, payoff-loss, one in four subjects is at this boundary maximum.(9)

Siegel's model proposed a resolution of the paradox of "irrational" behavior in binary choice and provided new testable implications that were consistent with the results of new experiments. He showed that the previous psychology literature, which had concluded that people were not expected-utility maximizers, was the exception that proved the rule: subjects had no monetary incentive to maximize expected utility.

How far one can go in using decision cost concepts to resolve anomalies in standard individual decision theory remains open. A test case may be provided by the interesting work of Herrnstein and his coworkers, e.g., Herrnstein, Lowenstein, Prelec, and Vaughn |1991~. They study an environment much more complicated for the subject than the Bernoulli binary choice problem: the reward from playing right or left depends upon the fraction of right-key choices in the previous N trials, where N is a treatment variable controlled by the experimenter. In the steady state, if R is the number of right-key choices in the last N trials, then the payoff is

|pi~(R/N) = (R/N)f(R/N) + |(N-R)/N~g(R/N), 0 |is less than or equal to~ R |is less than or equal to~ N

where f(|center dot~) and g(|center dot~) are the current payoffs on right and left, respectively. If N is large, the effect of the current choice on future behavior is small and myopically difficult to perceive. Maximization for interior solutions is determined by the condition that

f(R/N) - g(R/N) + (R/N)f'(R/N) + |(N-R)/N~g'(R/N) = 0.

Matching behavior in this case (Herrnstein calls it melioration) leads to the condition that f(R/N) = g(R/N). Herrnstein et al. |1991~ report results with varying degrees of support for the two hypotheses. For example, better information and rewards ('coin values') improved maximization marginally. But the payoff functions are all characterized by flat maxima, thus making the decision problem sensitive to decision costs and other factors affecting net subjective value.
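The gap between maximization and melioration, and the flatness of the maximum, can be illustrated with linear payoff schedules f and g. These schedules are our own illustrative choice, not the ones used by Herrnstein et al.:

```python
# Steady-state payoff pi(r) = r*f(r) + (1 - r)*g(r), with r = R/N the
# fraction of right-key choices.  f and g are hypothetical linear
# payoff schedules chosen for illustration.

def f(r): return 8 - 6 * r            # current payoff on the right key
def g(r): return 2 + 2 * r            # current payoff on the left key
def pi(r): return r * f(r) + (1 - r) * g(r)

r_max = 0.5      # maximization: f - g + r*f' + (1-r)*g' = 8 - 16r = 0
r_melio = 0.75   # melioration: f(r) = g(r), i.e., 8 - 6r = 2 + 2r
print(pi(r_max), pi(r_melio))         # 4.0 vs 3.5
print(pi(0.6))                        # ~3.92: the maximum is flat
```

In this example a 20 percent deviation from the maximizing fraction costs only 2 percent of the payoff, so with any positive decision cost observed behavior can easily drift toward the melioration point.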

B. Bilateral Bargaining and Cournot Oligopoly

In their first work on bilateral bargaining Siegel and Fouraker |1960~ studied a simple two-person--one buyer, one seller--form of what later became known as the double auction. The buyer is given a profit schedule based on a concave redemption function R(Q) for differing quantities, Q, of the commodity he might purchase from the seller, and the latter is given a profit schedule derived from a convex cost function, C(Q), for different quantities she might sell to the buyer. The message space for each is the two-tuple (P,Q), a price and a quantity bid or offered. Thus the buyer (seller) might begin with (|P.sub.1~, |Q.sub.1~). The seller either accepts or makes an offer, (|P.sub.2~, |Q.sub.2~). The buyer responds with an acceptance, or a new bid (|P.sub.3~, |Q.sub.3~), and so on until an agreement is reached or the time limit expires. In one experiment, consisting of eleven bargaining pairs, the Pareto optimal solution had the property that a one-unit deviation in quantity from the optimum led to total profit deviations of ten and sixteen cents. Referring to column (1) in Table I, we call this the "low" payoff condition. The authors' expressed concern was that this relatively "flat maximum" might contribute to the variability of outcomes across the bargaining pairs. Consequently, they altered the payoff tables so that the joint profit declined symmetrically by sixty cents with a one-unit deviation in quantity from the optimum, and conducted two replications of the original experiment (twenty-two bargaining pairs). This is referred to as the "high" payoff condition in Table I. Note that the mean error declined from $0.545 to $0.091, and, as reported by the authors, this treatment had no statistically significant effect on the strong tendency for bargainers to approach the predicted Pareto optimal outcome.
However, the mean square error declined substantially from the low to high payoff condition so that increasing the opportunity cost of missing the optimum induced a tighter clustering of the data in the neighborhood of the optimum.
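The two statistics compared throughout Table I--mean error and root mean square error--can be computed with a short sketch. The deviation lists below are hypothetical, chosen only to show how the mean error can be small under both payoff conditions while the mean square error still falls sharply:

```python
import math

# Mean error (M) and root mean square error (S) of quantity deviations
# from the Pareto optimal agreement.  The deviation lists are
# hypothetical, not taken from the Siegel-Fouraker data.

def mean_error(devs):
    return sum(devs) / len(devs)

def rms_error(devs):
    return math.sqrt(sum(d * d for d in devs) / len(devs))

low = [-3, 2, -1, 3, -2, 1]       # "low" payoff pairs: wide scatter
high = [-1, 0, 1, 0, -1, 1]       # "high" payoff pairs: tight scatter

print(mean_error(low), rms_error(low))    # 0.0, about 2.16
print(mean_error(high), rms_error(high))  # 0.0, about 0.82
```

Both samples are centered on the optimum, so a comparison of means alone would show no treatment effect; the tighter clustering shows up only in the second moment, which is the pattern the Siegel-Fouraker data display.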

This concern for the relevance of payoff levels and the opportunity cost of deviations from the theoretical predictions carried over into their subsequent studies of bargaining and oligopoly. In Fouraker and Siegel |1963~, their two-person bargaining experiments were extended to the first-mover case in which the seller begins by announcing a price, followed by the buyer choosing a quantity. In the repeated game this process is replicated for a total of nineteen regular transactions, followed by an announced "final" twentieth transaction, followed by a special twenty-first transaction in which all payoffs were tripled. TABULAR DATA OMITTED In Table I we summarize three experiments, columns (2), (3) and (4), in which the results of the twentieth "low" payoff transactions are compared with the twenty-first "high" payoff transactions. Comparing the means, |M.sub.L~ and |M.sub.H~, in these columns, we generally observe for both buyers and sellers in the bargaining pairs comparably small mean error deviations under the two payoff conditions. But the mean square error, |Mathematical Expression Omitted~, tends to be higher for the low payoff (and low opportunity cost) condition than the mean square error, |Mathematical Expression Omitted~, for high payoffs. An exception occurs in column (4) for the buyers. In this case one buyer among the twelve "high" payoff bargaining pairs responded with a punishing quantity of zero. This outlier depressed the mean error and greatly increased the mean square error. In columns (5) and (6) we report the results from Siegel and Fouraker in which payoffs were manipulated in their Cournot oligopoly experiments. Here the authors departed from their use of a final triple-payoff round with the same subjects. Instead they ran one group of duopolists and one of triopolists with bonus rewards in addition to the profit table rewards used in the regular groups. The bonuses were $8, $5 and $2 paid to the first, second and third highest profit makers.
As recorded in columns (5) and (6), this caused no important change in mean error between the "low" and "high" payoff groups. The mean square error declined for duopolies and increased slightly for triopolies (the latter occasioned by one outlying observation).

From the above summary it is apparent that although Siegel and Fouraker undertook no thoroughly systematic investigation of the effect of payoff opportunity costs on market outcomes, they nevertheless demonstrated sensitivity to the possibility that such effects might be important. In particular their data suggest that the most likely effect of increasing the opportunity cost of nonoptimal decisions is to reduce the mean square error deviation from optimality.

Recently, Drago and Heywood |1989, 993~ report data for tournament and piece rate experiments as in Bull, Schotter and Weigelt |1987~ showing a very large reduction in the variance of observations when the payoff function is transformed so that it is more sharply peaked. Support for the predicted optimal behavior is observed, however, in all reported payoff environments. The tournament is a strategic game; the piece rate is a game against nature. In both environments, the optimum is an interior point in a nonnegative real interval.

C. Double Auction Markets

In the "swastika" supply and demand market underlying the results in Table II each of eleven buyers is assigned an induced value of $4.20 and each of sixteen sellers a cost of $3.10 as in Smith |1965; 1976~. Thus excess supply is e = 5 units at all prices in the interval |3.10, 4.20~. A commission of $0.05 is paid to provide a minimum inducement to trade marginal units. Under "low" payoffs four of twenty-seven subjects were chosen randomly to be paid. Under "high" payoffs all were paid. The competitive equilibrium is $3.10 in the sense that price will tend to decrease at any price above $3.10, although there is excess supply at $3.10. In this case the equilibrium is at a boundary optimum with all surplus obtained by the buyers. In Table II we list the mean and mean square errors by "low" and "high" payoff condition for each trading period. TABULAR DATA OMITTED The somewhat slower convergence for these markets than is the rule for more symmetric markets is particularly pronounced under the "low" payoff treatment. Note that experience across periods lowers error variance for both low and high payoff treatments. In this design all price errors (deviations from equilibrium) are necessarily positive for individually rational agents. Hence, decision error is biased, and insofar as low motivation increases such error the effect must necessarily reduce support for theoretical predictions.
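The upward bias in decision error built into this design can be checked mechanically. The contract prices in the sketch below are hypothetical; the values, costs, and trader counts are those of the design described above:

```python
# "Swastika" design: eleven buyers with induced value $4.20, sixteen
# sellers with cost $3.10, so excess supply is 16 - 11 = 5 units at
# any price above $3.10 and the competitive equilibrium is $3.10.
# Individually rational contracts lie in [3.10, 4.20], so every
# deviation from equilibrium is nonnegative.  Prices are hypothetical.

buyers, sellers = 11, 16
value, cost = 4.20, 3.10
equilibrium = cost
assert sellers - buyers == 5              # excess supply at any viable price

prices = [3.55, 3.40, 3.30, 3.22, 3.15]   # hypothetical converging contracts
errors = [p - equilibrium for p in prices]
assert all(e >= 0 for e in errors)        # error is one-sided by construction
mean_error = sum(errors) / len(errors)
print(round(mean_error, 3))               # biased above zero
```

Because the error distribution is one-sided, any increase in its spread moves the mean away from the prediction, which is why low motivation works directly against the theory in boundary-equilibrium designs.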

Jamal and Sunder |1991~ have undertaken the first systematic examination of the effects of (salient) monetary rewards in oral double auction trading using symmetric supply and demand designs. Their preliminary results support the conclusion that in the absence of prior experience and salient rewards (i.e., using fixed payments, independent of performance), markets do not converge reliably to the competitive equilibrium prediction, but do converge in the presence of such rewards. However, once subjects have gained experience under salient rewards, they converge in the usual double auction manner although they receive only fixed nonsalient rewards. Our interpretation is that, once sufficiently motivated to have mastered the process of double auction trading in simple environments, they in effect become detached professionals whose actions require little thought or attention.

VI. SUMMARY AND CONCLUSIONS

A survey of experimental papers which report data on the comparative effects of subject monetary rewards (including no rewards) shows a tendency for the error variance of the observations around the predicted optimal level to decline with increased monetary reward. Some studies report observations that fail to support the predictions of rational models, but as reward level is increased the data shift toward these predictions. Many of these latter studies have the common characteristic that the predictions of rational theory represent a solution on the boundary of the constraint set. For example, in the binary choice task, the optimal response is to predict the more frequent event 100 percent of the time. Any decision error in these contexts necessarily yields a central tendency that deviates from the rational prediction. Before such cases can be judged to have falsified the theory, it is necessary to establish that increased payoffs fail to move the observations closer to the predicted boundary maxima.

Many of these results are consistent with an "effort" or labor theory of decision making. According to this theory better decisions--decisions closer to the optimum, as computed from the point of view of the experimenter/theorist--require increased cognitive and response effort, which is disutilitarian. From the point of view of the decision maker the problem is to achieve a balance between the benefits of better decision making and the effort cost of decision. The experimenter/theorist predicts an optimal decision which is a special case of the decision that is optimal from the perspective of the subject. Since increasing the reward level causes an increase in effort, the new model predicts that subject decisions will move closer to the theorist's optimum and result in a reduction in the variance of decision error. But this predicted shift toward optimality is qualified if effort is already constrained by the maximum that can be supplied, which would be the case for very complex decision problems. An example of the latter may be the task studied by Herrnstein et al. |1991~.

1. As in Siegel |1959~, Smith |1976; 1980; 1982~ and Wilcox |1989; 1992~.

2. Some who do so are Day and Groves |1975~, Nelson and Winter |1982~ and Heiner |1986~.

3. See, for example, Siegel |1959~ and von Winterfeldt and Edwards |1986~, also Kroll, Levy and Rapoport |1988~ who are among the important exceptions. In fact rewards are not of exclusionary importance, and one of our concerns in the model to be presented is to account for the fact that one does not observe arbitrary and random behavior when there are no salient rewards. But because rewards do matter, they cannot be ignored in testing the models proposed by economic and game theorists. We think psychologists have focused on experiments without rewards because they are primarily interested in cognitive processes as in Smith |1991~. Their research suggests that monetary rewards are not crucial in studying such processes, but this is controversial.

4. Note, however, that |epsilon~ is not "error" from the point of view of the subject weighing (albeit unconsciously) benefit against decision cost. It is the experimentalist who interprets |epsilon~ as a prediction error of the theory, whose randomness derives from the unobserved random variable, s. For other theoretical treatment of the effect of errors specific to particular decision problems, see Hey and Di Cagno |1990~ and Berg and Dickhaut |1990~. In none of these approaches need the subject be aware of the effects of effort on decision. Our motivation is to model the decline in errors that is often associated with increased payoffs.

5. Cf, Harrison |1989~ and Smith |1976~.

6. But there are clearly games in which one can account for the predictive failure of the complete information model by reformulation as an incomplete information game in which each player responds strategically to the error in the play of other(s). For an excellent example see McKelvey and Palfrey |1990~ where the standard model fails to predict outcomes in the centipede game, but a reformulation as a game of incomplete information, in which the players make action errors and hold beliefs subject to error, is able to account for the experimental data. As they suggest, the model could probably be improved by making error rates depend upon decision utility differences.

7. See Theil |1971, 192~ for a derivation showing that the variance of error in a behavioral equation is inversely proportional to the second derivative of the criterion function at its optimum. Theil's interpretation, however, is the reverse of ours in that the actual decision (y, in our notation) is treated as nonstochastic, and the optimal decision (our x*) as stochastic as in Theil |1971, 193~; nor does Theil interpret the error as an economic variable subject to control by the decision effort of the agent.

8. It is now well documented that grade credits compare well with monetary rewards when the payments are salient as in Isaac, Walker and Williams |1991~ and in Kormendi and Plott |1982~.

9. There are two technical problems concerning this literature. It appears that in all cases the research design constrained the event realizations so that in fact the process was not Bernoullian. Siegel et al. |1964~ followed the earlier literature in randomizing by blocks of twenty trials. This means that for p = 0.75 in every sequence of twenty trials the realizations were constrained to yield fifteen "left" and five "right"; i.e. sampling occurred without replacement. A second shortcoming of this literature is that in its day no econometric procedures were available to estimate individual asymptotic probabilities using all the data, e.g., logit, and the analysis did not focus on individual behavior which is what Siegel's model is about.
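The constrained randomization described in this note is easy to state as a sketch (block size twenty and p = 0.75 as in the experiments; the code is our illustration):

```python
import random

# Block randomization as in Siegel et al. [1964]: each block of twenty
# trials contains exactly fifteen "left" and five "right" realizations,
# i.e., sampling without replacement, so the process is not Bernoulli.

def block_randomized(n_blocks, rng):
    sequence = []
    for _ in range(n_blocks):
        block = ["L"] * 15 + ["R"] * 5
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

rng = random.Random(0)
trials = block_randomized(5, rng)         # 100 trials, as in the experiments
assert len(trials) == 100
# every block matches p = 0.75 exactly; a Bernoulli process would not
assert all(trials[i:i + 20].count("L") == 15 for i in range(0, 100, 20))
```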

REFERENCES

Berg, Joyce E., and John W. Dickhaut. "Preference Reversals: Incentives Do Matter." University of Chicago, November 1990.

Binswanger, Hans P. "Attitudes Toward Risk: Experimental Measurement in Rural India." American Journal of Agricultural Economics, August 1980, 395-407.

-----. "Attitudes Toward Risk: Theoretical Implications of an Experiment in Rural India." Economic Journal, December 1981, 867-90.

Bull, Clive, Andrew Schotter, and Keith Weigelt. "Tournaments and Piece Rates: An Experimental Study." Journal of Political Economy, February 1987, 1-33.

Conlisk, John. "Optimization Cost." Journal of Economic Behavior and Organization, April 1988, 213-28.

Dawes, Robyn M. Rational Choice In An Uncertain World. New York: Harcourt Brace Jovanovich, 1988.

Day, Richard H., and Theodore Groves. Adaptive Economic Models. New York: Academic Press, 1975.

Drago, Robert, and John S. Heywood. "Tournaments, Piece Rates, and the Shape of the Payoff Function." Journal of Political Economy, August 1989, 992-8.

Fiorina, Morris P., and Charles R. Plott. "Committee Decisions Under Majority Rule." American Political Science Review, June 1978, 575-98.

Forsythe, Robert, Joel L. Horowitz, N. E. Savin, and Martin Sefton. "Replicability, Fairness and Pay in Experiments with Simple Bargaining Games." University of Iowa Working Paper 88-30, December 1988.

Fouraker, Lawrence, and Sidney Siegel. Bargaining and Group Decision Making. New York: McGraw Hill Book Co., 1963.

Goodman, Barbara, Mark Saltzman, Ward Edwards, and David H. Krantz. "Prediction of Bids for Two-Outcome Gambles in a Casino Setting." Organizational Behavior and Human Performance, 24(3), December 1979, 382-99.

Grether, David M. "Financial Incentive Effects and Individual Decision Making." Social Science Working Paper No. 401, California Institute of Technology, 1981.

Grether, David, and Charles Plott. "Economic Theory of Choice and the Preference Reversal Phenomenon." American Economic Review, September 1979, 623-38.

Harrison, Glenn. "Theory and Misbehavior in First Price Auctions." American Economic Review, September 1989, 749-62.

Heiner, R.A. "Uncertainty, Signal Detection Experiments, and Modelling Behavior," in The New Institutional Economics, edited by R. Langlois. New York: Cambridge University Press, 1986, 59-115.

Herrnstein, R. J., G. Lowenstein, D. Prelec, and W. Vaughn, Jr. "Utility Maximization and Melioration: Internalities in Individual Choice." Department of Psychology, Harvard University, Draft, 1 April 1991.

Hey, John, and D. Di Cagno. "Circles and Triangles: An Experimental Estimation of Indifference Lines in the Marschak-Machina Triangle." Journal of Behavioral Decision Making, 3(4), 1990, 279-306.

Isaac, R. Mark, James M. Walker, and Arlington W. Williams. "Group Size and the Voluntary Provision of Public Goods: Experimental Evidence Utilizing Large Groups." Indiana University Working Paper, 1991.

Jamal, Karim, and Shyam Sunder. "Money Vs. Gaming: Effects of Salient Monetary Payments in Double Oral Auctions." Organizational Behavior and Human Decision Processes, 49(1), June 1991, 151-66.

Kachelmeier, Steven J., and Mohamed Shehata. "Examining Risk Preferences Under High Monetary Incentives: Experimental Evidence From the People's Republic of China." Graduate School of Business, University of Texas at Austin, Draft, June 1991.

Kormendi, Roger C., and Charles R. Plott. "Committee Decisions Under Alternative Procedural Rules: An Experimental Study Applying a New Nonmonetary Method of Preference Inducement." Journal of Economic Behavior and Organization, June/Sept. 1982, 175-95.

Kreps, David M. A Course in Microeconomic Theory. Princeton, N.J.: Princeton University Press, 1990.

Kroll, Yoram, Haim Levy, and Amnon Rapoport. "Experimental Tests of the Separation Theorem and the Capital Asset Pricing Model." American Economic Review, June 1988, 500-19.

McClelland, Gary, Michael McKee, William Schulze, Elizabeth Beckett, and Julie Irwin. "Task Transparency versus Payoff Dominance in Mechanism Design: An Analysis of the BDM," Laboratory for Economics and Psychology, University of Colorado, June 1991.

McElroy, Marjorie B. "Additive General Error Models for Production Cost and Derived Demand or Share Systems." Journal of Political Economy, August 1987, 737-57.

McKelvey, Richard D., and Thomas R. Palfrey. "An Experimental Study of the Centipede Game." Social Science Working Paper No. 732, California Institute of Technology, May 1990.

Messick, Samuel, and Arthur H. Brayfield. Decision and Choice. New York: McGraw-Hill, 1964.

Nelson, Richard R., and S. G. Winter. An Evolutionary Theory of Economic Change. Cambridge, Mass: Harvard University Press, 1982.

Plott, Charles R., and Vernon L. Smith. "An Experimental Examination of Two Exchange Institutions." Review of Economic Studies, February 1978, 133-53.

Selten, Reinhard. "Re-examination of the Perfectness Concept for Equilibrium Points in Extensive Games." International Journal of Game Theory, 4(1), 1975, 25-55.

Siegel, Sidney. "Theoretical Models of Choice and Strategy Behavior: Stable State Behavior in the Two-Choice Uncertain Outcome Situation." Psychometrika, 24(4), December 1959, 303-16.

-----. "Decision Making and Learning Under Varying Conditions of Reinforcement." Annals of the New York Academy of Science, 89, 28 January 1961, 766-83.

Siegel, Sidney, and Julia Andrews. "Magnitude of Reinforcement and Choice Behavior in Children." Journal of Experimental Psychology, 63(4), April 1962, 337-41.

Siegel, Sidney, and Lawrence Fouraker. Bargaining and Group Decision Making: Experiments in Bilateral Monopoly. New York: McGraw-Hill, 1960.

Siegel, Sidney, and D. A. Goldstein. "Decision-Making Behavior in a Two-Choice Uncertain Outcome Situation." Journal of Experimental Psychology, 57(1), January 1959, 37-42.

Siegel, Sidney, Alberta Siegel, and Julia Andrews. Choice, Strategy, and Utility. New York: McGraw-Hill, 1964.

Simon, Herbert A. "A Comparison of Game Theory and Learning Theory." Psychometrika, 21(3), September 1956, 267-72.

Slovic, Paul, B. Fischhoff, and S. Lichtenstein. New Directions for Methodology of Social and Behavioral Science: Question Framing and Response Consistency, No. 11, San Francisco: Jossey-Bass, 1982.

Smith, Vernon L. "Experimental Auction Markets and the Walrasian Hypothesis." Journal of Political Economy, August 1965, 387-93.

-----. "Experimental Economics: Induced Value Theory." American Economic Review, May 1976, 274-79.

-----. "Relevance of Laboratory Experiments to Testing Resource Allocation Theory," in Evaluation of Econometric Models, edited by J. Kmenta and J. Ramsey. New York: Academic Press, 1980, 345-77.

-----. "Microeconomic Systems as an Experimental Science." American Economic Review, December 1982, 923-55.

-----. "Rational Choice: The Contrast Between Economics and Psychology." Journal of Political Economy, November 1991, 877-97.

Smith, Vernon L., and James M. Walker. "Rewards, Experience and Decision Costs in First Price Auctions." Economic Inquiry, April 1993, 237-44.

Theil, Henri. "An Economic Theory of the Second Moments of Disturbances of Behavioral Equations." American Economic Review, March 1971, 190-4.

Tversky, Amos, and Ward Edwards. "Information Versus Reward in Binary Choice." Journal of Experimental Psychology, 71(5), May 1966, 680-3.

Tversky, Amos, and Daniel Kahneman. "Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgement." Psychological Review, 90(4), October 1983, 293-315.

von Winterfeldt, Detlof, and Ward Edwards. Decision Analysis and Behavioral Research. Cambridge: Cambridge University Press, 1986.

Wilcox, Nathaniel T. "Well-Defined Loss Metrics and the Situations that Demand Them." Economic Science Association Meetings, Tucson, AZ, 28-29 October 1989.

-----. "Incentives, Complexity, and Time Allocation in a Decision-Making Environment." Public Choice/Economic Science Association Meetings, New Orleans, 27-29 March 1992.

Wolf, Charles, and Larry Pohlman. "The Recovery of Risk Preferences from Actual Choice." Econometrica, May 1983, 843-50.

The theoretical approach we examine is based on a perspective originally suggested by Simon |1956~, and operationalized by Siegel |1959~: rational choice theory is a correct first approximation to the analysis of decision behavior, but it is incomplete, and making it more complete requires the guidance of data from experimental designs motivated by this objective. Simon's original thesis was that "To predict how economic man will behave we need to know not only that he is rational, but also how he perceives the world--what alternatives he sees and what consequences he attaches to them" |1956, 271~. Thus there is no denial of human rationality; the issue is in what sense individuals are rational and how far we can go with abstract objective principles of how "rational" people "should" act.

But if a study of payoff effects is in need of a theoretical foundation, it also requires evidence. If traditional economic models assume that only monetary reward matters, psychologists tend to assume that such rewards do not matter.(3) The facts reviewed below support the more commonsense view that rewards matter, and that neither of the polar views--only reward matters, or reward does not matter--is sustainable across the range of experimental economics. There will always be a discrepancy between precise theory and observation, and thus room for theory improvement. Since rational theory postulates motivated decision makers, it follows that varying reward levels is one of the many important tools needed to explore this discrepancy. Our fundamental view is that the experimentalist has as much to learn from experimental subjects about subjective rationality as human decision makers have to learn from the models that we call "rational."

I. MOTIVATION THEORY IN THE PRESENCE OF DECISION COST

In this section we develop a simple theoretical framework to help: (i) improve understanding of the circumstances that might yield predicted optimal decisions or deviations therefrom; and (ii) provide guidance in experimental design and in interpreting observations.

We begin with a statement of rational theory, as derived from the perspective of the theorist/experimentalist. Letting X, W, |theta~ and Z be convex sets, the variables we want to identify are defined below.

1. x |epsilon~ X, the subject's message decision variable such as price, quantity, bid, forecast, etc. This variable is defined by the experimenter's interpretation of a theory in the context of a particular experimental design and institution.

2. w |epsilon~ W, an environmental variable controlled as a "treatment" by the experimenter such as commodity value(s), asset endowment, production cost, etc.

3. |theta~|epsilon~|theta~, a random variable with distribution function F(|theta~) defined on |theta~. The function F, chosen by the experimenter, generates the appropriate probabilities in games against nature or the appropriate uncertainty about other player types when modelling the subject's choice in a Harsanyi game of incomplete information. Thus, in a private value auction, |theta~ is the uncertain value for each of the N - 1 competitors of a given bidder.

4. |pi~(x, w, |theta~), the outcome function, controlled by the experimenter, denominated in experimental money (tokens, Francs, etc.), and based on the motivation assumptions in the theoretical model. The function |pi~ is assumed to be strictly concave in x, so that given w and F(|theta~), the experimenter predicts that x* will be the unique optimal x chosen by the subject.

5. |lambda~, the scalar payoff transformation rate, controlled by the experimenter, that converts experimental money into the reward medium. We assume that this marginal conversion rate is constant, although there are experiments in which it is not; i.e., the conversion rate is some nonlinear increasing function |lambda~(|pi~) of outcome.

The standard expected utility function, in terms of the above variables, is written

(1) U(x, w; |lambda~, |pi~, F) = ||integral~.sub.|theta~~ u||lambda~|pi~(x, w, |theta~)~ dF(|theta~).

The first-order condition for x* = arg max U is

(2) |lambda~||phi~.sub.1~ = 0, where ||phi~.sub.1~ = ||integral~.sub.|theta~~ |u.sub.1~||pi~.sub.1~~dF(|theta~)

where a subscript j denotes differentiation with respect to argument j. If utility is increasing in reward (|lambda~ |is greater than~ 0) then (2) implies ||phi~.sub.1~ = 0, with solution x* = x(w,F). The function x(w,F) is the source of testable experimental hypotheses concerning the subject's predicted choice, x*. Note that if u is linear, or a power function of |pi~, then x* is independent of |lambda~ |is greater than~ 0 and is optimal however small is the opportunity cost, ||pi~.sub.11~(x*).
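The independence of x* from |lambda~ under linear utility can be checked with a small grid search. The quadratic payoff function below, with a deliberately flat maximum, is our illustrative choice:

```python
# With linear utility, the maximizer of U(x) = lambda * pi(x) is the
# same x* for every lambda > 0, however flat pi is near its maximum.
# pi is an illustrative strictly concave payoff with optimum x* = 3.

def pi(x):
    return 1.0 - 0.01 * (x - 3.0) ** 2    # gently rounded payoff function

def argmax_u(lam, grid):
    return max(grid, key=lambda x: lam * pi(x))

grid = [i / 100 for i in range(601)]      # x in [0, 6] in steps of 0.01
assert argmax_u(0.01, grid) == argmax_u(100.0, grid) == 3.0
print("x* = 3.0 for every lambda > 0")
```

This is the sense in which the standard model leaves no room for flat maxima to matter; the decision-cost formulation that follows is what reintroduces them.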

We now examine the same problem from the perspective of the decision maker (subject). To achieve this we augment the list of variables (1) - (5) with the following:

6. y |epsilon~ X, the value of the decision variable chosen by the subject, given his/her perception, evaluation, analysis and understanding of the instructions (augmented by experience where there is replication) and task that he/she is to perform. Outcome is now written |pi~(y, w, |theta~).

7. z |epsilon~ Z, an unobserved decision process variable controlled consciously, or unconsciously, by the subject in executing the task that results in y. One can think of z as the decision cost or effort (concentration, attention, thinking, monitoring, reporting, acting) which the subject applies to the task presented by the experimenter. As with quarks in particle physics we may have no direct measures of z, but we look for traces of its effects on the choice of y. If z is recognized as lurking within every subjective model of decision, then we are primed to expect to find its traces, and where z is thought to be of substantial importance (as in Siegel's model below and in the general model we propose) seek to establish this proposition by manipulation of the experimental procedures that affect z and thus y.

8. Now consider the equation

(3) y = x* + |epsilon~(z,s)

e.g., |Mathematical Expression Omitted~, where |epsilon~(z,s) is a function, normalized with respect to x*, specifying the effect of z on subject choice of y. Think of |xi~(z) as the subject's production transformation function of effort, z, into decision y. An unobserved random variable, s, describing the "state" of the person at the time of decision, induces randomness on |epsilon~ conditional on z. Observations on the effects of s are obtained by repeated play of the task. More effort is postulated to narrow the distance between predicted optimal (x*) and actual (y) choices, and thereby to increase payoff. |epsilon~ is naturally interpreted as prediction (decision) error, and it is random across repeated play choices of y for given z.(4) Some hypothesized properties of the error function are suggested in the discussion below.

9. |mu~ |is greater than~ 0, is a scalar characteristic of the subject which measures the monetary equivalent of the subjective value of outcome |pi~ on the assumption that there is self-satisfaction, weak or strong, in achieving any outcome |pi~. This parameter is assumed to be additive with the reward scalar, |lambda~ |is greater than or equal to~ 0, and allows the model to account for nonrandom behavior when the salient exogenous reward is |lambda~ = 0.

It will be evident to the reader that any of the variables x, y, z, w might be represented by vectors in place of scalars, but the latter are sufficient for examining the principles we want to address. Also, we omit the subscript i on the appropriate variables and functions, it being understood that the perspective is always that of some particular person, i, such as yourself.

We can now write the subjective expected utility function using the new variables,

(4) ψ(y, z, w; λ, μ, F) = ∫_Θ u[(μ+λ)π(y, w, θ), z] dF(θ),

where u₂ < 0 is the marginal decision cost (disutility) of effort, z. Substituting from equation (3), the first-order condition for z* = arg max ψ is

(5) φ₁ ≥ −φ₂/[(μ+λ)ε₁],

where φ₁ = ∫_Θ u₁π₁ dF(θ) and φ₂ = ∫_Θ u₂ dF(θ).
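Condition (5) can be recovered by differentiating (4) with respect to z. A sketch of the step (assuming differentiation under the integral is valid, and using y = x* + ε(z,s) so that ∂y/∂z = ε₁, which does not depend on θ):

```latex
\frac{\partial \psi}{\partial z}
  = \int_{\Theta} \bigl[\, u_1 \,(\mu+\lambda)\,\pi_1\,\epsilon_1 + u_2 \,\bigr]\, dF(\theta)
  = (\mu+\lambda)\,\epsilon_1\,\phi_1 + \phi_2 .
```

Setting this derivative to zero at an interior optimum and rearranging yields (5) with equality: the marginal utility gain from effort, transmitted through its effect on decision error, just balances the marginal disutility of effort. The strict inequality in (5) corresponds to a corner solution in z.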

We will examine three cases, each representing a possible solution to (5).

(i) Bounded Rationality Case

When > holds in (5) we have a constrained solution with z* on the boundary of the set Z; e.g., if Z = [0, z̄], then z* = z̄. This "bounded rationality" case can be important: there are physiological and intellectual limitations on human decision-making ability; when these limits bind, the agent's constrained optimal decision is y = x* + ε(z̄, s), independent of the reward, λ. One should think of λ as operating on motivation, not on physiological and mental capacity. This case provides one formalization of Simon's concept of bounded rationality in decision making.

Now consider interior solutions where the equality condition holds in (5). First, note that in contrast with equations (1)-(2), we have in (4)-(5) a well-defined maximum problem when λ = 0. This is essential in explaining why subject decisions are not just random responses in the absence of salient rewards.

(ii) Pure Decision Error Case

Consider the degenerate case in which marginal decision cost φ₂ ≡ 0 and ε(z,s) ≡ ε(s) in (3) and (4). Under these conditions effort does not enter the criterion function (4); the costless direct decision variable is y, and instead of (5) we get the condition φ₁ = 0, which determines y* = y(w), where y* = x* + ε(s). This formulation is the same as in (1) and (2) except that it implements the decision-making hypothesis with an econometric specification of a decision error term (see McElroy [1987] for an examination of error models in production, cost and derived demand equation systems). This is usually recognized, ex post, in the form of the assumption that decision error, y* − x* = ε, is randomly distributed with mean zero and variance σ²; i.e., ε is not biased. Hypothesis testing normally proceeds on this maintained assumption. As we will see in the survey below, the data often do not contradict the condition E(ε) = 0: subject choices, y*, are distributed around a mean (or median) that is "close" to x*. But there are exceptions, and at least some of these occur when the Euclidean distance between x* and the boundary of the set X is at or near zero. In that case the data suggest that E(ε) ≠ 0. If decision error is random, then E(ε) = 0 is incompatible with boundary maxima. So the idea is this: part of the reason why data may be consistent with predictions is that x* is far enough into the interior of X that random unbiased decision errors cause no difficulty. Errors may of course be biased anywhere, but they are certainly biased at boundary optima, where the distribution of ε is asymmetrically truncated.
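The boundary-bias claim is easy to check numerically. The sketch below is illustrative only: it assumes normally distributed raw errors and a feasible set X = [0, 1], neither of which is implied by the model. Choices y = x* + ε are clipped to X, and the mean observed error is compared for an interior and a boundary optimum:

```python
import random

def mean_observed_error(x_star, lo=0.0, hi=1.0, sigma=0.1, n=200_000, seed=7):
    """Draw raw decision errors ~ N(0, sigma**2), clip the choice
    y = x_star + error to the feasible set X = [lo, hi], and return
    the mean observed error E(y - x_star)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = min(max(x_star + rng.gauss(0.0, sigma), lo), hi)
        total += y - x_star
    return total / n

# Interior optimum: clipping almost never binds, so errors average near zero.
print(mean_observed_error(x_star=0.5))
# Boundary optimum x* = 1: positive raw errors are truncated away, so the
# observed error distribution is asymmetric and E(eps) < 0.
print(mean_observed_error(x_star=1.0))
```

With the normal assumption the boundary bias is approximately −σ/√(2π) ≈ −0.04 here, while the interior case is unbiased: exactly the asymmetric-truncation effect described above.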

(iii) Dominance Case

Now consider the more general interior maximum defined by (5). In particular, (5) informs us that if equilibrium marginal decision cost, −φ₂/[(μ+λ)ε₁], goes to zero as λ rises to some level λ̄, then we have dominance at the reward level λ̄ and higher; i.e., rewards are sufficiently salient to swamp decision cost effort.(5) Whether this property holds in any particular case, and what level of λ̄, if any, is sufficient for dominance, is entirely an empirical question. We have already seen why this property might not hold: the solution value z* from (5) may be on the boundary of Z. Additional physical, mental or sensory effort may not be possible. Thus in a signal detection experiment, once a subject approaches the boundary of his or her auditory capability, little if any additional auditory improvement may be forthcoming by escalating reward payments. Similar considerations may apply for some subjects in almost any task.

The methodological implications of the above analysis are clear. In a new experimental situation, if the experimenter finds that decision error, ε, is biased enough to contradict the theory, then the first thing to question is the experimental instructions and procedures. Can they be simplified? If not (the task is inherently difficult), does experience help? These are techniques that may help to reduce decision cost. The second thing to question is the payoff level. Try doubling, tripling, or an n-fold increase in λ. We do it frequently, and in Smith and Walker [1992] we report the effects of five-, ten- and twenty-fold increases in auction experiments. This is not done for realism, since there are both low-stake and high-stake economic decisions in life, and all are of interest. One manipulates payoffs to increase understanding of possible trade-offs between the benefits and costs of optimal decisions, and to explore the depths and limits of objective optimality.

Where our model of the technology of errors is applied to the Nash equilibrium analysis of behavior, we assume that subjects are "boundedly rational" in the sense that their equilibrium choice behavior does not take into account the error properties of their rivals' choices (and is not a best response to other subjects' actual error-prone choices). That such errors may affect the calculation of Nash "trembling-hand" equilibria has been demonstrated in the path-breaking theoretical work of Selten [1975], provided that the error structure of decision making is common knowledge: "...all the players have the same notions about how their fellow players slip..." (Kreps [1990, 439]). But experimental studies in bargaining, oligopoly and auction markets going back to Fouraker and Siegel [1963] have found that Nash models of single-play behavior that assume common (payoff) knowledge actually perform best in repeated games under private (incomplete) information, and depart from such models under common information. Consequently, such models truly exhibit equilibrium behavior in that subjects tend to gravitate to, and remain near, such an equilibrium, but with error. In the examples in section III, we have not found a need to suppose that, from their point of view, subjects are solving for a trembling-hand equilibrium. Subjects are getting it right on average in the interior optimum cases. Thus the simpler Nash models account for the central tendencies of the data, but not for the error.(6)

II. SOME COMPARATIVE TREATMENT PREDICTIONS WITH ADDITIVE SEPARABILITY

In this section we consider the implications of the case in which ψ in (4) can be written in the additively separable form:

(6) ψ = φ(x* + ε) − C(z,γ),

where

φ(x* + ε) = ∫_Θ u[(μ+λ)π(x* + ε(z,s), w, θ)] dF(θ).

The function C(z,γ) expresses the subjective cost of effort z, with shift parameter γ. In addition to the conditions on ε(z,s) = sξ(z) in (3), we assume φ₁ > 0, φ₁₁ < 0, C₁ > 0, C₁₁ > 0, C₁₂ > 0. Also let s ∈ S have the distribution function H(s). In Smith and Walker [1992] we apply these assumptions, and test their implications, for first price auction theory.

Now approximate φ in (6) with the first three terms of its Taylor expansion at the point x*. Then, since φ₁(x*) = 0, the linear term involving ε vanishes and we are left with

(7) φ(x* + ε) ≈ φ(x*) + φ₁₁(x*)ε²/2.

Next, substitute from (6) and (7) and define

(8) ψ(z) = ∫_S ψ(s) dH(s) = φ(x*) + φ₁₁(x*)var(s)ξ²(z)/2 − C(z,γ).

From the subject's perspective, the problem is to choose z* = arg max ψ(z), which is determined by(7)

(9) φ₁₁(x*) var(s) ξ(z*) ξ′(z*) = C₁(z*, γ).

By differentiating the equilibrium condition (9) it is straightforward to sign the following derivatives:

dz*/dλ > 0;  dz*/dγ < 0;

d var(ε)/dλ < 0;  d var(ε)/dγ > 0.

Increases in payoffs and/or decreases in decision cost are associated with increased decision effort, the observed consequence of which is a reduced variance of decision error. One "treatment" for lowering decision costs is experience: with increased experience decisions become easier and more routine, and we predict a reduction in decision error variance for given payoff levels.
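These comparative statics can be illustrated with a toy parameterization of (8). The functional forms below are our illustrative assumptions, not the paper's: ξ(z) = exp(−z), C(z,γ) = γz², and a marginal-benefit term that scales linearly with (μ + λ), as it would under risk-neutral utility of money:

```python
import math

def z_star(lam, gamma, mu=0.1, var_s=1.0, grid=10_000, z_max=5.0):
    """Grid-search the effort z maximizing an illustrative version of (8):
    psi(z) = -(mu + lam) * var_s * xi(z)**2 / 2 - C(z, gamma),
    with assumed forms xi(z) = exp(-z) and C(z, gamma) = gamma * z**2."""
    best_z, best_val = 0.0, -float("inf")
    for i in range(grid + 1):
        z = z_max * i / grid
        val = -(mu + lam) * var_s * math.exp(-2 * z) / 2 - gamma * z * z
        if val > best_val:
            best_z, best_val = z, val
    return best_z

def err_var(lam, gamma, var_s=1.0):
    """Observed decision-error variance var(eps) = var(s) * xi(z*)**2."""
    return var_s * math.exp(-2 * z_star(lam, gamma))

# Higher reward scale lambda -> more effort, lower error variance;
# higher effort-cost shifter gamma -> the reverse.
print(z_star(1.0, 0.5), z_star(10.0, 0.5))
print(err_var(1.0, 0.5), err_var(10.0, 0.5))
print(z_star(1.0, 0.5), z_star(1.0, 5.0))
```

Under these assumed forms a tenfold increase in λ roughly doubles optimal effort and cuts the error variance substantially, which is the qualitative pattern the signed derivatives above predict.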

III. EFFECT OF INCENTIVE REWARDS AND OPPORTUNITY COST ON PERFORMANCE IN EXPERIMENTAL ECONOMICS

There is a long experimental literature, going back at least to Siegel [1961] and Siegel and Fouraker [1960], in which monetary payments affecting subject opportunity cost are varied as a treatment variable and their controlled effects on performance are measured. There is also a large experimental literature on choice among risky alternatives by cognitive psychologists. Most of the psychology literature reports the results of experiments conducted without monetary reinforcement, but in which the "subject is instructed to do his best" as in Siegel [1961, 767]. Psychologists defend such hypothetical choice procedures on the grounds that money either does not matter or matters insignificantly, so that monetary rewards are unnecessary. For example, Dawes [1988, 122, 124, 131, 259] cites several examples of decision-making experiments in which the use of monetary rewards yields results "the same" or "nearly" the same as when choices were hypothetical: Slovic, Fischhoff and Lichtenstein [1982], Grether and Plott [1979], Tversky and Kahneman [1983] and Tversky and Edwards [1966]. But some contrary citations in the psychology literature show that monetary incentives do matter. Goodman, Saltzman, Edwards and Krantz conclude, "These data, though far from conclusive, should not enhance the confidence of those who use elicitation methods based on obtaining certainty equivalents of imaginary bets" [1979, 398]; Siegel, Siegel and Andrews state "...we have little confidence in experiments in which the 'payoffs' are points, credits(8) or tokens" [1964, 148]; and Messick and Brayfield [1964], passim, and Kroll, Levy and Rapoport [1988] agree. Even in the psychology literature, then, there is evidence of cases where rewards matter.

In the economics literature there is the important study of 240 farmers in India by Binswanger [1980; 1981], comparing hypothetical choice among gambles with choices whose real payoffs ranged to levels exceeding the subjects' monthly incomes. The hypothetical results were not consistent with the motivated measures of risk aversion; with payoffs varied across three levels, subjects tended to show increased risk aversion at higher payoffs. Similarly, Wolf and Pohlman [1983] compare hypothetical with actual bids of a Treasury bill dealer and find that the dealer's measure of constant relative risk aversion using actual bid data is four times larger than under hypothetical assessment. In a recent study of risk preferences under high monetary incentives in China, Kachelmeier and Shehata [1991] report a significant difference between subject responses under low and very high monetary payoffs, and no difference between hypothetical and low monetary payments; but the usual anomalies long documented in tests of expected utility theory remain.

Several other studies report data in which monetary rewards make a difference in results. Plott and Smith [1978, 142] report results in which marginal trades occur far more frequently with commission incentives than without; Fiorina and Plott [1978] report committee decisions in which both mean deviations from theoretical predictions and standard errors are reduced by escalating reward levels; and Grether [1981] reports individual decision-making experiments in which the incidence of "confused" behavior is reduced with monetary rewards, although subjects who appear not to be confused behave about the same with or without monetary rewards.

A dramatic example of how payoff levels can matter is found in Kroll, Levy and Rapoport [1988], who provide experimental tests of the separation theorem and the capital asset pricing model in a computer-controlled portfolio selection task. Two experiments are reported: experiment 1 (thirty subjects) and experiment 2 (twelve subjects). The payoffs in experiment 2 were ten times greater than the payoffs in experiment 1, averaging $165 per subject, or about $44 per hour (thirty times the prevailing student hourly wage in Israel). The authors find that performance is significantly improved, relative to the capital asset pricing model, by the tenfold increase in stakes, and suggest that "This finding casts some doubt on the validity of the results of many experiments on decision making which involve trivial amounts of money or no money at all" [1988, 514].

Forsythe et al. [1988] find that results in the dictator game are affected significantly by monetary incentives, and that under no-pay conditions the results in ultimatum games are inconclusive because they fail to be replicable; doubling payoffs does not affect behavior. With monetary incentives the authors strongly reject the fairness hypothesis.

Finally, an important study by McClelland et al. [1991] directly manipulates foregone expected profit in incentive decision mechanisms, with treatments making the payoff optimum more or less peaked. They find that where the mechanism is "transparently" simple (low decision cost), flat maxima do as well as peaked maxima; but where the mechanism is "opaque," requiring search, the absolute deviation of subjects' bids from the optimum was significantly reduced when the payoff function was more peaked.

A. Decision Making and Decision Cost Under Uncertainty

The study by Tversky and Edwards [1966] is of particular interest since they found that paying (charging) five cents (as compared with no salient reward) when a subject makes a correct (incorrect) prediction is sufficient to yield outcomes closer to "the optimal" outcome. The task is the standard binary choice prediction experiment: two lights illuminate according to an "independent trials" Bernoulli process with fixed probabilities p and 1−p, but these probabilities are unknown to the subjects. The standard result, replicated dozens of times without subject monetary reinforcement, but with the exhortation that subjects do their best, is for the average subject to reach a stable asymptote characterized by probability matching; that is, the pooled proportion of times the more frequent event is predicted is x̄ = p. Since the expected number of correct predictions is xp + (1−x)(1−p) when the more frequent event is chosen with frequency x, the "optimal" response is to set x* = 1 (for p > 1/2). Tversky and Edwards report pooled total frequencies for 1000 trials that are higher than matching, both when p = 0.60 and when p = 0.70; the asymptotic levels (not reported) can be presumed to be somewhat higher. But they conclude, "Though most differences between the treatment groups were in the direction predicted by a normative model, Ss were far indeed from what one would expect on the basis of such a model" [1966, 682]. In passing they conjecture that "A formal model for the obtained data might incorporate a notion such as cost associated with making a decision".
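The payoff arithmetic behind x* = 1 is easy to verify. Since the expected score xp + (1−x)(1−p) is linear in x, it is maximized (for p > 1/2) at the corner x = 1, and matching (x = p) forgoes payoff:

```python
def expected_correct(x, p):
    """Expected proportion of correct predictions when the event with
    probability p is predicted a fraction x of the time."""
    return x * p + (1 - x) * (1 - p)

p = 0.70
best = expected_correct(1.0, p)   # corner solution x* = 1: score equals p
match = expected_correct(p, p)    # probability matching x = p: p**2 + (1-p)**2
print(best, match)
```

At p = 0.70 the matching subject scores 0.58 against the attainable 0.70, sacrificing 0.12 expected correct predictions per trial; the sacrifice grows as p moves away from 1/2.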

In fact, a formal model incorporating such a cost had been published and tested somewhat earlier, in Siegel [1959]; Siegel and Goldstein [1959]; Siegel [1961]; Siegel and Andrews [1962]; and Siegel, Siegel and Andrews [1964]. Instead of accepting the standard conclusion that people did not behave rationally, and rejecting the utility theory of choice, Siegel elected to explore the possibility that the theory was essentially correct but incomplete. In particular, citing Simon [1956], he argued that one should keep in mind the distinction between objective rationality, as viewed by the experimenter, and subjective rationality, as viewed by the subject given his perceptual and evaluational premises. In effect, Siegel placed himself in the position of a subject faced with hundreds of trials in the binary choice experiment. He postulated that (i) in the absence of monetary reinforcement the only reward would be the satisfaction (dissatisfaction) of a correct (incorrect) prediction, and (ii) the task is incredibly boring, since it involves both cognitive and kinesthetic monotony, so that in this context there was a utility from varying one's prediction. A general two-state form of Siegel's model is to write the expected utility function (2) above in the form

(10) U = u(a₁₁)px + u(a₁₂)x(1−p) + u(a₂₁)p(1−x) + u(a₂₂)(1−x)(1−p) + bx(1−x).

Again p is the probability of event E₁, (1−p) the probability of event E₂, and x is the proportion of trials (the probability for one trial) that the subject chooses E₁. The term u(aᵢⱼ) is the utility of outcome aᵢⱼ, where i refers to the prediction (choice) of Eᵢ, and j refers to the subsequent occurrence of event Eⱼ. Hence a₁₁ is the outcome when the subject correctly predicts E₁, and a₁₂ the outcome when E₁ is incorrectly predicted. Now suppose we assume that u(aᵢⱼ) = u⁰ᵢⱼ + uₘ(aᵢⱼ), where u⁰ᵢⱼ is the utility of the outcome (i,j) in the absence of monetary reward, aᵢⱼ is the monetary payment (or charge) when (i,j) obtains, and uₘ is the utility of money.

It is seen that (10) is simply a special form of (4): one in which F is the Bernoulli distribution with parameter p ∈ [0, 1] (θ is 1 if E₁ occurs and 0 if E₂ obtains), "effort" is assumed to be measured directly by z ≡ x ≡ y ∈ [0,1], and the utility of outcome is additively separable from the term bx(1−x), which Siegel calls the utility of response variability (or the subjective value of relieving monotony). Response variability is measured by x(1−x), a function which has the desirable property that it is maximized at x = 1/2, when diversification is largest. The constant b is then the marginal utility of variability. Siegel's particular test model is the special case in which (i) u(aᵢⱼ) = u⁰ + uₘ(a) if i = j, namely the reward a is paid when the subject's prediction is correct on either event, and the outcome utility u⁰ for a correct prediction is the same for either event; and (ii) u(aᵢⱼ) = uₘ(ā) if i ≠ j, where ā is the reward (a cost if ā < 0) when the prediction is wrong on either event, and outcome utility is zero any time the prediction is incorrect. Then (10) becomes

U = [u⁰ + uₘ(a)][px + (1−x)(1−p)] + uₘ(ā)[x(1−p) + (1−x)p] + bx(1−x),

where u⁰ + uₘ(a) is the marginal utility of a correct prediction, and uₘ(ā) the marginal utility of an incorrect prediction. Since U is everywhere strictly concave on [0,1], an interior maximum x* ∈ (0,1) must satisfy

(11) x* = 1/2 + (2p − 1)[u⁰ + uₘ(a) − uₘ(ā)]/2b.

There are three cases for which Siegel reports data.

Case 1. a = ā = 0, the no payoff treatment. Then (11) yields the solution x₀*:

(12) x₀* = 1/2 + (u⁰/b)(2p − 1)/2.

This case is particularly interesting because it explains probability matching behavior. If the marginal rate of substitution of variability for a correct prediction is unity, (u⁰/b) = 1, then from (12), x₀* = p.

Case 2. a > 0, ā = 0, the payoff treatment; i.e., you get paid when you are right, pay nothing when you are wrong. Then from (11), the solution x₁* is

(13) x₁* = 1/2 + (2p − 1)[u⁰ + uₘ(a)]/2b.

Case 3. ā = −a < 0, the payoff-loss treatment; you receive a cents when you are right, lose a cents when you are wrong. Then the solution x₂* is

(14) x₂* = 1/2 + (2p − 1)[u⁰ + uₘ(a) − uₘ(−a)]/2b.

Since uₘ(−a) < 0, by construction we get the testable implication that x₂* > x₁* > x₀*, where x₀*, x₁* and x₂* denote the solutions in Cases 1, 2 and 3 respectively.
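The three treatments can be checked numerically from the interior solution (11). The parameter values below (u⁰, b, and the utility-of-money increments) are illustrative assumptions chosen only to display the predicted ordering:

```python
def x_star(p, u0, b, um_a=0.0, um_abar=0.0):
    """Interior maximizer (clipped to [0, 1]) of Siegel's utility
    U(x) = [u0 + um(a)][px + (1-x)(1-p)]
         + um(abar)[x(1-p) + (1-x)p] + b*x*(1-x)."""
    x = 0.5 + (2 * p - 1) * (u0 + um_a - um_abar) / (2 * b)
    return min(max(x, 0.0), 1.0)

p = 0.75
# u0/b = 1 reproduces probability matching: x* = p.
print(x_star(p, u0=1.0, b=1.0))
# The three payoff treatments (illustrative utility values):
x0 = x_star(p, 0.5, 1.0)                          # Case 1: no payoff
x1 = x_star(p, 0.5, 1.0, um_a=0.3)                # Case 2: payoff
x2 = x_star(p, 0.5, 1.0, um_a=0.3, um_abar=-0.3)  # Case 3: payoff-loss
print(x0, x1, x2)   # monotonically increasing toward x = 1
```

As the monetary terms grow relative to b, the interior solution is pushed to the clip at the boundary optimum x = 1, which is the dominance outcome discussed in section I.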

Based on data in Siegel et al. [1964], Figure 1 provides histograms of the distribution of subjects' choice frequencies, x, in the final (stable-state) block of twenty trials (100 total trials) under each of the reward conditions: no payoff, payoff, payoff-loss. In the payoff condition a = 5 cents, and in payoff-loss ā = −a = −5 cents. As predicted by the Siegel model there is an observed increase in the pooled mean choice proportion, x̄, with increasing payoff motivation. We also compute from Siegel et al. [1964] the root mean square (decision) error, S, in Figure 1, and note that it declines monotonically with increasing motivation. Not only do subject predictions shift toward the objective optimal choice, x* = 1, with increasing rewards, but the variability of choices also decreases, and under the highest motivation, payoff-loss, one in four subjects is at this boundary maximum.(9)

Siegel's model proposed a resolution of the paradox of "irrational" behavior in binary choice and provided new testable implications that were consistent with the results of new experiments. He showed that the previous psychology literature, which had concluded that people were not expected-utility maximizers, was the exception that proved the rule: subjects had no monetary incentive to maximize expected utility.

How far one can go in using decision cost concepts to resolve anomalies in standard individual decision theory remains open. A test case may be provided by the interesting work of Herrnstein and his coworkers, e.g., Herrnstein, Lowenstein, Prelec, and Vaughn [1991]. They study a much more complicated environment for the subject than the Bernoulli binary choice problem, in which the reward from playing right or left depends upon the fraction of right-key choices in the previous N trials, where N is a treatment variable controlled by the experimenter. In the steady state, if R is the number of right-key choices in the last N trials, then the payoff is

π(R/N) = (R/N)f(R/N) + [(N−R)/N]g(R/N),  0 ≤ R ≤ N,

where f(·) and g(·) are the current payoffs on right and left, respectively. If N is large, the effect of the current choice on future behavior is small and myopically difficult to perceive. Maximization for interior solutions is determined by the condition that

dπ/dq = f(q) − g(q) + qf′(q) + (1 − q)g′(q) = 0,  where q = R/N.

Matching behavior in this case (Herrnstein calls it melioration) leads to the condition that f(R/N) = g(R/N). Herrnstein et al. [1991] report results with varying degrees of support for the two hypotheses. For example, better information and rewards ("coin values") improved maximization marginally. But the payoff functions are all characterized by flat maxima, thus making the decision problem sensitive to decision costs and other factors affecting net subjective value.
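The gap between melioration and maximization, and the flatness of the payoff function, can be seen with simple illustrative payoff schedules; the forms of f and g below are our assumptions, not those used by Herrnstein et al.:

```python
def f(q):            # right-key payoff schedule (illustrative assumption)
    return 1.0 - q

def g(q):            # left-key payoff schedule (illustrative assumption)
    return 0.5

def pi(q):
    """Steady-state payoff pi(q) = q*f(q) + (1 - q)*g(q), with q = R/N."""
    return q * f(q) + (1 - q) * g(q)

grid = [i / 1000 for i in range(1001)]
q_opt = max(grid, key=pi)                              # maximizing choice
q_match = min(grid, key=lambda q: abs(f(q) - g(q)))    # melioration: f = g
print(q_opt, pi(q_opt))       # the true optimum
print(q_match, pi(q_match))   # the matching point earns strictly less
print(pi(0.15), pi(0.35))     # payoffs near the optimum: a flat maximum
```

Here melioration settles at q = 0.5 while the optimum is q = 0.25, yet payoffs within a wide band around the optimum are nearly equal: exactly the flat-maximum feature that makes such tasks sensitive to decision cost.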

B. Bilateral Bargaining and Cournot Oligopoly

In their first work on bilateral bargaining, Siegel and Fouraker [1960] studied a simple two-person (one buyer, one seller) form of what later became known as the double auction. The buyer is given a profit schedule based on a concave redemption function R(Q) for differing quantities, Q, of the commodity he might purchase from the seller, and the latter is given a profit schedule derived from a convex cost function, C(Q), for different quantities she might sell to the buyer. The message space for each is the two-tuple (P,Q), a price and a quantity bid or offered. Thus the buyer (seller) might begin with (P₁, Q₁). The seller either accepts or makes an offer, (P₂, Q₂). The buyer responds with an acceptance, or a new bid (P₃, Q₃), and so on until an agreement is reached or the time limit expires. In one experiment, consisting of eleven bargaining pairs, the Pareto optimal solution had the property that a one-unit deviation in quantity from the optimum led to total profit deviations of ten and sixteen cents. Referring to column (1) in Table I, we call this the "low" payoff condition. The authors' expressed concern was that this relatively "flat maximum" might contribute to the variability of outcomes across the bargaining pairs. Consequently, they altered the payoff tables so that the joint profit declined symmetrically by sixty cents with a one-unit deviation in quantity from the optimum, and conducted two replications of the original experiment (twenty-two bargaining pairs). This is referred to as the "high" payoff condition in Table I. Note that the mean error declined from $0.545 to $0.091, and, as reported by the authors, this treatment had no statistically significant effect on the strong tendency for bargainers to approach the predicted Pareto optimal outcome.
However, the mean square error declined substantially from the low to high payoff condition so that increasing the opportunity cost of missing the optimum induced a tighter clustering of the data in the neighborhood of the optimum.

This concern for the relevance of payoff levels and the opportunity cost of deviations from the theoretical predictions carried over into their subsequent studies of bargaining and oligopoly. In Fouraker and Siegel [1963], the two-person bargaining experiments were extended to the first-mover case in which the seller begins by announcing a price, followed by the buyer choosing a quantity. In the repeated game this process is replicated for a total of nineteen regular transactions, followed by an announced "final" twentieth transaction, followed by a special twenty-first transaction in which all payoffs were tripled. (TABULAR DATA OMITTED.) In Table I we summarize three experiments, columns (2), (3) and (4), in which the results of the twentieth "low" payoff transactions are compared with the twenty-first "high" payoff transactions. Comparing the means, M_L and M_H, in these columns, we generally observe for both buyers and sellers in the bargaining pairs comparably small mean error deviations under the two payoff conditions. But the mean square error tends to be higher under the low payoff (and low opportunity cost) condition than under high payoffs. An exception occurs in column (4) for the buyers. In this case one buyer among the twelve "high" payoff bargaining pairs responded with a punishing quantity of zero. This outlier depressed the mean error and greatly increased the mean square error. In columns (5) and (6) we report the results from Siegel and Fouraker in which payoffs were manipulated in their Cournot oligopoly experiments. Here the authors departed from their use of a final triple-payoff round with the same subjects. Instead they ran one group of duopolists and one of triopolists with bonus rewards in addition to the profit table rewards used in the regular groups. The bonuses were $8, $5 and $2, paid to the first, second and third highest profit makers.
As recorded in columns (5) and (6), this caused no important change in mean error between the "low" and "high" payoff groups. The mean square error declined for duopolies and increased slightly for triopolies (the latter occasioned by one outlying observation).

From the above summary it is apparent that although Siegel and Fouraker undertook no thoroughly systematic investigation of the effect of payoff opportunity costs on market outcomes, they nevertheless demonstrated sensitivity to the possibility that such effects might be important. In particular their data suggest that the most likely effect of increasing the opportunity cost of nonoptimal decisions is to reduce the mean square error deviation from optimality.

Recently, Drago and Heywood [1989, 993] report data for tournament and piece rate experiments, as in Bull, Schotter and Weigelt [1987], showing a very large reduction in the variance of observations when the payoff function is transformed so that it is more sharply peaked. Support for the predicted optimal behavior is observed, however, in all reported payoff environments. The tournament is a strategic game; the piece rate is a game against nature. In both environments the optimum is an interior point in a nonnegative real interval.

C. Double Auction Markets

In the "swastika" supply and demand market underlying the results in Table II, each of eleven buyers is assigned an induced value of $4.20 and each of sixteen sellers a cost of $3.10, as in Smith [1965; 1976]. Thus excess supply is 5 units at all prices in the interval [$3.10, $4.20]. A commission of $0.05 is paid to provide a minimum inducement to trade marginal units. Under "low" payoffs four of the twenty-seven subjects were chosen randomly to be paid; under "high" payoffs all were paid. The competitive equilibrium is $3.10 in the sense that price will tend to decrease at any price above $3.10, although there is excess supply at $3.10. In this case the equilibrium is at a boundary optimum, with all surplus obtained by the buyers. In Table II we list the mean and mean square errors by "low" and "high" payoff condition for each trading period. (TABULAR DATA OMITTED.) The somewhat slower convergence for these markets than is the rule for more symmetric markets is particularly pronounced under the "low" payoff treatment. Note that experience across periods lowers error variance for both low and high payoff treatments. In this design all price errors (deviations from equilibrium) are necessarily positive for individually rational agents. Hence decision error is biased, and insofar as low motivation increases such error, the effect must necessarily reduce support for theoretical predictions.

Jamal and Sunder [1991] have undertaken the first systematic examination of the effects of (salient) monetary rewards in oral double auction trading using symmetric supply and demand designs. Their preliminary results support the conclusion that in the absence of prior experience and salient rewards (i.e., using fixed payments, independent of performance), markets do not converge reliably to the competitive equilibrium prediction, but do converge in the presence of such rewards. However, once subjects have become experienced under salient rewards, they converge in the usual double auction manner even when they receive only fixed, nonsalient rewards. Our interpretation is that, once sufficiently motivated to have mastered the process of double auction trading in simple environments, they in effect become detached professionals whose actions require little thought or attention.

IV. SUMMARY AND CONCLUSIONS

A survey of experimental papers that report data on the comparative effects of subject monetary rewards (including no rewards) shows a tendency for the error variance of the observations around the predicted optimal level to decline with increased monetary reward. Some studies report observations that fail to support the predictions of rational models, but as the reward level is increased the data shift toward these predictions. Many of these latter studies share a common characteristic: the prediction of rational theory is a solution on the boundary of the constraint set. For example, in the binary choice task the optimal response is to predict the more frequent event 100 percent of the time. Any decision error in these contexts necessarily yields a central tendency that deviates from the rational prediction. Before such cases can be judged to have falsified the theory, it is necessary to establish that increased payoffs fail to move the observations closer to the predicted boundary maximum.
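
The boundary optimum in the binary choice task can be made concrete with a short calculation. The following Python sketch (with an illustrative p = 0.75) compares always predicting the majority event against the probability-matching behavior often observed in the data:

```python
# Expected accuracy in the binary prediction task for two strategies:
# always predicting the more frequent event (the boundary optimum)
# versus probability matching. The value p = 0.75 is illustrative.

def accuracy_always_majority(p):
    """Predict the more frequent event on every trial."""
    return max(p, 1 - p)

def accuracy_probability_matching(p):
    """Predict each event with a frequency equal to its probability."""
    return p * p + (1 - p) * (1 - p)

p = 0.75
print(accuracy_always_majority(p))       # 0.75
print(accuracy_probability_matching(p))  # 0.625
```

For any p the boundary strategy weakly dominates matching, which is why any decision error in this task pulls the central tendency away from the rational prediction.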

Many of these results are consistent with an "effort" or labor theory of decision making. According to this theory better decisions--decisions closer to the optimum, as computed from the point of view of the experimenter/theorist--require increased cognitive and response effort, which is disutilitarian. From the point of view of the decision maker the problem is to achieve a balance between the benefits of better decision making and the effort cost of deciding. The experimenter/theorist's predicted optimal decision is thus a special case of the decision that is optimal from the perspective of the subject. Since increasing the reward level induces an increase in effort, the new model predicts that subject decisions will move closer to the theorist's optimum, reducing the variance of decision error. But this predicted shift toward optimality is qualified if effort is already constrained by the maximum that can be supplied, as would be the case for very complex decision problems. An example of the latter may be the task studied by Herrnstein et al. |1991~.
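
The trade-off can be illustrated with a small numerical sketch. The quadratic payoff loss, the hyperbolic effort cost, and all parameter values below are our illustrative assumptions, not a calibration from any of the experiments surveyed:

```python
# A minimal numerical sketch of the effort theory: the subject trades off
# forgone reward from decision error z against the effort cost of reducing z.
# The functional forms (quadratic payoff loss scaled by the reward level,
# hyperbolic effort cost) and all parameters are illustrative assumptions.

def optimal_error(reward, curvature=1.0, effort=1.0):
    """Minimize reward*curvature*z**2 + effort/z over z > 0.
    First-order condition: 2*reward*curvature*z = effort/z**2,
    so z* = (effort / (2*reward*curvature)) ** (1/3)."""
    return (effort / (2.0 * reward * curvature)) ** (1.0 / 3.0)

low, high = optimal_error(reward=1.0), optimal_error(reward=10.0)
print(low > high)  # higher rewards shrink the subject's optimal error
```

Under these assumed forms the optimal error shrinks, though less than proportionally, as the reward scale grows, consistent with the predicted decline in error variance.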

1. As in Siegel |1959~, Smith |1976; 1980; 1982~ and Wilcox |1989; 1992~.

2. Some who do so are Day and Groves |1975~, Nelson and Winter |1982~ and Heiner |1985~.

3. See, for example, Siegel |1959~ and von Winterfeldt and Edwards |1986~, and also Kroll, Levy and Rapoport |1988~, who are among the important exceptions. In fact rewards are not of exclusive importance, and one of our concerns in the model to be presented is to account for the fact that one does not observe arbitrary and random behavior when there are no salient rewards. But because rewards do matter, they cannot be ignored in testing the models proposed by economic and game theorists. We think psychologists have focused on experiments without rewards because they are primarily interested in cognitive processes, as in Smith |1991~. Their research suggests that monetary rewards are not crucial in studying such processes, but this is controversial.

4. Note, however, that |epsilon~ is not "error" from the point of view of the subject weighing (albeit unconsciously) benefit against decision cost. It is the experimentalist who interprets |epsilon~ as a prediction error of the theory, whose randomness derives from the unobserved random variable, s. For other theoretical treatments of the effects of errors specific to particular decision problems, see Hey and Di Cagno |1990~ and Berg and Dickhaut |1990~. In none of these approaches need the subject be aware of the effects of effort on decision. Our motivation is to model the decline in errors that is often associated with increased payoffs.

5. Cf. Harrison |1989~ and Smith |1976~.

6. But there are clearly games in which one can account for the predictive failure of the complete information model by reformulation as an incomplete information game in which each player responds strategically to the error in the play of other(s). For an excellent example see McKelvey and Palfrey |1990~ where the standard model fails to predict outcomes in the centipede game, but a reformulation as a game of incomplete information, in which the players make action errors and hold beliefs subject to error, is able to account for the experimental data. As they suggest, the model could probably be improved by making error rates depend upon decision utility differences.

7. See Theil |1971, 192~ for a derivation showing that the variance of error in a behavioral equation is inversely proportional to the second derivative of the criterion function at its optimum. Theil's interpretation, however, is the reverse of ours in that the actual decision (y, in our notation) is treated as nonstochastic, and the optimal decision (our x*) as stochastic as in Theil |1971, 193~; nor does Theil interpret the error as an economic variable subject to control by the decision effort of the agent.
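
Theil's result can be illustrated numerically. Under the (assumed) interpretation that the agent tolerates a fixed utility loss delta below the criterion's maximum, the band of acceptable decisions widens as the maximum flattens:

```python
# Numerical illustration of the footnote's point: if the agent tolerates a
# fixed utility loss delta below the maximum of a criterion f, a second-order
# approximation gives the admissible band around the optimum x* a half-width
# of sqrt(2*delta/|f''(x*)|), so decision variance is inversely proportional
# to the curvature |f''(x*)|. The curvatures and delta below are illustrative.

import math

def deviation_halfwidth(curvature, delta):
    """Half-width of {x : f(x*) - f(x) <= delta} for f with |f''(x*)| = curvature."""
    return math.sqrt(2.0 * delta / curvature)

flat, steep = deviation_halfwidth(0.5, 0.01), deviation_halfwidth(8.0, 0.01)
print(flat / steep)  # the flatter maximum admits a wider band of near-optimal choices
```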

8. It is now well documented that grade credits compare well with monetary rewards when the payments are salient as in Isaac, Walker and Williams |1991~ and in Kormendi and Plott |1982~.

9. There are two technical problems concerning this literature. It appears that in all cases the research design constrained the event realizations so that in fact the process was not Bernoulli. Siegel et al. |1964~ followed the earlier literature in randomizing by blocks of twenty trials. This means that for p = 0.75 every sequence of twenty trials was constrained to yield fifteen "left" and five "right" realizations; i.e., sampling occurred without replacement. A second shortcoming of this literature is that in its day no econometric procedures, such as logit, were available to estimate individual asymptotic probabilities using all the data, and the analysis did not focus on individual behavior, which is what Siegel's model is about.
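
The distinction between block randomization and a genuine Bernoulli process is easy to see in simulation. The sketch below is illustrative and does not reproduce Siegel's procedure in detail:

```python
# Sketch of the design problem noted in this footnote: randomizing in blocks
# of twenty with exactly fifteen "left" realizations is sampling without
# replacement, so the per-block count of "left" outcomes is degenerate,
# whereas a genuine Bernoulli process with p = 0.75 leaves that count random.

import random

def block_randomized(p=0.75, block=20, seed=0):
    """One block with the count of 'left' outcomes fixed at p*block."""
    outcomes = ["left"] * int(p * block) + ["right"] * (block - int(p * block))
    random.Random(seed).shuffle(outcomes)
    return outcomes

def bernoulli(p=0.75, block=20, seed=0):
    """One block of independent draws; the count of 'left' varies."""
    rng = random.Random(seed)
    return ["left" if rng.random() < p else "right" for _ in range(block)]

print(block_randomized().count("left"))  # always 15, whatever the seed
```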

REFERENCES

Berg, Joyce E., and John W. Dickhaut. "Preference Reversals: Incentives Do Matter." University of Chicago, November 1990.

Binswanger, Hans P. "Attitudes Toward Risk: Experimental Measurement in Rural India." American Journal of Agricultural Economics, August 1980, 395-407.

-----. "Attitudes Toward Risk: Theoretical Implications of an Experiment in Rural India." Economic Journal, December 1981, 867-90.

Bull, Clive, Andrew Schotter, and Keith Weigelt. "Tournaments and Piece Rates: An Experimental Study." Journal of Political Economy, February 1987, 1-33.

Conlisk, John. "Optimization Cost." Journal of Economic Behavior and Organization, April 1988, 213-28.

Dawes, Robyn M. Rational Choice In An Uncertain World. New York: Harcourt Brace Jovanovich, 1988.

Day, Richard H., and Theodore Groves. Adaptive Economic Models. New York: Academic Press, 1975.

Drago, Robert, and John S. Heywood. "Tournaments, Piece Rates, and the Shape of the Payoff Function." Journal of Political Economy, August 1989, 992-8.

Fiorina, Morris P., and Charles R. Plott. "Committee Decisions Under Majority Rule." American Political Science Review, June 1978, 575-98.

Forsythe, Robert, Joel L. Horowitz, N. E. Savin, and Martin Sefton. "Replicability, Fairness and Pay in Experiments with Simple Bargaining Games." University of Iowa Working Paper 88-30, December 1988.

Fouraker, Lawrence, and Sidney Siegel. Bargaining and Group Decision Making. New York: McGraw Hill Book Co., 1963.

Goodman, Barbara, Mark Saltzman, Ward Edwards, and David H. Krantz. "Prediction of Bids for Two-Outcome Gambles in a Casino Setting." Organizational Behavior and Human Performance, 24(3), December 1979, 382-99.

Grether, David M. "Financial Incentive Effects and Individual Decision Making." Social Science Working Paper No. 401, California Institute of Technology, 1981.

Grether, David, and Charles Plott. "Economic Theory of Choice and the Preference Reversal Phenomenon." American Economic Review, September 1979, 623-38.

Harrison, Glenn. "Theory and Misbehavior in First Price Auctions." American Economic Review, September 1989, 749-62.

Heiner, R.A. "Uncertainty, Signal Detection Experiments, and Modelling Behavior," in The New Institutional Economics, edited by R. Langlois. New York: Cambridge University Press, 1986, 59-115.

Herrnstein, R. J., G. Lowenstein, D. Prelec, and W. Vaughn, Jr. "Utility Maximization and Melioration: Internalities in Individual Choice." Department of Psychology, Harvard University, Draft, 1 April 1991.

Hey, John, and D. Di Cagno. "Circles and Triangles: An Experimental Estimation of Indifference Lines in the Marschak-Machina Triangle." Journal of Behavioral Decision Making, 3(4), 1990, 279-306.

Isaac, R. Mark, James M. Walker, and Arlington W. Williams. "Group Size and the Voluntary Provision of Public Goods: Experimental Evidence Utilizing Large Groups." Indiana University Working Paper, 1991.

Jamal, Karim, and Shyam Sunder. "Money Vs. Gaming: Effects of Salient Monetary Payments in Double Oral Auctions." Organizational Behavior and Human Decision Processes, 49(1), June 1991, 151-66.

Kachelmeier, Steven J., and Mohamed Shehata. "Examining Risk Preferences Under High Monetary Incentives: Experimental Evidence From the People's Republic of China." Graduate School of Business, University of Texas at Austin, Draft, June 1991.

Kormendi, Roger C., and Charles R. Plott. "Committee Decisions Under Alternative Procedural Rules: An Experimental Study Applying a New Nonmonetary Method of Preference Inducement." Journal of Economic Behavior and Organization, June/Sept. 1982, 175-95.

Kreps, David M. A Course in Microeconomic Theory. Princeton, N.J.: Princeton University Press, 1990.

Kroll, Yoram, Haim Levy, and Amnon Rapoport. "Experimental Tests of the Separation Theorem and the Capital Asset Pricing Model." American Economic Review, June 1988, 500-19.

McClelland, Gary, Michael McKee, William Schulze, Elizabeth Beckett, and Julie Irwin. "Task Transparency versus Payoff Dominance in Mechanism Design: An Analysis of the BDM," Laboratory for Economics and Psychology, University of Colorado, June 1991.

McElroy, Margorie B. "Additive General Error Models for Production Cost and Derived Demand or Share Systems." Journal of Political Economy, August 1987, 737-57.

McKelvey, Richard D., and Thomas R. Palfrey. "An Experimental Study of the Centipede Game." Social Science Working Paper No. 732, California Institute of Technology, May 1990.

Messick, Samuel, and Arthur H. Brayfield. Decision and Choice. McGraw-Hill, 1964.

Nelson, Richard R., and S. G. Winter. An Evolutionary Theory of Economic Change. Cambridge, Mass: Harvard University Press, 1982.

Plott, Charles R., and Vernon L. Smith. "An Experimental Examination of Two Exchange Institutions." Review of Economic Studies, February 1978, 133-53.

Selten, Reinhard. "Re-examination of the Perfectness Concept for Equilibrium Points in Extensive Games." International Journal of Game Theory, 4(1), 1975, 25-55.

Siegel, Sidney. "Theoretical Models of Choice and Strategy Behavior: Stable State Behavior in the Two-Choice Uncertain Outcome Situation." Psychometrika, 24(4), December 1959, 303-16.

-----. "Decision Making and Learning Under Varying Conditions of Reinforcement." Annals of the New York Academy of Science, 89, 28 January 1961, 766-83.

Siegel, Sidney, and Julia Andrews. "Magnitude of Reinforcement and Choice Behavior in Children." Journal of Experimental Psychology, 63(4), April 1962, 337-41.

Siegel, Sidney, and Lawrence Fouraker. Bargaining and Group Decision Making: Experiments in Bilateral Monopoly. New York: McGraw-Hill, 1960.

Siegel, Sidney, and D. A. Goldstein. "Decision-Making Behavior in a Two-Choice Uncertain Outcome Situation." Journal of Experimental Psychology, 57(1), January 1959, 37-42.

Siegel, Sidney, Alberta Siegel, and Julia Andrews. Choice, Strategy, and Utility. New York: McGraw-Hill, 1964.

Simon, Herbert A. "A Comparison of Game Theory and Learning Theory." Psychometrika, 21(3), September 1956, 267-72.

Slovic, Paul, B. Fischhoff, and S. Lichtenstein. New Directions for Methodology of Social and Behavioral Science: Question Framing and Response Consistency, No. 11, San Francisco: Jossey-Bass, 1982.

Smith, Vernon L. "Experimental Auction Markets and the Walrasian Hypothesis." Journal of Political Economy, August 1965, 387-93.

-----. "Experimental Economics: Induced Value Theory." American Economics Review, May 1976, 274-79.

-----. "Relevance of Laboratory Experiments to Testing Resource Allocation Theory," in Evaluation of Econometric Models, edited by J. Kmenta and J. Ramsey. New York Academic Press, 1980, 345-77.

-----. "Microeconomic Systems as an Experimental Science." American Economic Review, December 1982, 923-55.

-----. "Rational Choice: The Contrast Between Economics and Psychology." Journal of Political Economy, November 1991, 877-97.

Smith, Vernon L., and James M. Walker. "Rewards, Experience and Decision Costs in First Price Auctions." Economic Inquiry, April 1993, 237-44.

Theil, Henri. "An Economic Theory of the Second Moments of Disturbances of Behavioral Equations." American Economic Review, March 1971, 190-4.

Tversky, Amos, and Ward Edwards. "Information Versus Reward in Binary Choice." Journal of Experimental Psychology, 71(5), May 1966, 680-3.

Tversky, Amos, and Daniel Kahneman. "Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgement." Psychological Review, 90(4), October 1983, 293-315.

von Winterfeldt, Detlof, and Ward Edwards. Decision Analysis and Behavioral Research. Cambridge: Cambridge University Press, 1986.

Wilcox, Nathaniel T. "Well-Defined Loss Metrics and the Situations that Demand Them." Economic Science Association Meetings, Tucson, AZ, 28-29 October 1989.

-----. "Incentives, Complexity, and Time Allocation in a Decision-Making Environment." Public Choice/Economic Science Association Meetings, New Orleans, 27-29 March 1992.

Wolf, Charles, and Larry Pohlman. "The Recovery of Risk Preferences from Actual Choice." Econometrica, May 1983, 843-50.


Authors: Vernon L. Smith and James M. Walker

Publication: Economic Inquiry, April 1, 1993
