
Representation of odds in terms of frequencies reduces probability discounting.

Studies of probability discounting attempt to determine the degree to which the value of an outcome is reduced because it is probabilistic or uncertain. By determining the subjective values of an outcome across a range of probabilities, the rate at which subjective value changes for individuals or populations can be quantified. Mazur's (1987) hyperbolic delay discounting equation has been widely accepted in behavior analysis as a good descriptor of the rate of delay discounting, and a similar hyperbolic discounting equation (Rachlin, Raineri, & Cross, 1991) has been shown to account for much of the variance in probability discounting.

v_d = V / (1 + hθ) (1)

In this equation, the discounted value of an outcome (v_d) is equal to the undiscounted value (V) divided by one plus the product of the odds against winning (θ, equal to (1 - probability)/probability) and a discounting parameter (h). This free parameter (h) provides a measure of the tendency to prefer smaller, certain rewards to larger, probabilistic outcomes. High h values indicate a greater reduction in subjective value resulting from uncertainty (more discounting). Low h values indicate little reduction in subjective value (less discounting), with a value of zero indicating no discounting at all. Data from probability discounting experiments are well described by the hyperbolic equation, which often accounts for a very high proportion of the variance.
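
To make the role of h concrete, the following minimal Python sketch (ours, with hypothetical amounts and parameter values) computes v_d from Equation 1 and shows that a larger h reduces subjective value more steeply as the odds against winning grow:

```python
# Equation 1: v_d = V / (1 + h * theta), where theta = (1 - p) / p.
# Illustrative sketch only; the amount and h values below are hypothetical.

def odds_against(p):
    """Convert a stated probability of winning into odds against winning."""
    return (1 - p) / p

def discounted_value(V, p, h):
    """Subjective (discounted) value of a probabilistic outcome of amount V."""
    return V / (1 + h * odds_against(p))

V = 1000.0
for h in (0.5, 2.0):                # lower vs. higher discounting parameter
    for p in (0.9, 0.5, 0.1):
        print(f"h = {h}, p = {p}: v_d = {discounted_value(V, p, h):.2f}")
```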

One common finding in studies of probability discounting is the reverse magnitude effect, where large, probabilistic outcomes are discounted more than small, similarly probabilistic outcomes (Myerson, Green, Hanson, Holt, & Estle, 2003). To date, research on discounting has offered little explanation for this effect while largely ignoring findings in decision-making studies of risk.

Figure 1 shows two hypothetical weighting functions in which stated probabilities have been converted to decision weights (Kahneman & Tversky, 1984). These decision weights are similar to indifference points obtained from studies of discounting. The diagonal line represents expected value, and points along this line indicate subjective values across the range of probabilities. Points above this line indicate risk seeking, with decision weights greater than expected value; points below this line indicate risk aversion, with decision weights less than expected value. The two curved lines show a tendency to underestimate high and moderate probabilities (risk aversion) and to overestimate low probabilities (risk seeking), with line B showing greater risk aversion for high and moderate probabilities than line A. Large magnitudes produce greater deviations of decision weights from stated probabilities in this way (line B); risk aversion increases as magnitude increases (Christensen, Parker, Silberberg, & Hursh, 1998; Hogarth & Einhorn, 1990; Tversky & Kahneman, 1992; see Kuhberger, Schulte-Mecklenbeck, & Perner, 1999, for a meta-analysis of risk studies). This insight from studies of decision making suggests that the reverse magnitude effect in discounting studies may be a byproduct of differential risk aversion: Greater risk aversion at moderate and high probabilities for large magnitudes would produce larger deviations of decision weights from expected value, leading to lower indifference points and more discounting.

[FIGURE 1 OMITTED]

Risk aversion for moderate and high probabilities is a factor in probability discounting studies because the procedures typically ask for choices in one-shot situations where there is a possibility of receiving nothing. Because a single probabilistic choice results in a win or a no-win, a gamble cannot be said to be equal to its expected value or some other index of worth (Lopes, 1981); the only possible outcomes are the full value of winning the gamble or nothing. Previous research suggests that increasing the repetitions of a particular prospect could reduce risk aversion (Redelmeier & Tversky, 1992; Samuelson, 1963). Risk aversion at moderate and high probabilities tends to be reduced, and behavior is more in accordance with choices based on expected value, when there are repeated opportunities to play a gamble (Jou & Shanteau, 1995; Keren & Wagenaar, 1987; Koehler, Gibbs, & Hogarth, 1994; Wedell & Bockenholt, 1994).

Reduction in risk aversion at moderate and high probabilities may nullify the reverse magnitude effect observed in studies of probability discounting. However, data resulting from repeated choice situations present new problems of interpretation. Any change in observed discounting may be a result of other factors examined in choice behavior, including temporal patterning (Kudadjie-Gyamfi & Rachlin, 1996; Rachlin, 1995) and choice bracketing (Read & Loewenstein, 1995; Thaler, 1999). Though it is unclear how indifference points would be affected as a function of temporal patterning and choice bracketing, both predict some change in behavior when multiple choices are made at one time rather than across time. Additionally, because individuals sometimes spontaneously attend to the sum of all individual prospects (Redelmeier & Tversky, 1992), the expected outcome and corresponding magnitude of the choice situation may change. By subdividing risk in situations where individuals may aggregate multiple prospects (similar to the procedure of Keren & Wagenaar, 1987, Experiment 1), risk aversion can be reduced (Samuelson, 1963) without altering expected value. Reduced risk aversion at moderate and high probabilities should lead to higher indifference points at those probabilities and, consequently, decreased discounting parameters relative to probability discounting studies with one-shot situations.

Another common finding in decision-making research involves the reduction in errors of Bayesian reasoning when probabilistic information is presented in terms of frequencies. The prevalence of three cognitive illusions, namely the Overconfidence Bias, the Conjunction Fallacy, and the Base-Rate Fallacy, decreases significantly when quantitative information is presented in terms of frequencies rather than percentages (Cosmides & Tooby, 1996; Gigerenzer, 1994). Briefly, the Overconfidence Bias is the tendency of individuals to overstate their degree of confidence in their own knowledge. The Conjunction Fallacy is the tendency of individuals to judge the conjunction of two events as more likely than either event alone. The Base-Rate Fallacy is the tendency to ignore preexisting probabilities when estimating present or future probabilities. The study of risk by Koehler et al. (1994) suggests that "people do a better job reasoning about probabilistic outcomes when they are prompted to think about outcomes in terms of relative frequencies" (p. 188). The relevant issue for the current study is that probabilistic information, when presented as relative frequencies, is either interpreted or processed differently. In questionnaire studies of probability discounting, the likelihood of a win is most often presented in terms of percentages (Du, Green, & Myerson, 2002; Green, Myerson, & Ostaszewski, 1999; Mitchell, 1999; Rachlin et al., 1991; Richards, Zhang, Mitchell, & de Wit, 1999). Given what appears to be some difficulty understanding percentages, the same information presented as frequencies may lead to different results.

Three conditions (two experimental and one control) were examined. In the control condition, a traditional probability discounting procedure was employed in which odds of winning were presented as percentages. In one experimental condition, the same odds of winning were presented as relative frequencies. In the second experimental condition, the probabilistic option was subdivided into 10 probabilistic prospects, with the outcome per win reduced to maintain the same expected value as in the other two conditions. Overall differences in discounting were expected between the control condition and the first experimental condition; though the direction of change was not predicted, representation of probabilistic information as relative frequencies could alter indifference points. Furthermore, the reverse magnitude effect was expected in the control condition. No effect of magnitude, with discounting comparable across magnitudes, was expected in the second experimental condition as a result of reduced risk aversion.

Method

Participants

Twenty-nine college students, 12 male and 17 female, between 18 and 21 years of age, were recruited for this study. The data from 3 participants (2 male and 1 female) were excluded from analyses because they systematically increased indifference points as probabilities decreased in at least one condition. Two participants failed to complete one sheet of the questionnaire and were therefore excluded from the analysis of indifference points but included elsewhere. Data from the remaining 24 participants were included in all analyses. Participants were recruited from a psychology departmental subject pool and received credit in an introductory-level class for participation.

Materials

A paper-and-pencil questionnaire was employed to determine indifference points for nine probabilities (.99, .95, .90, .75, .50, .25, .10, .05, and .01) at two magnitudes ($10, $1000) in three conditions (One-Shot Percentage, One-Shot Frequency, Repeated Frequency). Participants were asked to imagine a jar containing blue (win) and red (no-win) marbles in different proportions. In each row of the questionnaire, participants chose between two hypothetical alternatives: a smaller monetary amount available with certainty (left column, called the adjusting amount) or a larger monetary amount available according to the probability determined by the proportion of blue marbles (right column, called the standard amount). On a given sheet of the questionnaire, the larger, standard amount remained constant while the adjusting amount varied from $0 to the standard amount in increments of 2.5% of the standard amount. Participants were asked to mark the alternative they preferred on each row of the questionnaire. The order in which the adjusting amounts were presented was random.

In the one-shot percentage condition, the proportion of blue marbles to all marbles was presented in terms of percentages: 99%, 95%, 90%, 75%, 50%, 25%, 10%, 5%, and 1%. In the one-shot frequency condition, the proportion of blue marbles to all marbles was presented in terms of frequencies: 99 out of 100, and so forth. This condition was identical to the one-shot percentage condition in all other respects. In the repeated frequency condition, the proportion of blue marbles to all marbles was also presented as relative frequencies; however, participants were asked to choose between a certain amount and 10 chances (with replacement) at the uncertain amount. These uncertain amounts were reduced so that the expected value was the same as in the one-shot conditions. For instance, the $1000 magnitude in the one-shot frequency condition had an expected value of $500 when 50 out of 100 marbles were blue. Its comparison in this condition was 10 chances to win $100, with 50 blue marbles out of 100 on each of those chances (expected value of $500). The same relative frequencies reported above were used with 1/10 of the magnitude for the uncertain reward. Over 10 opportunities, expected values equaled those from the one-shot conditions.
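
As a quick check of this equivalence (our illustration, not part of the questionnaire), the arithmetic for the example above can be written out directly:

```python
# Expected value of one shot at $1000 vs. 10 shots at $100,
# each with 50 winning (blue) marbles out of 100 (p = .50).
p = 50 / 100
one_shot_ev = p * 1000           # 0.5 * 1000 = 500
repeated_ev = 10 * p * 100       # 10 * 0.5 * 100 = 500
print(one_shot_ev, repeated_ev)  # both 500.0
```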

Procedure

All participants were exposed to the repeated frequency, one-shot percentage, and one-shot frequency conditions in one 1-hr session. Participants were exposed to one of the one-shot conditions first, followed by the repeated frequency condition; the remaining one-shot condition closed out the session. Participants were exposed to both magnitudes in each condition, with the order of presentation remaining consistent across conditions. Because the one-shot percentage and one-shot frequency conditions are most similar, it was important that participants not undergo those procedures consecutively. This controlled for demand characteristics whereby participants might attempt to remain consistent over the two conditions rather than answering honestly. The amount of time and the cognitive load necessary to complete the repeated frequency condition should have effectively prevented use of short-term memory to artificially provide consistent answers across conditions. Half of the participants were exposed to the one-shot percentage condition first and half were exposed to the one-shot frequency condition first. The order in which participants experienced magnitudes (increasing or decreasing) was also counterbalanced.

The questionnaires for each condition/magnitude combination were stapled together into packets. Each packet was numbered and these numbers indicated the order in which they were to be completed. All packets were handed to participants at the beginning of the session in the appropriate order. Participants were instructed to complete all pages in the order in which they were given to them and not to turn back to a page previously completed. They were not allowed to talk or engage in any other activity during the experimental session. All participants were monitored to ensure compliance with these instructions.

Data Analysis

For scoring purposes, the adjusting amounts were rank ordered (from highest to lowest) for each page of the questionnaire: The highest adjusting amount was placed in the top row, the next highest amount below that, and so forth. In cases of a discrete preference switch, preference for the adjusting amount continued as the adjusting amount decreased until a single point at which preference switched to the standard amount, where it remained. In these instances, indifference points were calculated as the arithmetic mean of the lowest adjusting value at which the adjusting alternative was preferred and the highest adjusting value at which the standard alternative was preferred. With a nondiscrete preference switch, preference switched from the adjusting alternative to the standard alternative more than once; that is, preference switched back and forth between the alternatives as the adjusting value decreased. To ensure no systematic bias of inferred indifference points, indifference points in these instances were calculated as the arithmetic mean of (a) the adjusting value at the first preference switch from the adjusting alternative to the standard alternative and (b) the adjusting value at the last such switch. These two rules placed the inferred indifference point equidistant from the adjusting value at which all greater adjusting amounts were preferred and the adjusting value at which all lesser adjusting amounts were rejected.
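
One way to express these scoring rules in code (our own sketch of one interpretation of the rules above, not the procedure actually used to score the questionnaires) assumes each page is a list of adjusting amounts, ordered from highest to lowest, paired with the alternative chosen on that row:

```python
# Infer an indifference point from one questionnaire page.
# rows: list of (adjusting_amount, choice) tuples, ordered from the highest
# to the lowest adjusting amount; choice is "adjusting" (certain amount)
# or "standard" (probabilistic amount).

def indifference_point(rows):
    # Midpoint of the adjusting amounts bracketing each switch from
    # "adjusting" to "standard".
    switch_values = [
        (rows[i][0] + rows[i + 1][0]) / 2
        for i in range(len(rows) - 1)
        if rows[i][1] == "adjusting" and rows[i + 1][1] == "standard"
    ]
    if not switch_values:
        raise ValueError("no adjusting-to-standard switch on this page")
    # Discrete switch: a single crossing point.
    # Nondiscrete switches: mean of the first and last crossing points.
    return (switch_values[0] + switch_values[-1]) / 2

# Example page with a discrete switch between $6 and $4.
page = [(10, "adjusting"), (8, "adjusting"), (6, "adjusting"),
        (4, "standard"), (2, "standard")]
print(indifference_point(page))  # 5.0
```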

Statistical Method

Indifference points were recorded as proportions of the standard amount, and all analyses were conducted with these proportions; references to indifference points therefore indicate the proportion of the standard amount rather than the absolute value (i.e., indifference points of $950 in the $1000 magnitude condition and $9.50 in the $10 condition both equal .95 as a proportion). Repeated-measures comparisons of indifference points were conducted across conditions and magnitudes with parametric analyses. Stated probabilities were converted into odds against [θ = (1 - probability)/probability], and nonlinear regression was used to fit the data to Equation 1. Discounting parameters were calculated for each subject in each condition at each magnitude. Natural logarithm transformations were conducted on these positively skewed discounting parameters to normalize the data and allow for parametric analyses. Linear regressions were conducted on the median indifference points for each condition/magnitude combination as a function of stated probability of winning.
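
The fitting and transformation steps can be sketched as follows (our illustration; the software actually used is not reported here, and the data below are hypothetical values patterned after Table 1):

```python
import numpy as np
from scipy.optimize import curve_fit

# Equation 1 with indifference points expressed as proportions of the
# standard amount, so the undiscounted value V equals 1.
def hyperbolic(theta, h):
    return 1.0 / (1.0 + h * theta)

# Stated probabilities used in the study and their odds-against transform.
probs = np.array([.99, .95, .90, .75, .50, .25, .10, .05, .01])
theta = (1 - probs) / probs

# Illustrative median indifference points (hypothetical values patterned
# after the $10 one-shot percentage means in Table 1).
y = np.array([.86, .79, .80, .69, .47, .40, .22, .16, .09])

(h_hat,), _ = curve_fit(hyperbolic, theta, y, p0=[1.0])
ln_h = np.log(h_hat)                      # natural-log transform used for ANOVA
resid = y - hyperbolic(theta, h_hat)
r_squared = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(f"h = {h_hat:.3f}, ln(h) = {ln_h:.3f}, R^2 = {r_squared:.3f}")
```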

Results

With nine probabilities in each of six conditions (3 conditions X 2 magnitudes), 54 indifference points were obtained from each participant. Of all indifference points obtained, 21% involved nondiscrete preference switches, with a mean of 11.58 nondiscrete preference switches per person. Nondiscrete preference switches occurred about equally in all conditions. Table 1 lists mean indifference points (as a proportion of the standard amount) at each probability for each condition/magnitude combination. Indifference points generally decreased as probability decreased and as magnitude increased.

A 3 (condition) X 2 (magnitude) X 9 (probability) repeated-measures analysis of variance (ANOVA) was conducted on indifference points. Indifference points were highest in the one-shot frequency condition (M = .50), followed by the repeated frequency (M = .49) and one-shot percentage (M = .46) conditions. The main effect of condition was significant, F(2, 46) = 3.52, p < .05, and Scheffe tests revealed significantly higher indifference points, p < .05, for the one-shot frequency condition than for the one-shot percentage condition; none of the other comparisons were statistically significant. Mean indifference points were significantly greater, F(1, 23) = 16.62, p < .05, for the $10 magnitude (M = .51) than for the $1000 magnitude (M = .45). The main effect of probability, F(8, 184) = 510.57, p < .05, was significant, and Scheffe tests revealed statistically significant differences, p < .05, between all pairs of probabilities except .99 versus .95 and .95 versus .90. Of all interactions, only the magnitude X probability interaction was significant, F(8, 184) = 2.11, p < .05.

Median indifference points were plotted for all magnitudes and conditions (see Figure 2). These points were used to calculate discounting parameters for the hyperbolic discounting equation by converting stated probabilities into odds against to obtain θ of Equation 1 (Rachlin et al., 1991). This model accounted for greater than 95% of the variance in all condition/magnitude combinations of Figure 2. In all three conditions, the $1000 standard alternative was discounted more than the $10 standard alternative.

Discounting parameters (h) were obtained for each participant in each condition. Preliminary analyses were conducted on the natural logarithm-transformed discounting parameters to rule out order effects; neither order of condition, F(1, 24) = .87, p > .05, nor order of magnitude, F(1, 24) = 1.13, p > .05, had a significant effect. A 2 (magnitude) X 3 (condition) repeated-measures ANOVA was then calculated. Means for each condition/magnitude combination are shown in Figure 3; the results from the analysis of indifference points were confirmed. There was a significant effect of condition, F(2, 50) = 3.71, p < .05, with Scheffe tests revealing a significant difference between the one-shot percentage (M = .49) and one-shot frequency (M = -.13) conditions, p < .05, but no difference between the repeated frequency condition (M = .08) and either other condition. Discounting was also significantly greater for the $1000 magnitude (M = .46) than for the $10 magnitude (M = -.16), F(1, 25) = 7.63, p < .05.

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

Figure 4 is similar to the hypothetical functions shown in Figure 1 but plots the obtained medians from the current experiment. The top graph of Figure 4 shows the indifference points (equivalent to the decision weights of Figure 1) as a function of stated probability for the $10 magnitude in all conditions; the bottom graph shows the same for the $1000 magnitude. The solid diagonal lines represent expected value. These graphs exhibit the degree of risk aversion and risk seeking in the experimental conditions. A difference between magnitudes appears at the lower probabilities: With the small magnitude, all points below .50 probability fall above the line of expected value, whereas with the large magnitude, all points below .50 probability fall along the line of expected value. Linear regressions with these points were calculated for each magnitude X condition combination, and each analysis accounted for greater than 99% of the variance. Slopes/y-intercepts for the one-shot percentage, one-shot frequency, and repeated frequency conditions with the small magnitude were 0.829/0.090, 0.830/0.103, and 0.842/0.035, respectively. With the large magnitude, slopes/y-intercepts were 0.869/0.018, 0.901/0.031, and 0.881/0.035, respectively.

[FIGURE 4 OMITTED]

Discussion

Analysis of indifference points revealed higher means for the one-shot frequency condition than for the one-shot percentage condition. Indifference points were also higher for the $10 magnitude than for the $1000 magnitude at nearly all probabilities; the only probabilities at which an effect of magnitude was not observed were the two highest (.99 and .95). The same pattern of results was obtained in all three conditions. Discounting parameters were obtained by converting stated probabilities into odds against and fitting indifference points to a hyperbolic equation (Rachlin et al., 1991). This model provided a good fit to the data. Parametric analysis of the natural logarithm-transformed parameters supported the analysis of indifference points: The one-shot frequency condition led to lower discounting parameters than the one-shot percentage condition, with no differences between the other pairs of conditions.

As expected, the one-shot percentage and one-shot frequency conditions led to different rates of discounting. Though these conditions were procedurally identical except for how the distribution of win/no-win marbles was described, discounting parameters were significantly lower when probabilistic information was presented in terms of relative frequencies. Though the pattern of results across magnitudes was similar in both one-shot conditions, this result suggests that how information is presented in studies of probability discounting can significantly affect discounting parameters. Unlike the literature on cognitive illusions, where human performance can be compared to an objective standard (e.g., Bayes' theorem), errors in studies of discounting do not exist because no "correct" answer exists; although exponential discounting is considered normative by some (Kirby, 1997), we know of no prescriptive rate of discounting. Considering that behavior is more self-controlled in the absence of drug use (Baker, Johnson, & Bickel, 2003; Coffey, Gudleski, Saladin, & Brady, 2003; Madden, Petry, Badger, & Bickel, 1997), generalized risky behavior (Crean, de Wit, & Richards, 2000), and pathological gambling (Petry, 2001), the present results suggest that previous studies of probability discounting may have systematically overestimated discounting parameters, or that self-controlled behavior can be enhanced in some instances when probabilistic information is presented as relative frequencies.

This result does not invalidate previous findings in studies of probability discounting. In particular, nothing can be said about results from studies in which probabilities were implied (via spinners; Anderson, Richell, & Bradshaw, 2003; Rachlin, Logue, Gibbon, & Frankel, 1986), stated as fractions (Rachlin, Brown, & Cross, 2000), or expressed in any manner other than explicitly stated percentages or probabilities. Additionally, this result does not question the validity of studies explicitly stating percentages (Du et al., 2002; Green et al., 1999; Rachlin et al., 1991), because those studies compared rates of probability discounting across different magnitudes or populations. Relative rate of discounting was what mattered in those studies, and nothing in the present results suggests that those relationships should change when information is presented as relative frequencies. Rather, these results suggest that there may be an important distinction between discounting that results from the subjective transformation of amounts and stated percentages into decision weights and discounting that results from a history of experience with a reinforcement contingency (i.e., relative frequencies; Rachlin et al., 1986).

Minimally, how probabilistic information is presented must be addressed in future studies of probability discounting. Do education and experience with probabilities result in a convergence of how percentages and relative frequencies are interpreted? Conversely, are individuals with less education or experience with probabilities more likely to exhibit a difference as a result of how the information is presented? The inability of medical students at prestigious institutions to avoid the base-rate fallacy in the medical diagnosis problem (Cosmides & Tooby, 1996) suggests that education alone may not bring interpretations of percentages and frequencies into agreement. However, a doctor who is both educated and experienced in the medical diagnosis problem may be less likely to fall victim to the fallacy. The questions posed above are relevant for studies of both discounting and choice.

Another analysis of these data involved the effect of magnitude. The repeated frequency condition was expected to eliminate the reverse magnitude effect observed in previous studies of probability discounting. We believed that subdividing risk would reduce the strong risk aversion observed with large magnitudes, particularly at moderate and high probabilities, and that rates of discounting across magnitudes would therefore be similar. This was not observed; the large magnitude was discounted more than the small magnitude. Although the differences between magnitudes in each condition did not reach significance, they were all in the same direction and of approximately the same degree (see Figure 3). In the one-shot percentage, one-shot frequency, and repeated frequency conditions, there was an increase of 0.53, 0.65, and 0.57 natural logarithm units of h, respectively, from the small to the large magnitude. Thus, the expected effect of magnitude was observed in the one-shot percentage condition. The absence of this effect might have been expected in the repeated frequency condition because behavior has been shown to be more in accordance with choices based on expected value when probabilistic opportunities are repeated (Jou & Shanteau, 1995; Keren & Wagenaar, 1987; Koehler et al., 1994; Wedell & Bockenholt, 1994). By reducing the possibility of receiving nothing without altering the expected value, this condition could have addressed the possibility that the reverse magnitude effect was a byproduct of greater risk aversion with large magnitudes. However, an effect of magnitude comparable to that in the one-shot percentage condition was obtained.

A possible explanation for the observed data could be that degree of risk aversion is not the factor that drives differential effects of magnitude in studies of probability discounting. However, a more likely explanation is that, with the current procedure, participants did not spontaneously attend to the sum of all prospects. Rather, they may have "recognize[d] the impact of aggregation, albeit insufficiently" (Redelmeier & Tversky, 1992, p. 192). If participants did not fully aggregate the outcomes or considered each of the 10 outcomes in the repeated frequency conditions completely separately, risk aversion would not have been affected and there is no reason to expect similar results across magnitudes. We are unaware of literature that specifically addresses the conditions under which people spontaneously aggregate numerous outcomes, but suspect therein lies the manner in which subdivision of risk can effectively reduce differential risk aversion at moderate and high probabilities.

Whereas the large magnitude outcome was still discounted more than the small magnitude outcome in the present study, Figure 4 suggests that the effect of magnitude was not a result of risk aversion, and thus it would not be expected to disappear in the repeated frequency condition. The points along the upper right (high probabilities) fall below the line of expected value at both magnitudes, suggesting a similar degree of risk aversion at high probabilities. In fact, the analysis of indifference points revealed significant differences between magnitudes across all conditions at all probabilities except .99 and .95. This is not likely a ceiling effect for two reasons: There is no comparable floor effect, and Christensen et al. (1998) found a strong and positive relationship between risk aversion and magnitude, particularly at high probabilities. Because probability was the dependent variable in that study, indifference points across the full range of probabilities were not available for all magnitudes; the range of probabilities observed with the highest magnitude ($10,000) was approximately .50 to .90. Therefore, there are no data for comparison at low probabilities and high magnitudes.

For the current study, the observed difference between degrees of discounting at different magnitudes appears to be a result of the degree of risk seeking at low probabilities. With the small magnitude, indifference points at low probabilities generally fall above the expected value line, exhibiting an overweighting of those probabilities. With the large magnitude, most indifference points at low probabilities fall along the line of expected value and exhibit no systematic bias. Furthermore, y-intercepts from the linear regression lines are higher for the small magnitude conditions, and indifference points at high probabilities are comparable across magnitudes. These observations suggest no difference between magnitudes for likely events and greater risk seeking for small magnitudes for unlikely events. Small magnitudes would then have lower discounting parameters than large magnitudes not because there is less risk aversion at high probabilities, but because there is more risk seeking at low probabilities. Rachlin, Siegel, and Cross (1994) suggest a probability threshold below which gambles are valued higher than their expected value. At extremely low probabilities (beyond the threshold), the expected value of a small or moderate reward is so low that individuals may prefer the opportunity, however unlikely, of receiving the full amount of the reward. With large magnitudes, the expected value may be sufficient for individuals to avoid risk and accept a certain outcome near the expected value. This type of interaction between magnitude and probability threshold may then explain the differential effect of magnitude.

The current study examined two relevant issues in studies of decision making and applied them to an examination of probability discounting. First, a reduction in risk aversion as a function of subdividing risk was expected to result in no differential effect of magnitude. This was not found; the reverse magnitude effect was observed in all three conditions. Subdividing risk may result in reductions in discounting, but this may require a procedure in which the participant is more likely to interpret the outcome of the choice as the sum of the multiple independent outcomes; the current procedure apparently did not facilitate this process. Second, less discounting was observed when probabilities were presented as relative frequencies rather than percentages. It is unclear why this should be, considering that essentially the same information was presented in both the percentage and frequency conditions. If interpreting percentages required an additional mathematical step that placed a higher cognitive load on participants, this load may itself have resulted in greater probability discounting; Hinson, Jameson, and Whitney (2003) found increased delay discounting as a function of load on working memory. An alternative explanation is that participants are more likely to interpret percentages as one-shot events, whereas relative frequencies facilitate a multiple-shot perspective. There are likely numerous other reasonable explanations, and future studies should attempt to clarify the basic process by which probability discounting is affected by the form of quantitative information.

References

ANDERSON, I. M., RICHELL, R. A., & BRADSHAW, C. M. (2003). The effect of acute tryptophan depletion on probabilistic choice. Journal of Psychopharmacology, 17, 3-7.

BAKER, F., JOHNSON, M. W., & BICKEL, W. K. (2003). Delay discounting in current and never-before smokers: Similarities and differences across commodity, sign, and magnitude. Journal of Abnormal Psychology, 112, 382-392.

CHRISTENSEN, J., PARKER, S., SILBERBERG, A., & HURSH, S. (1998). Trade-offs in choice between risk and delay depends on monetary amounts. Journal of the Experimental Analysis of Behavior, 69, 123-139.

COFFEY, S. F., GUDLESKI, G. D., SALADIN, M. E., & BRADY, K. T. (2003). Impulsivity and rapid discounting of delayed hypothetical rewards in cocaine-dependent individuals. Experimental and Clinical Psychopharmacology, 11, 18-25.

COSMIDES, L., & TOOBY, J. (1996). Are humans good statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1-73.

CREAN, J. P., DE WIT, H., & RICHARDS, J. B. (2000). Reward discounting as a measure of impulsive behavior in a psychiatric outpatient population. Experimental and Clinical Psychopharmacology, 8, 155-162.

DU, W., GREEN, L., & MYERSON, J. (2002). Cross-cultural comparisons of discounting delayed and probabilistic rewards. The Psychological Record, 52, 479-492.

GIGERENZER, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright & P. Ayton (Eds.), Subjective probability. Chichester, UK: John Wiley & Sons.

GREEN, L., MYERSON, J., & OSTASZEWSKI, P. (1999). Amount of reward has opposite effects on the discounting of delayed and probabilistic outcomes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 418-427.

HINSON, J. M., JAMESON, T. L., & WHITNEY, P. (2003). Impulsive decision making and working memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29, 298-306.

HOGARTH, R. M., & EINHORN, H. J. (1990). Venture theory: A model of decision weights. Management Science, 36, 780-803.

JOU, J., & SHANTEAU, J. (1995). Gestalt and dynamic processes in decision making. Behavioural Processes, 33, 305-318.

KAHNEMAN, D., & TVERSKY, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.

KEREN, G., & WAGENAAR, W. A. (1987). Violation of utility theory in unique and repeated gambles. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 387-391.

KIRBY, K. N. (1997). Bidding on the future: Evidence against normative discounting of delayed rewards. Journal of Experimental Psychology: General, 126, 54-70.

KOEHLER, J. J., GIBBS, B. J., & HOGARTH, R. M. (1994). Shattering the illusion of control: Multi-shot versus single-shot gambles. Journal of Behavioral Decision Making, 7, 183-191.

KUDADJIE-GYAMFI, E., & RACHLIN, H. (1996). Temporal patterning in choice among delayed outcomes. Organizational Behavior and Human Decision Processes, 65, 61-67.

KUHBERGER, A., SCHULTE-MECKLENBECK, M., & PERNER, J. (1999). The effects of framing, reflection, probability, and payoff on risk preference in choice. Organizational Behavior and Human Decision Processes, 78, 204-231.

LOPES, L. L. (1981). Notes, comments, and new findings: Decision making in the short run. Journal of Experimental Psychology: Human Learning and Memory, 7, 377-385.

MADDEN, G. J., PETRY, N. M., BADGER, G. J., & BICKEL, W. K. (1997). Impulsive and self-controlled choices in opioid-dependent patients and non-drug-using control participants: Drug and monetary rewards. Experimental and Clinical Psychopharmacology, 5, 256-262.

MAZUR, J. (1987). An adjusting procedure for studying delayed reinforcement. In M. Commons, J. Mazur, J. Nevin, & H. Rachlin (Eds.), The effect of delay and of intervening events on reinforcement value (pp. 55-73). Hillsdale, NJ: Lawrence Erlbaum.

MITCHELL, S. H. (1999). Measures of impulsivity in cigarette smokers and nonsmokers. Psychopharmacology, 146, 455-464.

MYERSON, J., GREEN, L., HANSON, J. S., HOLT, D., & ESTLE, S. J. (2003). Discounting delayed and probabilistic rewards. Journal of Economic Psychology, 24, 619-635.

PETRY, N. M. (2001). Pathological gamblers, with and without substance abuse disorders, discount delayed rewards at high rates. Journal of Abnormal Psychology, 110, 482-487.

RACHLIN, H. (1995). Self-control: Beyond commitment. Behavioral and Brain Sciences, 18, 109-159.

RACHLIN, H., BROWN, J., & CROSS, D. (2000). Discounting in judgment of delay and probability. Journal of Behavioral Decision Making, 13, 145-159.

RACHLIN, H., LOGUE, A. W., GIBBON, J., & FRANKEL, M. (1986). Cognition and behavior in studies of choice. Psychological Review, 93, 33-45.

RACHLIN, H., RAINERI, A., & CROSS, D. (1991). Subjective probability and delay. Journal of the Experimental Analysis of Behavior, 55, 233-244.

RACHLIN, H., SIEGEL, E., & CROSS, D. (1994). Lotteries and the time horizon. Psychological Science, 5, 390-393.

READ, D., & LOEWENSTEIN, G. (1995). Diversification bias: Explaining the discrepancy in variety seeking between combined and separated choices. Journal of Experimental Psychology: Applied, 1, 34-49.

REDELMEIER, D. A., & TVERSKY, A. (1992). On the framing of multiple prospects. Psychological Science, 3, 191-193.

RICHARDS, J. B., ZHANG, L., MITCHELL, S. H., & DE WIT, H. (1999). Delay or probability discounting in a model of impulsive behavior: Effect of alcohol. Journal of the Experimental Analysis of Behavior, 71, 121-143.

SAMUELSON, P. A. (1963). Risk and uncertainty: A fallacy of large numbers. Scientia, 98, 108-113.

THALER, R. H. (1999). Mental accounting matters. Journal of Behavioral Decision Making, 12, 183-206.

TVERSKY, A., & KAHNEMAN, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297-323.

WEDELL, D. H., & BOCKENHOLT, U. (1994). Contemplating single versus multiple encounters of a risky prospect. American Journal of Psychology, 107, 499-518.

RICHARD YI and WARREN K. BICKEL

University of Vermont

This research was funded in part by National Institute on Drug Abuse Grants R01 DA11692 and T32 DA07242. The authors thank Xochitl de la Piedad, Federico Sanabria, Matthew W. Johnson, Kirstin M. Gatchalian, and Amy Prue for their assistance with this manuscript. Both authors are now located at the University of Arkansas for Medical Sciences in Little Rock. Correspondence should be addressed to Richard Yi, Center for Addiction Research, Department of Psychiatry, College of Medicine, UAMS, Slot 843, Little Rock, AR 72205. (E-mail: ryi@uams.edu).
Table 1
Means of Indifference Points (as Proportion of Standard Amount)
for Each Condition X Magnitude Combination

                 One-Shot Percentage   One-Shot Frequency   Repeated Frequency
Probability         $10      $1000        $10      $1000       $10      $1000

0.99               .858       .850       .923       .859      .875       .842
0.95               .788       .791       .862       .815      .844       .818
0.90               .802       .724       .840       .761      .812       .771
0.75               .693       .602       .738       .648      .711       .694
0.50               .473       .403       .546       .496      .560       .481
0.25               .400       .244       .372       .279      .389       .280
0.10               .216       .121       .243       .172      .247       .138
0.05               .156       .081       .177       .124      .157       .084
0.01               .094       .042       .092       .048      .077       .053