Molar and molecular contingencies and effects of punishment in a human self-control paradigm.

Self-control is often defined as the choice of a large, delayed reinforcer over a small, more immediate reinforcer. Impulsiveness is defined as the opposite choice (e.g., Logue, 1988; Rachlin & Green, 1972). In self-control studies using adult human subjects, responding is typically highly self-controlled (e.g., Belke, Pierce, & Powell, 1989; Logue, King, Chavarro, & Volpe, 1990; Logue, Pena-Correal, Rodriguez, & Kabela, 1986; Millar & Navarick, 1984) and often well described by molar maximization accounts (see Logue et al., 1990), with four types of exceptions. First, when responding is negatively reinforced, humans have tended to respond impulsively; that is, immediate brief escape from aversive noise is preferred over longer but delayed escape from aversive noise (Navarick, 1982; Solnick, Kannenberg, Eckerman, & Waller, 1980). Second, self-control may be compromised when a distracting stimulus (e.g., a radio; Logue et al., 1990) or an aversive stimulus (e.g., loud beeps; Flora, Schieferecke, & Bremenkamp, 1992) occurs while subjects' responding is maintained by positive reinforcement. Third, humans may be impulsive when food is the reinforcer (Forzano & Logue, 1992). Fourth, impulsive responding occurs when it produces a reinforcement density greater than the reinforcement density produced by self-controlled responding (e.g., Flora & Pavlik, 1992; Millar & Navarick, 1984; Navarick, 1986). Density of reinforcement is defined as the amount of reinforcement divided by the total time between reinforcements, including any prereinforcer delay, reinforcement delivery period, postreinforcer delay, and time spent producing the reinforcer (often, simply, points per second per trial).
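
This density calculation can be expressed as a small helper function (a sketch; the function name is ours, not from the literature). The illustrative values below are the per-trial parameters used later in Experiment 1 of the present study: 2 points delivered over a 1-s reinforcement period versus 10 points after a 19-s delay plus a 1-s delivery period.

```python
def reinforcement_density(points, prereinforcer_delay=0.0,
                          delivery_period=0.0, postreinforcer_delay=0.0,
                          production_time=0.0):
    """Amount of reinforcement divided by the total time between
    reinforcements: prereinforcer delay + reinforcement delivery
    period + postreinforcer delay + time spent producing the
    reinforcer (all in seconds)."""
    total_time = (prereinforcer_delay + delivery_period +
                  postreinforcer_delay + production_time)
    return points / total_time

# Per-trial densities for the two choices used in Experiment 1:
impulsive = reinforcement_density(2, delivery_period=1)      # 2.0 points/s
self_control = reinforcement_density(10, prereinforcer_delay=19,
                                     delivery_period=1)      # 0.5 points/s
```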

In free-operant procedures where self-controlled responding has occurred and the results are described well by molar maximization, the self-controlled response has also produced the greatest density of reinforcement. For example, Sonuga-Barke, Lea, and Webley (1989) used a concurrent-chains schedule with an initial VI 10-s link to study self-control in children. Terminal-link responding produced one reinforcer after 10 s for the impulsive response or two reinforcers after 20, 30, 40, or 50 s for the self-control choice. Responding by the 12-year-old children (the oldest group studied) was almost exclusively impulsive when impulsive responding produced the greatest density of reinforcement, calculated with the VI 10-s link (conditions with 40- and 50-s terminal links), and tended to be self-controlled when self-controlled responding produced the greater density of reinforcement (conditions with 20- and 30-s terminal links). In a between-subjects experiment, Flora (1990, Experiment 5) gave college students 30 choices between 4 points delivered immediately (impulsive) or 10 points after a delay of 10 s (self-control) following either a random-interval (RI) 1.5-s or an RI 30-s schedule. The mean number of self-controlled choices during the final 15 trials was 5.75 for subjects in the RI 1.5-s group and 15.0 for subjects in the RI 30-s group. It appears, then, that reinforcement density may control choice in self-control studies using human subjects and free-operant procedures. Because initial intervals produce a relatively greater decrease in the density of reinforcement for the impulsive choice than for the self-control choice, self-controlled responding often occurs when VI schedules have been used (e.g., Belke et al., 1989; Logue et al., 1986).

In discrete-trial procedures used to study self-control choice in humans, when there have been postreinforcer delays following impulsive responses to equate trial durations for impulsive and self-control responses, the density of reinforcement has been greatest for the self-control response and self-control responding has occurred (e.g., Flora & Pavlik, 1992; Navarick, 1986). Likewise, self-control responding has been observed when a constant prereinforcer delay has been added to both the impulsive and self-control alternatives, resulting in a relatively greater decrease in the overall density of reinforcement for the impulsive choice and a greater density of reinforcement for the self-control choice (Flora & Pavlik, 1992; Navarick, 1986). In discrete-trial procedures without postreinforcer or prereinforcer delays, the density of reinforcement has been greatest for the impulsive response and impulsive responding has occurred (Flora & Pavlik, 1992; Millar & Navarick, 1984; Navarick, 1986). [The presence or absence of postreinforcer delays generally does not affect pigeons' preference for small but immediate reinforcement, but prereinforcer delays do affect choice in pigeons (e.g., Logue, Smith, & Rachlin, 1985).]

Because impulsive responding by humans seems to occur only when it produces the greatest density of reinforcement (excluding studies using food reinforcers and negative reinforcement), and is thus the most economically adaptive choice (at least given the subject's limited contact with the contingencies), it appears to be a misnomer to call such responding "impulsive." Further, in situations where impulsive responding does not occur, either an interval must pass between contingent impulsive responses, as in free-operant procedures, or a prereinforcer and/or postreinforcer delay must pass between impulsive responses, as in discrete-trial procedures.

Implicit in, and perhaps a partially defining feature of, many "impulsive" behaviors is that impulsive behaviors may and often do occur in reinforced response bursts or "binges" (e.g., impulsive gambling, shopping sprees, impulsive eating and drinking binges). Initial intervals in free-operant procedures and prereinforcer and/or postreinforcer delays in discrete-trials procedures preclude the possibility of impulsive "binges" relative to self-control responses, and therefore procedures that use such intervals and/or delays may not provide the best possible human experimental model of choice between impulsiveness and self-control.

Paralleling such impulsive behavior, Rachlin (Rachlin, 1992; Raineri & Rachlin, 1993) has recently proposed a model of "addictive behavior" which suggests that, with an increasing rate of engaging in a behavior that eventually becomes addictive, the value of reinforcement for that behavior increases. Eventually then, at the molecular level (temporally immediate), the value of the reinforcer maintaining the addictive behavior may be greater than the value of reinforcers for behaviors that ultimately may have greater value at the molar level (temporally extended). Similarly, an impulsive response binge (for example, a drinking binge) may temporarily produce a high reinforcement density at the molecular level but ultimately result in a decreased level of reinforcement at the molar level (e.g., possible job loss).

Experiment 1 attempted to establish a human self-control paradigm that would allow impulsive response "binges" to produce a greater reinforcement density at the molecular level but in which self-controlled responding would produce the greater reinforcement density at the molar level. A counterbalanced within-subject discrete-trial paradigm was used. The delay condition used the standard discrete-trial procedure, with postreinforcer delays that temporally equated impulsive trials with self-control trials. In the no delay condition there were no postreinforcer delays following an impulsive response, which allowed impulsive response bursts. In the no delay condition, however, there was a "wait" period after every 10 trials equal to the product of the postreinforcer delay used in the delay condition (19 s) and the number of impulsive responses made on the preceding 10 trials. Impulsive response binges in the no delay condition thus produced a greater reinforcement density at the molecular, or trial-by-trial, level but decreased the total possible amount of reinforcement and therefore reduced reinforcement density at the molar, or session, level.

Experiment 1

Method

Subjects

The subjects were 18 general psychology students at Fort Hays State University who gained extra credit by participating in the experiment.

Apparatus

The apparatus used for the experiment was a Commodore 128 computer interfaced with two blue push buttons. The buttons protruded from a small blue platform (10 cm high by 25 cm long by 10 cm wide). One button was located at the center of the front half and one button was located at the center of the rear half of the platform. The platform was positioned directly in front of the computer monitor. The subject sat directly in front of the monitor. The computer keyboard was placed to the right of the monitor and not used by the subjects.

Responses on the front button, referred to as "impulsive choices," always produced 2 points immediately. Responses on the rear button, referred to as "self-control choices," always produced 10 points after 19 s. The locations of the self-control and impulsive choices were purposely not counterbalanced so that impulsive response bursts might be facilitated by the location of the button that produced immediate reinforcement.

Procedure and Design

Nine subjects, 4 male, 5 female, served in the no delay-delay group. Nine subjects, 5 male, 4 female, served in the delay-no delay group. Subjects were assigned to alternating groups by the order in which they signed up for participation.

The experiment was conducted in a well-lit 10 m x 10 m room, which was bare except for a table, two chairs, the computer, and a sign above the computer monitor that said: "The person who earns the most points wins $20.00, Second place wins $10.00, Third wins $5.00." Responses were defined as closure of a microswitch, produced by an appropriate button press.

Upon arriving at the experimental room, all subjects were first assigned to conditions and allowed to read and sign an informed consent form. The subjects then were read the following instructions:

I will be in the hall during the experiment and I cannot answer any questions once the session starts. All of the instructions you will need will be given by the computer. If the computer reads "please wait" then please remain in your seat and wait. Are you ready?

Once the subject indicated that he/she was ready, the experimenter pressed a key on the computer to start the experiment, went outside the experimental room, and closed the door. The computer screen immediately turned red and "This experiment lasts 20 minutes" was printed on the upper right corner of the screen for 2 s. (This statement was somewhat inaccurate because it did not take into account the time needed to read instructions, the eight forced choice trials, or subjects' response latencies.) The screen then turned green and the following instructions were printed in black on the screen:

Now, your task is to earn as many points as possible. The person who earns the most points wins $20.00, second place wins $10.00, third wins $5.00. The currently winning I.D. numbers will be posted on the door. If the computer reads 'please wait,' then please sit and wait. You may have to wait for a few minutes. The computer will always keep track of points earned, but the computer may not always display your point total. When the computer reads 'push the front button,' push the button on the front of the platform. When the computer reads 'push the rear button,' push the button on the rear of the platform. When the computer reads 'choose a button and push it,' you should push the button of your choice. Please push a button when the computer asks you to do so. Now, press the front button when you are ready to start.

After the front button was pressed, the screen immediately cleared and turned blue and remained blue for the remainder of the experiment. All subjects started with one impulsive forced choice trial, followed by one self-control forced choice trial, followed again by an impulsive and self-control forced choice trial that was followed by 30 choice trials. After the 30 choice trials the cycle of forced choices and 30 choice trials was repeated for the new condition producing a total of 60 choice trials in the experiment.

For the no delay-delay subjects, when the screen turned blue, "press the front button" was printed on the center of the screen, prompting an impulsive forced choice.

Once the front button was pressed the prompt was cleared from the screen, a 50-db, 0.05-s beep sounded, 2 points were added to the subject's score (which was initially 0), and "Your points=" followed by the subject's score was on the screen for 1 s. Thus, the impulsive choice produced an immediate reinforcement period in which 2 points were delivered over 1 s (or 2 points per 1 s, a single trial density of 2.0/s).

After the forced choice, impulsive reinforcement period, the screen immediately cleared, and "press the rear button" was printed on the center of the screen. Once the rear button was pressed, the prompt was cleared, and "please wait" was printed on the center of the screen for 19 s. After 19 s, the screen cleared, a 50-db 0.05-s beep sounded, 10 points were added to the score, and "Your points=" followed by the subject's total score was printed on the screen for 1 s. Thus, the self-control choice produced a delay of 19 s followed by 10 points delivered over 1 s. This combination produced 10 points delivered over 20 s (a single trial reinforcement density of 0.5/s).

After the forced choice self-control reinforcement period, the screen again cleared and another impulsive forced choice trial occurred, followed by another self-control forced choice trial. The second self-control forced choice reinforcement period was followed by the screen clearing, and the prompt "chose a button and push it" appearing on the center of the screen which prompted the first choice trial. Then, for 30 trials once a button was pressed the same programmed contingencies occurred for self-control and impulsive choices as in the forced choice trials. Except for the periods following Trials 10, 20, and 30 (see below), after the appropriate reinforcement period the screen always cleared and prompted a choice [ILLUSTRATION FOR FIGURE 1 OMITTED].

After the forced choice self-control reinforcement period, the screen again cleared and another impulsive forced choice trial occurred, followed by another self-control forced choice trial. The second self-control forced choice reinforcement period was followed by the screen clearing and the prompt "choose a button and push it" appearing on the center of the screen, which prompted the first choice trial. Then, for 30 trials, once a button was pressed the same programmed contingencies occurred for self-control and impulsive choices as in the forced choice trials. Except for the periods following Trials 10, 20, and 30 (see below), after the appropriate reinforcement period the screen always cleared and prompted a choice (see Figure 1).

There was a wait period after Trial 10 that was increased by 19 s for each impulsive choice that was made during Trials 1-10. Impulsive responses during forced choices did not add to the wait period. If no impulsive choices were made there was no wait period. Identical contingent wait periods occurred after Trials 20 and 30 for any impulsive choices that were made during Trials 11-20 and 21-30. During the wait periods the screen cleared and "please wait" was printed on the center of the screen.

The wait periods were used to hold the length of the experimental sessions constant regardless of how many impulsive choices occurred. The wait periods thus decreased the density of reinforcement for impulsive choices at the molar, or session, level (but not the molecular level) because each impulsive choice ultimately produced 19 s during which reinforcement was unavailable. Therefore, the density of reinforcement for self-control choices (which did not affect, and were not affected by, wait periods) at the session level was the same as it was at the per trial level, 0.5/s. However, the wait periods changed the density of reinforcement for impulsive choices from a per trial density of 2.0/s to a session density of 0.1/s (2 points for each impulsive choice divided by the 1-s reinforcement period plus the 19 s added to the wait periods for each impulsive choice). Because another virtually immediate reinforced impulsive response could follow each impulsive response (except after Trials 10, 20, and 30, when the wait periods occurred), reinforced response bursts, or impulsive binges, were possible in this condition.
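
This session-level arithmetic can be verified with a short calculation, assuming (for illustration only) a subject who chooses exclusively one alternative on all 30 choice trials, and ignoring forced choice trials and response latencies:

```python
# Exclusively impulsive: each choice yields 2 points over a 1-s
# reinforcement period, and each impulsive choice adds 19 s to the
# wait periods after Trials 10, 20, and 30.
impulsive_points = 30 * 2                    # 60 points
impulsive_time = 30 * 1 + 30 * 19            # 600 s of session time
molar_impulsive = impulsive_points / impulsive_time      # 0.1 points/s

# Exclusively self-controlled: each choice yields 10 points after a
# 19-s delay plus a 1-s reinforcement period; wait periods stay at 0.
self_control_points = 30 * 10                # 300 points
self_control_time = 30 * (19 + 1)            # 600 s of session time
molar_self_control = self_control_points / self_control_time  # 0.5 points/s
```

The two exclusive response patterns occupy the same 600 s of session time, but exclusive self-control earns five times the points, which is exactly the molar contrast the wait periods were designed to create.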

After the wait period following Trial 30, subjects again had 4 forced choice trials and 30 choice trials as above except the delay condition was in effect for both the forced choice trials and the choice trials (see below).

The delay condition was identical to the no delay condition except for the following changes. There were no wait periods. Following the reinforcement period after an impulsive choice, a postreinforcer delay occurred: the screen cleared and "please wait" was printed on the center of the screen for 19 s, followed by a choice prompt (see Figure 1). Therefore, for this delay condition the reinforcement density for impulsive choices was 0.1/s at both the per trial and per session levels, and the reinforcement density for self-control choices was 0.5/s at both the per trial and per session levels.

The delay-no delay group received trials identical to those of the no delay-delay group except in counterbalanced order; that is, 4 forced choice trials followed by 30 choice trials in the delay condition, then 4 forced choice trials and 30 choice trials in the no delay condition, with wait periods occurring after Trials 40, 50, and 60.

Results and Discussion

Table 1 and Figures 2 and 3 present individual self-control choices by blocks of 10 trials for the delay-no delay and the no delay-delay groups.

Figure 4 presents the mean total self-control responses during Choice Trials 1-30 and 31-60 for all subjects in the delay-no delay and no delay-delay groups. A mixed ANOVA was conducted with the total number of self-control choices as the dependent variable, delay as the within-subject variable, and sequence as the between-subject variable. There was a statistically significant effect for delay, F(1, 16) = 8.925, p < .01. Table 1 and Figures 2 and 3 suggest that the effect of delay was behaviorally significant at the level of the individual subject. Except for Subjects 9 and 18, who responded impulsively throughout the experiment (Subjects 9 and 18 are included in the statistical analysis, Figure 4, and Table 1, but not in Figures 2 or 3), all subjects made at least 18 (60%) self-control responses during the 30 delay condition trials. When the self-control choice produced the greatest reinforcement density at both the trial and session levels (molecular and molar levels), self-control choice occurred.
Table 1

Number of Self-Control Choices by Blocks of 10 Trials for Delay-No Delay
and No Delay-Delay Groups in Experiment 1

                             Trials
Subject    1-10   11-20   21-30   31-40   41-50   51-60

           Delay-No Delay Group

1             7      10      10      10       8       4
2             8       9       8       9       9       3
3             7       9      10      10      10       0
4             8      10       9       0       2       4
5             6       2      10       2       5       5
6             6      10      10      10      10      10
7             7       9      10      10      10      10
8             8      10       9      10      10      10
9             2       0       0       1       0       0
SD         1.88    3.84    3.24    4.46    3.86    4.04
Mean       6.56    7.67    8.44    6.89    7.11    5.11

           No Delay-Delay Group

10            6       0       0       5       8       9
11            2       1       4       8       9      10
12            1       1       0       7       8       9
13            0       2       5       7      10       9
14            7       4      10       9       9      10
15            1       3      10      10       6       9
16            1       1       2       7      10       9
17            8       2       3       8       9      10
18            1       1       1       1       0       1
SD         3.08    1.22    3.86    2.62    3.12    2.83
Mean       3.00    1.67    3.89    6.89    7.67    8.44
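
As a quick arithmetic check, the summary rows of Table 1 can be reproduced from the individual entries (the SD row corresponds to the sample standard deviation). For the Trials 1-10 block:

```python
from statistics import mean, stdev

# Trials 1-10 entries from Table 1, subjects in row order.
delay_no_delay = [7, 8, 7, 8, 6, 6, 7, 8, 2]    # Subjects 1-9
no_delay_delay = [6, 2, 1, 0, 7, 1, 1, 8, 1]    # Subjects 10-18

print(round(mean(delay_no_delay), 2), round(stdev(delay_no_delay), 2))
# 6.56 1.88
print(round(mean(no_delay_delay), 2), round(stdev(no_delay_delay), 2))
# 3.0 3.08
```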


There was no statistically significant main effect for sequence. However, there was a significant sequence by delay interaction, F(1, 16) = 24.39, p < .001, which is best seen in Figure 4. Table 1 and Figures 2 and 3 suggest that the interaction of delay and sequence was also behaviorally significant at the level of the individual subject. All subjects (except Subject 9) in the delay-no delay group developed a strong preference for the self-control choice by the end of the first 30 trials, and this preference was generally maintained, with the exception of Subjects 4 and 5, during the no delay condition. Self-control preference for Subjects 1, 2, and 3 was maintained during the first 20 trials, but not during the final 10 trials, of the no delay condition. All subjects (except Subject 18) in the no delay-delay group increased self-control choices during the delay condition. All subjects (except Subject 14) in the no delay-delay group made a majority of impulsive choices during the no delay condition, and these subjects (except Subject 18, but including Subject 14) made a majority of self-control choices during the delay condition.

Taken together, these results suggest that when the self-control choice produces the greatest reinforcement density at both the molecular and molar levels (that is, during the delay conditions), self-control choice will occur. Further, once the molar self-control discrimination has been made, self-control preference may persist even when impulsive responding produces the greatest reinforcement density at the molecular level (delay-no delay group). However, control by the molar reinforcement contingency (e.g., the greater session-wide reinforcement density produced by self-controlled responding) appears to have occurred for Subjects 14 and 15 in the no delay-delay group during the no delay condition. This suggests that perhaps with extended exposure to the no delay condition other subjects would develop the molar discrimination even though impulsive responding produces the greatest density at the molecular level (see Branch, 1991; Wanchisen, Tatham, & Hineline, 1992, for related discussion).

Nevertheless, the majority of the current data most strongly indicates that impulsive choices will occur when impulsive choice produces the greatest density of reinforcement at the molecular level, but that this impulsive preference may be precluded if self-control preference develops during a previous time when self-control choice produces the greatest density of reinforcement at both the molar and molecular level. That is, once molar discrimination is learned it may override molecular contingencies.

The postreinforcer delays that occurred following impulsive choices in the delay conditions could be conceptualized as negative punishment, for example, time-out for impulsive responding. (The delays to reinforcement following a self-control choice could also be conceptualized as a time-out. However, delayed reinforcement is a defining property of "self-control" so such a conceptualization seems spurious.) If the postreinforcer delays following an impulsive choice in the delay conditions acted as a time-out for impulsive responding then it could be concluded that punishment for impulsive responding reduces impulsive responding, and the subsequent increase in self-control is often maintained even when the punishment contingency is removed. Experiment 2 sought to examine explicitly the effects of punishment on impulsive choice.

Experiment 2

Aversive noise was selected as the punishing stimulus for Experiment 2 because aversive noise has been well established as an effective punisher in humans (see Bailey, 1983, for review). For example, Azrin (1958) found that 110-db white noise contingent upon observing responses produced large and stable decreases in responding by the soldier subjects. In Experiment 2, aversive noise was contingent on impulsive responses in the punishment condition. The method of Experiment 2 was identical to Experiment 1 except for the following changes.

Method

Subjects

Subjects were students in the experimenter's master's-level Learning and Motivation course and students in the experimenter's undergraduate-level Experimental Methods course whose participation gained them extra credit. Seven subjects, 1 male and 6 female, served in the punishment-no punishment group. Eight subjects, 1 male and 7 female, served in the no punishment-punishment group.

Procedure and Design

Before signing up for participation prospective subjects were informed that the study involved very loud noises, and they were again informed of this immediately prior to signing the consent form which contained a statement that loud noise exposure was a part of the study.

The no punishment condition was identical to the no delay condition in Experiment 1. The punishment condition was identical to the no punishment condition except that, during the 1-s reinforcement period following an impulsive response, the 50-db, 0.05-s beep was replaced with a 92-db, 0.50-s high-pitched tone (reported as aversive by psychology faculty), and "NO!" was printed in the same size and font as the subject's score, directly above the subject's score. In counterbalanced order across subjects, subjects received 4 alternating forced choice trials (2 impulsive and 2 self-control) and 30 choice trials in the punishment condition and 4 forced choice trials and 30 choice trials in the no punishment condition.

Results and Discussion

Table 2 and Figures 5 and 6 present individual self-control choices by blocks of 10 trials for the punishment-no punishment and the no punishment-punishment groups. Figure 7 presents the mean total self-control responses during Choice Trials 1-30 and 31-60 for all subjects in the punishment-no punishment and the no punishment-punishment groups. A mixed ANOVA was conducted with the total number of self-control choices as the dependent variable, punishment as the within-subject variable, and sequence as the between-subject variable. There was a statistically significant effect for punishment, F(1, 13) = 7.602, p = .016. Table 2 and Figures 5 and 6 suggest that the effect of punishment was behaviorally significant at the level of the individual subject. Except for Subject 25, who responded indifferently throughout the experiment, all subjects made at least 17 (57%) self-control responses during the 30 punishment condition trials.
Table 2

Number of Self-Control Choices by Blocks of 10 Trials for Punishment-No
Punishment and No Punishment-Punishment Groups in Experiment 2

                             Trials
Subject    1-10   11-20   21-30   31-40   41-50   51-60

           Punishment-No Punishment Group

19            9       9      10       7      10      10
20            6       8      10       9      10      10
21            9      10      10      10       9      10
22            9      10      10      10      10      10
23            8       8       8       8       9       8
24            7       9       9      10       2       1
25            3       3       7       5       5       6
SD         2.15    2.41    1.21    1.90    3.13    3.39
Mean       7.29    8.14    9.14    8.43    7.86    7.86

           No Punishment-Punishment Group

26            2       0       3       9      10      10
27            6       4      10       9       9       9
28            8       4       2       7       8       2
29            3       7       6       6      10       9
30            3       3       4       8       5       4
31            1       1      10      10      10      10
32            6       7       8       9       9      10
33            8      10      10       8      10      10
SD         2.70    3.34    3.34    1.28    1.73    3.16
Mean       4.63    4.50    6.63    8.25    8.88    8.00


There was no statistically significant main effect for sequence. However, there was a significant sequence by punishment interaction, F(1, 13) = 6.33, p < .05, which is best seen in Figure 7. Table 2 and Figures 5 and 6 suggest that the interaction of punishment and sequence was also behaviorally significant at the level of the individual subject. All subjects (except Subject 25) in the punishment-no punishment group showed a strong preference for the self-control choice by the end of the first 30 trials, and this preference was maintained, with the exception of Subjects 24 and 25, during the no punishment condition. Seven of the 8 subjects in the no punishment-punishment group increased self-control responding during the punishment condition. The exception was Subject 33, who was highly self-controlled throughout the experiment.

Taken together, these results indicate that when impulsive responding is punished, self-control is more likely to occur. Further, once self-control responding has developed, self-control preference may persist even when impulsive responding produces the greatest reinforcement density at the molecular level and no longer results in punishment (punishment-no punishment group). That is, in human self-control paradigms, once molar discrimination occurs it may override molecular contingencies.

As the no punishment condition of Experiment 2 was identical to the no delay condition of Experiment 1, it should be expected that the groups that experienced these conditions first would make roughly equivalent numbers of self-control responses during the first 30 trials. However, the no punishment-punishment group of Experiment 2 made statistically significantly more self-control responses than the no delay-delay group of Experiment 1 during the procedurally identical first 30 trials, 15.75 vs. 8.556, F(1, 15) = 4.895, p < .05.
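
These two group means can be recomputed from the individual entries in Tables 1 and 2 by summing each subject's first three blocks of 10 trials:

```python
from statistics import mean

# Self-control totals over the first 30 choice trials (Blocks 1-3),
# summed from the individual table entries.
no_delay_delay = [6+0+0, 2+1+4, 1+1+0, 0+2+5, 7+4+10,
                  1+3+10, 1+1+2, 8+2+3, 1+1+1]        # Exp. 1, Subjects 10-18
no_punishment_punishment = [2+0+3, 6+4+10, 8+4+2, 3+7+6,
                            3+3+4, 1+1+10, 6+7+8, 8+10+10]  # Exp. 2, Subjects 26-33

print(round(mean(no_punishment_punishment), 2))  # 15.75
print(round(mean(no_delay_delay), 3))            # 8.556
```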

Because the contingencies were identical for the two groups during the first 30 trials, this result may compromise the current study. However, the subject population of Experiment 1 was general psychology undergraduates (largely freshmen) and the subject population of Experiment 2 was graduate and advanced undergraduate students. According to Eisenberger's (1992) "learned industriousness" theory, the presumably greater preexperimental experience with delayed reinforcement of the subjects in Experiment 2 (advanced undergraduates and graduate students) should result in greater "generalized self-control," which should in turn result in greater self-control in the current study. However, the present study did not provide a test for this hypothesis.

General Discussion

Reinforced impulsive response bursts were possible in the no delay condition of Experiment 1, and impulsive responding in this condition produced a reinforcement density that was greater than self-control responding at the molecular level (i.e., reinforcement density aggregated across 10 trials) but not at the molar level (i.e., reinforcement density aggregated across the session). Self-control responding produced the greatest reinforcement density at both the molar and molecular levels in the delay condition of Experiment 1. Impulsive responding occurred in the no delay condition if the no delay condition was experienced first. This condition may provide a new and useful paradigm for studying variables that influence the conflicting choice between immediate reinforcement versus later but greater long-term reinforcement. One factor that the present study suggests is important in determining impulsiveness is the availability of many immediate reinforcers (even if they are small) that can be procured rapidly, for example, as in the no delay condition (see Rachlin, 1992, for related theoretical discussion).

However, the results of the delay-no delay group suggest that even if many immediate reinforcers are available, impulsiveness can be precluded if self-controlled responding is already occurring. The molar discrimination, that is, greater overall reinforcement for choice of delayed reinforcement, is facilitated if there is a delay following an impulsive choice. The postreinforcer delays can be conceptualized simply as a design variable that equates trial durations regardless of choice and facilitates molar discrimination. This discrimination may be facilitated by postreinforcer delays because the postreinforcer delays make reinforcement densities the same at the trial (molecular) and session (molar) level. Responding by the delay-no delay group suggests that it is the molar contingency that then controls choice because self-control choice is maintained during the no delay condition when the molecular density of reinforcement is greatest for the impulsive choice.

Alternatively, the postreinforcer delays can be conceptualized as negative punishment, that is, time-out for impulsive responding. Experiment 2 directly tested the effects of positive punishment on impulsive responding and found that punishment did indeed reduce impulsiveness. Further, as in Experiment 1, once self-control responding was occurring, self-control was maintained even when the punishment contingency was removed and impulsive responding would have resulted in a greater reinforcement density at the molecular level. Similarly, Azrin (1958) found that "once the noise had forced the responses into a more efficient temporal pattern, this improvement lingered on after the cessation of the noise" (p. 196).

However, caution should be used in applying the conclusion that aversive punishment of impulsive behavior increases self-control. First, time-out following impulsive responding (Experiment 1) was just as, or more, effective than positive punishment for reducing impulsive responding. Second, in a pilot study for Experiment 2, an 80-db tone was used and had no effect on impulsive responding. Only when the computer volume was at its maximum level (92 db) did the punishment affect choice (apparently, the printed stimulus "NO!" had little or no effect). This result supports the well-known finding that for a stimulus to function as an effective punisher, high intensities must be used (e.g., Azrin, 1958; Azrin & Holz, 1966). Third, when the impulsive response was being punished the self-control choice was a readily available unpunished alternative response that resulted in reinforcement. In nonexperimental situations there may not be readily available reinforced alternatives to the punished response. If there is no unpunished reinforced alternative response, punishment is less likely to be effective (e.g., Azrin & Holz, 1966). Fourth, a few studies have suggested techniques to increase self-control that do not involve punishment. For example, as first suggested by Skinner in Walden Two (1948), if the reinforcement for a self-control choice is initially delivered immediately and then the delay is slowly increased, self-control responding may be increased (Mazur & Logue, 1978, but see Logue & Mazur, 1981; Schweitzer & Sulzer-Azaroff, 1988). (Also see Eisenberger, 1992; Eisenberger, Masterson, & Lowman, 1982; Flora et al., 1992, for related studies and discussion.) The present results, together with previous research on self-control and punishment, indicate that punishment of impulsive responding may be an effective method of increasing self-control.
However, given the many possible undesirable side effects of punishment (e.g., aggression, escape behaviors) and the potential of other methods to increase self-control (see, for example, Eisenberger, 1992; Schweitzer & Sulzer-Azaroff, 1988), it is recommended that punishment of impulsive behaviors be considered only after other techniques for teaching self-control have been attempted.

References

AZRIN, N. H. (1958). Some effects of noise on human behavior. Journal of the Experimental Analysis of Behavior, 1, 183-200.

AZRIN, N. H., & HOLZ, W. C. (1966). Punishment. In W. K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 380-447). New York: Appleton-Century-Crofts.

BAILEY, S. L. (1983). Extraneous aversives. In S. Axelrod & J. Apsche (Eds.), The effects of punishment on human behavior (pp. 247-284). New York: Academic Press.

BELKE, T. W., PIERCE, W. D., & POWELL, R. A. (1989). Determinants of choice for pigeons and humans on concurrent-chains schedules of reinforcement. Journal of the Experimental Analysis of Behavior, 52, 97-109.

BRANCH, M. N. (1991). On the difficulty of studying "basic" behavioral processes in humans. The Behavior Analyst, 14, 107-110.

EISENBERGER, R. (1992). Learned industriousness. Psychological Review, 99, 248-267.

EISENBERGER, R., MASTERSON, F. A., & LOWMAN, K. (1982). Effects of previous delay of reward, generalized effort, and deprivation on impulsiveness. Learning and Motivation, 13, 378-389.

FLORA, S. R. (1990). Relative reinforcement deprivation and postreinforcer delays in the behavior analysis of human self-control. Unpublished doctoral dissertation, University of Georgia, Athens, Georgia.

FLORA, S. R., & PAVLIK, W. B. (1992). Human self-control and the density of reinforcement. Journal of the Experimental Analysis of Behavior, 57, 201-208.

FLORA, S. R., SCHIEFERECKE, T. R., & BREMENKAMP, H. G. (1992). Effects of aversive noise on human self-control for positive reinforcement. The Psychological Record, 42, 505-517.

FORZANO, L. B., & LOGUE, A. W. (1992). Predictors of adult human's self-control and impulsiveness for food reinforcers. Appetite, 19, 33-47.

LOGUE, A. W. (1988). Research on self-control: An integrating framework. Behavioral and Brain Sciences, 11, 666-709.

LOGUE, A. W., KING, G. R., CHAVARRO, A., & VOLPE, J. S. (1990). Matching and maximizing in a self-control paradigm using human subjects. Learning and Motivation, 21, 340-368.

LOGUE, A. W., & MAZUR, J. E. (1981). Maintenance of self-control acquired through a fading procedure: Follow-up on Mazur and Logue (1978). Behavior Analysis Letters, 1, 131-137.

LOGUE, A. W., PENA-CORREAL, T. E., RODRIGUEZ, M. L., & KABELA, E. (1986). Self-control in adult humans: Variation in positive reinforcer amount and delay. Journal of the Experimental Analysis of Behavior, 46, 159-173.

LOGUE, A. W., SMITH, M. E., & RACHLIN, H. (1985). Sensitivity of pigeons to prereinforcer and postreinforcer delay. Animal Learning and Behavior, 13, 181-186.

MAZUR, J. E., & LOGUE, A. W. (1978). Choice in a "self-control" paradigm: Effects of a fading procedure. Journal of the Experimental Analysis of Behavior, 30, 11-17.

MILLAR, A., & NAVARICK, D. J. (1984). Self-control and choice in humans: Effects of video game playing as a positive reinforcer. Learning and Motivation, 15, 203-218.

NAVARICK, D. J. (1982). Negative reinforcement and choice in humans. Learning and Motivation, 13, 361-377.

NAVARICK, D. J. (1986). Human impulsivity and choice: A challenge to traditional operant methodology. The Psychological Record, 36, 343-356.

RACHLIN, H. (1992). Diminishing marginal value as delay discounting. Journal of the Experimental Analysis of Behavior, 57, 407-416.

RACHLIN, H., & GREEN, L. (1972). Commitment, choice and self-control. Journal of the Experimental Analysis of Behavior, 17, 15-22.

RAINERI, A., & RACHLIN, H. (1993). The effect of temporal constraints on the value of money and other commodities. Journal of Behavioral Decision Making, 6, 77-94.

SCHWEITZER, J. B., & SULZER-AZAROFF, B. (1988). Self-control: Teaching tolerance for delay in impulsive children. Journal of the Experimental Analysis of Behavior, 50, 173-186.

SKINNER, B. F. (1948). Walden two. New York: Macmillan.

SOLNICK, J. V., KANNENBERG, C. H., ECKERMAN, D. A., & WALLER, M. B. (1980). An experimental analysis of impulsivity and impulse control in humans. Learning and Motivation, 11, 61-77.

SONUGA-BARKE, E. J. S., LEA, S. E. G., & WEBLEY, P. (1989). The development of adaptive choice in a self-control paradigm. Journal of the Experimental Analysis of Behavior, 51, 77-85.

WANCHISEN, B. A., TATHAM, T. A., & HINELINE, P. N. (1992). Human choice in "counterintuitive" situations: Fixed- versus progressive-ratio schedules. Journal of the Experimental Analysis of Behavior, 58, 67-85.
COPYRIGHT 1995 The Psychological Record

Article Details
Author: Flora, Stephen R.
Publication: The Psychological Record
Date: Mar 22, 1995
Words: 5992