
The Effects of Initial Interval Size on the Efficacy of DRO Schedules of Reinforcement

The differential reinforcement of other behavior (DRO) is a procedure in which reinforcement is delivered if a target response does not occur for a specified interval (Kelleher, 1961; Lane, 1961; Reynolds, 1961). It has been studied both in laboratory and applied settings, the latter primarily because it is a nonaversive procedure that can reduce some inappropriate behavior. It is often used in classrooms to manage various disruptive behaviors, perhaps most commonly when teachers tell students they will be rewarded if they are good for the class period or school day.

In general, laboratory studies have not been concerned with whether DRO reduces behavior but rather with what factors make it effective or ineffective. Some of these studies have shown the following results:

1. A schedule in which the interval is fixed is more effective than a schedule in which it varies (Reuter & LeBlanc, 1972).

2. DRO is more effective when the interval is initially small and gradually increased than when it is initially large (Repp & Slack, 1977; Topping, Larmi, & Johnson, 1972).

3. Postponing reinforcement for an interval greater than the DRO interval is more effective than postponing reinforcement for an interval equal to the DRO value (Uhl & Garcia, 1969).

DRO is also a well-known procedure in applied work, although the results of its use have been mixed. In some studies, it has proved effective when used alone or with other procedures (e.g., Barton, Brulle, & Repp, 1986; DeCatanzaro & Baldwin, 1978; Deitz, Repp, & Deitz, 1976; Dwinell & Connis, 1979; Lutzker, 1974; Myers, 1975; Repp, Barton, & Brulle, 1983; Repp, Deitz, & Speir, 1975; Tarpley & Schroeder, 1979). In other studies, however, it has been ineffective (e.g., Corte, Wolf, & Locke, 1971; Foxx & Azrin, 1973; Harris & Wolchik, 1979).

There could be many reasons for the disparity in these results, for example, the strength of the reinforcer used to reward the student for not engaging in the inappropriate behavior, or the events maintaining the behavior to be reduced. Speculating post hoc about these reasons would not seem to be a productive undertaking, because many variables change across studies (e.g., behaviors, subjects, settings, and reinforcement histories). A more productive approach would seem to be the one followed by the laboratory researchers. This approach directly compares variables within a single study so that we can learn how to use DRO more effectively in classrooms and other settings.

Such an approach has been followed in one set of studies concerning DRO. These studies (Barton et al., 1986; Repp et al., 1983) have compared two types of DRO schedules. In one, labeled momentary DRO (MDRO), reinforcement is delivered if the target behavior is not occurring at the moment the interval ends (Harris & Wolchik, 1979). MDRO is being used in classrooms when teachers "catch" students not behaving inappropriately (e.g., the moment the bell rings or lessons are completed, or when teachers look up from their desks). In the other procedure, whole-interval DRO (WDRO), reinforcement is delivered if the target behavior has not occurred throughout the whole interval. WDRO is being used in classrooms when teachers reward students who have not behaved inappropriately for a whole period (rather than for any specific moment in the period, which would be MDRO). Results have shown that MDRO, when used alone, is ineffective at reducing inappropriate behavior, but effective at maintaining a reduction that was achieved through WDRO. Thus, one variable associated with the effectiveness of DRO has been identified.

A second variable that seems to be important is the size of the DRO interval. Previous laboratory work has shown that DRO is more effective when its initial interval is small than when it is large. One of these studies (Repp & Slack, 1977) showed this effect with persons with severe handicaps under one baseline condition. When we replicated the study using a different type of baseline, however, no difference was found among three DRO interval sizes. Thus, we are uncertain of the effects of interval size in laboratory settings.

Similarly, we are uncertain of the effects in applied settings, where the behavior would be something other than the lever-pressing response used in the laboratory. One pilot study (Barton, Barrow, Brulle, & Repp, 1983) provided only equivocal results, but did lead to the present study, the purpose of which was to examine the relationship of initial DRO interval size to behavior reduction. To address this purpose, a comparison was made between initial DRO interval sizes that were either (a) the average time between responding during baseline, or (b) twice that value. The first value was chosen because we have argued elsewhere (e.g., Repp, 1983) that an interval of this size would provide a high probability that the person would be reinforced for response omission and that behavior reduction should, therefore, come more quickly. The comparison was made in two experiments, each using a different experimental design to rule out alternative hypotheses.


Experiment 1

Method

Students and Setting. A class of four male and two female 9- to 10-year-old students with moderate retardation served as subjects in this experiment. They were chosen because the teacher wanted to reduce but not eliminate a relatively innocuous behavior (disruptions) that could ethically be used for a study of this type. The students were in a special education program in a segregated facility in which students periodically moved from one classroom to another. Each class (e.g., academics, functional life skills, music) generally lasted from 20 to 50 minutes (min) and was taught by different professionals. For this study, two relatively similar classes were selected. Both dealt with academics, one primarily with functional arithmetic and the other with word recognition, and each was staffed by a teacher and an aide. The important considerations for this study were that each class involved some independent desk work and each was interrupted by various off-task behaviors.

Behavioral Definitions. The following definitions of the disruptive behaviors were modified from those presented by Kendall and Wilcox (1979):

1. Off-task verbal behavior: The child says something not related to the task presented.

2. Off-task physical behavior: The child plays with materials in a way that draws attention away from the assigned work (e.g., throwing objects, kicking another student, and tapping a crayon).

3. Off-task attention: Without engaging in off-task verbal or physical behavior, the child looks away from the work materials.

4. Out of seat: The child and the chair seat are not in contact.

5. Interruptions: The child speaks on a topic unrelated to the task while the teacher or another student is talking (task-related interruptions were not scored).

Recording Procedures. Two observers simultaneously recorded data in each classroom for 25 min per day. One always functioned as the primary observer and had the duties of recording behavior and signaling the teacher when the DRO interval elapsed without disruptions having occurred. During training, the observers were given the behavior definitions, asked to memorize them, and then taken to a classroom not used in the study. Here, they practiced until they reached at least 80% agreement on each of the five categories, even though only the composite of "disruptions" was treated as the dependent variable. This procedure was followed to allow us to test whether the observers were reliable on the five components of the definition of disruption and to prevent observations from being scored as reliable when one observer scored one of the five components while the other observer scored another component.

Data were recorded according to a 10-second (s) partial interval schedule. As an alternative, we could have used a 10-s momentary time-sampling procedure in which the observer recorded what was occurring only at the end of 10 s (Powell, Martindale, Kulp, Martindale, & Bauman, 1977). While this procedure is slightly more accurate at such a small interval, it cannot be used with WDRO because it would not tell us whether behavior was occurring during the interval: it could only tell us what was occurring at the end of the interval.
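The distinction between the two recording methods can be sketched in code. This is an illustrative example, not from the article: the second-by-second behavior stream is made-up data, and the 10-s interval size follows the text. It shows why momentary sampling cannot support WDRO: a disruption early in the interval is invisible at the interval's end.

```python
# Contrast partial-interval recording with momentary time sampling
# on a simulated second-by-second behavior stream (1 = behavior occurring).

INTERVAL = 10  # seconds per observation interval, as in the study

def partial_interval(stream, interval=INTERVAL):
    """Score an interval if the behavior occurred at ANY second within it."""
    return [any(stream[i:i + interval]) for i in range(0, len(stream), interval)]

def momentary_sample(stream, interval=INTERVAL):
    """Score an interval only by what is happening at its final second."""
    return [bool(stream[i + interval - 1]) for i in range(0, len(stream), interval)]

# One 10-s interval: a brief disruption at second 4, nothing at second 10.
stream = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(partial_interval(stream))   # [True]  -- WDRO would correctly withhold reinforcement
print(momentary_sample(stream))   # [False] -- momentary sampling misses the disruption
```

Only the partial-interval record can verify that the behavior was absent throughout the whole interval, which is exactly what a WDRO contingency requires.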

In our procedure, each person observed for 10 s, and then recorded any of the five behaviors that had just been demonstrated by any of the class members. If more than one response occurred per interval, each was marked; if none occurred, a sixth category ("no response") was marked. In this way, marking the paper did not serve as a cue to the other observer that a disruptive behavior had just occurred. Observation intervals were cued by a tape recorder with an adapter leading to two ear plugs that allowed the observers to coordinate their activities.

During the study itself, the observers sat in a corner of the room a few feet apart and simultaneously observed all students. At the end of each interval, the observers quickly marked on their recording forms whether any member of the group emitted one of the disruptive responses, and began their observations again.

Interobserver Agreement. Interobserver agreement was calculated by dividing the number of intervals of agreement for each category by the total number of observation intervals in which either observer recorded behavior. Because no category occurred in more than 50% of the observations, the occurrence method of calculating agreement was used; and the intervals in which neither observer recorded behavior were dropped from the denominator (Hartmann, 1977).
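The occurrence method just described can be expressed as a short function. This is a sketch under the description above (Hartmann, 1977); the two observer records are hypothetical data, not from the study.

```python
# Occurrence method of interobserver agreement: agreements divided by the
# number of intervals in which EITHER observer scored the behavior.
# Intervals neither observer scored are dropped from the denominator.

def occurrence_agreement(obs1, obs2):
    """Percent agreement over intervals where at least one observer scored."""
    scored = [(a, b) for a, b in zip(obs1, obs2) if a or b]
    if not scored:
        return None  # category never occurred; agreement is undefined
    agreements = sum(1 for a, b in scored if a == b)
    return 100 * agreements / len(scored)

obs1 = [1, 1, 0, 0, 1, 0]  # hypothetical interval-by-interval records
obs2 = [1, 0, 0, 0, 1, 0]
print(occurrence_agreement(obs1, obs2))  # 2 of 3 scored intervals agree
```

Dropping the jointly empty intervals keeps agreement from being inflated by the many intervals in which the behavior simply never occurred.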

Although we were interested in the effects of DRO intervals on a response class (disruptions), agreement data were calculated on each category. These percentages would have to be less than or equal to those calculated if each interval were just scored "disruption" or "no disruption." The agreement scores, calculated by the occurrence method each session for each category, were as follows: (a) off-task verbal: mean = 88%, range = 89%-100%; (b) off-task physical: mean = 89%, range = 81%-97%; (c) off-task attention: mean = 82%, range = 79%-91%; (d) out of seat: mean = 100%; and (e) interruptions: mean = 96%, range = 89%-100%.

Reinforcer Definition. One of the reasons DRO schedules may not be effective is that we have not previously tested whether the consequence for omitting a response is actually a reinforcer for each individual. Instead, we might have presumed that praise, free time, and so forth were reinforcing. Before this study began, then, we tested whether the rewards we were going to use were actually reinforcers. On each of eight occasions, there was a pair of 5-min periods during which students were given worksheets containing matching problems (e.g., numbers and objects). During one condition, students were asked to complete as many problems as they could; during the other, they were given the same assignment but could earn tokens exchangeable for rewards. Each reward that produced 25% more responding during a 5-min period than during the corresponding 5 min when it was not used was declared a reinforcer for that student. These rewards were then used as the reinforcers during the DRO phases of the study.

In the study itself, a token was used as feedback for each interval the class went without a disruption. Procedurally, the primary observer signaled the teacher that an interval had passed without disruption; the teacher then made a mark on the chalkboard. At the end of the period, the students could exchange each mark for a reinforcer (e.g., a treat, an activity, or a privilege) from a list the students and the teacher generated before baseline.

Design. A variation of a multiple schedule we have used elsewhere (Repp, Felce, & Barton, 1988; Repp, Klett, Soseby, & Speir, 1975) was used for this study. In this procedure (cf. Ferster & Skinner, 1957), some consequence (e.g., praise) is paired with one stimulus (e.g., Teacher A), while another (e.g., extra credit) is paired with a different stimulus (e.g., Teacher B). When different rates of behavior occur in the presence of the two stimuli (Teachers A and B), experimental control is said to have been demonstrated. In the present study, the two stimuli were the two classrooms, and the consequences were the two types of DRO schedules.

* Phase 1: Baseline. During this phase, the teachers conducted their classes and consequated disruptions in their usual fashion (ignoring the students or verbally reprimanding them). Data collected during this phase allowed the two DRO schedules to be determined according to their respective formulas.

* Phase 2: DRO₁ and DRO₂. During this phase, two DRO schedules were used. In one classroom (DRO₁), the DRO value was the mean number of 10-s intervals between responses during the last 3 days of baseline in that class. (This is a convention for determining DRO intervals that we have adopted and used for many years.) In the other classroom (DRO₂), the DRO value was twice the mean of the last 3 days for that classroom. For example, if the mean number of 10-s intervals was 6, the students would be reinforced in the DRO₁ condition if they went 60 s without being disruptive; in the DRO₂ condition, they would have had to go 120 s without being disruptive. In both classrooms, the period began with a brief explanation from the teacher that students could earn tokens exchangeable for reinforcers for not being disruptive. The DRO value was not explained to the students, and whether or not the group met the DRO criterion was indicated by the primary observer only to the teacher. This procedure was used to decrease the probability of other disruptive behaviors such as arguing with the teacher.

* Phase 3: DRO₁. In this phase, the more effective of the two DRO formulas was used in both classrooms. This procedure served as a partial replication of the effects of that DRO value in the other classroom.
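The interval-setting convention described in the phases above can be sketched as a function. This is a minimal illustration of the stated rule; the baseline value of 6 intervals is the text's own worked example.

```python
# Setting the two DRO values from baseline responding, per the convention above:
# DRO1 = mean number of 10-s intervals between responses over the last 3
# baseline days, converted to seconds; DRO2 = twice that value.

INTERVAL_S = 10  # length of one observation interval, in seconds

def dro_values(mean_intervals_between):
    """Return (DRO1, DRO2) in seconds from mean baseline inter-response intervals."""
    dro1 = mean_intervals_between * INTERVAL_S
    dro2 = 2 * dro1
    return dro1, dro2

print(dro_values(6))  # (60, 120): the worked example from the text
```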

Results and Discussion

Figure 1 indicates the percent of 10-s intervals in which at least one of the disruptive behavior categories was scored for the group of six students. Baseline data in Classroom 1 (C1) varied between 19% and 36% (M = 27%) and averaged 25% during the last three sessions. Thus, the behavior occurred on the average every fourth interval, and the average number of 10-s intervals between disruptions was three; therefore, the DRO interval for the next phase was set at 30 s (i.e., the class was reinforced if it was not disruptive for 30 s since the last interval containing responding). Baseline data in Classroom 2 (C2) varied between 17% and 41% (M = 31%) and averaged 31% during the last three sessions. Here, the average number of 10-s intervals between disruptions was two, and the DRO interval for the next phase was twice that value, or 40 s.

Behavior during the next phase was reduced in both classrooms, more so, however, in C1 (M = 14%, range = 6%-28%) than in C2 (M = 23%, range = 16%-32%). Disruptions in C2, however, reduced to a fairly consistent level in three of the last four sessions of the phase. In the third phase, the same DRO value was kept in C1, and behavior maintained the level attained during the last five sessions of the prior phase (M = 11%, range = 14%-12%). The DRO value for C2, however, was changed to that of C1. Now, it too was set at the average number of intervals between disruptions during 3 days of baseline (DRO 20 s). Behavior was reduced during this condition and approximated that of the other classroom (M = 7%, range = 4%-13%).

In summary, then, both DRO schedules reduced disruptions, but they were differentially effective. In the second phase, the shorter DRO reduced responding to 14%, whereas the longer DRO reduced responding to 23%. However, because the two baseline means differed (27% and 31%, respectively), one could presume that the absolute rate of responding would be lower in the shorter DRO condition. Another way, then, of assessing behavior change might be warranted. One would be to divide the amount of change in two contiguous phases by the amount possible, to assess the relative change. With this formula, the following results occurred: (a) in Phase 2, the shorter DRO showed a change of 48% [(27-14)/27]; (b) in Phase 2, the longer DRO showed a change of 26% [(31-23)/31]; and (c) in Phase 3, when moving from the longer to the shorter DRO, there was a change of 70% [(23-7)/23]. Thus, the shorter DRO was considerably more effective both times it was used.
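The relative-change index used above can be written out directly; this sketch simply restates the formula (change divided by possible change) and reproduces the article's own values.

```python
# Relative change between two contiguous phases:
# (baseline level - treatment level) / baseline level, as a whole percent.

def relative_change(before, after):
    """Percent of the possible reduction that was actually achieved."""
    return round(100 * (before - after) / before)

print(relative_change(27, 14))  # 48 -- shorter DRO, Phase 2
print(relative_change(31, 23))  # 26 -- longer DRO, Phase 2
print(relative_change(23, 7))   # 70 -- shorter to longer DRO, Phase 3
```

Normalizing by the baseline level lets the two classrooms be compared even though their baseline means (27% and 31%) differed.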

The results show that for one behavior in two settings, the initial DRO size can have an effect. The design used, incorporating the same subjects in both settings, helps to rule out between-subjects differences. It does not, however, rule out the extent to which behavior reduction in one setting may influence behavior in the other setting. Thus, a second study was conducted in which three different students were studied, each in a different classroom.


Experiment 2

Method

Students and Setting. Three children who attended a school like that used in the first experiment served as subjects (all were male). The programs were similar to those of the other students, but these three children were in different classrooms. Each student was classified by his school district as trainable mentally retarded and had no secondary handicaps. Student 1 was 9, Student 2 was 11, and Student 3 was 12 years old.

Recording Procedure. The second experiment used the same behavioral definitions, recording procedure, and method of calculating inter-observer agreement used in the first experiment. The observers were trained in the same way, and the recording conditions were similar across the experiments.

Procedure. This experiment also used the same procedure for pretesting reinforcers used in the prior study. Because several other students in the classrooms also were on token programs, the use of reinforcers did not seem to cause any problems for the other students.

Because the design used previously did not allow us to separate the effects of two simultaneous DRO programs on a single set of students, a different design was used here--a variation of a multiple baseline across subjects. The first student was presented with an A-B design and both the second and the third students with an A-C-B design. Introduction of the treatment phases was staggered, and the conditions were: (a) A--baseline, (b) B--a DRO value equal to the average time in baseline between intervals of responding, and (c) C--a DRO value equal to twice the average time in baseline between intervals of responding.

The rationale for the order of conditions for the second and third students was as follows: Because Experiment I had already shown the B condition to be more effective than C, the C condition should come first for these students (there was a chance that B would reduce disruptions too much for C to have any effect). Only B was used after baseline for the first student so that the effect of B without having followed C could be shown.

Results and Discussion

Figure 2 reflects the percent of 10-s intervals in which responding occurred for each of the three students. Subject 1 was under baseline conditions for 4 days, the last 3 of which showed behavior ranging from 14% to 27% and averaging 21%. Because behavior occurred every fifth interval, the average number of 10-s intervals between those containing disruptions was four. Therefore, the DRO value for the next phase (or B condition) was 40 s. The student stayed in this phase throughout the rest of the study, averaging disruptions in 7% of the intervals (range = 1%-21%). By the end of the phase, responding was quite stable and averaged 6% for the last 3 days.

The second subject experienced an A-C-B order of experimental conditions. Baseline lasted for six sessions, and his responding ranged from 14% to 26% (M = 20%). On the average, behavior occurred every fifth 10-s interval, with an inter-response interval of 40 s. His DRO interval, therefore, was established as 80 s (i.e., twice the mean interval between response intervals during baseline). He remained in this phase for 5 days. During this time, responding ranged from 12% to 21% (M = 16%) and decelerated throughout the phase. During the final 8-day phase, responding continued to decelerate, ranging from 3% to 10% (M = 7%) and averaging 5% during the final days.

The third subject also went through the conditions in an A-C-B order. Baseline lasted for 10 sessions, and responding averaged 16% during its last 3 sessions (range = 12%-19%). His DRO interval was set at 100 s, or twice the mean interval (50 s) that occurred during baseline. He stayed in this phase for 5 days, during which responding occurred in 13% of the intervals. Following this phase, he entered the B condition in which the DRO interval was set at the mean number of 10-s intervals (8) between responses during the last 3 days of the second phase. Responding varied from 6% to 14%, averaging 9% for the phase and 7% for the last 3 days.

In general, the results show that the B condition (M = 8%) was more effective than the C condition (M = 15%) in reducing behavior. Subject 1, who experienced the A-B conditions, reduced responding more from baseline to the first treatment (21% to 7%) than did Subjects 2 (20% to 16%) or 3 (16% to 13%), both of whom first experienced the C condition. Following the introduction of the B condition, their behavior decreased even more (to 7% and 9%, respectively). The results for the second student, however, are equivocal because disruptions were decreasing in the second phase in a manner that suggests the results found in the third phase.

When the relative change is assessed (i.e., change/possible change), the results again show the shorter DRO to be more effective: (a) for the first student, the relative reduction was 67% [(21-7)/21] for the shorter DRO; (b) for the second student, the relative reduction was 20% for the longer DRO [(20-16)/20] in Phase 2 and 56% [(16-7)/16] for the shorter DRO in Phase 3; (c) for the third student, the relative reductions were 19% and 31%.

General Discussion

Previous laboratory and applied research has identified several factors that increase the effectiveness of DRO schedules of reinforcement. These include (a) postponing reinforcement for an interval greater than the DRO interval when responding occurred, (b) using a fixed rather than a variable DRO interval, (c) using a whole-interval rather than a momentary DRO, and (d) beginning with a small rather than a large interval.

The latter (Repp & Slack, 1977) was a laboratory study in which children with retardation were reinforced with pennies for pressing a lever during baseline and then reinforced on various DRO schedules for not pressing it. The present study extends those results by using a different behavior, a different set of reinforcers, and a very different setting. The results showed that the shorter DRO was about twice as effective as the longer DRO. (We should note that the purpose of this article was to compare two reductive procedures while reducing behavior to an acceptable level; it was not to eliminate responding. If the latter had been the purpose, we would have thinned the DRO schedule by gradually increasing the value of the DRO as students came under schedule control. With increasing intervals that are based on prior levels of responding, one could expect further reductions because the penalty for responding is a longer postponement of the time at which reinforcement could be earned.)

More research should be conducted to determine how this procedure can be made effective and efficient in classrooms without research assistants. The data from our study show that interval size can affect the success of DRO and that interval size should be related to the rate of behavior during baseline. The easiest way we have found to establish the proper interval in a one-teacher classroom is to have either the teacher or a student count the inappropriate behaviors in the class period and then to divide that number into the class period. For example, if the average for a few days were 10 disruptions in 60 min, then the DRO interval would be set at 6 min. A kitchen timer or a timing watch could be set at that interval and the teacher would only have to note whether one response occurred in the interval; all other occurrences would be ignored because the student would not be reinforced at the end of the interval anyway. After each interval without inappropriate behavior, the teacher could reinforce or provide feedback in a manner suitable to the student's age.
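The classroom rule of thumb just described can be captured in a few lines. This sketch uses the text's own worked example (10 disruptions in a 60-min period); the function name is ours.

```python
# One-teacher procedure for setting a DRO interval without observers:
# count inappropriate behaviors over the class period, then divide that
# count into the period length to get the interval for the kitchen timer.

def classroom_dro_interval(period_min, mean_count):
    """DRO interval (minutes) = class period length / average behavior count."""
    return period_min / mean_count

print(classroom_dro_interval(60, 10))  # 6.0 -- the 6-min interval from the text
```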

The results of this study seem to have identified another factor that can make reinforcement-based reductive procedures more effective. At a time when there is so much controversy over the use of aversive techniques (Repp & Singh, 1990), we would hope that the identification of factors making nonaversive procedures more effective would be helpful to all concerned. Further, we would hope that others would join in an effort to determine what factors make reinforcement-based reductive procedures more effective, as well as what conditions would lead us to predict that they would not be successful. In this way, those of us in the human services profession could help our clients much more expeditiously.

The results of this study, however, should be interpreted cautiously. We did not wish to use a behavior such as self-injury or aggression in an experimental study of this type; prolonging treatment for the sake of experimentation would not seem to us to be warranted. Perhaps studies with such behaviors could be designed in less of a traditional experimental protocol (i.e., with extended baselines) and in a manner that would help experimenters learn how to teach clients to behave in a less maladaptive manner. Thus, whether the shorter interval would be more effective on other such behaviors is clearly unknown. This, of course, is the type of behavior that is at the center of the controversy.

References

Barton, L.E., Barrow, L.A., Brulle, A.R., & Repp, A.C. (1983). Applied differential reinforcement: The efficacy of least value interresponse time programming for multiple behaviors. Paper presented at the annual meeting of the Association for Behavior Analysis, Milwaukee, Wisconsin.

Barton, L.E., Brulle, A.R., & Repp, A.C. (1986). Maintenance of therapeutic change by momentary DRO. Journal of Applied Behavior Analysis, 19, 277-282.

Corte, H.E., Wolf, M.M., & Locke, B.J. (1971). A comparison of procedures for eliminating self-injurious behavior of retarded adolescents. Journal of Applied Behavior Analysis, 4, 201-213.

DeCatanzaro, D.A., & Baldwin, G. (1978). Effective treatment of self-injurious behavior through a forced exercise. American Journal of Mental Deficiency, 82, 433-439.

Deitz, S.M., Repp, A.C., & Deitz, D.E.D. (1976). Reducing inappropriate classroom behavior of retarded students through three procedures of differential reinforcement. Journal of Mental Deficiency Research, 20, 155-170.

Dwinell, M.A., & Connis, R.T. (1979). Reducing inappropriate verbalizations of a retarded adult. American Journal of Mental Deficiency, 84, 87-92.

Ferster, C.B., & Skinner, B.F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

Foxx, R.M., & Azrin, N.H. (1973). The elimination of autistic self-stimulatory behavior by overcorrection. Journal of Applied Behavior Analysis, 6, 1-14.

Harris, S.L., & Wolchik, S.A. (1979). Suppression of self-stimulation: Three alternative strategies. Journal of Applied Behavior Analysis, 12, 185-198.

Hartmann, D.P. (1977). Considerations in the choice of interobserver reliability estimates. Journal of Applied Behavior Analysis, 10, 103-116.

Kelleher, R.T. (1961). Schedules of conditioned reinforcement during experimental extinction. Journal of the Experimental Analysis of Behavior, 4, 1-5.

Kendall, P.C., & Wilcox, L.E. (1979). Self-control in children: Development of a rating scale. Journal of Consulting and Clinical Psychology, 47, 1020-1029.

Lane, H. (1961). Operant control of vocalizing in the chicken. Journal of the Experimental Analysis of Behavior, 4, 171-177.

Lutzker, J.R. (1974). Social reinforcement control of exhibitionism in a profoundly retarded adult. Mental Retardation, 12, 46-67.

Myers, D.V. (1975). Extinction, DRO, and response-cost procedures for eliminating self-injurious behavior: A case study. Behavior Research and Therapy, 13, 189-191.

Powell, J., Martindale, B., Kulp, S., Martindale, A., & Bauman, R. (1977). Taking a closer look: Time sampling and measurement error. Journal of Applied Behavior Analysis, 10, 325-332.

Repp, A.C. (1983). Teaching the mentally retarded. Englewood Cliffs, NJ: Prentice-Hall.

Repp, A.C., Barton, L.E., & Brulle, A.R. (1983). A comparison of two procedures for programming the differential reinforcement of other behavior. Journal of Applied Behavior Analysis, 16, 435-445.

Repp, A.C., Deitz, S.M., & Speir, N.C. (1975). Reducing stereotypic responding of retarded persons through the differential reinforcement of other behavior. American Journal of Mental Deficiency, 79, 279-284.

Repp, A.C., Felce, D., & Barton, L.E. (1988). Basing the treatment of stereotypic and self-injurious behaviors on hypotheses of their causes. Journal of Applied Behavior Analysis, 21, 281-289.

Repp, A.C., Klett, S., Soseby, L., & Speir, N. (1975). Differential effects of four token conditions on rate and choice of responding in a matching-to-sample task. American Journal of Mental Deficiency, 80, 51-56.

Repp, A.C., & Singh, N.N. (1990). Current perspectives in the use of nonaversive and aversive interventions. Sycamore, IL: Sycamore Publications.

Repp, A.C., & Slack, D.J. (1977). Reducing responding by DRO schedules following a history of low-rate responding: A comparison of ascending interval sizes. The Psychological Record, 27, 581-588.

Reuter, K.E., & LeBlanc, J.M. (1972). Variable differential reinforcement of other behavior (VDRO): Its effectiveness as a modification procedure. Paper presented at a meeting of the American Psychological Association, Honolulu.

Reynolds, G.S. (1961). Behavioral contrast. Journal of the Experimental Analysis of Behavior, 4, 57-71.

Tarpley, H.D., & Schroeder, S.R. (1979). Comparison of DRO and DRI on rate of suppression of self-injurious behavior. American Journal of Mental Deficiency, 84, 188-194.

Topping, J.S., Larmi, O.K., & Johnson, D.L. (1972). Omission training: Effects of gradual introduction. Psychonomic Science, 28, 279-280.

Uhl, C.N., & Garcia, E.E. (1969). Comparison of omission with extinction in response elimination in rats. Journal of Comparative and Physiological Psychology, 69, 554-562.

ALAN C. REPP (CEC Chapter #336) is a Professor in the Department of Educational Psychology, Counseling, and Special Education at Northern Illinois University, DeKalb. DAVID FELCE is the Director of the Applied Research Unit at St. David Hospital at the University of Wales, Cardiff. LYLE E. BARTON (CEC Chapter #214) is a Professor in the Department of Teacher Development and Curriculum Studies at Kent State University, Kent, Ohio.
COPYRIGHT 1991 Council for Exceptional Children

Article Details
Title Annotation: differential reinforcement of other behavior
Author: Repp, Alan C.; Felce, David; Barton, Lyle E.
Publication: Exceptional Children
Date: Mar 1, 1991
