
Extinction-induced response variability in humans.

David L. Morgan and Kelly Lee
The Psychological Record, January 1996

The concepts of evolutionary biology have enjoyed a notable resurgence in the current psychological literature, especially in the form of sociobiological theory (e.g., Crawford, 1989; Crawford & Anderson, 1989; Symons, 1987). Cosmides and Tooby (1989) have heralded the development of evolutionary psychology as a viable perspective for interpreting current behavioral mechanisms as adaptations to our ancestors' Pleistocene environment. Such an approach, however, has been only timidly embraced by psychologists, perhaps because of its unorthodox level of analysis. Behavioral scientists ordinarily conceptualize behavior as largely ontogenic in origin, and, particularly for practitioners, only proximate independent variables can be targeted for change within the applied or therapeutic context. Evolutionary psychology, in contrast, views current behavioral repertoires as having persisted, through genetic transmission, because of their adaptive value to early hominids. In other words, human behavior in its current form has been shaped, just as our anatomy has, by the "ultimate" forces of phylogeny (Barash, 1982).

In what many psychologists might consider a surprising development, evolutionary themes have also received much recent attention from behavior analysts, though, as one might expect, the focus has been more on ontogenic than phylogenic factors. Rather than entertaining how present behavior may have become universally coded in the human genotype, having been selected for its adaptive value in our ancestors, current behavior-analytic conceptualizations have suggested that behavioral repertoires are selected, within the lifetime of the individual, by their consequences. According to Skinner (1966, 1981), operant conditioning represents a process by which differential reinforcement selects for particular response classes in a manner analogous to the process of natural selection in biological evolution. This position, referred to by some writers as the "evolutionary analogy" (Richelle, 1987), was first alluded to by Skinner in 1953, and it became increasingly prevalent in his later writings (e.g., Catania & Harnad, 1986). More recently, this stance has become known as the "selectionist" perspective and has been the focus of several articles in behavior-analytic journals (Donahoe, Burgos, & Palmer, 1993; Ringen, 1993) and a recent textbook (Donahoe & Palmer, 1994).

The evolutionary process is generally described as entailing three key elements: variation, selection, and retention. Variation at the biological level is produced by both sexual recombination and mutation. Selection refers to the differential reproduction of those genotypes that more adequately meet the demands of a particular local ecology. This differential reproduction, carried out over successive generations, can be viewed as retention: the prolonged survival of a genotype.

If evolutionary concepts are to prove useful to behavioral scientists, behavioral analogues of the functional properties of variation, selection, and retention must be identified. Of course, behavior analysis can boast an impressive database with regard to the second of these processes, selection. Indeed, the selecting properties of reinforcing and punishing consequences have all but defined the history of the experimental analysis of behavior as an empirical science (see, for example, Honig & Staddon, 1977). Moreover, to the extent that operant classes are maintained over time within an organism's repertoire, behavior analysis may lay some claim to the study of retention as well. What is perhaps not clear is the degree to which the sources of behavioral variation have been similarly appreciated by behavior analysts. Behavioral variability is, after all, the raw material from which operants are selected by their consequences. In short, to do justice to a selectionist account of behavior, we need to know what processes at the ontogenic level mimic sexual recombination and mutation. Of course, what is needed here is a functional equivalence; no structural or mechanistic isomorphism is implied.

Because the study of reinforcement and punishment processes has so defined the operant tradition, empirical analyses of behavioral variability have been scarce. In fact, Schwartz (1980, 1982) claimed that operant conditioning inevitably produces response stereotypy, even when programmed contingencies do not explicitly require topographic rigidity in responding. However, Schoenfeld, Harris, and Farmer (1966) and, more recently, Page and Neuringer (1985) have demonstrated that response stereotypy is not an inevitable result of operant learning. If some property of response variability is identified as the criterion upon which reinforcement is made contingent, then reinforcement can indeed produce increased variability.

This recent dialogue on response variability is instructive in that it would seem to imply that reinforcement plays a necessary role in producing behavioral variability. Although this assumption may in fact be tenable, might the search for additional independent variables prove worthwhile? A case in point is the phenomenon of extinction. There is an abundance of data in operant psychology on such varied topics as the partial reinforcement effect (e.g., Ferster & Skinner, 1957; Humphreys, 1939; Robbins, 1971), extinction-induced aggression (Azrin, Hutchinson, & Hake, 1966; Rilling, 1977), and the extinction burst (Lovaas & Simmons, 1969; Neisworth & Moore, 1972). The latter two phenomena are of theoretical importance, for both aggression and intensified response bursts represent similar functional relations, namely increased topographic variability during extinction relative to a preextinction baseline. Adoption of the evolutionary analogy by behavior analysts makes imperative the search for ontogenic sources of behavioral variability, and extinction-induced variability emerges as one possible candidate.

Topographic variability in responding has been studied in operant contexts with both reinforcement (Machado, 1989; Vogel & Annau, 1973) and extinction (Antonitis, 1951) manipulated as independent variables. In addition, Epstein (1983, 1985) has argued that the "spontaneous" solutions to problems described in the classic problem-solving literature (e.g., Kohler, 1925) can be conceptualized as a process of "extinction-induced resurgence," in which previously reinforced response classes resurge during extinction to create novel and frequently effective problem solutions. In humans, research on the behavioral effects of extinction has focused on the practical consequences of the extinction burst (Lovaas & Simmons, 1969; Neisworth & Moore, 1972) and extinction-induced aggression (Todd, Morris, & Fenza, 1989). Of course, intensification and aggression do not exhaust the dimensions along which behavior may vary under extinction. Any topographic feature of behavior may exhibit variability over time, either as a function of explicit reinforcement contingencies or of extinction. The present experiments were conducted to evaluate the nature of extinction-induced variability in humans engaged in a standard operant task.

Experiment 1

Subjects in the first experiment responded on a computer keyboard to fulfill several differential-reinforcement-of-low-rate (DRL) schedule requirements in succession. In a DRL schedule, a response produces reinforcement only if the time since the preceding response, the interresponse time (IRT), meets or exceeds the temporally defined schedule value. Any other response serves only to reset the interreinforcement interval. The schedule thus differentially reinforces low response rates, characterized by considerable postreinforcement pauses. After achieving steady-state responding on three successive schedule values, subjects' responses were placed on extinction. In the present experiment, both response/reinforcer ratios and IRT distributions during extinction served as dependent measures.
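
For clarity, the contingency can be stated algorithmically. The following sketch is illustrative Python with hypothetical function and variable names; the original experiment was programmed on a Cordata microcomputer, and this is not that software:

import time

def run_drl_session(drl_value_s, session_length_s, wait_for_response):
    """Minimal DRL loop: a response is reinforced only when the interresponse
    time (IRT) meets or exceeds the schedule value; any shorter IRT simply
    resets the timing interval. `wait_for_response` is a stand-in for real
    keyboard handling and blocks until the space bar is pressed."""
    points = 0
    session_start = time.monotonic()
    last_response = session_start  # in this sketch, session onset starts the first interval
    while time.monotonic() - session_start < session_length_s:
        wait_for_response()
        now = time.monotonic()
        irt = now - last_response
        last_response = now        # every response resets the interval
        if irt >= drl_value_s:
            points += 5            # counter incremented in increments of five
    return points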

Method

Subjects. Five undergraduate students (three females and two males) were recruited from introductory psychology courses at Spalding University. Subjects received course credit for participating in the study.

Apparatus. The experimental cubicle, measuring approximately 2 x 3 meters, contained a desk (102 cm x 90 cm), on which rested a Cordata 4200 personal microcomputer, monochrome monitor, and Epson LX-810 dot-matrix printer. All experimental conditions and data collection were controlled by the computer. Subjects fulfilled schedule requirements by pressing the space bar on the keyboard, on which was affixed a label reading "SPACE BAR."

Procedure. Subjects participated in the experiment for approximately 30-45 minutes per day, at the same time each day, for a period of 3 to 5 days. Upon entering the experimental cubicle on the first day, subjects were seated in front of the computer and read the following instructions on the monitor:

Thank you for agreeing to participate in this study. Your task is to obtain as many points as possible per session by responding on the space bar below. Respond as often as you like, but do not hold the space bar down. When the session finishes, please leave the room and notify the experimenter. Remember, try to get as many points as you can each session. When you are ready to begin, press ENTER.

Pressing the ENTER key terminated the instructions and illuminated the message SESSION TOTAL POINTS at approximately the midpoint of the monitor screen. This message remained on throughout each session. Sessions were generally 10 min in duration, although occasional short sessions (5-8 min) were used to ensure response stability prior to schedule changes. During a given session, responses that fulfilled the schedule value added five points to a counter directly below the SESSION TOTAL POINTS message. In addition, a 500-Hz tone accompanied point delivery, terminating after approximately 4 s. At the end of each session, the message FINISH - THANK YOU appeared in the lower right-hand quadrant of the monitor screen, and the cumulative session points remained illuminated above.
Table 1

Order of Experimental Conditions for Subjects in Experiment 1

Subject    Order of Experimental Conditions
JC         DRL 3-s, DRL 16-s, DRL 24-s, DRL 3-s, Extinction
PJ         DRL 5-s, DRL 10-s, DRL 20-s, DRL 5-s, Extinction
JL         DRL 5-s, DRL 10-s, DRL 20-s, DRL 10-s, Extinction
PQ         DRL 5-s, DRL 10-s, DRL 20-s, DRL 10-s, Extinction
JD         DRL 5-s, DRL 10-s, DRL 20-s, DRL 10-s, Extinction


All subjects were exposed to a sequence of DRL values, in ascending order, as shown in Table 1. A stability criterion for responding had to be met before any subject experienced a change in schedule parameter. The number of reinforcers obtained during a particular session was first calculated. When the mean number of responses per reinforcer during the last quarter of a session (the final 25% of the total session reinforcers) was less than or equal to 2, responding was considered stable, and a subsequent schedule value was programmed. Relative to other reinforcement schedules, the DRL is a somewhat restrictive contingency, in that only IRTs that meet or exceed the programmed schedule value produce reinforcement. Contact with this rather discriminating contingency often occurs very early in an initial session; thus, behavior comes under schedule control quite rapidly. In the present experiment, subjects frequently met the stability criterion within the first 10-min session under each schedule value. When this was not the case, additional short sessions were conducted until responding met the stability criterion. As can be seen in Table 1, all subjects received an ascending series of DRL values during the first three schedule changes and a reversal to an earlier schedule value during their last reinforcement session. Only ascending schedule values, rather than ascending and descending series counterbalanced across subjects, were programmed because establishing initial schedule control at relatively long DRL values (e.g., 20 s) can be problematic. Also, to ensure that response patterns did not reflect idiosyncratic properties of the DRL 5-s, DRL 10-s, and DRL 20-s schedule parameters, one subject (JC) experienced a different sequence of schedule values.
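
Stated computationally, the criterion reduces to a check over the final quarter of a session's reinforcers. The sketch below is again illustrative Python with hypothetical names, not the original control software:

def is_stable(responses_per_reinforcer):
    """Stability criterion: mean responses per reinforcer over the last
    quarter of the session's reinforcers must be less than or equal to 2."""
    n = len(responses_per_reinforcer)
    if n == 0:
        return False  # no reinforcers earned; stability cannot be judged
    last_quarter = responses_per_reinforcer[-max(1, n // 4):]
    return sum(last_quarter) / len(last_quarter) <= 2

# A session ending with nearly one response per reinforcer counts as stable:
print(is_stable([6, 4, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1]))  # True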

Following the series of DRL values, all subjects were exposed to a 10-min extinction session. During this session, all experimental conditions remained the same as before with the exception that responding on the space bar had no programmed consequences.

Results

Response variability can assume many topographic features, depending upon the behavior of interest. In the present experiment, two dimensions of responding served as dependent measures. The first, number of responses per reinforcer, was considered a useful index of response efficiency during changing schedule conditions. The mean and range of responses per reinforcer were calculated for each session quarter. The first session quarter was defined as the accumulation of 25% of the total session reinforcers, the second quarter as the accumulation of the next 25%, and so on. Figure 1 depicts the mean and range of responses per reinforcer across all DRL schedule values for Subject JD, whose responding was typical of all subjects in Experiment 1. These data indicate that both the mean and range of responses per reinforcer declined as an orderly function of reinforcers obtained. Also, an increase in the number of responses per reinforcer can be seen in the first quarter of each DRL session. These response characteristics were obtained uniformly across both schedule parameters and subjects.

The transition to a new schedule parameter can be interpreted as a period during which previous response classes, in this case, interresponse times (IRTs), undergo extinction, and alternative IRTs emerge, becoming candidates for reinforcement. Consistent with research on learning sets (Harlow, 1949) and repeated acquisition (Boren & Devine, 1968), intercondition response variability can be seen to diminish with continued exposure to each DRL value.

The functional response unit in temporal schedules of reinforcement, particularly DRL contingencies, is the interresponse time. In the present experiment, IRT variability was evaluated during the extinction component of the experiment. Figure 2 depicts IRT values for all subjects in Experiment 1 for the last quarter of the final DRL condition and the extinction condition. Also, the three lines running horizontally on each graph represent subjects' median IRT values during the final quarter for each DRL value.
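
Because the subsequent analyses rest on this unit, it may help to note how IRTs are derived from raw data: they are simply the first differences of successive response timestamps. A brief illustrative sketch follows (hypothetical Python; the data shown are invented, not values from the present experiment):

import statistics

def interresponse_times(response_times_s):
    """IRTs are the intervals between successive responses (in seconds)."""
    return [t2 - t1 for t1, t2 in zip(response_times_s, response_times_s[1:])]

# Hypothetical response timestamps (seconds into a session):
times = [0.0, 5.3, 10.9, 16.2, 22.0, 27.4]
irts = interresponse_times(times)
print(irts)                     # approximately [5.3, 5.6, 5.3, 5.8, 5.4]
print(statistics.median(irts))  # ~5.4; one such median per DRL condition is plotted in Figure 2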

Despite their somewhat different experimental histories, subjects' IRT patterns during extinction exhibited considerable uniformity. Four subjects (JC, JL, PQ, and JD) emitted IRTs early in the extinction session that ranged across all median IRT values emitted during the final quarters of the DRL sessions. The exception was PJ, whose IRTs during extinction never exceeded the 59-s median IRT exhibited during the DRL 20-s schedule. Nonetheless, this subject's data show a degree of variability during extinction almost indistinguishable from that of the other subjects.

An additional notable feature of subjects' response patterns was the tendency to move abruptly from short to long IRTs during extinction. At times, these rapid shifts traversed a large range of IRT values, from very short IRTs never reinforced during DRL conditions to long IRTs far exceeding the longest IRTs reinforced during DRL conditions. These shifts are most apparent in the data from Subjects PJ, JL, and JD, though they are also evident in the data from Subjects PQ and JC.

Finally, Epstein (1983, 1985) has claimed that during problem-solving conditions that resemble extinction, previously reinforced response classes resurge, often creating a behavioral synthesis that once again produces reinforcement. In the present experiment, it is not clear how differentially reinforced IRT distributions might have "synthesized" during extinction. It is possible, however, that resurgence manifested itself in the frequent occurrence of IRTs characteristic of those emitted during the separate DRL conditions. Though perhaps not highly conspicuous, the early extinction responding of subjects, especially PJ, may be taken as evidence of resurgence. Approximately one-fourth of this subject's initial IRTs under extinction closely approached the median IRT values emitted during the DRL 5-s and DRL 10-s conditions. Though many of this subject's later IRTs show characteristic deviations from previously reinforced values, there are occasional clusters of responses falling within the range of IRTs reinforced during DRL conditions. To a somewhat lesser extent, this pattern of periodic resurgence also characterizes responding in the other subjects. Nonetheless, for all subjects, the majority of extinction IRTs were substantially shorter or longer than those reinforced during DRL conditions.

Experiment 2

The results of the first experiment were consistent with other studies demonstrating increased response variability during extinction relative to a preextinction condition (Antonitis, 1951; Neisworth & Moore, 1972). In particular, a temporal characteristic of behavior, the interresponse time, showed both abrupt and frequent changes during extinction, often assuming values never reinforced during DRL conditions. Moreover, the extinction-induced variability observed in the present experiment was highly consistent across subjects, despite some variability in experimental history.

It is possible, however, that extinction alone was not responsible for the nature of the variability seen in Experiment 1. Though their experimental histories differed somewhat, all subjects in Experiment 1 were exposed to three separate DRL values prior to experiencing extinction. Considerable evidence from human operant research indicates that experimental history is a significant contributor to schedule performance (see, for example, Weiner, 1965, 1969, 1972). Reinforcement history, characterized in the present study by exposure to three separate DRL parameters prior to extinction, may therefore have been an important determinant of the variability seen during extinction in Experiment 1. The second experiment was conducted to assess whether subjects' extinction responding would exhibit similar variability following a less diverse reinforcement history. Accordingly, subjects in Experiment 2 experienced only a single DRL value prior to exposure to extinction.

Method

Subjects. Six undergraduate students (three female, three male) were recruited from introductory psychology courses at Spalding University. Subjects received extra credit in their classes for participating in the study.

Apparatus. The apparatus and data collection procedures were identical to those used in the first experiment; see the Method section of Experiment 1 for details.

Procedure. General experimental procedures, including instructions to subjects, were the same as in Experiment 1. Subjects in the present experiment, however, were exposed to a single DRL value prior to the extinction condition. Subjects DF and JG were exposed to a DRL 5-s schedule, Subjects JM and JW to a DRL 10-s schedule, and Subjects JM II and CW to a DRL 20-s schedule. The stability criterion was identical to that used in Experiment 1. Once a subject's responding had stabilized on the DRL condition according to this criterion, a 10-min extinction session was conducted. As in the first experiment, responding during extinction had no programmed consequences.

Results

Figure 3 depicts the IRT values for the last quarter of the DRL session and all IRTs under extinction for subjects in Experiment 2. Consistent with the results of Experiment 1, IRT distributions during extinction demonstrate considerable variability about the median IRTs emitted during DRL conditions. In addition, responding shows frequent abrupt changes from short to long IRTs. Also similar to results from the first experiment, five of six subjects (DF, JG, JM, JW, and CW) emitted initial IRTs under extinction much shorter than those reinforced earlier under the DRL condition. Finally, occasional brief runs of extinction responses that approximate reinforced IRT values may be interpreted as instances of resurgence. This pattern is most noticeable in the responding of JW near the end of the extinction condition. This subject's IRT distribution during extinction was marked by initially large variability, eventually giving way to a very restricted range of IRTs close to the median values occurring during the DRL condition. Somewhat briefer (3-5 IRTs) instances of resurgence can be seen in other subjects' extinction data, and they are often separated by response clusters that deviate substantially from earlier reinforced values (see, particularly, JM).

In summary, extinction responding in the present experiment strongly resembled, in both magnitude and response pattern, the variability produced in Experiment 1. The variability exhibited in Experiment 1 thus does not appear to be a peculiar function of the richer reinforcement history experienced by those subjects. Importantly, subjects in Experiment 1 were not only exposed to a variety of DRL values, but their responding also achieved the stability criterion on each value, and over the course of DRL training these subjects obtained many times more reinforcers than did subjects in Experiment 2. Subjects in Experiment 2, having encountered but a single DRL value, nonetheless showed the abrupt transitions in IRT distributions and the periodic resurgence characteristic of extinction responding in the first experiment.

Discussion

Taken together, the present experiments identify extinction as a potentially important source of variability in human operant behavior. Understandably, much of operant psychology's history has been devoted to the analysis of selection by reinforcement. For the most part, research on reinforcement schedules has taken full advantage of the steady-state strategy and, consequently, analyses have most frequently focused on stable response patterns maintained over prolonged and unchanging experimental contingencies (Honig & Staddon, 1977; Johnston & Pennypacker, 1993). Nevertheless, topographic variability has, in a number of studies, been shown to be a reinforceable dimension of behavior (Machado, 1989; Page & Neuringer, 1985; Pryor, Haag, & O'Reilly, 1969). For present purposes, however, the experimental conditions immediately following schedule changes, that is, transitional phases, served as the window through which to observe response variability. In addition, extinction, rather than reinforcement, was manipulated as the independent variable.

Subjects in the present experiments can be viewed as having been exposed to a problem to be solved, namely, how to obtain points on the computer console. This, in fact, was precisely their objective per the instructions given at the beginning of the experiment. In the first experiment, the problem "solution" changed, as adjustments in interresponse times were necessary to secure points when DRL parameters changed. In the extinction condition, subjects faced what problem-solving researchers call an unsolvable problem. Indeed, procedural and lexical differences notwithstanding, much operant research might be readily conceptualized within the problem-solving research tradition. Conversely, we might ask whether the data from problem-solving experiments reveal functional relationships reminiscent of differential reinforcement, extinction, and the like. Consider the following description of a human problem solver, as offered by Newell and Simon (1972) in their landmark text on the subject:

When his initial processing of columns 2 and 6 did not lead to a precise value for G and R, he decided to try different possible solutions (forward search) ... When he discovered inappropriate assignments (i.e., reached a dead end), he was able to break off to the last prior state of knowledge that was not disconfirmed. (p. 228)

The description invites comparison both to the notion of "resurgence" and to a more generic selectionist perspective, the evolution of which in psychological thought can be traced back at least four decades. In Science and Human Behavior, Skinner (1953) suggested that the reinforcement process mimics, functionally, the dynamics of natural selection. Instead of genotypes, however, selection at this level operates on response classes, or operants. The agent of selection can be conceptualized as the local reinforcement contingencies in effect during the lifetime of the organism. Thus, the ontogenic development of behavior can be seen as unfolding through a process similar to that responsible for the evolution of species. In a more detailed and provocative account, Donald Campbell (1987) proposed a "blind variation and selective retention" model of behavior, in which these fundamental processes describe the behavioral dynamics of organisms across the phylogenetic spectrum. Witness Campbell's (1987) description of the model applied to the locomotor behavior of a protozoan:

Forward locomotion persists until blocked, at which point direction of locomotion is varied blindly until unblocked forward locomotion is again possible. The external physical environment is the selective agency, the preservation of discovery is embodied in the preservation of the unblocked forward movement. (p. 93)

The parallels between this organism's behavior and that of human subjects solving cryptarithmetic problems (Newell & Simon, 1972) or responding under extinction conditions in the present experiments are striking. The conclusion that all can be readily mapped onto a variation-selection-retention model, despite their considerable structural disparities, seems inescapable. Moreover, this same process has been suggested as a model for describing creative accomplishments across many disciplines, including science and the arts (Gruber & Davis, 1988; Perkins, 1988). Thus, contemporary accounts increasingly have come to attribute the emergence of novel behavior and problem solutions to the organism's recent behavioral history, rather than to theoretical constructs, such as "insight," whose descriptive status has often been mistaken for explanation.
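
Campbell's model is concrete enough to be stated as a short procedure. The following toy sketch is illustrative Python; the scenario and names are our hypothetical construction, not drawn from Campbell or from the present experiments:

import random

def blind_variation_selective_retention(is_blocked, steps=100):
    """Toy rendering of Campbell's (1987) model: persist with the retained
    heading until the environment blocks it, then vary the heading blindly
    until unblocked movement is again possible. Assumes some heading is
    always unblocked; otherwise the variation loop would never terminate."""
    heading = 0.0  # degrees; the currently retained variant
    path = []
    for _ in range(steps):
        while is_blocked(heading):            # selection by the environment
            heading = random.uniform(0, 360)  # blind variation: no foresight
        path.append(heading)                  # retention of the working variant
    return path

# An environment that blocks all headings between 45 and 135 degrees:
path = blind_variation_selective_retention(lambda h: 45 <= h <= 135, steps=10)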

The present account suggests that much of the subject matter of the behavioral sciences, though conceptualized and measured in remarkably different ways by psychologists of differing persuasions, may nonetheless be viewed as the product of a variation and selection process. The implications of such functional uniformity may be unprecedented in psychology, particularly if one takes seriously the claims that the discipline remains a fragmented, disunified enterprise (Staats, 1981). The usefulness of any prospective paradigm will, naturally, depend on its ability to capture under its conceptual umbrella the diverse body of empirical knowledge in scientific psychology. Promising such integrative capability and boasting a surprisingly tenacious history in psychology's literature, the "selectionist" perspective may very well prove itself the kind of variant worthy of retention.

References

ANTONITIS, J. J. (1951). Response variability in the white rat during conditioning, extinction and reconditioning. Journal of Experimental Psychology, 42, 273-281.

AZRIN, N.H., HUTCHINSON, R. R., & HAKE, D. F. (1966). Extinction-induced aggression. Journal of the Experimental Analysis of Behavior, 9, 191-204.

BARASH, D. P. (1982). Sociobiology and behavior. New York: Elsevier.

BOREN, J. J., & DEVINE, D. D. (1968). The repeated acquisition of behavioral chains. Journal of the Experimental Analysis of Behavior, 11, 651-660.

CAMPBELL, D. T. (1987). Blind variation and selective retention in creative thought as in other knowledge processes. In G. Radnitzky & W. W. Bartley, III (Eds.), Evolutionary epistemology, rationality, and the sociology of knowledge. La Salle, IL: Open Court.

CATANIA, A. C., & HARNAD, S. (Eds.). (1986). The selection of behavior: The operant behaviorism of B. F. Skinner: Comments and consequences. New York: Cambridge University Press.

COSMIDES, L., & TOOBY, J. (1989). Evolutionary psychology and the generation of culture, Part 1: Theoretical considerations. Ethology and Sociobiology, 10, 29-49.

CRAWFORD, C. B. (1989). The theory of evolution: Of what value to psychology? Journal of Comparative Psychology, 103, 4-22.

CRAWFORD, C. B., & ANDERSON, J. L. (1989). Sociobiology: An environmental discipline? American Psychologist, 44, 1449-1459.

DONAHOE, J. W., BURGOS, J. E., & PALMER, D. C. (1993). A selectionist approach to reinforcement. Journal of the Experimental Analysis of Behavior, 60, 17-40.

DONAHOE, J. W., & PALMER, D. C. (1994). Learning and complex behavior. Boston: Allyn & Bacon.

EPSTEIN, R. (1983). Resurgence of previously reinforced behavior during extinction. Behavior Analysis Letters, 3, 391-397.

EPSTEIN, R. (1985). Extinction-induced resurgence: Preliminary investigations and possible applications. The Psychological Record, 35, 143-153.

FERSTER, C. B., & SKINNER, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

GRUBER, H. E., & DAVIS, S. N. (1988). Inching our way up Mount Olympus: The evolving systems approach to creativity. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives (pp. 243-270). Cambridge: Cambridge University Press.

HARLOW, H. F. (1949). The formation of learning sets. Psychological Review, 56, 51-65.

HONIG, W. K., & STADDON, J. E. R. (1977). Handbook of operant behavior. Englewood Cliffs, NJ: Prentice-Hall.

HUMPHREYS, L. G. (1939). Acquisition and extinction of verbal expectations in a situation analogous to conditioning. Journal of Experimental Psychology, 25, 294-301.

JOHNSTON, J. M., & PENNYPACKER, H. S. (1993). Strategies and tactics of behavioral research. Hillsdale, NJ: Lawrence Erlbaum.

KOHLER, W. (1925). The mentality of apes. New York: Harcourt, Brace.

LOVAAS, O. I., & SIMMONS, J. Q. (1969). Manipulation of self-destruction in three retarded children. Journal of Applied Behavior Analysis, 2, 143-157.

MACHADO, A. (1989). Operant conditioning of behavioral variability using a percentile reinforcement schedule. Journal of the Experimental Analysis of Behavior, 52, 155-166.

NEISWORTH, J. T., & MOORE, F. (1972). Operant treatment of asthmatic responding with the parent as therapist. Behavior Therapy, 3, 95-99.

NEWELL, A., & SIMON, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

PAGE, S., & NEURINGER, A. (1985). Variability is an operant. Journal of Experimental Psychology: Animal Behavior Processes, 11, 429-452.

PERKINS, D. N. (1988). The possibility of invention. In R. J. Sternberg (Ed.), The nature of creativity: Contemporary psychological perspectives (pp. 362-385). Cambridge: Cambridge University Press.

PRYOR, K.W., HAAG, R., & O'REILLY, J. (1969). The creative porpoise: Training for novel behavior. Journal of the Experimental Analysis of Behavior, 12, 653-661.

RICHELLE, M. (1987). Variation and selection: The evolutionary analogy in Skinner's theory. In S. Modgil & C. Modgil (Eds.), B. F. Skinner: Consensus and controversy. New York: Falmer Press.

RILLING, M. (1977). Stimulus control and inhibitory processes. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 432-480). Englewood Cliffs, NJ: Prentice-Hall.

RINGEN, J. D. (1993). Adaptation, teleology, and selection by consequences. Journal of the Experimental Analysis of Behavior, 60, 3-15.

ROBBINS, D. (1971). Partial reinforcement: A selective review of the alleyway literature since 1960. Psychological Bulletin, 76, 415-431.

SCHOENFELD, W. N., HARRIS, A. H., & FARMER, J. (1966). Conditioning response variability. Psychological Reports, 19, 551-557.

SCHWARTZ, B. (1980). Development of complex stereotyped behavior in pigeons. Journal of the Experimental Analysis of Behavior, 33, 153-166.

SCHWARTZ, B. (1982). Failure to produce response variability with reinforcement. Journal of the Experimental Analysis of Behavior, 37, 171-181.

SKINNER, B. F. (1953). Science and human behavior. New York: Macmillan.

SKINNER, B. F. (1966). The phylogeny and ontogeny of behavior. Science, 153, 1205-1213.

SKINNER, B. F. (1981). Selection by consequences. Science, 213, 501-504.

STAATS, A. W. (1981). Paradigmatic behaviorism, unified theory construction, and the zeitgeist of separatism. American Psychologist, 36, 239-256.

SYMONS, D. (1987). If we're all Darwinians, what's the fuss about? In C. B. Crawford, M. S. Smith, & D. Krebs (Eds.), Sociobiology and psychology: Ideas, issues, and applications (pp. 121-146). Hillsdale, NJ: Erlbaum.

TODD, J. T., MORRIS, E. K., & FENZA, K. M. (1989). Temporal organization of extinction-induced responding in preschool children. The Psychological Record, 39, 117-130.

VOGEL, R., & ANNAU, Z. (1973). An operant discrimination task allowing variability of response patterning. Journal of the Experimental Analysis of Behavior, 20, 1-6.

WEINER, H. (1965). Conditioning history and maladaptive human operant behavior. Psychological Reports, 17, 935-942.

WEINER, H. (1969). Controlling human fixed-interval performance. Journal of the Experimental Analysis of Behavior, 12, 349-373.

WEINER, H. (1972). Controlling human fixed-interval performance with fixed-ratio responding or differential reinforcement of low-rate responding in mixed schedules. Psychonomic Science, 26, 191-192.