The bases for language repertoires: functional stimulus-response relations.


Speech-language pathologists (SLPs) and applied behavior analysts (ABAs) have much in common. They generally prefer evidence-based methodologies to inference-based methodologies. The evidence-based methodology of behavior analysis provides a causal basis for explaining behavior in terms of its functional relationship to environmental variables. While behavior analysts have certainly not discovered all the causes of human behavior, their experimental findings of lawful relations between environmental stimuli and behavior, how these relationships are established, and what kind of factors go into establishing and maintaining these relationships have given them great confidence in their endeavors and in their interpretations of how behavior repertoires are formed. This has resulted in the development of effective techniques in therapeutic situations and offers the prospect of future progress in applied areas.

From the history of their field and from their experiences working with clients, SLPs and ABAs know that in many cases individuals with verbal behavioral disorders are amenable to improvement, if not a complete restoration to normal behavior. My purpose here is to discuss empirical issues that are the common concerns of SLPs and ABAs. Cognitive psychologists and most linguists have argued that language behavior cannot be learned "simply" through stimulus-response relations (Chomsky, 1959). But what are S-R relations? How are such relations established? Are the objections of cognitivists and linguists to an explanation of language behavior in terms of S-R relations valid? In what follows I will try to answer these questions and in doing so demonstrate the relevance of special kinds of S-R relations--the operant S-R relations--to providing behavior interventions to those with speech-language problems.

Functional Stimulus-Response Relations

S-R relations can be interpreted in a variety of ways. In behavior analysis, what such a relation means is that behavior, in the form of an identifiable response (R), is controlled or has come to be controlled in some way by some identifiable event, condition, or situation in the environment. That environmental event, condition, or situation is called a stimulus (S). This relationship is a functional relationship in that the stimulus is defined in terms of the effect it has on behavior.

The term stimulus can refer to (1) a specific instance of physical events, (2) combinations or complexes of events, (3) the absence of previously occurring events, (4) a relation among events, (5) specific physical properties of events, (6) classes of events defined by physical properties, and (7) classes of events defined functionally (Catania, 1998). After reading the previous list of what the term stimulus can refer to, the reader may be thinking that almost anything can be a stimulus. The most important thing to remember about an S-R relation is not what kinds of things can be stimuli or even responses, but the role stimuli play in affecting behavior. S-R relations must be determined empirically, i.e., based on observational and experimental findings about how environmental variables (the stimuli) and behavior (the responses) interact with each other.

In the last sentence of the previous paragraph I deliberately used and emphasized the word "interact" with reference to stimuli and responses because often it is assumed that the temporal locus of a stimulus is antecedent to the response it affects. However, the temporal locus of a stimulus that affects behavior can also be a consequence of behavior itself, for behavior acts on the environment and in doing so often changes it (Cooper et al., 2007, pp. 28-29). Altering the environment may then very well have an effect on future behavior. Just how future behavior might be affected as a result of a consequent stimulus change will be discussed in the following section on operant relations.

Another thing to keep in mind about S-R relations is that in all likelihood these relationships do not involve single, unique events. This can be seen by the list of things that the term stimulus can refer to, given earlier. Although in a single observation or a single experiment, a specific S and a specific R are being considered, the relation between them may involve a number of different values of S and R. For example, the verbal stimuli "Sit down," "Have a seat," "Why don't you make yourself comfortable," "Pull up a chair," "Please be seated," "Take a load off," and so forth may all result in the "same" response of sitting down. Note also that the act of sitting down (R) is not without its variability. People sit down in many different ways, depending on what they sit down on, the kind of clothes they are wearing, their physical state, their emotional state, and such other factors. But basically the relationship between the verbal stimuli of some speaker(s) and the physical response of some listener(s) is the same. Thus, for any given S-R relation, we are usually dealing with a class of stimuli and a class of responses. Each class may be given a particular verbal label. In this example, 'sitting down' is the label given to the response class and "mand to sit" might be given as the label for the stimulus class.

It seems natural to suppose that if a stimulus class influences a response class in the same way, we would expect the members of the stimulus class to have some physical properties in common. This frequently happens, but it is not always the case. The members of the stimulus class given in the previous paragraph labeled "mand to sit" really have nothing physical in common except they are in the same receptive mode (auditory if spoken, and visual if written). Furthermore, they do not have very much in common structurally. Some belong to the same sentence class (Imperative) and a few share a lexical item (the indefinite article "a"), but not much else.

Just what the specific functional relationship between S and R is remains a matter of empirical circumstance. From a scientific point of view what is important is that these relationships are lawful, meaning that they are regular and predictable and established in the same empirical manner. Furthermore, if S and R are variables that can be manipulated by the experimenter, the possibility for altering the relationship arises. This alterability factor is of particular relevance for the speech-language and behavior analysis practitioner whose goal is to establish, change, or correct language behavior.

Types of S-R Relations

There are many ways of classifying S-R relations. In behavior analysis, however, S-R relations are classified according to whether the relations are reflexive, respondent, or operant. One of the key criteria for this classification has to do with whether the relationship is phylogenetic or ontogenetic. Phylogenetic S-R relations are the products of the genetic selection that arise from the evolutionary history of a species. Phylogenetic S-R relations are often referred to as being automatic, mechanical, and unlearned, because of their high degree of predictability. Ontogenetic S-R relations are those that are established by an organism's interactions with its environment. They are said to be contingent or conditional, because they are more probabilistic and subject to the vagaries of changes in the environment. In other words, ontogenetic S-R relations are learned.

Reflex Relations

Everyone is familiar with reflex relations or what are commonly called reflexes. Among these are the gag reflex, the knee-jerk reflex, the startle reflex, and the pupillary light reflex. Reflexes involve involuntary responses to various environmental stimuli. In a reflex relation the stimulus is referred to as being unconditioned and is said to elicit the response. Human infants are known to have about a half-dozen so-called primitive reflexes that gradually disappear by the time they are one year of age. Reflexes are phylogenetically based S-R relations and therefore unlearned. They are a product of the evolutionary history of a species and universally part of a species' behavioral repertoire. Using the jargon of computerese, reflexes are "hardwired." Except for the developmental reflexes in human infants mentioned above, reflexes do not change much over the life span of an individual. Furthermore, they are usually unmodifiable. However, reflex relations may be temporarily weakened by repeated presentation of the unconditioned stimulus. Such a process is known as habituation. The reflex relation usually returns quickly after a fairly brief interval of time.

Reflex relations can occur in chains. For example, the nursing process of newborn babies involves a chain of reflexes. First, touching an infant's cheek elicits head turning in the direction of the tactile sensation (rooting reflex). If in the process of turning the infant's mouth touches the surface of an object, this elicits sucking. If the object that the infant comes in contact with is the mother's nipple or the nipple of a bottle, milk will then enter the infant's mouth. The sensation of milk in the mouth triggers swallowing. These chains are called reactive or reflex chains (Pierce & Cheney, 2008).

Simple reflexes and reflex chains make up only a tiny part of the behavioral repertoire of a human being. It was once believed that all behavior could be explained in terms of reflexes, including consciousness (Sechenov, 1965). Reflexes were considered the elemental building blocks of all behavior. This mechanistic view, however, was overthrown by the later discoveries of respondent and, even more importantly, operant S-R relations.

Reflexive vocalizations occur when an infant is sucking, swallowing, and burping. Fussing sounds and crying are also elicited by a wide variety of stimuli. Among these are deprivation of food or water, diaper rash, and other discomforting stimuli, including loud noises, noxious tastes or smells, etc. However, there is no empirical evidence that indicates that reflex relations play any role at all in the establishment or maintenance of language repertoires.

It is important to note that although an involuntary response, whether vocal or non-vocal, is elicited by an unconditioned stimulus in a reflex relation, the same response or response class may also be evoked under other environmental conditions. Behaviors can and often do have multiple causes (Skinner, 1953, 1957). For example, an infant's cry, as pointed out above, may be elicited under various kinds of physical deprivation or aversive stimuli, but may also come under the control of non-eliciting ontogenetic social and situational stimulus variables (Schlinger, 1995). Likewise, stimuli that may elicit a reflex response may also come to control behavior in a noneliciting way under other environmental conditions. For example, a flashing red light might elicit the orienting reflex response of head turning, but under other conditions may cause a driver to stop momentarily before driving on. How these non-eliciting stimulus-response functions are established will be discussed in the next two subsections.

Respondent Relations

"If reflexes were the only legacy of natural selection, an organism would be ill-equipped to survive in a changing environment." (Donahoe & Palmer, 1994)

As discussed in the previous subsection, reflex relations have been phylogenetically selected for and require the presence of a particular member of a stimulus class to elicit a particular member of a response class. But always contiguous with a particular eliciting stimulus-event are other kinds of environmental stimuli with other kinds of properties that are not necessarily in the same sensory mode. This opens up the possibility for these other stimulus-events to establish a new eliciting function with the reflex response. Ivan Pavlov (1849-1936) is given credit for discovering how such stimulus-response relations get established through a process known today as respondent conditioning (also called classical or Pavlovian conditioning).

Respondent conditioning basically involves the contiguous pairing of the eliciting unconditioned stimulus with some other neutral stimulus called the conditioned stimulus. After several such pairings, the presence of the conditioned stimulus alone comes to elicit the reflex response, now called the conditioned response. Thus, a new S-R relation has been established through the original pairing of an unconditioned stimulus with a conditioned stimulus.

The conditioned stimulus may be a property or a by-product of the unconditioned stimulus. For example, if a dog has been deprived of food for a sufficiently long period, the sight of food or the smell of food brings about salivation. However, the conditioned stimulus may be totally independent of the unconditioned stimulus, i.e., quite arbitrary. In some of Pavlov's experiments with dogs (Pavlov, 1927), the conditioned stimulus was the sound of a bell, or a metronome, or the sound of the laboratory door being opened by the experimenter, or even Pavlov himself as he entered the laboratory to perform an experiment on one of his dogs.

Pairing a conditioned stimulus with an unconditioned stimulus is referred to as first-order respondent conditioning. It has been shown experimentally that second-order, third-order, or in general, higher-order respondent conditioning is possible. Once the first-order conditioned stimulus-response relation has been established it is then possible to pair the first-order conditioned stimulus with another conditioned stimulus.

The establishment of new S-R relations through respondent conditioning might be said to add to an individual's behavioral repertoire in the sense that although not novel, the response is now being elicited by a different stimulus. Except for those respondent relations where the response is a strongly emotional one, a respondent S-R relation is not what could be called a long-lasting relation. Once the respondent relation is established, repeated occurrences of the conditioned stimulus without the unconditioned stimulus weaken the S-R relation until the conditioned stimulus elicits no response. This process is known as respondent extinction. The only way to re-establish the relationship is to go through respondent conditioning all over again. Of course, the original reflex relation is left undisturbed.
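The acquisition-then-extinction pattern just described can be sketched quantitatively with the Rescorla-Wagner learning rule, a standard model of Pavlovian conditioning. The learning rate and trial counts below are arbitrary illustrative choices, not values taken from this article.

```python
# Sketch of respondent acquisition and extinction using the
# Rescorla-Wagner rule: on each trial, the associative strength V of
# the conditioned stimulus (CS) moves a fixed fraction of the way
# toward the current asymptote -- 1.0 when the CS is paired with the
# unconditioned stimulus (US), 0.0 when the CS is presented alone.

def rescorla_wagner(n_pairings, n_extinction_trials, rate=0.3):
    V = 0.0                                # associative strength of the CS
    history = []
    for _ in range(n_pairings):            # CS + US pairings (acquisition)
        V += rate * (1.0 - V)
        history.append(V)
    for _ in range(n_extinction_trials):   # CS alone (extinction)
        V += rate * (0.0 - V)
        history.append(V)
    return history

strengths = rescorla_wagner(10, 10)
# strength climbs toward 1.0 over the pairings, then decays back
# toward zero once the US is withheld
```

Note that in this model, as in the text, extinction weakens only the conditioned relation; the original unconditioned reflex relation is untouched.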

While respondent conditioning seems to play a role in developing taste aversions, sexual arousal, and phobias, little is known about what role it plays in language development and everyday language behavior. It may play some role in words acquiring emotional connotations, but little else. That does not mean that there has not been much speculation in the past about its function with respect to language behavior. Nevertheless, textbooks on applied behavior analysis and speech-language disorders usually have little to say about respondent S-R relations or establishing them for therapeutic purposes. The application of respondent conditioning in speech-language interventions seems to be extremely limited, since only the stimulus is changed in the stimulus-response relation, not the behavior. Usually speech-language interventions involve changing the behavior of the client, such as correcting for misarticulations. To achieve changes in behavior requires the establishment of operant relations.

Operant Relations

"Men act upon the world, and change it, and are changed in turn by the consequences of their action." (Skinner, 1957)

Skinner's quotation captures the essence of how most human behavior occurs, is maintained, shaped, changed, elaborated on (i.e., made more complex), refined, and enlarged or contracted in its repertoire. Behavioral repertoires are largely created and maintained through an individual's interactions with the physical and social environment. Traditionally, we refer to this kind of behavior as voluntary behavior, in contrast to reflexive behavior. This kind of behavior is called operant behavior in behavior analysis. Operant behavior is a function of its consequences and is much more complex than reflexive behavior in that the operant S-R relations involve both antecedent stimuli and consequent stimuli, as will be shown below.

Consequent Stimuli. The following is an illustration of how a consequent stimulus can affect behavior. An infant is placed on her back in a crib. Typically, when placed in such a position the infant will randomly move her arms and feet and look around in what could be called uncommitted or emitted behavior (Cooper, Heron, & Heward, 2007). Above the infant and within her visual field is a motionless but colorful mobile. After a while she may habituate to her surroundings and eventually stop her activity and even fall asleep. Let us now tie the end of a ribbon to her right leg and connect the other end to the mobile, so that every time she moves her right leg the objects hanging from the mobile will be set in motion. The first time she moves her right leg the motion of the mobile will attract her attention. Pretty soon she is regularly moving her right leg to initiate the movement. The more vigorously she moves her leg, the more the mobile moves. Notice that the behavior is first initiated randomly. We have no evidence that there are any eliciting stimuli. Yet now the behavior is repeated because of its consequence(s). Traditionally we say that the infant is engaging in purposeful or intentional behavior. What was once a random movement of the right leg has become a regular, "purposeful" activity.

Reinforcement. The consequence of the behavior, the moving mobile, has come to function as a positive reinforcing stimulus, also called a positive reinforcer, for the generation of an operant behavioral repertoire (right leg movement). Technically speaking, consequent stimuli that are positively reinforcing are those that increase the probability that the response will occur again under similar circumstances. Consequent stimuli that decrease the probability that a response will reoccur are called punishing stimuli or punishers. For example, if a mother gently slaps (aversive stimulus) her young son's hand as he reaches out to grab something his mother doesn't want him to touch, the child will withdraw his hand. He might try several times more to reach for the object, but each time he gets slapped, perhaps a little harder. After a while, he no longer reaches out for the object. Each slap (the aversive punishing stimulus) is reducing the likelihood of his reaching out for the object. On the other hand, if a response leads to the removal of an aversive stimulus, the probability of the response is likely to increase when similar aversive contingencies occur. In such cases, the aversive stimulus is often referred to as a negatively reinforcing stimulus or negative reinforcer.

Reinforcing stimuli can be classified on the basis of their origin (empirical source) or their formal characteristics (Cooper et al., 2007). In classifying reinforcing stimuli on the basis of origin, two types are identified: unconditioned (positive or negative) reinforcers and conditioned (positive or negative) reinforcers. Unconditioned reinforcers, also known as primary reinforcers, are those that are not learned. In other words, such stimuli have been established as reinforcers through the process of natural selection over the evolutionary history of a species. Food, water, and sexual stimulation are frequently cited as examples of unconditioned reinforcers. Just what other stimuli are unconditioned reinforcers is difficult to specify for the human species, because few empirical studies have attempted to sort them out. It has been suggested that touch and some facial gestures, like the smile, may also be natural reinforcers, but the evidence is not strong enough to know for sure. Conditioned reinforcers, also known as secondary reinforcers, are stimuli that were originally neutral but became reinforcing by being paired either with unconditioned reinforcers or previously conditioned reinforcers. Perhaps one of the best-known examples of a conditioned reinforcer is money. Conditioned reinforcers like money are referred to as generalized conditioned reinforcers because they have been paired with so many different unconditioned and other conditioned reinforcers.

In terms of formal characteristics reinforcers can be divided into (1) tangible reinforcers, that is, those things that can be touched, seen, smelled, and manipulated; (2) edible reinforcers, that is, anything that can be eaten; (3) activity reinforcers, such as playing games by oneself or with others, reading, attending a concert, playing with friends, engaging in artistic endeavors, and so forth; (4) social reinforcers, such as physical contacts like hugs and pats, proximity to others, attending, and verbal stimuli like praise. Obviously social reinforcers are provided by other people and are probably among the most useful of the conditioned reinforcers for certain kinds of clients.

Antecedent Stimuli. Now let us complicate the previous situation of the infant and the mobile to show how contingent (non-eliciting) antecedent stimuli can influence behavioral responses. Let us say that during the time the infant was in her crib and engaged in these activities, a red light was on. After the reinforcing relationship has been established between the motion of her right leg and its consequence, we turn off the light and arrange things so that any further movement of her right leg does not result in setting the mobile in motion. After a while, the movement of her right leg will become less frequent and predictable, i.e., the reinforcing relation has been extinguished. But if we turn the red light on again and again allow movement of her right leg to cause the mobile to move, we will soon observe the infant's right leg moving as frequently and as predictably as before. In other words, we have re-established the reinforcing relation between the behavior (right leg movement) and its consequences (setting the mobile in motion), but only if the red light is on. Now when we turn the light off, we will find that the infant does not move her right leg, or more likely, moves her leg but with much less frequency and certainly with less regularity than when the light is on.

This example illustrates how antecedent stimuli can come to control operant behavior as a result of stimulus discrimination training, that is, reinforcing behavior in the presence of one antecedent stimulus but not in the presence of another. Through discrimination training a new stimulus-response relationship has been created between the presence of the antecedent red light and the behavior. The antecedent red light is said to occasion or evoke the behavior, that is, make it more likely to occur. The red light is said to be functioning as a discriminative stimulus. The behavior under the control of the discriminative stimulus is referred to as the discriminated operant.
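The discrimination-training contingency just described can be sketched as a small simulation. Everything here, from the incremental update rule to the parameter values, is an illustrative assumption of my own; the sketch simply shows how differential reinforcement drives response probabilities apart in the presence versus absence of the discriminative stimulus.

```python
import random

# Toy simulation of stimulus discrimination training: leg movement is
# reinforced (the mobile moves) only when the red light is on.  The
# response probability in each condition is nudged toward 1.0 when a
# response is reinforced and toward 0.0 when it goes unreinforced.

def discrimination_training(trials=2000, learning_rate=0.05, seed=0):
    rng = random.Random(seed)
    # response probability in the presence vs. absence of the light
    p_respond = {"light_on": 0.5, "light_off": 0.5}
    for _ in range(trials):
        condition = rng.choice(["light_on", "light_off"])
        if rng.random() < p_respond[condition]:      # a response occurs
            reinforced = (condition == "light_on")   # mobile moves only if light on
            target = 1.0 if reinforced else 0.0
            p_respond[condition] += learning_rate * (target - p_respond[condition])
    return p_respond

probs = discrimination_training()
# responding becomes frequent when the light is on and rare when it is off
```

In the simulation, as in the crib example, responding when the light is off undergoes extinction while responding when the light is on is strengthened, so the light comes to function as a discriminative stimulus.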

After an antecedent stimulus has come to evoke a particular response under contingencies of reinforcement, other stimuli are also likely to evoke the same response. This empirical principle is called stimulus generalization. These other stimuli usually share some common properties with the controlling stimulus, although the evocative strength of these new (untrained) stimuli may not be as great as that of the original controlling stimulus. In the stimulus discrimination example just given, the infant learned that right leg movement was reinforced when the red light was on but not when the light was off. However, lights of other colors might now also evoke leg movement, with the strength (frequency) of the leg response varying depending upon how close the color is to red. Stimulus generalization can also be seen in language learning. Once a child has acquired a new word, it will be evoked over a wider range of stimuli than it would be in adult speech. For example, one child learned the word "fly" in the context of the common household insect, but then on other occasions used the same word in the context of any small insect, crumbs, and even her own toes. Linguists often refer to such language behavior as overextensions, but such behavior conforms very well to the concept of stimulus generalization.
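Stimulus generalization is often summarized as a generalization gradient: response strength falls off smoothly with the "distance" between a test stimulus and the trained stimulus. The Gaussian shape and width used below are purely illustrative assumptions; real gradients must be measured empirically, as the text stresses.

```python
import math

# A toy generalization gradient over hue, with red (0 degrees on a
# color wheel) as the trained discriminative stimulus.  The Gaussian
# falloff and its width are illustrative assumptions, not data.

def response_strength(test_hue_deg, trained_hue_deg=0.0, width_deg=30.0):
    distance = abs(test_hue_deg - trained_hue_deg)
    return math.exp(-((distance / width_deg) ** 2))  # 1.0 at the trained hue

# red evokes the strongest responding; orange less; green less still
gradient = {hue: response_strength(hue) for hue in (0, 30, 120)}
```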

Schedules of Reinforcement. A reinforcing stimulus following some behavior increases the likelihood of that behavior occurring in the future. But for how long will the behavior reoccur, given the appropriate conditions? The answer to this question will depend on the past contingencies of reinforcement, that is, the frequency with which the behavior is reinforced. The rate at which reinforcement occurs is called a schedule of reinforcement (Ferster & Skinner, 1957). A continuous reinforcement (CRF) schedule is one in which an operant response is reinforced every time it occurs. There are also several varieties of intermittent reinforcement schedules, each with its own special characteristics. Interval schedules are arranged according to the passage of time: the first appropriate response after an interval has elapsed is reinforced, regardless of the number of other appropriate, or even non-appropriate, responses that have occurred during that interval. A fixed interval schedule (FI) is one in which the interval is constant, such as five minutes (FI5) or ten minutes (FI10). On the other hand, a variable interval schedule (VI) is one in which the interval of time is not constant but varies around an average. For example, a VI10 means that on average reinforcement becomes available every 10 minutes, but an actual interval might be shorter or longer than 10 minutes. Ratio schedules are arranged according to the number of appropriate responses that have occurred, regardless of time. Just like interval schedules, ratio schedules can be fixed (FR) or variable (VR). For example, an FR1 means that every appropriate response is being reinforced, i.e., continuous reinforcement, while an FR10 means that every 10th appropriate response is being reinforced. A VR15 means that on average every 15th response is being reinforced.
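The four basic intermittent schedules can be made concrete as small decision procedures: given a response at time t, does reinforcement follow? The class names and the uniform distributions used for the variable schedules are my own illustrative choices, not standard notation from the schedules literature.

```python
import random

# Sketches of the four basic intermittent schedules.  Each object
# answers one question per response: given a response at time t
# (arbitrary time units), is reinforcement delivered?

class FixedRatio:                      # FR n: every nth response reinforced
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self, t):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:                   # VR n: every nth response on average
    def __init__(self, n, rng=None):
        self.n = n
        self.rng = rng or random.Random(0)
        self.count, self.required = 0, self._draw()
    def _draw(self):
        return self.rng.randint(1, 2 * self.n - 1)  # mean requirement = n
    def respond(self, t):
        self.count += 1
        if self.count >= self.required:
            self.count, self.required = 0, self._draw()
            return True
        return False

class FixedInterval:                   # FI: first response after a fixed interval
    def __init__(self, interval):
        self.interval, self.available_at = interval, interval
    def respond(self, t):
        if t >= self.available_at:
            self.available_at = t + self.interval
            return True
        return False

class VariableInterval:                # VI: interval varies around an average
    def __init__(self, mean_interval, rng=None):
        self.mean = mean_interval
        self.rng = rng or random.Random(0)
        self.available_at = self._draw()
    def _draw(self):
        return self.rng.uniform(0, 2 * self.mean)   # mean = mean_interval
    def respond(self, t):
        if t >= self.available_at:
            self.available_at = t + self._draw()
            return True
        return False

# one response per time unit for 100 units under an FR5 schedule
fr5 = FixedRatio(5)
fr5_reinforcers = sum(fr5.respond(t) for t in range(100))  # 20 reinforcers
```

Note the asymmetry the text describes: the ratio schedules count responses and ignore the clock, while the interval schedules consult only the clock and reinforce the first response after the interval has elapsed.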

Behavior that is continuously reinforced will have a very high probability of reoccurring and will be strengthened faster than behavior on an intermittent schedule. If reinforcement is withheld, extinction of the behavior will occur quite rapidly. Responses on a continuous schedule of reinforcement also tend to have very little variation in their topography and thus tend to be rather stereotypical. Furthermore, while continuous reinforcement schedules are fairly often employed in laboratory and clinical settings, one rarely finds them in natural settings.

Unlike a continuous reinforcement schedule, where high rates of responding can be quickly generated, the rate of responding takes more time to build up with intermittent reinforcement. Nonetheless, response rates under intermittent reinforcement also eventually reach high probabilities of occurring under appropriate circumstances. An advantage of an intermittent schedule of reinforcement over CRF is that the behavior will persist longer and be more resistant to extinction. Comparing the different intermittent schedules, experimental evidence suggests that behaviors under variable schedules (VR and VI) of reinforcement are likely to be more resistant to extinction than those under the fixed schedules (FR and FI), as long as the average ratio or average interval is not too great. Various kinds of intermittent schedules of reinforcement are more likely to be found in natural settings than CRF schedules. This explains in part why much of our operant behavior persists over time, including language behavior. It should also be kept in mind that within our behavioral repertoires different operants are subject to different schedules of reinforcement and that schedules may change over time.

The Three-Term Contingency. Operant S-R relations, thus, usually involve what is called a three-term contingency, sometimes symbolized as S^D → R → S^R, where S^D is the discriminative stimulus, R the response, and S^R the reinforcing stimulus. According to this "formula," the occurrence of some behavior is highly probable (other conditions being present) when a certain antecedent (discriminative) stimulus is present, as the result of a past history of the behavior being contingently followed by a reinforcing stimulus under some schedule of reinforcement. The three-term contingency formula for operant conditioning looks deceptively simple, which is probably one of the reasons why many linguists and cognitive psychologists, as well as others, have difficulty conceiving that operant conditioning could be the basis for language behavior. What I have presented here is only the bare basics of the principles of operant conditioning. For details, see Catania (1998), Cooper et al. (2007), Ferster & Skinner (1957), Pierce & Cheney (2008), Skinner (1953), or Sulzer-Azaroff & Mayer (1991).
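The three-term contingency can also be made concrete as a record of trials, each with its antecedent, response, and consequence, from which stimulus-response-reinforcement regularities can be read off. The class and function names below are illustrative inventions for this sketch, not standard behavior-analytic software.

```python
from dataclasses import dataclass

# A minimal data representation of the three-term contingency
# S^D -> R -> S^R, plus a helper that asks: given this antecedent
# and this response, how often did reinforcement follow?

@dataclass
class Trial:
    antecedent: str    # discriminative stimulus (S^D)
    response: str      # behavior (R)
    consequence: str   # reinforcing stimulus (S^R), or "" if none

def reinforcement_rate(history, antecedent, response):
    """Proportion of matching trials that ended in reinforcement."""
    matching = [t for t in history
                if t.antecedent == antecedent and t.response == response]
    if not matching:
        return 0.0
    return sum(1 for t in matching if t.consequence) / len(matching)

history = [
    Trial("red light on", "leg movement", "mobile moves"),
    Trial("red light on", "leg movement", "mobile moves"),
    Trial("red light off", "leg movement", ""),
]
# leg movement is reinforced only in the presence of the red light
```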

Two Poverty of the Stimulus Arguments

"It is perhaps worth emphasizing that the orderly control of behavior in a stable environment by contingencies of reinforcement is not a theory but an empirical fact." (Palmer, 1998)

After 23 years of working on it, B. F. Skinner (1957) wrote Verbal Behavior, a 478-page exposition of how language behavior could be explained in terms of operant principles. Actually, Skinner avoids using the term 'language behavior' in his book (1957, p. 2) because for him 'language' refers to the practices of a linguistic community and his focus is on the verbal behavior of the individual. Also, his use of the term verbal behavior is broader in conception than just spoken language.

Unlike linguists, Skinner did not try to define or describe language or linguistic units in structural terms or in terms of formal rules for generating and comprehending language forms. Instead, he tried to characterize language behavior in terms of the sources that provide the reinforcing stimuli and discriminative stimuli for the behavior. As he put it (Skinner, 1957, p. 2), language behavior is "... behavior reinforced through the mediation of other persons ..." In other words, it is the members of a language community who provide the stimulus control and reinforcement necessary for an individual to become a member of and to maintain membership in that community. Later in Verbal Behavior (pp. 224-226), Skinner refined his characterization to make it clear that the language behavior of the speaker was a special repertoire tied to a listener whose own language behavior was shaped by a community of speaker-listeners (Palmer, 2008).

Unlike cognitivists and linguistic nativists, Skinner did not put the structures and the rules or principles that supposedly govern language behavior in the head. Instead, in describing language behavior in terms of the environmental (physical and social) variables that give rise to such behavior, he gave full responsibility for the phonological and grammatical patterns in language behavior to the language community, through its control over the contingencies of reinforcement.

Arguments critical of explaining language behavior in terms of operant S-R relations might be said to have begun soon after the publication of Verbal Behavior and quickly culminated with Chomsky's review of that book (Chomsky, 1959). Over the ensuing decades these arguments became more refined and new ones were added, becoming known as the poverty of the stimulus arguments (Chomsky, 1980; Thomas, 2002).

The poverty of the stimulus arguments might be regarded as a multi-bladed Excalibur with which the knights of linguistic nativism try to slay the empiricist dragon. Only those considered most pertinent to the issues of concern of speech-language pathologists and applied behavior analysts will be discussed here. It has been argued by Chomsky and others that explaining the ontogenetic development of language behavior (i.e., language acquisition) in terms of establishing operant S-R relations in a child's interactions with speaking members of a language community is insufficient and, furthermore, that the structural complexities of speech utterances (i.e., the grammar of language) are too great to be learned through verbal stimuli.

1. Language behavior is stimulus-independent. Chomsky (1972) stated that language behavior is "free from the control of detectable stimuli, either external or internal" (p. 12). This is such a radical claim that one might well wonder whether there is any empirical support for it. Unfortunately, Chomsky provides no such evidence. Before making this claim, Chomsky (1972) discusses another "important observation": that "the normal use of language is innovative, in the sense that much of what we say in the course of normal language use is entirely new, not a repetition of anything that we have heard before and not even similar in pattern--in any useful sense of the terms 'similar' and 'pattern'--to sentences or discourse that we have heard in the past" (pp. 11-12). Again, just what evidence supports this claim is not stated; in fact, Chomsky speaks of the observation as a "truism." Chomsky treats the stimulus-free aspect and the novelty of language behavior as independent observations, but logically, if language behavior were not under the control of stimuli, then much novelty in language behavior would be just what one would expect. Nonetheless, Chomsky claims that language behavior is always coherent and appropriate to the situation. These properties are apparently intended to rule out language behavior that is random or not in conformance with the grammatical principles of the language system. It is interesting that Chomsky rules out the possibility that the coherence and appropriateness of language behavior are controlled by external stimuli, specifically the language community. Instead he lets coherence and situational appropriateness remain inexplicable mysteries: "Just what 'appropriateness' and 'coherence' may consist in we cannot say in any clear or definite way, but there is no doubt that these are meaningful concepts" (p. 12).

While there seems to be little, if any, empirical evidence for stimulus-independent language behavior, there is plenty of empirical evidence that language behavior is stimulus-dependent, particularly in the language acquisition process. Microanalytical studies of dyadic conversations between children and their parents have shown that (1) children rely on cues from the immediate conversation for building up their speech repertoires; (2) parents provide models of speech patterns that show up later in the child's speech; and (3) when a child's verbal response is grammatically incorrect, the parent will frequently follow up with an utterance similar to what the child said but recast in the correct grammatical form, thus providing a mechanism for shaping the child's language repertoire. Mediated reinforcing stimuli take many forms, such as fulfilling a child's verbal request and providing various social reinforcements when the child responds appropriately. For example, when the parent asks the child a question and the child replies appropriately, or when something in the environment, such as an object or event or a property of either, evokes a verbal response from the child that is deemed appropriate, the parent then responds with such expressions as "yes," "right," and so forth, possibly followed by a recast. If there is something grammatically wrong in the child's reply, just recasting or even expanding on what the child said often seems to be sufficient for the child eventually to self-correct. It might be worth mentioning that this is probably the way children begin to distinguish between what linguists refer to as grammatical and ungrammatical utterances; later their "sense of grammaticality" is refined largely through the contingencies of reinforcement provided by the educational system. These findings are not what one would expect if language behavior were stimulus-independent (Moerk, 1992, 2000).

2. Primary linguistic input is limited and degenerate. Nativists use the term "degenerate" to refer to the fact that in everyday conversations speakers fairly often make mistakes in pronunciation and vocabulary selection, speak in fragments rather than full sentences, hesitate, start an utterance only to stop and begin anew, and in general make a number of errors. In addition, the range of grammatical constructions that listeners hear is limited. How then, nativists ask, could a child possibly construct a mental grammar of the language that would allow him or her to produce and understand virtually any utterance in that language?

The preceding question raised by nativists clearly reveals the difference between the theoretical predispositions of nativists and behaviorists. Behaviorists make no assumption about the existence of any innate, species-specific, mental mechanism for grammar construction. They assume that language behavior arises, is maintained, and changes through the interactions between an individual and other members of his/her language community. For behaviorists there are no internal rules and representations or principles and parameters for generating linguistic utterances or for responding to linguistic utterances; there is only the history of contingencies of reinforcement for such behavior by the verbal community and the current circumstances or setting in which such behavior occurs. Clearly behaviorists and nativists stand apart in their theoretical claims. But the crux of the difference between them seems to be in their empirical claims and the strength of the evidence that supports them.

The empirical evidence that linguistic input is limited and degenerate seems to be very weak. Studies of the interactions between children learning their first language and their parents or other members of their language community strongly suggest a great richness in the quality and quantity of the antecedent and consequent stimuli. Far from adult speech being anomalous, messy, and full of errors as claimed, studies of parent-child verbal interactions going back at least as far as the early 1970s (Snow, 1972) have shown that parents modify their language behavior when interacting with children so that their speech is simpler, shorter, more careful, less complex, and more repetitious. Moerk (1992) found that as children develop in their language behavior, the speech of the parents becomes more complex, staying somewhat ahead of the complexity of the child's speech, as if providing a goal for the child to reach.

In addition, a study of the quantity of parent-child interactions (Hart & Risley, 1995), involving 42 children from three different social classes from the ages of 13 months to 36 months, found that the children were verbally engaged with their parents at an average rate of 341 utterances per hour. These utterances contained numerous kinds of morphological and syntactic structures and numerous sentence types. Although a complete analysis of the verbal interactions between each parent and child has yet to be done, the study reveals a tremendous richness in the children's linguistic input.
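
Some back-of-the-envelope arithmetic conveys the scale of this input; the waking-hours figure below is an assumed value for illustration, not a number reported by Hart and Risley:

```python
# Rough scale of linguistic input, using the Hart & Risley (1995)
# average of 341 parent-child utterances per hour.
utterances_per_hour = 341
waking_hours_per_day = 12      # assumed for illustration only
days = 23 * 30                 # roughly the span from 13 to 36 months

per_day = utterances_per_hour * waking_hours_per_day
total = per_day * days

print("per day:", per_day)       # 341 * 12 = 4092
print("13-36 months:", total)    # on the order of 2.8 million utterances
```

Even under conservative assumptions, the child's exposure runs to millions of utterances, which is hard to square with the claim of impoverished input.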

Richness of input is necessary but hardly sufficient for a child to learn a language. Studies have shown that children just listening for hours and hours to another language do not show much progress in learning the language beyond perhaps having some rudimentary understanding of a few words or expressions. Learning a first language also demands a rich situational context and much feedback and interaction between the speaker and listener with a frequent reversal of speaker-listener roles. In more behavioral terms, learning language behavior requires interactions between the child and his or her social and physical environment where operant S-R verbal relations can be established and maintained.

The Relevance of Operant S-R Relations to Speech-Language Interventions

Speech-language interventions have four basic aims: (1) to increase desirable or target verbal behaviors; (2) to decrease undesirable verbal behaviors; (3) to establish desirable verbal behaviors when such behaviors do not occur in the client's behavioral repertoire; and (4) to provide the means for the clients to maintain the desirable verbal behaviors in their everyday life after treatment is over. To achieve these aims, Hegde (1998) distinguishes between treatment principles and treatment procedures. Treatment principles are verbal inductive generalizations or rules that are the outcomes of experimental analyses of operant behavior. Thus, they are focused on the conditions under which operant S-R relations are established and maintained.

The treatment procedures are those intervening interactions that a clinician carries out with a client to bring about changes in the behavior of the client. In this process, not only is the client's behavior changed, but the clinician's own behavior will also be altered as a consequence of the client's responses as he or she determines which discriminative and reinforcing stimuli are effective in altering the client's behavior and which schedule of reinforcement is most suitable for the maintenance of the new behavioral repertoires.

Treatment procedures might be said to be where the rubber meets the road. They are the specific steps taken by the clinician in implementing the empirically based principles on a case-by-case basis in carrying out language interventions. Because the treatment principles are empirically well founded, valid, reliable, and replicable, they are what license or legitimize the use of the treatment procedures. Treatment procedures need to be tailored to the client because each client is an individual and will respond differently owing to the unique aspects of his or her physiology and history of past contingencies of operant interactions with the physical and social environments.

Hegde (1998) points out that there are many operant principles that suggest ways to carry out speech-language behavior interventions. The principle of positive reinforcement can be applied to increase the probability of desirable or target speech-language behaviors, such as community-acceptable word pronunciation. One of the problems a speech-language clinician faces is determining the appropriate positive reinforcers for a particular client. Some clients, initially at least, respond better to certain primary reinforcers, while other clients might respond better to conditioned reinforcers, based on their past history of contingent reinforcement. For example, a clinician working with an autistic child who tends not to interact well with other people and who displays little or no verbal behavior might make use of one or more primary reinforcers, such as food, or objects such as toys in which the child shows some interest, allowing access to them when the child responds in some verbally desirable way, such as saying "Please." Gradually the clinician can switch to conditioned reinforcers, particularly social reinforcers.

Some undesirable behaviors can be removed from a client's repertoire through extinction. If these are operant behaviors, reinforcers must be maintaining them; by finding out what those reinforcers are, the clinician can withhold them and thus bring about extinction of the behavior. Differential reinforcement can help by replacing the undesirable behavior with more desirable behavior. Another approach to removing undesirable behaviors is to follow such behaviors with aversive stimuli (punishers). Punishment has some advantages: the behavior is rapidly suppressed and is less likely to recur. However, using punishing stimuli has a downside: it may engender other unacceptable behaviors, such as aggression, anger, and even withdrawal from interaction. The use of punishment should be decided on a case-by-case basis, and great care must be taken not to overuse it.
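
The difference between extinction alone and extinction combined with differential reinforcement can be sketched in a toy simulation; the linear probability updates and step size are illustrative modeling assumptions, not behavioral laws:

```python
import random

random.seed(1)

def simulate(trials, p_undesired, p_desired, reinforce_undesired,
             reinforce_desired, step=0.05):
    """Toy model: each reinforced emission nudges that response's
    probability up; each unreinforced emission nudges it down."""
    for _ in range(trials):
        if random.random() < p_undesired:
            p_undesired += step if reinforce_undesired else -step
        if random.random() < p_desired:
            p_desired += step if reinforce_desired else -step
        p_undesired = min(max(p_undesired, 0.0), 1.0)
        p_desired = min(max(p_desired, 0.0), 1.0)
    return p_undesired, p_desired

# Extinction alone: reinforcement is withheld from the undesired response.
u, d = simulate(200, p_undesired=0.6, p_desired=0.1,
                reinforce_undesired=False, reinforce_desired=False)

# Differential reinforcement: the undesired response is extinguished
# while a desirable alternative is reinforced, so a replacement
# behavior fills the gap left by extinction.
u2, d2 = simulate(200, p_undesired=0.6, p_desired=0.1,
                  reinforce_undesired=False, reinforce_desired=True)

print(u, d)    # undesired response fades, but nothing replaces it
print(u2, d2)  # undesired response fades while the alternative strengthens
```

The second run captures why differential reinforcement is usually preferred: extinction alone leaves a behavioral vacuum that some other, possibly equally undesirable, behavior may fill.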

For reinforcement to be effective in building or altering a language repertoire, the target verbal behavior must first be emitted. But what if the target behavior is not in the client's current repertoire or occurs too infrequently to make practical use of it? In situations like these the clinician needs to apply procedures that can result in new verbal behaviors being added to the client's language repertoire. The kinds of procedures that are appropriate will depend on assessing the client's current nonverbal vocal behavior, language repertoire, and learning and social skills. If a client already demonstrates some emitted vocal skills but exhibits little language behavior, then the verbal modeling-imitation procedure may be quite useful in developing such behavior. As was pointed out earlier, Skinner (1957) defined verbal behavior as behavior that is mediated by other persons. In verbal modeling-imitation, the model is a verbal response produced by one person, and the imitation is the evoked verbal response, topographically similar to its modeled stimulus, produced by another person. In the example presented earlier, the red light (the SD) evoked the infant's leg movement (R) after the behavior was differentially reinforced in the presence of the light but not in its absence. Formally or topographically, there was no resemblance between the red light and the infant's leg movement. This is not the case in modeling-imitation, where the antecedent verbal stimulus and the verbal operant share common topographical properties, in this case acoustic ones. Skinner (1957) called such verbal operants echoics. Others may refer to them as verbal or vocal imitations.

Modeling-imitation can be seen in the pre-linguistic and early stages of language development. Young children, infants and toddlers, frequently get socially reinforced for vocalizing in ways that are similar to or echo adult speech. For example, a mother who hears her young daughter babble the uncommitted vocal response "dada" will suddenly pay more attention to her child, show excitement, smile brightly, gently touch, pat, or even hug her child, and say "Yes, dada." Some or all of these parental responses may provide reinforcement for the child to repeat "dada." Soon the child is saying "dada" whenever the mother says "Say, dada," under some schedule of reinforcement. Thus, "dada" has become a discriminated verbal operant. In fact, this kind of interaction can become generalized, such that whenever the mother says "Say, mama," or "Say, doggie," or, in general, "Say, X," the child will repeat back (echo) whatever X is. Through this generalized procedure new verbal responses are added to the child's language repertoire.

Modeling-imitation also has applications in speech-language interventions. For example, if a client is having difficulties pronouncing certain words or longer expressions, the model discriminative stimulus provides a standard that can be used for correcting the problem. It may also be used in dealing with a client's morphological or syntactic problems.

Most people would not consider a child's echoic responses to have any particular informative value; that is, we would hardly consider the verbal echoic episodes between the mother and her daughter to be conversations. Nevertheless, echoics are a part of language behavior. Echoic behavior, either complete or partial, plays various roles in adult verbal episodes, such as when quoting someone else, expressing surprise, reassuring another that one is paying attention, concurring or showing understanding, taking vows and oaths, repeating to gain time to respond, and so forth (Winokur, 1976). Furthermore, echoics are extremely valuable in both normal language development (see Hegde, in this issue, and McLaughlin, in this issue) and language interventions as a jumping-off point for developing other kinds of verbal operants that are more involved in verbal interchanges.

Among a number of verbal operants, Skinner (1957) distinguished two primary ones: tacts and mands. Tacts are verbal operants evoked by the presence of discriminative stimuli, such as objects, events, properties of objects and events, spatial, temporal, and other relations, or combinations of these. In other words, tacts are verbal operants for whatever there is in our universe of discourse. Mands are verbal responses evoked under conditions of deprivation or aversive stimulation. The form of the mand often specifies or tacts its reinforcer, making it possible for another member of the verbal community to respond as a reinforcement mediator and supply the reinforcer specified by the mand. Linguistically, a mand can be as simple and as short as a single word, "Cookie!", or as long as a sentence, "Would you go into the bedroom and get my green slippers?", or even longer.

To establish tact and mand verbal operants as part of the language repertoire of a child with only echoic behavior requires the use of transfer-of-control procedures (Sundberg & Partington, 1998). To illustrate such a procedure, the earlier "dada" example, in which a child acquired a repertoire of echoic behaviors, will be expanded. Now the father is present in the room. The mother points to the father and says, "Who is this? Dada. Say, dada." The child echoes "dada" and is socially reinforced with big smiles, verbal praise, and perhaps some physical contact from one or both parents. After some similar episodes, the echoic prompt can be faded out, while keeping the prompt "Who is this?" Later, when the father is alone in the room with his daughter, if she utters "dada" without any prompts at all, her behavior is reinforced. In this way, transfer of stimulus control from the verbal antecedent stimulus prompts to the presence of the father is accomplished. The initial model antecedent stimulus acts as a kind of catalyst to establish a new operant relationship in which the presence of another antecedent stimulus comes to evoke the original echoic response. Once the child has learned tact relationships, the transfer-of-control procedure is no longer necessary. Instead, the child can start acquiring more tacts by observing adults and other children tacting in the presence of discriminative stimuli. This is another kind of verbal modeling-imitation that is frequently observed in language development. If the child has also learned mand relationships, she can start asking for the names of things, perhaps first by pointing and then later verbally by asking, "What's that?" Mand verbal operants can be learned in a similar way, but with the added complication that a deprivation condition or aversive antecedent stimulus must already be present or must be established by the clinician, so that transfer of control can eventually be made to it.
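
A minimal sketch of this transfer-of-control sequence, assuming an illustrative three-stage prompt hierarchy and a mastery criterion of three consecutive correct responses (both assumed values, not taken from Sundberg and Partington):

```python
# Outline of the transfer-of-stimulus-control steps in the "dada"
# example: the full echoic prompt is faded in stages until the sight
# of the father alone evokes the response (echoic -> tact).
STAGES = [
    # (antecedent stimuli presented, response to reinforce)
    (["father present", "Who is this?", "Say, dada"], "dada"),  # full prompt
    (["father present", "Who is this?"],              "dada"),  # echoic prompt faded
    (["father present"],                              "dada"),  # tact: father alone
]

def run_stage(antecedents, target, child_responds):
    """Advance to the next stage only after three consecutive
    correct responses at the current prompt level."""
    consecutive = 0
    trials = 0
    while consecutive < 3:
        trials += 1
        if child_responds(antecedents) == target:
            consecutive += 1      # deliver social reinforcement
        else:
            consecutive = 0       # no reinforcement; stay at this level
    return trials

# A stand-in "child" that always responds correctly, for demonstration.
perfect_child = lambda antecedents: "dada"

total = sum(run_stage(a, t, perfect_child) for a, t in STAGES)
print(total)  # 9: three stages x three consecutive correct responses
```

A real learner would, of course, need many more trials per stage, and a clinician would back up a level if responding broke down after a prompt was faded.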

Human beings, like other living organisms, interact with the physical world, and thus their behavior is subject to the contingencies of reinforcement and stimulus discrimination the physical world provides. But more than any other species, human beings interact with their fellow human beings and are subject to the contingencies of reinforcement and stimulus discrimination of the language behavior of others. Once such behavior has been established, we learn to behave both verbally and non-verbally according to the verbal instructions of others. Verbal instructions, whether vocal, written, or gestured, are stimuli that describe or specify occasioning discriminative stimuli, the evoked responses they control, their consequences, or some combination of these (Skinner, 1969). Verbal instructions have also been called contingency-specifying stimuli (Schlinger & Blakely, 1987) or rules (Skinner, 1969). The behavior that they evoke has been called rule-governed behavior by Skinner (1969) and verbally governed behavior by Catania (1998).

The linguistic form that verbal instructions can take is highly varied, ranging over such traditional sentence types as declaratives, interrogatives, and imperatives. We learn many skills via the modeling-imitation route, but many more are acquired through instructions or a combination of modeling and instruction. For example, a manual of English pronunciation might describe how to pronounce the "f" sound as follows: spread your lips a little and gently bring your lower lip up against your upper teeth; then breathe out through your mouth without vibrating your vocal cords. If a clinician gave the same instructions to a client who is having difficulty pronouncing this sound, and the client has had a long history of being reinforced for following verbal instructions, there is a good chance that the client would correctly pronounce the target sound. If the client still has problems, the clinician can model the articulation of the sound, providing a visual stimulus for the client to imitate. The clinician's articulation also produces an auditory stimulus that provides the client with a model for the auditory effect of his or her own production.

Another means of adding new behaviors to an individual's repertoire is shaping. Not all clients are able to acquire new behaviors through the modeling-imitation procedure, nor are all clients able to follow verbal instructions well. Shaping is a technique for adding new behaviors in a step-by-step fashion using differential reinforcement. Skinner (1953) compares the shaping of emitted behaviors into new complexes of behavior, through differential reinforcement of successive approximations to the desired terminal behavior, to a sculptor's shaping of a lump of clay into a work of art. Teaching language skills to autistic and cognitively impaired children (Sundberg & Partington, 1998) and rehabilitating the verbal skills of aphasic adults can be particularly difficult, but shaping can be an effective means of establishing verbal behavior.

Starkweather (1983) discusses shaping with the example of a five-year-old boy with no evidence of speech behavior, who did vocalize on some occasions and had a gestural repertoire for communicating his needs. He was brought to a clinic after being deprived of food for a while. His mother brought a sandwich, and it was made known that he could have a bite of it at the pleasure of the clinicians. His gestures requesting the sandwich were ignored. Finally, he started whimpering a little. Immediately he was given a bite of the sandwich. Soon he varied his whimper with a kind of grunt, which was reinforced with another bite, but whimpers were no longer reinforced. It was not long before he was just grunting to get food reinforcement. Then the clinicians changed the contingency so that he was reinforced only if he grunted with his mouth open, producing a vowel-like sound. The open-mouth grunts varied in their vowel quality, and then only the vocalizations with a short schwa-like quality were reinforced. As his vowel-like sounds continued to vary, some were reinforced and others were not, and eventually reinforcement was delivered only if he simultaneously pointed to a particular object and produced a certain vowel. Eventually he reached the point where he was being reinforced when he pointed to the sandwich and said /se???/, which corresponded to the vowels in the word 'sandwich', each ending with a glottal stop. The shaping procedure continued until he could say the whole word correctly. As this example shows, shaping can be a very onerous technique for teaching new behaviors: it takes a great deal of time, progress may not be as straightforward as a clinician might hope, constant monitoring is required, and the results may have unintended consequences. Shaping offers the best approach for initiating new behaviors with clients who do not imitate; with clients who do imitate, the modeling-imitation procedure is always more efficient.
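
The successive-approximation logic of shaping can be sketched in a toy numerical simulation; the one-dimensional response measure, its variability, and the criterion increments are illustrative assumptions, not values from Starkweather's case:

```python
import random

random.seed(7)

TARGET = 1.0          # target form (e.g., a fully correct vocalization)
criterion = 0.1       # only responses at or above criterion are reinforced

def emit(center, spread=0.15):
    """Emitted responses vary around the currently reinforced form."""
    return min(max(random.gauss(center, spread), 0.0), TARGET)

center = 0.0          # start from whimper-like vocalizations
reinforced = 0
for trial in range(1, 1001):
    response = emit(center)
    if response >= criterion:               # successive approximation met
        reinforced += 1                     # deliver the reinforcer
        center = max(center, response)      # behavior drifts toward the reinforced form
        criterion = min(center + 0.05, TARGET)  # then raise the bar slightly
    if center >= TARGET:
        break

print(trial, round(center, 2))  # shaping reaches the target form
```

The key structural features of shaping are both visible here: reinforcement is differential (only responses meeting the current criterion earn it), and the criterion is moved gradually, so the learner is never asked to produce the terminal behavior in one leap.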

Various schedules of reinforcement are used for establishing and maintaining desirable behaviors. If the desirable behavior only occurs infrequently, continuous reinforcement can be used to strengthen and stabilize that behavior, such as was done in the case of the five-year-old child discussed by Starkweather. Once the desired behavior is regularly occurring in appropriate situations, it is then advisable to "thin" the delivery of reinforcement by gradually switching to an intermittent schedule of reinforcement to ensure that it will be maintained. As a goal, a highly variable schedule might be preferable because it generates responses that are highly resistant to extinction and also because variable schedules are more commonly encountered in natural settings.
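
The greater resistance to extinction produced by intermittent reinforcement can be illustrated with a deliberately simple model; the quitting rule (responding stops once the unreinforced streak clearly exceeds anything experienced in training) is an assumption for illustration only:

```python
import random

random.seed(3)

def train(schedule_p, trials=300):
    """Train under a probabilistic (variable-ratio-like) schedule in
    which each response is reinforced with probability schedule_p.
    Return the longest run of unreinforced responses experienced."""
    longest = run = 0
    for _ in range(trials):
        if random.random() < schedule_p:
            run = 0                      # reinforced: failure streak resets
        else:
            run += 1
            longest = max(longest, run)
    return longest

def extinction_persistence(longest_unreinforced_run):
    """Toy quitting rule: responding stops once the unreinforced streak
    exceeds anything met during training."""
    return longest_unreinforced_run + 1

crf = extinction_persistence(train(schedule_p=1.0))   # continuous reinforcement
vr5 = extinction_persistence(train(schedule_p=0.2))   # variable, about 1 in 5

print(crf, vr5)  # the intermittently reinforced behavior persists longer
```

Under continuous reinforcement the very first unreinforced response is a sharp discriminative signal that the contingency has changed; under a variable schedule, long unreinforced runs are routine, so extinction conditions are harder to discriminate and responding persists.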

From the preceding discussion it should be clear that speech-language pathologists and applied behavior analysts must rely on operant conditioning methods to establish S-R relations in carrying out language interventions. There is no other way to modify speaker or listener verbal behavior repertoires. The successes that have been achieved by operant conditioning methods provide the reinforcing stimuli that keep practitioners relying on such methods and on the well-founded empirical principles on which they are based.


References

Catania, A. C. (1998). Learning (4th ed.). Upper Saddle River, NJ: Prentice-Hall.

Chomsky, N. (1959). A review of B. F. Skinner's Verbal behavior. Language, 35(1), 26-58.

Chomsky, N. (1972). Language and mind (Expanded ed.). New York: Harcourt Brace Jovanovich, Inc.

Chomsky, N. (1980). Rules and representations. New York: Columbia University Press.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Saddle River, NJ: Pearson Education, Inc.

Donahoe, J. W., & Palmer, D. C. (1994). Learning and complex behavior. Boston, MA: Allyn and Bacon.

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: Paul H Brookes Publishing Co.

Hegde, M. N. (1998). Treatment procedures in communicative disorders (3rd ed.). Austin, TX: Pro-Ed.

Moerk, E. L. (1992). A first language taught and learned. Baltimore, MD: Paul H. Brookes.

Moerk, E. L. (2000). The guided acquisition of first language skills (Vol. 20). Stamford, CT: Ablex.

Palmer, D. C. (1998). On Skinner's rejection of S-R psychology. The Behavior Analyst, 21(1), 93-96.

Palmer, D. C. (2008). On Skinner's definition of verbal behavior. International Journal of Psychology and Psychological Therapy, 8(3), 295-307.

Pavlov, I. P. (1927). Conditioned reflexes (G. V. Anrep, Trans. paperback ed.). New York: Dover Publications, Inc.

Pierce, W. D., & Cheney, C. D. (2008). Behavior analysis and learning (4th ed.). New York: Psychology Press.

Schlinger, H. D., Jr. (1995). A Behavior analytic view of child development. New York: Plenum Press.

Schlinger, H. D., Jr., & Blakely, E. (1987). Function-altering effects of contingency-specifying stimuli. The Behavior Analyst, 10(1), 41-45.

Sechenov, I. M. (1965). Reflexes of the brain (S. Belsky, Trans. English ed.). Cambridge, MA: The M.I.T. Press.

Skinner, B. F. (1953). Science and human behavior (Paperback ed.). New York: The Free Press.

Skinner, B. F. (1957). Verbal behavior. New York: Appleton-Century-Crofts.

Skinner, B. F. (1969). Contingencies of reinforcement: A theoretical analysis. New York: Appleton-Century-Crofts.

Snow, C. E. (1972). Mothers' speech to children learning language. Child Development, 43(2).

Starkweather, C. W. (1983). Speech and language: Principles and processes of behavior change. Englewood Cliffs, NJ: Prentice-Hall.

Sundberg, M. L., & Partington, J. W. (1998). Teaching language to children with autism or other developmental disabilities (Version 7.1 ed.). Pleasant Hill, CA: Behavior Analysts, Inc.

Thomas, M. (2002). Development of the concept of "the poverty of the stimulus." The Linguistic Review, 19, 51-71.

Winokur, S. (1976). A Primer of verbal behavior: An operant view. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Author Contact Information

Raymond S. Weitzman

1035 E. Monticello Circle

Fresno, CA 93720-1872

Phone: 559-438-6334

COPYRIGHT 2010 Behavior Analyst Online
Author: Weitzman, Raymond S.
Publication: The Journal of Speech-Language Pathology and Applied Behavior Analysis
Date: May 13, 2010