Behavioral vs. cognitive views of speech perception and production.

Introduction

The study of speech perception and production has been dominated largely by cognitively oriented researchers and theorists. However, cognitive theories do not translate well into actually teaching speech and language. Because of its emphasis on behavior and its foundation of experimentally derived principles of learning, operant learning theory, on the other hand, has for more than 40 years been used to successfully teach speech and language, especially to people with a variety of speech and language disorders (e.g., Camarata, 1993; DeLeon, Arnold, Rodriguez-Catter, & Uy, 2003; Hegde, 1998; Hegde, 2007; Hegde & Maul, 2006; Johnston & Johnston, 1972; Lancaster et al., 2004; Pena-Brooks & Hegde, 2007; Wagaman, Miltenberger, & Arndorfer, 1993). Although there is a fair amount of behavioral research on teaching verbal behavior to people with speech and language disorders, with few exceptions (e.g., Guess, Sailor, Rutherford, & Baer, 1968; Whitehurst, 1972; Whitehurst, Ironsmith, & Goldfein, 1974) behavioral scientists have not contributed much basic research on speech perception or production. What behavior analysis can offer language researchers and speech-language pathologists (SLPs) is a coherent and parsimonious interpretation of speech consistent with experimentally established scientific principles of learning, one that has immediate practical applications. The purpose of this paper is to illustrate a general behavioral approach to speech perception and production and to contrast it with the traditional cognitive account. The task of interpreting the traditional speech perception and production literature from a behavior analytic perspective is not as onerous as one might imagine, given that much of the research incorporates either relatively straightforward operant conditioning methods or methods that are amenable to an operant analysis.

Contrasting Views of Perception

Cognitive Views of Perception

Traditional treatments describe sensation in terms of the effects of stimuli on sensory receptors. Perception, on the other hand, has generally been referred to as how the brain interprets sensory experience, or more formally as:

...the process by which animals gain knowledge about their environment and about themselves in relation to the environment. It is the beginning of knowing and so is an essential part of cognition. More specifically, to perceive is to obtain information about the world through stimulation. (Gibson & Spelke, 1983, p. 2)

The main problem with such descriptions is that gaining knowledge, knowing, and obtaining information are inferred solely from observable behavioral evidence and, thus, do not add much to our understanding. Moreover, placing perception inside the brain, as some descriptions do, only moves the level of analysis further away from the actual behaviors involved when one speaks of perception. And talking about the brain as if it is doing something--perceiving--is an example of what I refer to as the brain-as-person metaphor because organisms, not brains, perceive, that is, behave.

Descriptions of the development of perception in infants are equally vague. For example, according to Gallahue (1989), "newborns attach little meaning to sensory stimuli," but very soon infants begin to attach meaning and attend to specific stimuli and to identify objects (p. 184). But what does that mean? These verbs do not refer to specific behaviors that can be studied but rather are labels for a variety of behaviors. To understand what it means to attend, to identify, or to attach meaning, we would need to observe what newborns actually do and the circumstances under which they do it when researchers use such terms. And we should not be surprised if the behaviors and circumstances vary considerably from instance to instance. Furthermore, such characterizations raise a more fundamental question: Why do infants begin to attach meaning or attend to specific stimuli? The traditional account offers no clear answer.

Finally, the term perception itself is problematic. As a noun, it is usually considered a name for a process. But, of course, most of the time the only evidence for the process is the behavior that leads one to say that an organism has perceived something in the first place. In such cases, considering perception to be a process is an example of the logical error of reification.

A Behavioral View of Perception

Rather than debating the meaning of the term perception, a behavioral approach looks at what an individual does (and under what circumstances) that leads investigators to say that he or she perceives something. The focus is not on some inferred hypothetical construct, but rather on the actual behaviors from which such inferences are made. The advantages are obvious. Because the behaviors can be objectively described and measured, their causes can potentially be discovered and, as a result, manipulated as independent variables to change the behaviors. It is not possible to manipulate an inferred construct such as perception directly.

Our principal question, then, is this: What is someone doing when he or she is said to "perceive"? For example, what does it mean to say that I perceive the computer on which I am writing the words you are reading? In other words, what do I do that causes someone to say that I perceive the computer? The list includes but is not restricted to looking at the computer, pushing the button to turn it on, moving the mouse, typing on the keyboard, and, importantly for verbal organisms, calling it a computer and describing it. Obviously, these behaviors usually, but not always, only occur in the presence of the computer, that is, while it is functioning as a set of visual, auditory, and tactile stimuli. (When these behaviors occur in the absence of the computer, we say that I am imagining it.) There are, however, many other stimuli impinging on my sensory receptors as I write at the computer, but until they influence some behavior on my part, we are not likely to say that I perceive them. For example, the sound waves produced by a car going down the street affect sensory receptors in my inner ear, but unless I do something like get up to see who it is, comment on it to someone else, or say to myself, "I wonder who just drove by," I would not be said to perceive the car even though I sensed it. As this example illustrates, the sound of the car as a conditional stimulus (CS) or a discriminative stimulus (SD) might evoke several different behaviors that would cause an observer to say that I perceived the car. Or, it might evoke no behaviors at all. Our senses are constantly bombarded by stimuli that are transduced into neural impulses, but we only perceive, that is, respond to, a very small portion of them. Behavior analysts will recognize this as a relatively straightforward issue of stimulus control. The question for a science of behavior is: What causes us to respond to some stimuli but not others; in other words, what causes only certain stimuli to evoke behavior?
Cognitive approaches to speech perception are problematic because they infer hypothetical constructs. A behavioral approach is more parsimonious because it infers potentially observable and manipulable interactions between behavior and stimuli.

Cognitive Approaches to Speech Perception

At this point in our evolutionary history, auditory perception is important for human beings primarily because we talk and listen. But as with perception in general, speech perception in particular has been studied within the conceptual framework of cognitive science.

Consider the following description of the perception of a spoken word by Holt and Lotto (2008):

The ease with which a listener perceives speech in his or her native language belies the complexity of the task. A spoken word exists as a fleeting fluctuation of air molecules for a mere fraction of a second, but listeners are usually able to extract the intended message. (p. 42, emphasis added)

This short description embodies the essence of a cognitive approach to speech perception: that speakers have intentions, that words contain meanings, and that listeners must extract the intended meaning. Of course, the only evidence for extracting the intended meaning of an utterance is what the listener actually does. When this account is contrasted with a behavioral approach, in which a speaker's verbal behavior, itself evoked by the current circumstances (often including the presence of a listener), in turn evokes verbal (and nonverbal) behavior in the listener, all because of a history of operant learning, it is not too difficult to understand why the behavioral approach has not fared well at all against the cognitive one. The cognitive account, despite its logical and scientific problems (see below), is the more familiar and accessible. It is also consistent with the philosophical tradition of mentalism with which we have all been raised. Even so, most language researchers discount any important role for operant learning in speech perception or production.

Cognitive Views of "Skinnerian Learning"

Modern language researchers do not put any stock in a behavioral account of speech perception or language acquisition. Some researchers still reference Chomsky's (1959) review of Skinner's (1957) book Verbal Behavior as demonstrating "the failure of existing learning models, such as Skinner's, to explain the facts of language development" (Kuhl, 2000, p. 11852), despite the fact that Chomsky's review was thoroughly rebutted decades ago (MacCorquodale, 1970; see also Palmer, 2006 for a brief history of the writing of Verbal Behavior, Chomsky's review, and MacCorquodale's rebuttal; see also Hegde in this issue). As I have written elsewhere:

It seems absurd to suggest that a book review could cause a paradigmatic revolution or wreak all the havoc that Chomsky's review is said to have caused to Verbal Behavior or to behavioral psychology. To dismiss a natural science (the experimental analysis of behavior) and a theoretical account of an important subject matter that was 23 years in the writing by arguably the most eminent scientist in that discipline based on one book review is probably without precedent in the history of science. (Schlinger, 2008b, pp. 335-336)

Moreover, much empirical research since Chomsky's review supports the behavioral view that parents and other caregivers behave in ways that shape and reinforce verbal behavior in young children (e.g., Whitehurst, Novak, & Zorn, 1972; Moerk, 1978, 1983, 1992; Hart & Risley, 1995, 1999). And speech-language pathologists have regularly used behavioral interventions to teach speech and language skills to children with articulation (speech) and language disorders (Hegde & Maul, 2006; Pena-Brooks & Hegde, 2007).

In addition to citing Chomsky's review, many language researchers perpetuate myths about Skinner's view of language. For example, according to Kuhl (2000),
 On Skinner's view, no innate information was necessary,
 developmental change was brought about through reward
 contingencies, and language input did not cause language to emerge
 ...The emerging view argues that the kind of learning taking place
 in early language acquisition cannot be accounted for by Skinnerian
 reinforcement. (p. 11850)


It is not clear what "innate information" means; if it means some presumed and unverifiable innate mechanism, then Skinner understandably did not support it. Skinner, however, never discounted the importance of well-documented neurophysiological and genetic information--he believed that all behavior, including verbal behavior, was the combined result of inheritance and learning. But Skinner was neither a geneticist nor an evolutionary biologist (although he did write often about biology and evolution [see Morris, Lazo, & Smith, 2004]). Rather, his area of expertise was the effect of environmental contingencies on behavior. Thus, although Skinner emphasized the role of reinforcement contingencies on the behavior of a wide range of species, especially humans, he never claimed that developmental change was brought about solely through "reward contingencies." And he most certainly did acknowledge the contribution of language input to language acquisition. But he did so within the confines of a unified account of behavior in general (see Skinner, 1957). Finally, as I argue below, there is no evidence that in any way suggests "that the kind of learning taking place in early language acquisition cannot be accounted for by Skinnerian reinforcement." On the contrary, the kind of learning that takes place in early language acquisition can almost entirely be accounted for by operant learning principles.

Elsewhere I have addressed some of the misconceptions of a behavioral approach to language (see Schlinger, 1995, pp. 178-182). Nonetheless, such misconceptions continue to be perpetuated by language researchers. One reason may be that behavior analysts have conducted very little of their own research on speech perception or language acquisition, except in the context of teaching children with language deficits (e.g., Learman et al., 2005; Sautter & LeBlanc, 2006). And much of the research conducted by behavior analysts has not been published in mainstream language journals. Also, many behavior analysts have been relatively content with interpreting the facts of verbal behavior, including research by traditional language researchers, according to the established principles of behavior analysis. Regardless of whether such an interpretation is correct, it has much to recommend it because it is parsimonious and firmly grounded in, and consistent with, experimentally established principles.

But if behavior analysts themselves have not conducted much research into speech perception and production, other researchers have demonstrated, albeit incidentally, the critical role of operant learning, especially in speech production. For example, even though Kuhl (2000) claimed that "Skinnerian reinforcement learning" cannot explain how infants' perceptual systems are altered by experience, elsewhere (e.g., Kuhl, 2004) she cited studies showing that social contact and interactions affect the duration, rate, and frequency of vocal learning in human infants and in songbirds. In particular, she cited a study by Goldstein, King, and West (2003) with human infants in which in a contingent condition "mothers were instructed to respond immediately to their infants' vocalizations by smiling, moving closer to and touching their infants" (Kuhl, 2004, p. 837). Not surprisingly, at least to a behavior analyst, the results showed that when compared to infants in the non-contingent condition, infants in the contingent condition produced more vocalizations and more mature and adult-like vocalizations. This is obviously operant conditioning (Kuhl would call it "Skinnerian learning") and it is consistent with previous research demonstrating that reinforcement, even in the absence of awareness, can strengthen (i.e., select) vocalizations and numerous forms of speech (e.g., Greenspoon, 1955; Rosenfeld & Baer, 1970; Rheingold, Gewirtz, & Ross, 1959; Todd & Palmer, 1968). Even Goldstein et al. refer to it as "social shaping" (Skinner coined the term shaping to refer to the gradual differentiation of responses belonging to a response class as a function of differential reinforcement). Other data from Goldstein's lab confirm the powerful role of operant learning in early language acquisition (e.g., Goldstein, Schwade, & Bornstein, 2009). But such findings are not new (e.g., Rheingold, Gewirtz, & Ross, 1959; Todd & Palmer, 1968).
Nonetheless, data such as these provide further support for the interpretation that infant vocalizations, including those called babbling, can be, and are, shaped (i.e., selected) by consequences. The consequences for infant babbling and speech sometimes come from others, but the most relevant consequences are likely the match between the babbled sounds and those heard by the infant for the several months prior to the onset of babbling (see Schlinger, 1995, pp. 158-160; Hegde, this issue). There is simply no question that operant learning is a significant factor in language production.

What Does It Mean to Say That Someone "Perceives Speech"?

Whether we know it or not, most of the time when we say that someone perceives speech, we mean that his or her behavior is under the stimulus control of the speech. Some questions for a science of behavior are: 1) What form does the behavior take?; 2) What is the function of the behavior?; and, 3) How is such behavior acquired?

At the most basic level, it could be said that infants perceive speech if they turn their head toward the location of vocal stimuli, although we would be hard-pressed to claim that infants are "extracting information" from such stimuli. Researchers often assess auditory perception in infants by using so-called habituation and dishabituation methods, for example, by measuring non-nutritive sucking or changes in heart rate, all of which, on the present account, qualify as perceptual behaviors. One example of a seemingly simple speech perception phenomenon that has received a considerable amount of attention from language researchers is categorical perception.

The Strange Case of Categorical Perception

Beginning in the late 1950s and early 1960s, researchers began to test the ability of both infants and adults to discriminate different categories of phonemes, a phenomenon called categorical perception. Traditional accounts view the phoneme as "the smallest unit of sound that signifies a difference in meaning in a given natural language" (Aslin, 1987, p. 68). Behavior analysts, on the other hand, view the phoneme functionally as "the smallest unit of sound that exerts stimulus control over behavior" (Schlinger, 1995, p. 153). For the present account, whatever behavior is evoked by the sound of a phoneme is what we mean when we speak of categorical perception.

In early experiments, researchers used computers to present a series of synthetic consonant-vowel (CV) sounds that ranged across a number of consonants (e.g., /bV/-/dV/-/gV/). Specifically, the computer generated synthetic speech sounds that varied along a stimulus dimension called voice onset time (VOT), which is the basis of the distinction between CVs like pa and ba. Voice onset time refers "to the point at which vocal cords begin to vibrate before or after we open our lips" (Bates et al., 1987, p. 152). Thus, for sounds we react to as b, voicing begins either before or simultaneously with the consonant burst; and in sounds that we react to as p, voicing begins after the consonant burst. A computer can present stimuli along this VOT continuum from -150 to +150 milliseconds (ms) from burst to voice. Results from numerous studies have shown that English-speaking adults and infants do not respond differentially to VOTs that fall either significantly above or below the boundary between pa and ba, which is about 25-30 ms. Results showing that infants only a few months old can respond differentially to these different phonemic categories suggested to some researchers that humans are born with "phonetic feature detectors" that evolved specifically for speech and that respond to phonetic contrasts found in the world's languages (Eimas, 1975). Such results seemed to support nativist theories of speech and language (e.g., Chomsky, 1957) and were couched in such terms, which is understandable because Chomsky's views on innate mechanisms of generative grammar still dominated the study of language in the 1970s.
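The pattern of responding described above can be sketched in code. This is only a toy illustration: the approximate 25-30 ms /ba/-/pa/ boundary comes from the text, but the labeling rule and the stimulus values are hypothetical, not a model taken from the studies cited.

```python
# Toy sketch of categorical perception along the VOT continuum.
# BOUNDARY_MS reflects the ~25-30 ms English /ba/-/pa/ boundary described
# in the text; everything else here is an illustrative assumption.

BOUNDARY_MS = 27  # approximate category boundary (assumed midpoint)

def label(vot_ms):
    """Categorical labeling: listeners report 'ba' below the boundary,
    'pa' above it."""
    return "ba" if vot_ms < BOUNDARY_MS else "pa"

def discriminated(vot1_ms, vot2_ms):
    """Listeners respond differentially only when the two stimuli fall on
    opposite sides of the boundary (i.e., receive different labels)."""
    return label(vot1_ms) != label(vot2_ms)

# Three pairs with identical 20 ms physical differences; only the pair
# straddling the boundary is responded to differentially:
print(discriminated(0, 20))    # → False (both within /ba/)
print(discriminated(20, 40))   # → True  (across the boundary)
print(discriminated(40, 60))   # → False (both within /pa/)
```

The point of the sketch is that equal physical differences in VOT do not produce equal differences in behavior; only the difference that crosses the category boundary evokes differential responding, which is what "categorical perception" names.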

Results from other studies, however, have shown that nonhumans (e.g., chinchillas, monkeys, Japanese quail and even rats) could be trained to respond to these phonemic categories in the same way as human adults and children (e.g., Dooling, Best, & Brown, 1995; Kluender, Diehl, & Killeen, 1987; Kuhl, 1981; Kuhl & Miller, 1975, 1978; Kuhl & Padden, 1982, 1983; Reed, Howell, Sackin, Pizzimenti, & Rosen, 2003; Toro, Trobalon, & Sebastian-Galles, 2005). Moreover, categorical perception in adults was limited to the phonemes in their respective native languages (Miyawaki et al., 1975), which suggests a strong experiential component.

These studies with nonhuman animals forced a conclusion that ran counter to the prevailing views that infants were somehow born responding to phonetic units and that language evolved in humans discontinuously from lower species; namely, that infants are born with a general capacity to discriminate auditory stimuli, including speech sounds, rather than with a capacity that evolved specifically for speech. Thus, domain-general, rather than species-specific, mechanisms seem to be responsible for infants' tendency to respond to phonetic units (Kuhl, 2000). As Kuhl (1981) stated, "the evolution of the sound system of language was influenced by the general auditory abilities of mammals" (p. 347). Or, as Bates, O'Connell, and Shore (1987) put it:
 We rushed too quickly to the conclusion that the speech perception
 abilities of the human infant are based on innate mechanisms
 evolved especially for speech. The infant's abilities do indeed
 seem to involve a great deal of innately specified information
 processing. But we do not yet have firm evidence that any of this
 innate machinery is speech specific. We assumed that the human
 auditory system evolved to meet the demands of language; perhaps,
 instead, language evolved to meet the demands of the mammalian
 auditory system. This lesson has to be kept in mind when we
 evaluate other claims about the innate language acquisition device.
 (p. 154)


What Bates et al. mean is that the development of speech perception in infants is constrained by what they are able to hear, and that the range of auditory stimuli detectable by the mammalian auditory system evolved for reasons having nothing to do with speech.

Many language researchers now agree that the language environment exploits natural boundaries of a general auditory capacity that is common to mammals and some birds, but also can modify them (Diehl, Lotto, & Holt, 2004). Thus, although initial studies on categorical perception were meant to support nativist theories of language development, such as Chomsky's, further research supported the opposite conclusion, namely, that categorical perception resulted largely from experience and learning. In studies on categorical perception with both human infants and non-human animals, researchers successfully trained discriminative responses to phonemic sounds using operant conditioning procedures. One can assume, then, that similar contingencies operate naturally for human infants. The fact that adults respond only to phonemes in their respective native languages also suggests that those phonemes have specific functions that non-native phonemes do not, thus supporting the contention that operant learning is responsible. But that is not how cognitive researchers see it.

Cognitive Views of Learning

According to Kuhl (2004), "The acquisition of language and speech seems deceptively simple" (p. 831). By that she means that children learn their native language quickly and effortlessly. She wonders, then, how children, but not language theorists, are able to crack "the speech code" so easily. This is a little like asking how children began walking, negotiating the effects of gravity, before physicists understood the law of gravity. Nonetheless, Kuhl believes that the last several years have produced "an explosion of information about how infants tackle this task" that would be surprising to and "unpredicted by the main historical theorists" (p. 831, emphasis added). Specifically, "children learn rapidly from exposure to language, in ways that are unique to humans, combining pattern detection and computational abilities (often called STATISTICAL LEARNING) with special social skills" (p. 831, emphasis added).

It may come as a surprise to behavior analysts (who, I believe, are the "main historical theorists" Kuhl refers to) that after decades of uncritically adhering to Chomsky's structural nativist view of language, many language researchers and theorists have done an about-face and now tout learning as the primary mechanism for language acquisition and speech perception (e.g., Kuhl, 2000). However, according to these scholars, this "new kind of learning" is not "Skinnerian learning."

Many modern language researchers now propose that certain aspects of language are experience-dependent (vs. experience-independent), and even refer to the experience as "learning," but instead of relying on empirically supported theories of (Pavlovian and operant) learning, they posit new forms of learning that are mostly human-language-specific (e.g., Saffran, Aslin, & Newport, 1996). To their credit, modern language researchers explicitly reject the assertion that similarities across languages (suggested by Chomsky) reflect innate linguistic knowledge and, instead, now believe that learning can explain them. But, oddly, these researchers attribute the similarities across languages not to already well-established general learning principles, but rather to poorly researched constraints on learning, even while acknowledging that these learning mechanisms were not tailored solely for language (Saffran, 2003).

So, what has caused language researchers to conclude that experience and learning (though not operant or Pavlovian learning) play a critical role in language acquisition and speech perception? What has changed over the last two decades to support these so-called "new views of learning" is the discovery that "by simply listening to language, infants acquire sophisticated information about its properties ... " (Kuhl, 2000, p. 11852), a phenomenon also referred to as "incidental language learning" (Saffran et al., 1997). Two examples of this "new kind of learning" are discriminative abilities in infants and so-called statistical learning (Kuhl, 2000).

Discriminative Abilities in Infants

The first example that illustrates the so-called "new views of learning" is that infants detect patterns or similarities in language input. Researchers cite evidence from a variety of studies, including the finding that at birth infants generally prefer to listen to the language spoken by their mothers during pregnancy and specifically to their mother's voice over another woman's voice, and to particular stories read by their mothers during the last several weeks of pregnancy (DeCasper & Fifer, 1980; DeCasper & Spence, 1986; Mehler et al., 1988; Moon et al., 1993; Nazzi et al., 1998). Other examples include the finding that by 9 months of age, but not earlier, infants prefer to hear prosodic patterns characteristic of their native language (Jusczyk, Cutler, & Redanz, 1993). Finally, infants have been shown to listen longer to words in their native language than to words in another language (Jusczyk et al., 1993). According to Kuhl (2000), "At this age, infants do not recognize the words themselves, but recognize the perceptual patterns typical of words in their language" (p. 11852). Such a locution, however, simply describes the research results and provides no explanation. And it is not clear how researchers know that infants recognize patterns but not words.

Statistical Learning

The second example of the "new views of learning language" is that "infants exploit the statistical properties of the input, enabling them to detect and use distributional and probabilistic information contained in ambient language to identify higher-order units" (Kuhl, 2000, p. 11852, emphasis added). This type of learning is called statistical learning by language researchers (e.g., McMurray & Hollich, 2009), and for Kuhl (and many other language researchers) it is the mechanism "responsible for the developmental change in phonetic perception between the ages of 6 and 12 months" (2004, p. 833; see also Maye, Werker, & Gerken, 2002). According to Kuhl (2000):
 Running speech presents a problem for infants because, unlike
 written speech, there are no breaks between words. New research
 shows that infants detect and exploit the statistical properties of
 the language they hear to find word candidates in running speech
 before they know the meanings of words. (p. 11852)


Researchers have demonstrated that by 6 months of age, infants, who shortly after birth could universally be taught to discriminate between phonetic units, show a preference for the phonetic units of their native language (Kuhl et al., 1992). In fact, researchers describe the changes in infant speech perception as a reduction in the ability to discriminate speech sounds that are not found in one's native language. Because "the beginnings and ends of sequences (i.e., the segmentation) of sounds that form words in a particular language are not marked by any consistent acoustic cues" (Aslin et al., 1998, p. 321), such as pauses, and because the acoustic structure of speech across different languages is highly variable, researchers believe that a distributional rather than an acoustical analysis must be used to solve the problem of finding the words in a particular language. A distributional analysis refers to the regularities in the relative positions of sounds over a large sample of linguistic input (Aslin et al., 1998). For example, in English, "certain combinations of two consonants are more likely to occur within words whereas others occur at the juncture between words. Thus, the combination 'ft' is more common within words whereas the combination 'vt' is more common between words" (Kuhl, 2000, p. 11853).
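What such a distributional analysis amounts to, on the researcher's side, can be sketched as a transitional-probability computation over a stream of syllables. The toy syllable stream and the "words" embedded in it are hypothetical, not taken from the studies cited; only the idea that within-word transitions are more probable than between-word transitions comes from this literature.

```python
# Sketch of a researcher-side distributional analysis: transitional
# probabilities between adjacent syllables in a continuous stream.
# The stream below is a made-up example containing three recurring
# three-syllable "words" (bi-da-ku, go-la-bu, pa-do-ti) with no pauses.
from collections import Counter

stream = ("bi da ku go la bu pa do ti bi da ku pa do ti "
          "go la bu bi da ku go la bu pa do ti").split()

pair_counts = Counter(zip(stream, stream[1:]))   # adjacent-syllable pairs
first_counts = Counter(stream[:-1])              # syllables with a successor

def transitional_probability(s1, s2):
    """P(s2 | s1): the proportion of occurrences of s1 that are
    immediately followed by s2."""
    return pair_counts[(s1, s2)] / first_counts[s1]

# Within-word transitions are consistent, so their probability is high;
# between-word transitions vary with word order, so it is lower.
print(transitional_probability("bi", "da"))  # within-word: → 1.0
print(transitional_probability("ku", "go"))  # between-word (lower)
```

Note that it is the analyst who tallies the pairs and computes the ratios; the behavioral point made below is that evidence of this regularity changing infants' behavior is not evidence that infants perform any such computation.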

According to many language researchers, human infants need to discover the phonemes and words in a particular language. Because the speech they are exposed to is so variable and not marked by reliable acoustic cues, researchers believe that "infants use computational strategies to detect the statistical and prosodic patterns in language input" (Kuhl, 2004, p. 831). Other researchers are quick to point out that infants are not consciously calculating statistical frequencies, but rather are sensitive to distributional information contained in the linguistic input to which they are exposed (Aslin et al., 1998). Notwithstanding this one disclaimer, most of these researchers seem to believe that the infants, or their brains, are extracting statistical information from the linguistic input.

Based on the results of particular studies (e.g., Saffran et al., 1996), numerous researchers have concluded that, "infants use statistical information to discover word boundaries" (Aslin et al., 1998, p. 321), or they learn "from exposure to the distributional patterns in language input" (Kuhl, 2004, p. 835). However, such conclusions suffer from a number of logical and scientific problems.

Problems With a Cognitive Account of Learning

There are several logical and scientific problems with the cognitive account of speech perception and learning. The first concerns the issue of agency; that is, who does what. Cognitive accounts misplace the agency producing the effects inside the infant instead of in the linguistic environment, which cognitive theorists clearly believe is critical for such learning. This is illustrated repeatedly in the literature in the way researchers talk about language-learning infants. For example, Saffran et al. (1996) state that, "One task faced by all language learners is the segmentation of fluent speech into words" (p. 1927). According to Kuhl (2004), "infants use computational strategies to detect the statistical and prosodic patterns in language input, and that this leads to the discovery of phonemes and words" (p. 831). Thus, as stated by these researchers, infants are faced with tasks, extract information, use or exploit strategies, abstract patterns, discover rules, and so on. Sometimes the task is assigned to the (infant's) brain, which is said to be endowed with mechanisms that enable it "to extract the information carried by speech" and "use them to discover abstract grammatical properties" (Mehler, Nespor, & Peña, 2008, p. 434). Either way, the locus of control is said to be within the individual. In essence, many of these researchers believe that it is the job of language learners to make sense of vague or complex information. This account is at odds with a natural science approach, which looks for physical causes of behavior. Using an evolutionary analogy, it would be akin to saying that the task for individual organisms (or their brains) is to exploit strategies in order to discover the rules for how to survive in a complex environment. But ever since Darwin (and Wallace), we know that the direction of causation is the other way.
The environment selects extant traits to the extent that on average those traits enable individuals possessing them to live long enough to pass on their genes. A selectionist account of language learning suggests that only some responses of infants to specific stimuli will produce desired consequences.

Another problem with cognitive accounts of learning is that sometimes the questions are so confusing as to be essentially unanswerable. For example, Saffran (2003) asked what infants were actually learning in a segmentation task: "Are they learning statistics? Or are they using statistics to learn language?" (p. 112). The answer, of course, is neither. To understand what is wrong with these questions and with the notion of statistical learning in general, we must make a distinction between the researchers' behavior and that of the subjects in experiments or of infants in natural environments. It is true that a statistician or psychologist can statistically analyze conditional probabilities of certain sounds or arrangements of sounds within a stream of speech. But infants (or their brains) are not carrying out a statistical analysis based on the distributional patterns in the speech they hear any more than they are calculating force, resistance, or gravity when they walk. The evidence that researchers cite is simply that hearing speech changes the behavior of infants. In reality it is the researchers who are doing the statistical analysis, not the infants. The principle of parsimony suggests that explanations of phenomena should make the fewest assumptions. But modern language researchers make many unnecessary and essentially untestable assumptions when they suggest that infants or their brains are statistically analyzing speech sounds. Just because a researcher can carry out a statistical analysis of speech sounds does not mean that is how infants learn from hearing the sounds.
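The distinction between the researcher's analysis and the infant's behavior can be made concrete. The following sketch (my illustration; the syllables and three-syllable "words" are hypothetical stand-ins, not the actual stimuli from Saffran et al.) computes the forward transitional probabilities that a researcher can calculate over a continuous, pause-free syllable stream:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in a stream.
    This is the researcher's analysis of the stimulus stream; nothing
    here implies that the infant computes these quantities."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical three-syllable "words" concatenated in random order into a
# continuous stream, loosely modeled on artificial-language designs.
random.seed(0)
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
stream = [syl for _ in range(300) for syl in random.choice(words)]

tp = transitional_probabilities(stream)
print(tp[("bi", "da")])  # within-word transition: exactly 1.0
print(tp[("ku", "pa")])  # word-boundary transition: roughly 1/3
```

Note that the statistical regularity lives entirely in the stimulus stream the researcher constructed and analyzed; attributing the computation to the infant is the inferential leap the paragraph above criticizes.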

In addition to conceptual and logical problems with a cognitive account of speech perception, there are also methodological problems with some of the research. For example, the infants used in many studies on speech perception were already at least 6 months of age, which means they had at least 6 months (excluding prenatal exposure) of hearing speech and countless interactions with a vocal environment, which researchers acknowledge contributes to speech perception and language learning (Kuhl, 2000, 2004). Moreover, many of the studies trained infants, however briefly, to make discriminations or to show preferences by reinforcing appropriate responses (although the authors rarely, if ever, referred to their training as operant conditioning). And the general research paradigm in most of these studies is the hypothetico-deductive one still common in psychology, which does not and cannot account for the variability from one infant to the next and therefore must pool (i.e., average) data. Such an approach obscures variation rather than refining experimental control to account for it (Schlinger, 2004).

Finally, many of the very researchers who cite animal studies as evidence against a uniquely human capacity for speech perception claim that statistical learning is uniquely human. But studies employing operant contingencies have shown that even rats can be taught to perceive (i.e., discriminate) the nuances of speech (e.g., Reed et al., 2003; Toro et al., 2005). These animal studies further suggest that operant learning is a plausible explanation for how infants learn to discriminate speech sounds.

A Behavioral View of Speech Perception

As already stated, a behavioral view of speech perception stresses what an individual does when we say that he or she perceives speech and implicates general learning principles in the acquisition and maintenance of such behavior. In the infant laboratory, for example, the label speech perception is reserved for the behaviors researchers measure as responses to speech sounds, such as changes in heart rate and non-nutritive sucking. Outside the laboratory, the term speech perception is used to denote such behaviors as turning one's head in the direction of the speech, smiling, and making sounds. In sophisticated listeners, speech perception refers to a much wider range of behaviors, from complying with requests to actually listening to what a speaker says, that is, subvocally echoing or otherwise talking to oneself (see Schlinger, 2008a).

Over and above identifying the behaviors involved, "one of the most important issues in speech perception is how listeners come to perceive sounds in a manner that is particular to their native language" (Diehl et al., 2004, p. 164). As mentioned previously, newborn infants have been shown to prefer to listen to the language spoken by their mothers and specifically to their mother's voice. By "prefer" researchers mean that infants will engage in behaviors that result in hearing the language spoken by their mothers during pregnancy, their mother's voice more than another woman's voice, and specific features of their mother's voice over other features (DeCasper & Fifer, 1980; DeCasper & Spence, 1986; Moon et al., 1993; Nazzi et al., 1998). Thus, at birth or shortly thereafter, particular features of the infant's native language in general and the mother's voice in particular have become potent conditioned reinforcers in the sense that when such stimuli are presented contingently on some infant behavior (e.g., particular sucking patterns), that behavior increases relative to behavior that does not produce those features. These stimuli appear to become conditioned reinforcers (and perhaps acquire other behavioral functions as well) simply through being heard; that is, it is not clear that pairing with other reinforcing stimuli is necessary. We do not need to appeal to a statistical analysis to explain these effects because, as mentioned previously, it is researchers who impose those statistical properties after the fact. We also do not need to appeal to ad hoc cognitive processes, such as "perceptual representations of speech ... stored in memory" (Kuhl, 2000, p. 11854), because the evidence for such explanations is only the behavior to be explained. The behavioral explanation is simply the most parsimonious.

Early linguistic exposure produces other effects as well. For example, exposure to a specific language alters infants' perception of specific speech sounds by 6 months of age (Kuhl et al., 1992). Although the precise mechanism by which such effects are produced remains to be determined, it seems as if, just as with the sound of the infant's native language as well as the sound of the voices of significant others, the sounds of specific phonemes acquire behavioral functions by virtue of general principles of Pavlovian and operant learning. The role of general learning principles, however, is much easier to demonstrate in the production of speech, which sometimes involves the very same behaviors as perceiving speech.

A Behavioral View of Speech Production

How Do Infants Learn to Produce Speech?

Typically, a distinction is made between perceiving and producing speech. Elsewhere, however, I have argued that what we normally speak of as listening is behaving (subvocally) (Schlinger, 2008a). Equating listening (to speech or music) with perceiving is consistent with the thesis of the present article, namely, that when we say that someone perceives speech he or she is behaving in certain ways. Although most infants naturally progress through a series of stages of vocal sounds, and although those sounds are undoubtedly influenced by reinforcement, the role of operant learning becomes especially clear when infants begin babbling and discrete changes in their vocal output can be more easily detected. Infant babbling contains the prosodic characteristics of adult speech to which the infants have been exposed (e.g., Levitt & Aydelott Utman, 1992; Whalen, Levitt & Wang, 1991). Some researchers (Kuhl & Meltzoff, 1996) refer to this phenomenon as (vocal) imitation, although such a portrayal is not quite right.

Both anecdotal and experimental observations suggest that infants in the first year of life learn to produce not just the intonation and prosody of the language that they hear but the sounds as well. In fact, recent research suggests that even the "melody" of newborns' cries is influenced by hearing the prosodic features of their native language as early as the third trimester of pregnancy (Mampe, Friederici, Christophe, & Wermke, 2009). Despite acknowledging that vocal learning depends on hearing the vocalizations of others and of oneself, language researchers admit that "little is known about the processes by which change in infants' vocalizations are induced" (Kuhl & Meltzoff, 1996, p. 2425). That does not prevent these researchers, however, from offering ad hoc cognitive explanations. For example, Kuhl and Meltzoff (1996) speculate that "infants listening to ambient language store perceptually derived representations of the speech sounds they hear which in turn serve as targets for the production of speech utterances" or that it is as though both adults and infants "have an internalized auditory-articulatory 'map' that specifies the relations between mouth movements and sound" (p. 2426). The problem with such explanations, as I have repeatedly pointed out, is that they are not parsimonious; that is, they require many untestable, unfalsifiable assumptions and are often just redundant descriptions of the observable evidence. More parsimonious explanations can be gleaned by comparing vocal development in human infants to that of songbirds.

Vocal Learning in Infants and Songbirds: The Role of Automatic Reinforcement

Researchers who investigate language development in humans and the development of songs in certain species of birds have noted many parallels (Brainard & Doupe, 2002; Doupe & Kuhl, 1999; Kuhl, 2000, 2004). For one, as already mentioned, social contingencies of reinforcement play an important role in vocal learning (Goldstein et al., 2003, 2009). Additionally, researchers agree that hearing the vocalizations of others and of oneself is necessary for vocal development in infants (Kuhl & Meltzoff, 1996) and in songbirds (Brainard & Doupe, 2002; Doupe & Kuhl, 1999). Thus, in infants and in many songbirds, immature vocal sounds are shaped into more mature sounds in large part by the feedback produced by making sounds. Although few of these scholars directly mention reinforcement or operant learning, many describe how "infants' successive approximations of vowels would become more accurate" due to the "acoustic consequences of their own articulatory acts" (Kuhl & Meltzoff, 1996, p. 2426); or how "during sensorimotor song learning, motor circuitry is gradually shaped by performance-based feedback to produce an adaptively modified behaviour" (Brainard & Doupe, 2002, p. 355). Other researchers describe how sounds emitted by infants and songbirds "are then gradually molded to resemble adult vocalizations" (Doupe & Kuhl, 1999, p. 574). Finally, some researchers explicitly acknowledge a selection process involved in early vocal production. For example, de Boysson-Bardies (1999) writes:
 The vocal productions of children are thus modeled by selection
 processes. The phonetic forms and intonation patterns specific to
 the language of the child's environment are progressively retained
 at the expense of forms that are not pertinent to the phonological
 system of this language. The process begins at birth, if not
 before. However, the first effects on vocal performance are
 delayed, particularly by the slow course of motor development. (p.
 56)


It is amazing that otherwise good scholars fail to either know or acknowledge that the process they are talking about is a form of selection by consequences called operant conditioning (see Skinner, 1981). In particular, infants and songbirds start out with a repertoire of immature or unrefined sounds. When the infants or songbirds hear themselves making sounds that match what they have heard from others, those sounds are automatically strengthened (i.e., reinforced) in the sense that they occur with a greater frequency relative to sounds that do not match what they have heard from others. In other words, the parity achieved when produced sounds are closest to heard sounds automatically strengthens the produced sounds (Palmer, 1996). According to some researchers, this vocal learning occurs relatively rapidly in infants and songbirds and without much in the way of external reinforcement (Doupe & Kuhl, 1999, but see Goldstein et al., 2003, 2009, emphasis added). But it does not occur in the absence of any reinforcement. Such shaping takes place as a function of automatic reinforcement, that is, reinforcement not mediated by another individual (see Vaughan & Michael, 1982). Automatic reinforcement (though not by that name) in the form of parity between self-produced auditory feedback and the sounds heard from others has also been recognized in song learning in birds (e.g., Konishi, 1965, 1985; Watanabe & Aoki, 1998). For example, according to Konishi (1985), "A bird's use of auditory feedback in song development resembles learning by trial and error; the bird corrects errors in vocal output until it matches the intended pattern" (p. 134). By "trial and error," Konishi means operant learning.
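The selection process just described can be sketched as a toy simulation (my illustration, not a model from the cited literature, and it assumes a simplified one-dimensional "acoustic space"): vocal variants are emitted in proportion to their current strength, and any variant whose acoustic result falls near a previously heard adult sound is automatically strengthened, with no external agent delivering the consequence.

```python
import random

random.seed(1)

# Hypothetical one-dimensional acoustic space: each sound is a number,
# and similarity to a heard adult target is just distance on the line.
HEARD_TARGET = 0.8

# Initial repertoire: equal emission strengths over immature vocal variants.
repertoire = {round(v * 0.1, 1): 1.0 for v in range(11)}  # 0.0 .. 1.0

def emit(weights):
    """Sample a vocal variant in proportion to its current strength."""
    variants, w = zip(*weights.items())
    return random.choices(variants, weights=w)[0]

def automatic_reinforcement(sound, target, tolerance=0.15):
    """Parity check: hearing oneself approximate the target strengthens the
    response. No caregiver mediates this consequence; it is automatic."""
    return abs(sound - target) <= tolerance

for _ in range(5000):
    sound = emit(repertoire)
    if automatic_reinforcement(sound, HEARD_TARGET):
        repertoire[sound] *= 1.01  # reinforced variants gain strength

best = max(repertoire, key=repertoire.get)
print(best)  # a variant near the heard target now dominates the repertoire
```

Over many trials the repertoire drifts toward the heard sound without any explicit instruction or external reinforcer, which is the shaping effect that the infant and birdsong literatures describe.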

Automatic reinforcement also plays an important role in early language acquisition (Schlinger, 1995, pp. 158-160; Smith, Michael, & Sundberg, 1996; Sundberg et al., 1996), and it parsimoniously explains so-called learning without reinforcement. Automatic reinforcement is the elusive mechanism responsible for vocal learning that many language researchers (e.g., Doupe & Kuhl, 1999) are searching for, and it is right before their very eyes (or ears). Moreover, an automatic reinforcement hypothesis requires very few assumptions, is consistent with known scientific principles, and is, thus, a parsimonious explanation.

The only wildcard is the process by which the vocal sounds one is exposed to in his or her environment assume automatically reinforcing qualities. At this point it is not clear whether pairing with other reinforcing stimuli is necessary to establish the sounds of speech or songs as reinforcers, or whether simple exposure suffices, although the evidence suggests that mere exposure is sufficient. If so, then the question becomes how much exposure is required during the so-called sensory or perceptual learning phase. What are not helpful, however, are cognitive accounts that appeal to "perceptually derived representations of the speech sounds" (Kuhl & Meltzoff, 1996, p. 2425). Such accounts are inferred solely on the basis of behavioral observations and, moreover, are only redundant descriptions of those observations. As explanations, then, they are circular in that the only evidence of the perceptually derived representations is the fact that an individual's vocal sounds that approximate those previously heard come to predominate in their repertoire. As I have tried to show in this paper, a behavioral account is simpler, in part because it is based on a foundation of experimentally derived principles, and, thus, does not need to appeal to inferred, hypothetical entities.

Summary and Conclusions

In this article I have illustrated the cognitive approach to speech perception and language acquisition by noting that although language researchers often incorporate operant or operant-like methods in their research, their interpretations of the results are problematic. Specifically, cognitively oriented language researchers often attribute the causes of speech perception and production to children themselves, or to their brains rather than to the environmental features which the researchers manipulate in their studies. Because this approach infers hypothetical constructs instead of testable physical events, it does not adhere to the principle of parsimony. Moreover, inferring hypothetical constructs leads to logical errors of reification and circular reasoning. I also attempted to show how research conducted by cognitively oriented speech and language researchers can be parsimoniously interpreted according to the experimentally established principles of behavior analysis. Furthermore, because behavior analysts demonstrate environmental variables as causes of the behaviors, their approach is inherently consistent with the principle of parsimony and is, thus, directly testable.

For SLPs, perhaps most importantly, cognitive approaches are not very helpful in suggesting effective assessment or treatment procedures to remediate speech and language disorders. I conclude that a functional, behavior-analytic approach serves SLPs better by offering both an experimentally based analysis of speech and language and measurable, manipulable methods of treatment.

References

Aslin, R. N. (1987). Visual and auditory development in infancy. In J. D. Osofsky (Ed.) Handbook of infant development (pp. 5-97). New York: Wiley.

Aslin, R. N., Saffran, J. R., & Newport, E. L. (1998). Computation of conditional probability statistics by 8-month-old infants. Psychological Science, 9, 321-324.

Bates, E., O'Connell, B., & Shore, C. (1987). Language and communication in infancy. In J. D. Osofsky (Ed.), Handbook of infant development (pp.149-203). New York: Wiley.

Camarata, S. (1993). The application of naturalistic conversation training to speech production in children with speech disabilities. Journal of Applied Behavior Analysis, 26, 173-182.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.

Chomsky, N. (1959). Review of Verbal Behavior by B. F. Skinner. Language, 35, 26-58.

DeCasper, A. J. & Fifer, W. P. (1980). Of human bonding: Newborns prefer their mother's voices. Science 208, 1174-1176.

DeCasper, A. J., & Spence, M. J. (1986). Prenatal maternal speech influences newborns' perception of speech sounds. Infant Behavior and Development, 9, 133-150.

Diehl, R. L., Lotto, A. J., & Holt, L. L. (2004). Speech perception. Annual Review of Psychology, 55, 149-179.

DeLeon, I. G., Arnold, K. L., Rodriguez-Catter, V., & Uy, M. L. (2003). Covariation between bizarre and nonbizarre speech as a function of the content of verbal attention. Journal of Applied Behavior Analysis, 36, 101-104.

Dooling, R. J., Best, C. T., & Brown, S. D. (1995). Discrimination of synthetic full-formant and sinewave/ra-la/continua by budgerigars (Melopsittacus undulatus) and zebra finches (Taeniopygia guttata). Journal of the Acoustical Society of America, 97, 1839-1846.

Eimas, P. D. (1975). Developmental studies in speech perception. In L. B. Cohen & P. Salapatek (Eds.), Infant perception: Vol. 2. From sensation to cognition (pp. 193-231). New York: Academic Press.

Gallahue, D. L. (1989). Understanding motor development: Infants, children, adolescents (2nd ed.) Indianapolis: Benchmark Press.

Gibson, E. J., & Spelke, E. S. (1983). The development of perception. In P. H. Mussen, J. H., Flavell, & E. M. Markman (Eds.), Handbook of child psychology: Vol. 3.Cognitive development (4th ed., pp. 1-76). New York: Wiley.

Goldstein, M. H., King, A. P., & West, M. J. (2003). Social interaction shapes babbling: Testing parallels between birdsong and speech. Proceedings of the National Academy of Sciences, 100(13), 8030-8035.

Goldstein, M. H., Schwade, J. A., & Bornstein, M. H. (2009). The value of vocalizing: Five-month-old infants associate their own noncry vocalizations with responses from adults. Child Development, 80(3), 636-644.

Greenspoon, J. (1955). The reinforcing effect of two spoken sounds on the frequency of two responses. American Journal of Psychology, 68, 409-416.

Guess, D., Sailor, W., Rutherford, G., & Baer, D. M. (1968). An experimental analysis of linguistic development: The productive use of the plural morpheme. Journal of Applied Behavior Analysis, 1, 297-306.

Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experiences of young American children. Baltimore, MD: Paul Brookes.

Hart, B., & Risley, T. R. (1999). The social world of children: Learning to talk. Baltimore: Paul Brookes.

Hegde, M. N. (1998). Treatment procedures in communicative disorders (3rd ed.). Austin, TX: Pro-Ed.

Hegde, M. N. (2007). Treatment protocols for stuttering. San Diego, CA: Plural Publishing.

Hegde, M. N., & Maul, C. A. (2006). Language disorders in children: An evidence-based approach to assessment and treatment. Boston, MA: Allyn and Bacon.

Holt, L. L., & Lotto, A. J. (2008). Speech perception within an auditory cognitive science framework. Current Directions in Psychological Science, 17, 42-46.

Johnston, J. M., & Johnston, G. T. (1972). Modification of consonant speech-sound articulation in young children. Journal of Applied Behavior Analysis, 5, 233-246.

Jusczyk, P. W., Cutler, A. & Redanz, N. J. (1993). Infants' preference for the predominant stress patterns of English words. Child Development, 64, 675-687.

Jusczyk, P. W., Friederici, A. D., Wessels, J. M. I., Svenkerud, V. Y., & Jusczyk, A. M. (1993). Infants' sensitivity to the sound patterns of native language words. Journal of Memory and Language, 32, 402-420.

Kluender, K. R., Diehl, R. L., & Killeen, P. R. (1987). Japanese Quail can form phonetic categories. Science, 237, 1195-1197.

Kuhl, P. K. (1981). Discrimination of speech by nonhuman animals: Basic auditory sensitivities conducive to the perception of speech-sound categories. Journal of the Acoustical Society of America, 70, 340-349.

Kuhl, P. K. (2000). A new view of language acquisition. Proceedings of the National Academy of Sciences, 97, 11850-11857.

Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843.

Kuhl, P. K., & Meltzoff, A. N. (1996). Infant vocalizations in response to speech: Vocal imitation and developmental change. Journal of the Acoustical Society of America, 100, 2425-2438.

Kuhl, P. K., & Miller, J. D. (1975). Speech perception by the chinchilla: Voiced-voiceless distinction in alveolar plosive consonants. Science 190, 69-72.

Kuhl, P. K., & Miller, J. D. (1978). Speech perception by the chinchilla: Identification functions for synthetic VOT stimuli. Journal of the Acoustical Society of America, 63, 905-917.

Kuhl, P. K., & Padden, D. M. (1982). Enhanced discriminability at the phonetic boundaries for the voicing feature in macaques. Perception & Psychophysics, 32, 542-550.

Kuhl, P. K., & Padden, D. M. (1983). Enhanced discriminability at the phonetic boundaries for the place feature in macaques. Journal of the Acoustical Society of America, 73, 1003-1010.

Kuhl, P. K., Williams, K. A., Lacerda, F., Stevens, K. N., & Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255, 606-608.

Lancaster, B. M., LeBlanc, L.A., Carr, J. E., Brenske, S., Peet, M. M., & Culver, S. J. (2004). Functional analysis and treatment of the bizarre speech of dually diagnosed adults. Journal of Applied Behavior Analysis, 37, 395-399.

Lerman, D. C., Parten, M., Addison, L.R., Vorndran, C. M., Volkert, V. M., & Kodak, T. (2005). A methodology for assessing the functions of emerging speech in children with developmental disabilities. Journal of Applied Behavior Analysis, 38, 303-316.

MacCorquodale, K. (1970). On Chomsky's review of Skinner's Verbal Behavior. Journal of the Experimental Analysis of Behavior. 13, 83-99.

Mampe, B., Friederici, A. D., Christophe, A., & Wermke, K. (2009). Newborns' cry melody is shaped by their native language. Current Biology, 20,

Maye, J., Werker, J. F. & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82, B101-B111.

McMurray, B., & Hollich, G. (2009). Core computational principles of language acquisition: Can statistical learning do the job? Introduction to Special Section. Developmental Science, 12, 365-368.

Mehler, J., Nespor, M., & Peña, M. (2008). What infants know and what they have to learn about language. European Review, 16, 429-444.

Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J. & Amiel-Tison, C. (1988). A precursor of language acquisition in young infants. Cognition, 29, 143-178.

Miyawaki, K., Strange, W., Verbrugge, R., Liberman, A. M., & Jenkins, J. J. (1975). An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English. Perception and Psychophysics, 18, 331-340.

Moerk, E. L. (1978). Determiners and consequences of verbal behaviors of young children and their mothers. Developmental Psychology, 14, 537-545.

Moerk, E. L. (1983). The mother of Eve--As a first language teacher. Norwood, NJ: Ablex.

Moerk, E. L. (1992). First language: Taught and learned. Baltimore: Paul H. Brookes.

Moon, C., Cooper, R. P., & Fifer, W. P. (1993). Two-day-olds prefer their native language. Infant Behavior and Development, 16, 495-500.

Morris, E. K., Lazo, J. F., & Smith, N. G. (2004). Whether, when, and why Skinner published on biological participation. The Behavior Analyst, 27, 153-169.

Nazzi, T., Bertoncini, J., & Mehler, J. (1998). Language discrimination by newborns: Towards an understanding of the role of rhythm. Journal of Experimental Psychology: Human Perception and Performance, 24, 756-766.

Reed, P., Howell, P., Sackin, S., Pizzimenti, L., & Rosen, S. (2003). Speech perception in rats: Use of duration and rise time cues in labeling of affricate/fricative sounds. Journal of the Experimental Analysis of Behavior, 80, 205-215.

Palmer, D. C. (1996). Achieving parity: The role of automatic reinforcement. Journal of the Experimental Analysis of Behavior, 65, 289-290.

Palmer, D. C. (2006). On Chomsky's appraisal of Skinner's Verbal Behavior: A half-century of misunderstanding. The Behavior Analyst, 29, 253-267.

Pena-Brooks, A., & Hegde, M. N. (2007). Assessment and treatment of articulation and phonological disorders in children (2nd ed.). Austin, TX: Pro-Ed.

Rheingold, H. L., Gewirtz, J. L., & Ross, H. W. (1959). Social conditioning of vocalizations. Journal of Comparative and Physiological Psychology, 52, 68-73.

Rosenfeld, H. M., & Baer, D. M. (1970). Unbiased and unnoticed verbal conditioning: The double-agent robot procedure. Journal of the Experimental Analysis of Behavior, 14, 99-107.

Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints. Current Directions in Psychological Science, 12, 110-114.

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928.

Saffran, J. R., Newport, E. L., Aslin, R. N., Tunick, R. A., & Barrueco, S. (1997). Incidental language learning: Listening (and learning) out of the corner of your ear. Psychological Science, 8, 101-105.

Sautter, R. A., & Leblanc, L. (2006). Empirical applications of Skinner's analysis of verbal behavior with humans. The Analysis of Verbal Behavior, 22, 35-48.

Schlinger, H. D. (1995). A behavior-analytic view of child development. New York: Plenum.

Schlinger, H. D. (2004). Why psychology hasn't kept its promises. Journal of Mind and Behavior, 25 (2), 123-144.

Schlinger, H. D. (2008a). Listening is behaving verbally. The Behavior Analyst, 31, 145-161.

Schlinger, H. D. (2008b). The long goodbye: Why B. F. Skinner's Verbal Behavior is alive and well on the 50th anniversary of its publication. Psychological Record, 58, 329-337.

Skinner, B. F. (1957). Verbal Behavior. New York: Appleton-Century-Crofts.

Skinner, B. F. (1981). Selection by consequences. Science, 213, 501-504.

Todd, G. A., & Palmer, B. (1968). Social reinforcement of infant babbling. Child Development, 39, 591-596.

Toro, J. M., Trobalon, J. B., & Sebastian-Galles, N. (2005). Effects of backward speech and speaker variability in language discrimination by rats. Journal of Experimental Psychology: Animal Behavior Processes, 31, 95-100.

Wagaman, J. R., Miltenberger, R. G., & Arndorfer, R. E. (1993). Analysis of a simplified treatment for stuttering in children. Journal of Applied Behavior Analysis, 26, 53-61.

Whitehurst, G. J. (1972). Production of novel and grammatical utterances by young children. Journal of Experimental Child Psychology, 13, 502-515.

Whitehurst, G. J., Novak, G., & Zorn, G. A. (1972). Delayed speech studied in the home. Developmental Psychology, 7, 169-177.

Whitehurst, G. J., Ironsmith, M., & Goldfein, M. (1974). Selective imitation of the passive construction through modeling. Journal of Experimental Child Psychology, 73, 288-302.

Author Note

I am grateful to Julie Riggott for her keen editorial eye and to Giri Hegde for his helpful comments on an earlier version of this manuscript.

Author Contact Information

Henry D. Schlinger, Jr.

Department of Psychology

California State University

Los Angeles, CA 90032-8227

E-mail: hschlin@calstatela.edu
COPYRIGHT 2010 Behavior Analyst Online

Article Details
Author: Schlinger, Henry D., Jr.
Publication: The Journal of Speech-Language Pathology and Applied Behavior Analysis
Date: May 13, 2010