
An analysis of phonology and linguistic interfaces as a new case for a biopsychological foundation for linguistics.

1. The Biopsychology of Phonetics and Phonology

This section addresses phonetics, phonology, and the relation between the two. It demonstrates that phonological units and phonological grammars are both essentially rooted in human biopsychology.

Philosophers typically think of words as pairings of sounds with meanings. This can make words seem apsychological. But the correlate of a meaning is not really a sound, but rather something that only a biopsychological perspective can describe. There are two biopsychological processes at issue, neither of which is exactly about sounds: the articulation of sound patterns in speech and the perception of sound patterns in speech. Some philosophers of language have emphasized the importance of the actual human brain for philosophy of language. In this paper, I want to emphasize the essential importance of things like the exceptional agility of our tongue, vocal cords, etc.

Phonetics and Speech Science

A complete understanding of human language requires an understanding of the biology of articulation and perception. To understand why speech sounds develop as they do and why human systems of speech sounds have the particular structures that they have, we must understand the science of speech (Lieberman and Blumstein, 2).

In the physiology of speech production, some anatomical elements have a specific functional role and some do not. Articulatory Phonetics seeks to understand the elements that play the specific, speech-related functional roles. Traditionally, the physiology of speech production is categorized into three anatomical subcomponents: the subglottal component, the larynx and the supralaryngeal vocal tract. The subglottal component of speech anatomy consists of the lungs and the respiratory muscles that control them. The muscles of the larynx move the vocal cords together or apart so as to close or open the airway from the lungs up towards the supralaryngeal area. Finally, the supralaryngeal vocal tract consists of further airways in the nose, the mouth and the connection from the throat to the mouth called the "pharynx." The speech-specific actions of articulatory phonetics are called "gestures" (Lieberman and Blumstein, 3-4).

The functionally relevant components of speech anatomy are "matched" with functionally relevant components of perceptual physiology in a way that composes a complete functional system. This is to say that the sounds we have gestures to produce are the same sounds we have a perceptual apparatus to interpret. Further, it is by the same mentally represented categories that both articulation and perception of human speech are made possible. The nature of these categories will be illuminated gradually over the course of this section (Lieberman and Blumstein, 14).

In phonetics and speech science we are interested in anatomical gestures, perceptions and acoustic patterns that are related to language. But this language-specificity is relative to our interests, whereas in phonology, in theory, the natural categories are inherently language-specific due to their role in the language faculty. In cases where we are taking in air for a nonlinguistic reason, the relevant anatomical laws are the same, whereas phonological laws are linguistic by their very nature; they have no non-linguistic instances.

The larynx in the throat is set to openings of particular sizes, varying from a completely closed to a minimally obstructed airway. It holds particular openings for the production of particular speech sounds. For example, it holds a wider opening during the production of [h] than during [f]. The principal function of the larynx relative to speech is to generate a series of air puffs by vibrating the vocal cords. In articulatory terms this is called "phonation"; in phonological terms the matched representation is called "voicing" (Lieberman and Blumstein, 97-8).

While phonation and voicing have an equivalent physiological basis, logically, they can be understood as distinct. Phonation is an anatomical gesture, whereas voicing is a phonological feature. Features, unlike gestures, are inherently linguistic by theoretical definition. We can say that all voicing is phonation but not that all phonation is voicing, since non-linguistic phonation is not voicing, whereas all voicing is inherently linguistic. These definitions are helpful for explanatory purposes. Analogous remarks apply to the gestures of the supralaryngeal vocal tract, where the majority of the phonological categories for production and perception of speech are biologically grounded (Lieberman and Blumstein, 115).

Depending on speaker and context, non-identical patterns of articulation may be used to produce equivalent speech sounds, even within the same dialect. Evidence appears to point towards "acoustic invariance," as opposed to "motor invariance," as the basis of the fundamental language-specific phonetic categories. (1) As Lieberman and Blumstein write, "speech production appears to involve the speaker's planning his articulatory maneuvers to achieve supralaryngeal vocal tract shapes that yield particular acoustic goals" (Lieberman and Blumstein, 128-9).

If the acoustic invariance account is correct, the principles by which phonological features relate to articulation must be somewhat abstract: defined in relation to articulatory physiology in an irreducibly functional manner tied to our perceptual systems. However, this does not change the fact that phonological categories are grounded, ultimately, in physiology. The functional abstractness of the physiological grounding of phonological features demonstrates why they require articulatory gestures to exist, yet cannot be strictly identical to them. It also demonstrates, as will be discussed later in the section, the role psychological representation must play in the categorization of gesture classes into features. As Lieberman and Blumstein continue, "speakers have some sort of mental representation (2) of the supralaryngeal vocal tract shape that they normally use to produce a particular speech sound in a given phonetic context" (Lieberman and Blumstein, 129).

An analogous theory could offer a functional psychological grounding of sign language phonological features in relation to light patterns, rather than acoustic ones. Given the possibility of a functional analogy between the groundings of spoken and signed phonological features, even on the acoustic invariance perspective, the metaphysical grounding of features remains articulatory. In other words, sound is not directly essential to features, whereas articulation is. Sound, rather, is indirectly essential in that it is an essential consequence of certain articulations. That the articulatory grounding of features is functional does not mean they can be abstracted away from a biologically constrained class of implementation bases. Features are very much rooted in the general nature of human physiology. If the human production system had been radically different, human languages would have been too. Sign-language phonology will be discussed further later in this chapter.

Psychological representation provides a physiologically grounded mapping between articulation and perception. An understanding of this mapping demonstrates that even acoustic invariance relative to features is actually a partially illusory abstraction. It is the categorical nature of perception that allows continuously varying gesture sets to be categorized relative to features. Phonetic communication between speakers and listeners essentially works by a speaker encoding phonological feature segments into acoustic patterns and a listener decoding those patterns back into phonological representations. This intricate phonological encoding and decoding system creates the illusion of invariance between gestures and acoustic patterns that are in fact distinct. As an example, the [d] sound in [di] ('dee') is acoustically distinct from the [d] sound in [du] ('doo'), but "human listeners 'hear' an identical initial [d] segment in both of these signals because they, in effect, 'decode' the acoustic pattern using prior 'knowledge' of the articulatory gestures and anatomical apparatus that is involved in the production" (Lieberman and Blumstein, 147).

The set of categories used for instructions that function to produce an acoustic pattern is the same set as that used to perceive it. These categories are phonological features. The specific biological implementations of the human articulatory and perceptual systems are thus fundamental to the very nature of phonological features. This fundamental grounding is organized through a mapping between articulation and perception that is connected via psychological representation. As I will eventually flesh out thoroughly, this is why the grounding of phonology must be not only inherently biological but also inherently psychological.

It is the same speech sounds that the human vocal tract is adapted to produce that human neural property detectors are adapted to perceive. As Lieberman and Blumstein write, "there seems to be a 'match' between the sounds that humans can make and the sounds that they are specially adapted to readily perceive (Lieberman, 1970, 1973, 1975, 1984; Stevens, 1972b)," suggesting that "physiological constraints on both the human vocal tract and the auditory system provide a limiting condition on the particular shape of language" (149, 185).

The human auditory system has biological constraints analogous to those of the human articulatory system. From birth, humans are equipped to interpret speech-sound continua via strict on-or-off categories. The biological contingencies of the human articulatory system together with the categorical constraints of the human auditory system provide the fundamental explanation for why there is a specific, limited range of possible basic human speech sounds (Lieberman and Blumstein, 186-7).

As an example demonstrating the categorical nature of human speech perception, consider the sounds [b] and [p]. In the articulation of these two sounds, the classes of gestures functioning in the supralaryngeal vocal tract are equivalent in equivalent contexts, e.g. in 'bat' and 'pat'. The functional articulatory difference between these two sounds consists in the degree of opening of the laryngeal muscles in the throat. In the [b] in 'bat' the larynx begins in a relatively closed, phonating position. In the [p] in 'pat' the larynx begins relatively open and closes as the lips do, phonation beginning on the vowel sound (Lieberman and Blumstein, 196).

Of course, the articulatory system itself is anatomically continuous. The difference between 'bat' and 'pat' is perceived relative to the continuum of possible phonation-delay periods from the start of articulation. As it happens, articulations with a delay before phonation shorter than 25 milliseconds are perceived as [b], while otherwise equivalent articulations with a delay longer than 25 milliseconds are perceived as [p]. Yet an equally large difference elsewhere on the continuum, say between delays of 30 and 35 milliseconds, makes no perceptual difference. Clearly, the 25-millisecond boundary has no logical priority over phonation-delay distinctions at other points on the continuum. That we happen to make a discrimination at this particular point in the potential duration continuum is a fact about our biology, not of logic. It is the specific, biological nature of the human auditory apparatus that structures the basic sounds of human speech (Lieberman and Blumstein, 197).
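
The categorical mapping just described can be pictured as a tiny computation. The following Python sketch is purely illustrative (the function and variable names are mine, not a model from the literature); only the 25-millisecond boundary is taken from the discussion above:

    # Categorical perception of phonation delay (voice onset time): a
    # continuous value is mapped onto a discrete perceptual category.
    VOT_BOUNDARY_MS = 25  # boundary reported by Lieberman and Blumstein

    def perceive_stop(phonation_delay_ms: float) -> str:
        """Map a continuous phonation delay (ms) to a perceived stop."""
        return "[b]" if phonation_delay_ms < VOT_BOUNDARY_MS else "[p]"

    # A 10 ms difference across the boundary changes the percept...
    assert perceive_stop(20) == "[b]" and perceive_stop(30) == "[p]"
    # ...while the same 10 ms difference within a category does not.
    assert perceive_stop(30) == perceive_stop(40) == "[p]"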

Equally large variation ranges elsewhere on the continuum are not perceived. Which distinctions are perceived is simply a contingency of the human perceptual system. As Lieberman and Blumstein write, "voicing thus appears to be inherently structured in terms of this perceptual constraint as well as the articulatory constraints of the speech-producing apparatus" and, of course, this applies to other phonological features as well (Lieberman and Blumstein, 197).

We now have a general understanding of how the science of speech production and perception relates to the grounding of the basic sound units of language. So far we have been talking about the implementation of the sound systems of language at a very low level of abstraction. We will now begin to speak about these systems at a higher level of abstraction, specifically, at their inherently linguistic level. At their inherently linguistic level, speech sounds are represented as phonemes. The nature of phonemes is to be understood through an understanding of the underlying nature of features.

Features

[set], [bet], [let], [net] each correspond to different words in English because /s/, /b/, /l/ and /n/ are different phonemes in English (Durand, 7-8). It is in the sense that there is no linguistic change without a phoneme change that the phoneme is the basic linguistic unit. But this is not to say that a phoneme cannot be analyzed. As Durand writes:
   A typical French [ɛ] (as in [mɛr] 'sea') involves at the same time
   an egressive airstream, vibrations of the vocal folds, a raised
   velum shutting off the nasal cavity, a specific tongue position on
   the anterior-posterior axis (i.e. front (3)) and the closed-open
   axis (i.e. mid-low) and a lip gesture (spread lips). But all these
   components are independent of one another. The lips could have
   been rounded resulting instead in an [œ] sound as in 'moeurs'
   [mœr(s)] 'habits', the tongue could have been retracted (cf 'mort'
   [mɔr] 'death'), the velum could have been lowered giving the sound
   a nasal quality (cf 'main' [mɛ̃] 'hand') and so on (15).


In other words, a phoneme is not simply a sound. It is a set of instructions for articulation matched with a mapped set of instructions for perception based on the same fundamental categories. The different phonologically significant aspects of articulation determine what are called phonological features. According to linguistic theory, a phoneme is a set of features. In so far as a phoneme is a set of features, it cannot be something other than a set of features, since this would be a modal contradiction. Features are logically dependent on articulatory and perceptual phonetics. Thus, phonemes are logically dependent on articulatory and perceptual phonetics, which is to say that the basic linguistically significant units are logically dependent on their biological implementation. The set of possible features is determined universally by the nature of the biological constraints of human anatomy. In theory, fundamental laws relating to natural classes of possible phonemes should apply cross-linguistically at some level of abstraction.
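
The claim that a phoneme just is a set of features can be made concrete as a data structure. Here is a minimal Python sketch; the feature names are standard illustrations of my own choosing, not drawn from the sources cited:

    # A phoneme modeled as an unordered set of features, each feature
    # serving as a matched instruction for articulation and perception.
    B = frozenset({"+consonantal", "+labial", "-nasal", "+voiced"})
    P = frozenset({"+consonantal", "+labial", "-nasal", "-voiced"})

    # Phonemes are identical just in case their feature sets are;
    # /b/ and /p/ contrast in the single dimension of voicing.
    assert B != P
    assert B ^ P == {"+voiced", "-voiced"}  # the contrasting features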

Gestures and features are importantly distinct. Gestures are purely physiological. Features are psychological representations of the ways ranges of gestures function to produce acoustic categories defined relative to perception. Not every language needs to make every possible phonological distinction, but every language can use only possible phonological distinctions, and what the possible phonological distinctions are is determined by the biology of articulation and perception.

Sign Language

Etymologically, it could be argued that 'sign language phonology' is a contradiction in terms. Scientifically, however, underlying categories and principles of signed languages have been shown to be relevantly analogous to those of spoken language phonologies to such an extent that semantic concern about the term has become irrelevant. This is because, as noted earlier, the directly essential metaphysical grounding of the categories of phonology is not acoustic, but physiological.

The only relevant disanalogy between articulatory gestures of spoken and signed languages is that the latter happen to be decoded via a visual rather than an auditory modality. (4) As in spoken languages, in signed languages the semantically lexical items are not holistic gestures but, rather, sets of anatomically-grounded, recombinable, non-semantically significant primitives determined by a finite inventory of categories functioning as instructions for articulation and perception. The phonological features of signed languages have constraints on their recombination analogous to those of spoken languages and, again analogous to spoken languages, "features of handshape, location, and movement can recombine to form minimal pairs of signs," thus distinguishing particular phonemes relative to the role they play in distinguishing different words (Sandler, 1).

This shows that the sense in which spoken language phonology is related to actual acoustic sounds is in fact indirect. The physiological production of components of signed languages is a different type of articulation, and their visual rather than auditory decoding is, of course, a different kind of perception. The truth is that all phonological systems are essentially related to articulation and perception, and, since the articulations and perceptions essentially related to the phonologies of spoken languages are in turn essentially related to sounds, the phonologies of spoken languages are, in some sense, indirectly essentially related to sounds. But the philosophical point remains the same that all phonological systems are essentially related to some implementation in some articulatory anatomy and some perceptual modality. The representation and use of the phonological system of a language is thus very much dependent on concrete biological matters.

An organism radically different from us biologically may not be able to use our languages despite being sufficiently intelligent, and vice versa for us and the language of such an organism. In theory (perhaps) we could come to understand the meanings of a language of an organism biologically very different from us. But this would be different from representing that language linguistically; its principles would not be intuitive to us and we would represent them using general-purpose memory, rather than the language faculty. We could never be native speakers, but could only represent the alien language as we might represent the biology of how birds fly.

Dolphin Communication Systems

Consider, as a possible real-world example, dolphin communication systems. Dolphins produce two broad types of sounds: "pure-tone whistles and broadband clicks." The clicks in turn come in two subtypes: "echolocation signals and burst pulse sounds" (Dooling and Hulse, 296).

To what extent dolphin communication systems should be considered actual languages is a matter of debate. However, if dolphins do have languages, the phonologies of these languages will likely be very different from those of human languages. In this case, though we could come to learn the dolphins' languages using general-purpose memory, due to their physiological differences from us, we could not represent the dolphins' languages as languages. We could never be native dolphin-language speakers, even if taught from birth.

It would not be contradictory to imagine languages with phonologies based in articulatory and perceptual systems that do not actually exist. Presumably, in some sense, the possibilities are infinite. According to Platonism, nonactualized languages are just as legitimate objects of study as actualized ones, since all possible languages exist as abstract objects in the same way, regardless of whether or not organisms or populations use them. But the features and principles of possible languages with otherworldly phonologies are impossible for linguists to study. Arguably, they are as impossible to study as otherworldly colors, as they simply have no cognitive grounding for us.

The concrete empirical facts of language are essential to the science of linguistics. Even if we do have the imaginative ability to conceive of nonactual phonologies, this is no more relevant to foundations of linguistics than our imaginative ability to conceive of non-actual animals is relevant to the foundations of biology. In some sense, there are possible worlds with very different biological facts. But no one moves from this premise to the conclusion that biology is an abstract discipline analogous to mathematics. Since the facts of phonology are grounded in contingent reality analogously to biology, imaginative possibilities should also not lead to the conclusion of Linguistic Platonism.

The Phonetics-Phonology Interface

Having discussed some varied actual and possible implementation bases of phonologies, let us return to the matter of the nature of the direct relation of phonology to phonetics.

There are three ways in which phonology is grounded in phonetics: definition, explanation and implementation. Phonological features are defined with respect to contingencies of human physiology. It is the biological constraints of human physiology that explain phonological patterns. And, lastly, it is human physiology that implements phonological representations (Kingston, 401).

The explanation of phonological patterns is very complex, as such patterns are always the result of various, often competing biological constraints. As we will see later in this section, communicative and anatomical constraints are in a delicate balance in the determining of the phonological principles of languages. The problem of understanding the implementation of phonological representation is analogously complicated. As Kingston explains, "the problem of implementing phonological representations cannot be solved by simply reversing the solution to defining their constituents and working down from the phonology to a pronunciation or percept," because "the phonetic implementation also determines what kind of phonological representation is possible in the first place" (Kingston, 402).

Definition

Phoneticians and phonologists have long worked to determine the correct phonetic definitions of phonological features. "Landmarks in this effort," Kingston cites, "are the acoustic-auditory definitions in Preliminaries to Speech Analysis (Jakobson, Fant, & Halle 1952), the articulatory alternatives in Chapter 7 of The Sound Pattern of English (Chomsky & Halle 1968), and most recently, the combined acoustic and articulatory definitions in The Sounds of the World's Languages (Ladefoged & Maddieson 1996), and Acoustic Phonetics (Stevens 1998)" (Kingston, 402).

As noted previously, the phonetic grounding of phonological features has a certain level of functional abstractness. Features are realized differently depending on context, aspects of speaking style, etc. There have been two general approaches to understanding the level of abstraction appropriate to categorizing the phonetic invariance of phonological features: Articulatory Phonology and Auditorism. Each of these approaches abstracts from the specific articulatory and acoustic details of utterances, but each does so differently. Articulatory Phonology defines the phonetic invariance of phonological features relative to the speaker's mental representation of her articulatory plan for the utterance, whereas Auditorism defines it relative to the auditory effects of the acoustic waves produced. These approaches each offer a different means of defining the phonetic invariance of phonological features "by moving to a description of the utterance with many fewer dimensions than are necessary to describe its articulatory or acoustic realization" (Kingston, 402-3). At the linguistic level, less is represented than may be represented at other levels related to actually performing articulation. Linguistic principles do not require representation of all the articulatory information.

Either of these approaches is consistent with the functional grounding of phonological features relative to human biological constraints that we have previously described. Evidence, as it happens, at this point leans toward Auditorism. Scientific results, Kingston writes, "suggest that the invariants from which distinctive features emerge are the auditorily similar effects of covarying acoustic properties and not motor equivalences of different combinations of articulations" (Kingston, 406). (5)

This provides one fleshing out of how phonological categories can be based in biology. The appropriately similar articulations are those that produce sounds that are perceived as the same. That the fundamental level of categorization relates to perception does not mean that the mental representations of plans for utterances do not exist. It merely means that these mental representations of plans for utterances must be understood as having those plans fundamentally defined relative to the way those utterances are represented as being perceivable. But, as we have previously discussed, the categories by which humans perceive and individuate speech sounds map onto the same categories they use to produce them in a manner composing a complete functional biological system. This is to say that we are wired up to perceive the same sounds we are wired up to articulate, and that both the articulatory wires and the perceptual wires make use of the same information, functioning in harmony. Perceptual phonetics may define phonological features, but since the mental representations of articulation and perception are essentially interrelated in this way, articulation still has a metaphysical relation to phonological features. Indirectly, the objective nature of articulation, as well as perception, thus determines the grounding of phonology.

Explanation

Natural language phonemes cluster into natural classes that are related by principles that determine phonological patterns. The biological and psychological properties of speaking and listening offer phonetic explanations of these phonological patterns. As examples, consider "stops," the natural classes of phonemes in which airflow through the vocal tract is completely blocked, and "fricatives," the consonants in which air is pushed through two articulators held close together. Stops include phonemes such as /p/, /t/ and /k/; fricatives include phonemes such as /f/, /s/ and /θ/ ('th'); the nasals /m/, /n/ and /ŋ/ ('ng') and the lateral /l/ form further natural classes. Ultimately, all natural classes of phonemes are defined relative to the features that they have in common. Phonological principles relating phonemes are thus ultimately reducible to principles relating feature classes, since phonemes themselves are nothing over and above classes of features. Kingston notes that "stops intrude between nasals or laterals and following fricatives in many American English speakers' pronunciations of words such as warm[p]th, prin[t]ce, leng[k]th, and el[t]se because voicing ceases and, in the case of the nasal-fricative sequences, the soft palate rises before the oral articulators move to the fricative configuration (Ohala 1971, 1974, 1981)." This is an example of how the contingent nature of human anatomy can ultimately explain phonological patterns. We will delve deeper into the role human biological constraints play in determining phonological principles later in this section (Kingston, 406).
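
Ohala's articulatory account of stop intrusion can be rendered as a simple rewrite rule. The sketch below is a deliberate simplification of my own (the pseudo-phonetic spellings and the homorganic-stop table are assumptions for illustration; only the quoted English examples come from Kingston):

    # Stop intrusion: an epenthetic stop appears between a nasal or
    # lateral and a following fricative, reflecting articulator timing.
    # ASCII stand-ins: 'N' = velar nasal, 'T' = 'th', 'I' and 'O' = vowels.
    INTRUSIVE_STOP = {"m": "p", "n": "t", "N": "k", "l": "t"}
    FRICATIVES = {"s", "T"}

    def intrude(phones):
        out = []
        for i, ph in enumerate(phones):
            out.append(ph)
            if (ph in INTRUSIVE_STOP and i + 1 < len(phones)
                    and phones[i + 1] in FRICATIVES):
                out.append(INTRUSIVE_STOP[ph])  # insert the homorganic stop
        return out

    assert intrude(list("wOrmT")) == list("wOrmpT")  # warm[p]th
    assert intrude(list("prIns")) == list("prInts")  # prin[t]ce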

As we have previously discussed, articulatory and perceptual biological factors ultimately explain why there is a finite set of possible human speech sounds. In fact, human articulatory and perceptual constraints also seem to provide part of an explanation for why human languages have the specific actual sounds that they have. As Kingston writes, "languages have the oral, nasal, and reduced vowels they do because vowels must be dispersed perceptually in the vowel space, certain vowel qualities are more salient than others, and a long vowel duration makes it possible for a listener to hear nasalization while a short duration prevents the speaker from reaching a low target." Phonetic factors will form a part of any eventual complete explanation of the specific contents of the phoneme segment inventories of human languages (Kingston, 407).

Implementation

A fundamental difference between phonetic reality (6) and its phonological representation is that the former is inherently continuous whereas the latter is fundamentally categorical. The categorical nature of phonology, as we have discussed, is determined by the categorical nature of human speech perception, in that the categories determining phonological principles are defined by functional contingencies of perceptual phonetics. The continuous-categorical distinction is often put forward as the fundamental distinction between phonetic and phonological phenomena. This is the case, for example, in Keating (1988c), Pierrehumbert (1990), Cohn (1993a), Zsiga (1995), Holst & Nolan (1995), and Nolan, Holst, & Kuhnert (1996) (Kingston, 430).

The puzzle of understanding exactly how phonology is phonetically implemented is analogous to the challenge of explaining phonological patterns in phonetic terms. Human biological constraints determine not only the manner in which phonological representations are realized but also some of the properties of these representations themselves. The phonetic implementation of phonology also relates to the phonetic definition of phonological features in that "properties of the phonological representation emerge out of its implementation in much the same way that the distinctive features emerge out of the solution to the variability problem." Thus, the three main topics in the study of the phonetics-phonology interface ultimately interrelate. The complete system of human phonology is grounded in a thorough and harmonious manner in the contingent restraints of human biology (Kingston, 432). Ultimately, phonology consists in a mental representation of phonetics. (7)

Optimality and Markedness

One might agree that the basic units of language are not logically independent of biology and psychology, but still claim that the general principles are logically independent. A strong case can be made that this is false. In optimality theory, a given grammatical output is, by definition, the optimal output given a ranked set of "markedness constraints and faithfulness constraints" (Kager, 9). Markedness constraints optimize ease of articulation and perception while faithfulness constraints optimize one-to-one correspondences between forms and meanings. In other words, faithfulness constraints "protect the lexical items of a language against the 'eroding' powers of the markedness constraints" (Kager, 10). The constraints most interesting for our purposes are markedness constraints.

Unmarked values are preferred in all languages and primary in all grammars, while marked values are avoided when possible in all languages and used by grammars only for the purposes of contrast. Formal and lexical contrasts are required for contrasts in meaning. This, again, is the role served by faithfulness constraints.

Markedness constraints affect all levels of phonological representation. At the phoneme level it can be observed that, while the unrounded front vowels /i/ and /e/ exist in all languages, rounded front vowels such as /y/ and /ø/ exist only in some languages, and again, merely for the purposes of certain contrasts. [-round] is thus unmarked in front vowels while [+round] is marked in front vowels. Analogous markedness values also apply at the level of syllables, with CV (8) syllables being unmarked and V, CVC and VC syllables existing only in some languages and only for contrastive means (Kager, 2-3).
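
The optimality-theoretic logic of ranked constraints can be summarized computationally. Here is a toy evaluator in Python; the candidate forms, constraint definitions and violation counts are invented for illustration and should not be attributed to Kager:

    # The grammatical output is the candidate that best satisfies the
    # ranking, compared lexicographically from the highest-ranked
    # constraint down.
    def evaluate(candidates, ranked_constraints):
        return min(candidates,
                   key=lambda c: [con(c) for con in ranked_constraints])

    def no_coda(cand):
        # Markedness (illustrative): penalize syllables ending in a consonant.
        return sum(1 for s in cand.split(".") if s[-1] not in "aeiou")

    def max_io(cand, input_form="pat"):
        # Faithfulness (illustrative): penalize deletion of input segments.
        return len(input_form) - len(cand.replace(".", ""))

    # Faithfulness ranked above markedness preserves the marked coda...
    assert evaluate(["pat", "pa"], [max_io, no_coda]) == "pat"
    # ...while the opposite ranking lets markedness erode it.
    assert evaluate(["pat", "pa"], [no_coda, max_io]) == "pa"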

The important fact about markedness constraints, for present purposes, is that "what is 'marked' and 'unmarked' for some structural distinction is not an arbitrary formal choice, but rooted in the articulatory and perceptual systems" (Kager, 3). As Kager continues:
   An exclusively typology-based definition of universality runs the
   risk of circularity: certain properties are posited as 'unmarked'
   simply because they occur in sound systems with greater frequency
   than other 'marked' properties... phonological markedness
   constraints should be phonetically grounded in some property of
   articulation or perception. That is, phonetic evidence from
   production or perception should support a cross-linguistic
   preference for a segment (or feature value) over others in certain
   contexts (11).


Since markedness must be rooted in the biology of articulation and perception, and since the general phonological principles of natural languages' grammars are based on markedness constraints (together with faithfulness constraints), it follows that the general phonological principles of natural language are themselves rooted in biology. These general phonological principles consist in psychological representations of the physiological system in which they are implemented. The information carried is dependent on the receiver, even when the receiver is one's self. 'Unmarked' is not a functional characterization that is multiply realizable. It isn't like jewelry or transportation. Instead, 'unmarked,' like 'cerebral cortex,' refers to a biological kind. As a result, what is actually unmarked is essential to natural language. General principles of natural languages are essentially biological and psychological and cannot be logically independent of their representation and implementation, as Platonism requires them to be. Not merely phonological units, but also phonological grammar is rooted in contingent human biopsychological constraints. It is not just that the units the principles relate are biopsychological. The very principles themselves are determined in the biopsychological implementation. It is because the implementation has the structure that it has that phonological principles have the structure that they have.

Allophony in I-language

As we have discussed, not all aspects of phonetic reality are essential to the nature of their phonological representation. As an example, Isac and Reiss note that phonological grammar "treats plain and aspirated [t], flap, and glottal stop as equivalence classes that are themselves realizations of a more abstract equivalence class called a phoneme" (Isac and Reiss, 112). In this example, we see that whether a [t] is phonetically aspirated is phonologically irrelevant. Thus, standard French and Quebecois pronunciations of the informal second-person singular pronoun 'tu,' for example, are phonologically equivalent. Both phonetically unaspirated and phonetically aspirated [t]s are represented phonologically as a /t/. When it comes to determining phonological principles, whether a [t] is aspirated is an irrelevant distinction. Classes of speech sounds produced relative to the same phoneme that are not represented as phonologically distinct or relevant to phonological principles are called "allophones." Phonetically aspirated and unaspirated [t] classes are allophones of the phonological /t/ (Isac and Reiss, 112).

As Isac and Reiss note, the phenomenon of allophony demonstrates the "construction of experience" that is intrinsic to phonological representations, in that two sounds that, in reality, are acoustically and articulatorily different are perceived as the same. This point matches well with the fundamental perceptual definition of phonological features put forward earlier. Analogously, as noted earlier, sounds that are phonetically different can also be perceived as phonologically the same. E.g. 30-35 millisecond phonation delays on [p]s in 'pat' are all phonologically perceived as identical /p/s. (9) These examples demonstrate the essential role the inherently psychological representation of human biology plays in the fundamental nature of human language (Isac and Reiss, 113).
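
The many-to-one character of allophony is easy to picture as a mapping. A minimal sketch, with ASCII stand-ins of my own for the phonetic symbols in Isac and Reiss's example:

    # Distinct phonetic realizations all map to one phoneme, /t/.
    # Stand-ins: 'th' = aspirated t, 'D' = flap, '?' = glottal stop.
    ALLOPHONES_OF_T = {"t": "/t/", "th": "/t/", "D": "/t/", "?": "/t/"}

    # Phonology "constructs experience": articulatorily and acoustically
    # different sounds are represented, and heard, as the same unit.
    assert len(set(ALLOPHONES_OF_T.values())) == 1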

The Role of the Psychological

As the above already suggests, saying that phonological facts are logically dependent on phonetic ones is not to say that they are identical. As Durand writes:

[The] componential aspect of speech production does, of course, provide support for distinctive features but it should not lead us into thinking that every parameter of speech production is phonologically relevant... it is crucial to establish a fundamental distinction between phonological and phonetic features (Durand, 15).

Every pronunciation of the word 'set' is slightly different, phonetically speaking. But they are all the same, phonologically speaking, in a sense in which a pronunciation of 'bet' is different. Since the articulatory system is continuous, there is strictly an infinite range of possible phonetic variation, allowing for arbitrary, non-linguistic individuation criteria. Since the categories on which phonology is based are discrete, phonemes cannot be strictly identified with regions of articulatory space. Rather, in saying that phonemes are sets of features, what is meant is that phonemes are sets of instructions to be executed in continuous articulatory space producing acoustic patterns categorized discretely relative to human perception. The basic linguistically significant units are thus logically dependent both on the biological as well as on the psychological. The identity of a phoneme consists in a set of instructional relations from the mind to the articulatory system that determines the production of sounds categorized relative to their perceptibility by the human auditory system. Since phonemes are scientific objects, they may be designated according to their essential structure. They are thus not merely contingently, but necessarily biopsychological.

Tonal Stability

The role of psychology in phonology may be made clearer by a consideration of tonal languages. The language Bakwiri has high and low tones on given phonemes that create linguistically significant contrasts beyond those determined by standard phonological features. In what follows, an acute accent will be used to mark a high tone and a grave accent to mark a low tone. When Bakwiri speakers are asked to transpose (10) the syllables of certain classes of disyllabic words they systematically respond by swapping the phoneme sequence of each syllable while leaving the tones in their original positions. For example, upon being asked to transpose the syllables of 'kwélì' ('falling') they consistently produce 'líkwè.' This "tonal stability" can only be explained by considering the tonal system and the phoneme system to be represented on separate phonological "tiers" (Durand, 244).
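
The tier analysis can be sketched directly: store the tonal tier separately from the segmental tier, and let the transposition operation touch only the latter. The representation below is my own illustration of Durand's point, with 'H' and 'L' standing for high and low tone:

    # A two-tier representation of Bakwiri 'kwélì' ('falling').
    word = {"syllables": ["kwe", "li"], "tones": ["H", "L"]}

    def transpose(w):
        # Swap the syllables; leave the tonal tier untouched, as
        # Bakwiri speakers do in the reported transposition task.
        return {"syllables": list(reversed(w["syllables"])),
                "tones": w["tones"]}

    # 'kwélì' becomes 'líkwè': the high tone stays on the first syllable.
    assert transpose(word) == {"syllables": ["li", "kwe"],
                               "tones": ["H", "L"]}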

In other words, a phoneme and its tone must be distinct, since phonological operations may be applied to them separately. But since a tone, at the level of articulation, cannot exist independently of a phoneme, its identity must be partly determined psychologically. It is only in representation that a tone is isolated in the way required for the transposition results attained to be possible. Again we have a scenario in which the categories of instructions for articulations must be more specific than the mere general properties of articulations themselves. The basic linguistically significant units are logically dependent on both the biological and the psychological.

Empirical inquiry into the grounding of the basic linguistically significant units leads us to the a posteriori discovery of their essential features. In every possible world in which the phonemes and tones of our world exist, those phonemes and tones are logically dependent on the same biological and psychological grounding that individuates them.

Conclusion

The view here defended--that psychological representation of biological phenomena is the foundation of human language--is often termed "mentalism" (or "psychologism," which I prefer). As Bromberger and Halle write, "if the mentalist view is correct, then one should expect systematic connections [between phonology and phonetics]: after all, articulatory types represent ways in which phonological intentions are executed, and acoustical types represent information on the basis of which these intentions can be recognized" (142).

In other words, the essential psychological nature of phonological representation does not take away from the concrete phonetic grounding of these psychological representations. Human phonology is a system of psychological representations that are representations of concrete biological realities. The phonological principles that determine phonological patterns are psychological principles that are themselves also partly determined by fundamental human biological constraints. This section has demonstrated the essential grounding of phonology in both human psychology and human biology.

2. The Dependence Relations of Formal and Phonological Grammar

One might retort to the previous chapter by claiming that linguistics should simply ignore phonology. One might try to defend the idea that, despite appearances, phonology is somehow not actually a proper part of the science of language. A Platonist could claim that only the formal aspects of language should be considered essential to linguistics, where 'formal' means 'medium-independent,' the aspects of language that are the same whether spoken or written (or signed or whatever). Since languages are abstract, anything related to articulation or perception by nature is inessential, or so the objection would go.

The response to this objection is that phonology has to be considered part of linguistics because the formal aspects of language, it turns out, cannot be properly explained independently of their phonological aspects. Morphology (11) and phonology, as optimality theory demonstrates, are essentially codependent. Aspects of syntax, in turn, are dependent on morphology. Thus, aspects of all of formal grammar require explanation partially at the level of phonological grammar, and thus ultimately at a biopsychological level.

General Harmony

In this section, I will be putting forward some technical arguments for the interrelatedness of formal and phonological grammar. These technical arguments will do the main work to rebut Platonism regarding morphology and syntax, but they will also be made within a context based on general harmony. General consideration of harmony leads one to note that it would be strange if phonological units and structural principles were biopsychological while morphology and syntax were Platonic. The observation of correlations at the syntax-phonology interface makes this evident.

Consider the sentence 'John likes blueberries'. The syntactic concept of force, or emphasis, notated 'f', allows for multiple formal representations of this sentence. For example, the sentence can be represented as '[f JOHN] likes blueberries', such as when it is used as a response to the question 'who likes blueberries?', or as 'John likes [f BLUEBERRIES]', such as when it is used to answer the question 'what does John like?', etc. In these representations, f is a syntactic constituent. In the everyday writing of the sentences there is no representation of f, but phonologically there is. The constituent of the sentence paired with syntactic force, in Standard English, will always be the same constituent paired with phonological stress, that is, the constituent emphasized vocally. When it comes to force and stress, in Standard English, syntax and phonology are 'mapped' in a way that determines consistent correlations. Not all stress is accompanied by force, (12) but all force is accompanied by stress (Truckenbrodt, 442-3).

Thus, there are law-like relations between formal and phonetic representations at the level of sentence syntax. These relations are facts about idiolects, and generalizations over idiolects, that Platonist linguistics must leave out if it ignores phonology. Specifically, there is a law-like relation between syntactic f and phonological stress generalizable over Standard English idiolects, a sentence-level linguistic phenomenon that Platonist linguistics must leave out. (13)

Such correlations do not provide a strong case against Platonism for formal grammar, but they intuitively open the door for a biopsychological account. Though such correlations do not demonstrate dependency, it would be somewhat surprising if syntax and phonology turned out to have no ontological connection to each other. As I will now show, other aspects of formal grammar are directly dependent on aspects of phonological grammar. Thus, the intuitive sense that one gets from consideration of general harmony can be fleshed out into a rigorous technical case for the biopsychological grounding of all linguistics.

The Phonology-Morphology Interface

Again, one might grant based on the previous section that there are general principles of linguistic sound structure that could not be logically independent of biology and psychology, but still claim that the general formal principles must be logically independent. In other words, one might claim that the principles of language that apply to it regardless of whether it is spoken or written (or signed or whatever) must be logically independent of representation and implementation. Once again a strong case can be made that this objection fails.

As it turns out, "morphological and phonological properties of an output form are mutually dependent" (Kager, 25). Even when abstracted from actual articulation, the general formal principles that apply to complex word formation remain logically dependent on psychological representations of markedness constraints.

Reduplication

A clear illustration of the dependency of morphology on phonological markedness constraints can be seen in the phenomenon of reduplication as pluralisation.

Reduplication can be either total or partial:

Total reduplication: Indonesian: 'woman' = 'wanita'; 'women' = 'wanitawanita' (Kager, 195).

Partial reduplication: the Australian language, Yidin: 'initiated man' = 'mulari'; 'initiated men' = 'mula-mula.ri' (Kager, 196).
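
As string operations, the two patterns look as follows. This Python sketch is purely illustrative; in particular, the fixed-size reduplicant is a crude stand-in for the prosodically defined, phonologically constrained shapes found in real languages (the Indonesian and Yidin forms are from Kager):

    def total_reduplication(stem):
        # Copy the whole stem.
        return stem + "-" + stem

    def partial_reduplication(stem, size=4):
        # Simplification: copy a fixed initial chunk; real reduplicants
        # are shaped by markedness constraints, as discussed below.
        return stem[:size] + "-" + stem

    assert total_reduplication("wanita") == "wanita-wanita"  # 'women'
    assert partial_reduplication("mulari") == "mula-mulari"  # 'initiated men'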

In the case of partial reduplication, "reduplicants tend to have unmarked phonological structures, as compared to the phonotactic (14) options generally allowed in the language" (Kager, 196). A reduplicant, as Kager writes:
   is, by nature, a phenomenon which is dependent in its identity upon
   another morpheme. Since the reduplicant is not burdened with
   lexical contrasts, its phonological form naturally drifts towards
   the unmarked... any language, given the chance, will develop
   unmarked structures in contexts where the influence of faithfulness
   is absent... [this brings about an] 'emergence of the unmarked'...
   [which is a] major argument... that languages are subject to
   universal markedness constraints (Kager, 198).

Thus even in written language (or any other medium in which the formal level of natural language may be abstracted from its primary articulatory form), the brute human biology of natural language plays a role in the general principles that apply to it. Phonological constraints directly impact morphological structure. The latter thus cannot be fully understood without the former. There is no way to abstract a correct understanding of the principles of natural languages as logically independent of their psychological representation and biological implementation. Even formal principles are rooted in the nature of the human articulatory system and its discrete representation in the form of feature instructions and markedness constraints. Since the phonologically determined structural principles of morphology can be designated as essential features of morphological representation, it follows that the nature of natural language morphology is not merely contingently, but essentially biopsychological.

Haplology

In contemporary linguistics it is well established that phonological markedness constraints have a strong effect on inflectional morphemes. There is a phenomenon resulting from this called "morphological haplology" (Stemberger 1981) or "the repeated morph constraint" (Menn and MacWhinney 1984). Haplology consists in the deletion of an affix in a context where the other rules of the grammar would determine it to be directly adjacent to a phonologically equivalent affix. An example of this can be seen in the English possessive plural dogs'. Haplology aside, the rules of the grammar would determine two -s affixes, resulting in 'dogs's', but the second -s is consistently absent (Bernhardt and Stemberger, 590).
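
A minimal sketch of the repeated morph constraint follows; the affixation function is hypothetical and radically simplified, with only the dogs' example taken from the text:

    # Haplology: suppress an affix that would sit directly adjacent to
    # a phonologically identical affix.
    def add_affix(form, affix):
        if form.endswith(affix):
            return form  # the repeated morph is deleted
        return form + affix

    plural = add_affix("dog", "s")       # 'dogs'
    possessive = add_affix(plural, "s")  # 'dogs's' is blocked
    assert possessive == "dogs"          # written dogs', spoken [dagz]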

Again we have an effect from phonology that is carried up to formal grammar. The effect of haplology is not medium-specific. It affects written English equivalently to spoken English. Just as one says [dagz] rather than [dagzz], so one writes 'dogs' rather than 'dogs's'. The Platonist cannot escape the effects of phonology in her theory of linguistics by claiming to only consider formal grammar as genuine linguistics. (15)

Affix Position

Yet further examples come from consideration of affix position. Most familiar affixes are morphemes added either to the start of words, called 'prefixes', or to the end of words, called 'suffixes'. There exist languages with affixes positioned differently, however, and their consideration provides various examples for the case at hand, since there is evidence that "when affixes occur anywhere other than the edge of a word, phonological pressures are always responsible" and that "the influence can be quite important, to the extent that phonological well-formedness can determine morpheme position" (Ussishkin, 457).

According to Generalized Alignment theory (GA), all affixes are prefixes or suffixes by nature, and deviate from word-edge position strictly due to phonological requirements. More specifically, there is a morphological alignment constraint that requires every affix to be placed at one end of a word or the other, but there are also phonological well-formedness constraints that cause deviation from this morphological constraint due to the phonological constraints' higher ranking in the grammar (Ussishkin, 460). Below are some examples to demonstrate.

Variable-direction Affixes

In Afar, the verbal system determines variable-direction affixes for person marking. The affixes marking person, in Afar, vary in location as a result of phonological constraints. The verbal affix, 't', marking second-person forms, for example, occurs at the end of stems beginning in consonants and the start of stems beginning in vowels. Examples are as follows: "[consonant-initial stems] nak-t-e--'drink milk'; haj-t-e--'put'; sug-t-e--'had'; kal-t-e 'prevent'... [vowel-initial stems] t-eh-e--'give'; t-ibbid-e--'seize'; t-okm-e --'eat'; t-usuul-e--'laugh'" (Ussishkin, 460).

The position of t is phonologically determined in that its placement depends on a phonological property of the stem-initial phoneme: whether it is a consonant or a vowel. Optimality theory offers a straightforward explanation:

A right edge-oriented alignment constraint on the person marker (capturing its suffixal nature) is dominated by ONSET, a constraint requiring syllables to have onsets. (16) Since consonant-initial stems have an onset, the alignment constraint exerts its effect on the position of the person marker. However, vowel-initial stems surface with the person marker at the left edge, resulting in a more harmonic output from the point of view of syllable structure, to which alignment is subordinated (Ussishkin, 460).

In other words, in the grammar of Afar, the phonological constraint requiring syllables to have onsets outranks the morphological constraint requiring affixes to be placed at the end of stems. Since onsets must be consonants, and since the phonological onset constraint in Afar outranks the relevant morphological constraint, in vowel-initial stems the affix t ends up stem-initial.
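
The ranking argument can be run mechanically. In the sketch below (my own, far cruder than Ussishkin's analysis; the final inflectional -e of the quoted forms is omitted), ONSET is checked before right-edge alignment:

    VOWELS = "aeiou"

    def candidates(stem, affix="t"):
        return [affix + "-" + stem, stem + "-" + affix]  # prefixed, suffixed

    def onset(cand):
        # Crude proxy for ONSET: penalize a word-initial vowel.
        return 1 if cand.replace("-", "")[0] in VOWELS else 0

    def align_right(cand, affix="t"):
        # The person marker is a suffix by nature.
        return 0 if cand.endswith("-" + affix) else 1

    def best(stem):
        # ONSET dominates alignment, as in the Afar ranking.
        return min(candidates(stem), key=lambda c: (onset(c), align_right(c)))

    assert best("nak") == "nak-t"  # consonant-initial: 't' is a suffix
    assert best("eh") == "t-eh"    # vowel-initial: 't' surfaces as a prefix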

Again, of course, this example is medium-transcendent. Though the grammatical cause is a phonological one, the morphological result exists at the formal as well as phonological level of representation.

Infixes

An infix results when phonological constraints force a morpheme to occur away from either stem edge. A somewhat comical example comes from English expletive infixation, "where the expletive prefixes to a stressed syllable... [e.g.] Ari-fuckin-zona, cali-fuckin-fornia... Minne-fuckin-sota" (Ussishkin, 461).

Here, morpheme position is determined by a specific phonological factor, namely, the location of phonological stress. The morphological component 'fuckin' consistently occurs directly prior to the stressed syllable in the word, e.g. 'zo' of 'Arizona,' 'for' of 'California,' and 'so' of 'Minnesota'. Yet again this example shows results that are independent of medium. While phonological stress is not represented in standard writing, the morphology of the word transfers over. This does not require that orthography be linguistically represented. (17) The point is that the formal linguistic reality that is present in spoken and written utterances is determined partially by the structural nature of its spoken form. Phonological grammar determines formal grammatical facts.
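
Since the expletive's landing site is fixed by stress alone, the operation is easy to state. A sketch, with syllabifications and stress positions supplied by hand (the function itself is mine, not Ussishkin's):

    def infix(syllables, stress_index, expletive="fuckin"):
        # Insert the expletive immediately before the stressed syllable.
        return "-".join(syllables[:stress_index]
                        + [expletive]
                        + syllables[stress_index:])

    assert infix(["Ari", "zo", "na"], 1) == "Ari-fuckin-zo-na"
    assert infix(["Minne", "so", "ta"], 1) == "Minne-fuckin-so-ta"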

Interfixes

An interfix occurs when phonological constraints cause a morpheme to split apart within a word. The morphology of interfixes is called "'nonconcatenative morphology', where the segmental content of an affix may be distributed within a stem" (Ussishkin, 463).

Modern Hebrew offers an example of nonconcatenative morphology. In Hebrew, constraints on syllable structure and word size "impose a set of restrictions on the optimal phonological shape of words that results in interfixational phenomena, without explicit recourse to interfixes as a special class of morpheme" (Ussishkin, 464).

Abstractly speaking, there is a passive-making affix 'ua' in Hebrew. 'Dubar' ('it was spoken') is derived from 'diber' ('he spoke') in a way determined by phonological constraints. The constraint COMPLEXONSET requires syllables not to have multiple onset consonants. The constraint ONSET, however, requires that syllables have onsets of one form or another. Thus COMPLEXONSET rules out '.dbrua.' and '.dbu.ra.' as passive forms and ONSET rules out '.du.abr.' and '.ud.bar.' as possible passive forms. The remaining result is 'dubar' (Ussishkin, 470).
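
The filtering logic can be checked mechanically. The candidate set and its syllabifications follow the text; the constraint checks below are simplified stand-ins of my own for the real definitions:

    VOWELS = set("aeiou")

    def complex_onset(syll):
        # Violated if a syllable starts with two or more consonants.
        onset_len = 0
        for ch in syll:
            if ch in VOWELS:
                break
            onset_len += 1
        return onset_len >= 2

    def lacks_onset(syll):
        return syll[0] in VOWELS

    def well_formed(sylls):
        return not any(complex_onset(s) or lacks_onset(s) for s in sylls)

    candidates = {
        "dbrua": ["dbrua"],      # ruled out by COMPLEXONSET
        "dbura": ["dbu", "ra"],  # ruled out by COMPLEXONSET
        "duabr": ["du", "abr"],  # ruled out by ONSET
        "udbar": ["ud", "bar"],  # ruled out by ONSET
        "dubar": ["du", "bar"],  # survives
    }
    assert [f for f, s in candidates.items() if well_formed(s)] == ["dubar"]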

This is yet another example of phonological causes having medium-independent consequences for formal linguistic properties. All the examples provided thus far have been morphological, but this already provides an argument for the biopsychological grounding of syntax as well, since the nodes of a syntactic tree are themselves the output of morphology.

As we will see, however, there is even stronger reason to take all of formal grammar to be grounded biopsychologically. In the next section I will address the morphology-syntax interface. By demonstrating the interdependence of morphology and syntax, given the already-established interdependence of morphology and phonology (and of phonology and biopsychology), one can ultimately defend the grounding of syntax, too, in biopsychology.

The Morphology-Syntax Interface

It has now been demonstrated that morphology is dependent on phonology. If it can also be demonstrated that syntax is dependent on morphology, then, in an indirect sense, it can be inferred that syntax is not independent either. It is the role of this section to demonstrate the dependence of syntax on morphology. The phonological determination of morphological structure must then be indirectly essential to syntactic structure as well.

Case in Australian Languages

In many languages, morphological words can play the same functional role as syntactic phrases. Morphology and syntax offer alternative means for encoding the same formal linguistic relations. In fact, in "nonconfigurational" languages "inflectional morphology takes on much of the functional load of phrase structure in more configurational languages like English, determining grammatical functions and constituency relations" (Nordlinger, 2).

Australian languages offer a clear demonstration of the contingency and variability of distinctions between morphological and syntactic representations and relations. This is due to their surprisingly extensive case system. Many Australian languages have free constituent order in simple clauses and the only marker for grammatical relations is provided by case morphology. The sentence 'the dog saw the boy' in the Non-Pama-Nyungan language, Wambaya, for example, may be grammatically represented with any constituent order provided that the auxiliary gin-a occurs in second position. Thus "Ngajbi gin-a alaji janyi-ni... alaji gin-a ngajbi janyi-ni... alaji gin-a janyi-ni ngajbi... ngajbi-ni gin-a janyi-ni alaji. janyi-ni gin-a alaji ngajbi. [and] janyi-ni gin-a ngajbi alaji" are all semantically equivalent (Nordlinger, 2-3).

In Wambaya and many other Australian languages, main clauses have no syntactic indicators of grammatical function. Which words play the roles of "Subject, Object and other grammatical functions is specified solely from the morphology; usually from the case morphology," which demonstrates that there is no essential rule regarding which parts of formal grammar belong to syntax and which to morphology (Nordlinger, 3).
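The point that grammatical function is read off the morphology, irrespective of linear order, can be illustrated with a toy parser. The suffix gloss below is an assumption for illustration (treating '-ni' as an ergative, subject-marking suffix and the unmarked noun as the object); the analysis only loosely follows Nordlinger's Wambaya example.

    def parse(clause):
        # Assign grammatical functions from case morphology, ignoring position.
        roles = {}
        for word in clause.lower().split():
            if word == "gin-a":
                roles["aux"] = word        # second-position auxiliary
            elif word.endswith("-ni"):
                roles["subject"] = word    # ergative case marks the subject
            elif word == "ngajbi":
                roles["verb"] = word       # 'see'
            else:
                roles["object"] = word     # unmarked noun as object
        return roles

    # Any constituent order yields the same analysis:
    for clause in ["Ngajbi gin-a alaji janyi-ni",
                   "alaji gin-a janyi-ni ngajbi",
                   "janyi-ni gin-a ngajbi alaji"]:
        print(parse(clause))
    # Each run prints the same roles: subject 'janyi-ni' (dog),
    # object 'alaji' (boy), verb 'ngajbi' (see), aux 'gin-a'.

Nothing in the parser consults word order except the auxiliary's position; the grammatical relations fall out of the morphology alone.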

Even the seemingly quintessentially syntactic process of iterative embedding can be carried out morphologically in Wambaya and similar languages. This is done via case stacking, where the iteration of case suffixes on a single word marks successively embedded formal linguistic relations (see the sketch at the end of this subsection). As Nordlinger writes:

It is clear that case marking in these languages has a fundamental role in determining the syntactic relations. In fact, these relations need not be expressed in phrase structure at all, but are constructed directly from the case morphology; in these languages the morphology builds the syntax, expressing the same types of relationships encoded in the phrase structure in languages like English (Nordlinger, 4-5).

In other words, in translating Wambaya into English, one must use syntax to represent the formal linguistic relations that Wambaya represents morphologically. In saying that "morphology builds syntax" in Wambaya and similar languages one should be careful to disambiguate two senses of "syntax." We may say that a formal linguistic relation is syntactic when it is represented by sentence structure in the metalanguage of our linguistic analysis, here English, or we may say that a formal linguistic relation is syntactic when it is represented by sentence structure in the object language that the linguistic analysis is about, here Wambaya.

Note that the above distinction makes a difference in the analysis of the morphology-syntax relation in Wambaya. If we take a metalanguage definition of syntax, then it is true that morphology builds syntax in a straightforward way: the morphological principles of Wambaya bring about syntactic relations. This shows the dependency of syntax on morphology and thus, through the chain of syllogisms of this thesis, ultimately on biopsychological phenomena.

If one takes an object language definition of syntax, one can say something even stronger. In this case one can say that there is no categorical distinction between morphology and syntax at all. The very same grammatical relations may be represented morphologically in one language and syntactically in another. The difference relates only to our theoretical system for analyzing these languages. This is a more radical interpretation, though nothing, it seems, but an English-style language bias could really count against it. Regardless, the weaker metalanguage interpretation is sufficient for the argument of this paper.
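Before moving on, the case-stacking mechanism mentioned above can likewise be made concrete. In the Python sketch below, the suffix inventory and the example word are invented for illustration; only the mechanism, in which each outer case suffix marks one further level of embedding, follows Nordlinger's description.

    CASE_ROLES = {
        "erg": "subject-of",     # hypothetical ergative suffix
        "loc": "located-at",     # hypothetical locative suffix
        "gen": "possessor-of",   # hypothetical genitive suffix
    }

    def parse_stacked(word):
        # Turn stem-case1-case2-... into a nested relational structure.
        stem, *suffixes = word.split("-")
        structure = stem
        # The innermost suffix relates the stem to its clause; each further
        # suffix embeds the whole structure one level deeper.
        for suf in suffixes:
            structure = (CASE_ROLES[suf], structure)
        return structure

    print(parse_stacked("child-gen-loc-erg"))
    # ('subject-of', ('located-at', ('possessor-of', 'child')))

Three stacked suffixes yield three levels of embedding: the work that phrase structure does in a language like English is here done entirely inside a single word.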

Back to General Harmony

In addition to these technical arguments for the dependency of formal grammar on phonological grammar, a further theoretical case can be made by appeal to the successful application of the same theoretical framework to both phonological and formal grammar. If the same principles of optimality theory that make sense of phonological data also make sense of syntactic data, this suggests that phonological and formal grammar are essentially interdependent in a mutual ontological network. That is, it gives reason to think that, if phonological grammar is biopsychological, formal grammar is likely biopsychological as well. It is the role of the pages to come to demonstrate the useful and correct applicability of optimality theory to syntactic data.

OT Syntax

Though thus far in this paper I have addressed optimality theory as it relates to phonological phenomena, the framework is actually applicable as a theory of natural language grammar in general. In fleshing this out it will be worth highlighting and reiterating some basic points. In brief, optimality theory can be defined, in relation to linguistics, as positing a hierarchical ranking of constraints that a grammar works to violate minimally in producing linguistic outputs. This is a very general idea and thus is not inherently restricted to phonology (Kager, 341).

One thing it is crucial to highlight about OT is the sense in which it is empirically divergent from predecessor frameworks based on the notion of parametric settings. In parametric frameworks a given rule could be "switched off" in a given language such that it did not apply at all in that language. In OT, contrarily, constraints are never switched off, but rather are merely "dominated" by higher ranked constraints within the grammar of a given language. The notions of off-switching and dominating are conceptually similar but empirically divergent in important ways. As Kager writes:

It is predicted [by OT] that the effects of some constraints may show up even in a language in which it is dominated. Given the chance, even a dominated constraint will make its presence felt, and 'break into activity'. The canonical example of such situations... are cases of 'the emergence of the unmarked'. In contrast, a parameter, once it has been 'switched off', can never thereafter leave its mark on the grammatical patterns of the language (Kager, 342).

In other words, if constraint A is dominated by constraint B, then A is irrelevant in any linguistic context where B applies; but in any context where B and all other constraints outranking A happen not to be relevant, A applies. Contrarily, if, assuming a parametric framework, rule A' were switched off in the grammar of a given language, then A' would simply never apply in that language. Kager offers the following illustrative example:

Consider a language which selects the value 'negative' for the parameter 'onsets are obligatory'... On a parametric view it is predicted that such a language lacks processes to bring about syllables with onsets, rather than onset-less syllables. (18) Yet we know that this prediction is simply false. In OT the (correct) prediction is made that the relevant constraint ONSET may continue to be active in a phonology even when the language allows for onset-less syllables. (Onset-less syllables show that ONSET is dominated by faithfulness constraints, obscuring its effects in most contexts.) There is a subtle yet robust difference between parametric theory and OT in this respect (Kager, 342).

This is to say that in a language whose grammar has the syllables-must-have-onsets constraint dominated by constraints a, b, ..., q, the processes of the language which would force an onset will not apply where any of a, b, ..., q apply, but will apply where none of a, b, ..., q apply. The grounds for testing OT against parametric frameworks are thus empirically straightforward. And as Kager shows, there is also strong evidence for the limited applicability of dominated constraints in the realm of syntax, analogous to the realm of phonology (Kager, 342).
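The empirical contrast can be put in programmatic terms. The toy input, candidates, and the simplified FAITH constraint below are my own constructions, not Kager's data; the sketch shows only the logic by which a dominated ONSET still decides an outcome when the dominating constraint is indifferent.

    VOWELS = set("aeiou")

    def faith(cand, inp):
        # Toy faithfulness: a violation if the candidate's segments differ
        # from the input's (dots mark syllable boundaries).
        return 0 if cand.replace(".", "") == inp else 1

    def onset(cand):
        # ONSET: one violation per vowel-initial syllable.
        return sum(1 for syl in cand.split(".") if syl[0] in VOWELS)

    inp = "apa"
    candidates = ["a.pa", "ap.a", "pa"]   # 'pa' unfaithfully deletes a segment

    # OT with FAITH >> ONSET: the two faithful parses tie on FAITH, so the
    # dominated ONSET constraint breaks the tie.
    winner = min(candidates, key=lambda c: (faith(c, inp), onset(c)))
    print(winner)   # 'a.pa': onset-less syllables survive, yet ONSET is active

    # On the parametric view, with 'onsets are obligatory' switched off,
    # nothing would ever prefer 'a.pa' over 'ap.a': the rule leaves no trace.

This is 'the emergence of the unmarked' in miniature: the language tolerates onset-less syllables because FAITH dominates ONSET, yet ONSET still adjudicates between equally faithful syllabifications.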

Kager gives the example of 'do-support,' which will be explained below. He cites Chomsky 1957 and 1991 as showing that "do-support is possible only when it is necessary" and shows that optimality theory can make straightforward sense of this general principle (Kager, 358).

'Do' is obligatory as a single auxiliary verb only in simple interrogative sentences and is impermissible in positive declarative sentences or in sentences with additional auxiliary verbs. In other words, 'do', in past form, is obligatory in 'what did Mary say?'; one cannot instead ask 'what Mary said?' with the same meaning. Alternatively, however, in similar linguistic contexts with an additional auxiliary verb, such as 'what will Mary say?', do forms are impermissible: one cannot say 'what does Mary will say?' or 'what will Mary do say?' (Kager, 358-9).

Similarly, in a positive declarative sentence, one must say 'Mary said much,' not 'Mary did say much.' Auxiliary do forms are also ruled out in declarative sentences already containing an auxiliary verb: one must say 'Mary will say much' rather than 'Mary does will say much' or 'Mary will do say much.' Indeed, auxiliary do forms cannot even co-occur with themselves: one must say 'what did Mary say?', not 'what did Mary do say?' As Kager summarizes, "no more occurrences of do-support take place than are strictly necessary. The auxiliary do is possible only when it is necessary" (Kager, 359-60).

An optimality theoretic explanation of this phenomenon is made by reference to the constraints OB-HD (Obligatory Heads) and FULL-INT (Full Interpretation). OB-HD requires each syntactic projection to generate a syntactic head, that is, a constituent that determines the syntactic type of the generated phrase. In a verbal projection, for example, a verb must be generated as head (Kager, 349).

FULL-INT, in turn, requires that lexical conceptual structure be parsed, meaning it functions to eliminate any semantically empty lexical items such as the forms of do in English that are at issue. The verb 'do', used as an auxiliary, is semantically empty. Thus sentences such as 'what did Mary say?' violate FULL-INT (Kager, 352).

The reason this occurs, however, is that in the grammar of Standard English OB-HD ranks more highly than FULL-INT. 'Did' is generated in 'what did Mary say?' so that the projection containing 'what did' can be headed by a verb, as OB-HD demands. When auxiliary verbs with semantic content such as 'can', 'will', or 'may' are available to satisfy the OB-HD constraints of verbal projections, 'do' will be ruled out of the projection by the FULL-INT constraint. But where no verbal auxiliary with semantic content is applicable, the grammar will ignore FULL-INT in order to use 'do' to satisfy OB-HD (Kager, 363). (19, 20)
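Under the same lexicographic modeling of ranking, the do-support pattern falls out directly. The candidate encodings below (a count of headless projections and a count of semantically empty occurrences of 'do') are hand-coded assumptions for illustration, following the two-constraint simplification in the text rather than Kager's full tableaux (which add NO-LEX-MVT; see note 19).

    def best(candidates):
        # OB-HD outranks FULL-INT in Standard English.
        return min(candidates,
                   key=lambda c: (c["headless"], c["empty_do"]))["form"]

    # No contentful auxiliary is available: 'do' is inserted despite FULL-INT.
    print(best([
        {"form": "what Mary said?",    "headless": 1, "empty_do": 0},
        {"form": "what did Mary say?", "headless": 0, "empty_do": 1},
    ]))   # 'what did Mary say?'

    # A contentful auxiliary ('will') heads the projection, so 'do' is excluded.
    print(best([
        {"form": "what will Mary say?",    "headless": 0, "empty_do": 0},
        {"form": "what will Mary do say?", "headless": 0, "empty_do": 1},
    ]))   # 'what will Mary say?'

'Do' wins only when every do-less candidate incurs the higher-ranked OB-HD violation; wherever a contentful auxiliary satisfies OB-HD, FULL-INT rules 'do' out, which is exactly the generalization that do-support is possible only when it is necessary.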

This shows that the same theoretical framework that explains phonological phenomena can also be used to explain syntactic phenomena. Considered out of context, it is of course logically possible that this is sheer coincidence, but together with the technical arguments I have provided regarding the dependency of aspects of formal grammar on aspects of phonological grammar, I think it fits a further component into a strong case against Platonism about formal grammar. I have here strengthened the case from general harmony to show not just that linguistic theory in general should apply to both phonological and formal grammar, but that the same theoretical framework should apply to both. This provides further reason to think phonological and formal grammar are of equivalent ontological status.

Received 24 December 2015 * Received in revised form 31 January 2016 * Accepted 5 February 2016 * Available online 20 April 2016

NOTES

(1.) Though either theory would align with the philosophy of this paper.

(2.) Presumably a modular sort.

(3.) Front being the tongue position within the axis, not the axis itself.

(4.) Lip reading presents an interesting potential connection.

(5.) Auditorism clearly fits well with the acoustic invariance account of speech production previously discussed.

(6.) Both articulatory and acoustic.

(7.) In personal correspondence Chris Viger offered me the analogy of thinking of the relation of phonological representations to phonetic implementations as we think of the relation between our color representations and light. Perhaps this serves as a helpful heuristic.

(8.) 'C' for 'consonant', 'V' for 'vowel': so 'a' is V, 'ba' is CV, and 'bad' is CVC, etc.

(9.) See end of section on Phonetics and Speech Science.

(10.) In English this is changing the word 'baby,' /bebi/ to /bibe/, 'beebay'.

(11.) Morphology is the formal grammar for constructing words from minimal meaning units called 'morphemes,' whereas syntax is the formal grammar for constructing phrases and sentences from words.

(12.) One can phonologically stress a syllable of a word, for example, without syntactically emphasizing it. E.g. in the word 'emphasis,' the syllable 'em' is phonologically stressed, but typically in no way logically emphasized.

(13.) This may also be a case where phonology ultimately connects up with semantics.

(14.) Phonotactics is, essentially, the syntax that applies at a phonological level of representation.

(15.) Importantly, the rule relates to /z/ represented as a plural morpheme, not merely phonologically represented. This explains why 'Chris's' remains an acceptable output.

(16.) Onsets are consonantal phonemes starting a syllable. In 'dog' /d/ is the onset.

(17.) It may or may not be.

(18.) That is, to bring about onsets for the sake of having onsets.

(19.) Actually there is some slight complication on this matter, but further use of the OT framework can explain it. As Kager notes, "'what said Mary?'... satisfies OB-HD... [and] avoids do-support. Then why should it be ruled out? The answer resides in the undominated constraint NO-LEX-MVT that blocks head-movement of a lexical verb out of VP. When movement of the lexical verb to the head position of CP is blocked, while this head position must be filled by some verb, then there is nothing better than to insert a form of 'do'. English thus prefers violations of FULL-INT to violations of NO-LEX-MVT" (Kager, 367-8).

(20.) Another apparent counterexample comes in cases of stressed 'do', such as 'Rob does like beer.' In such cases, however, 'do' does carry semantic weight. 'Rob does like beer' differs from 'Rob likes beer' in a way semantically analogous to 'Rob likes beer, actually,' the 'does' or 'actually' implying that this fact about Rob was not previously known, acknowledged or accepted.

REFERENCES

Bayley, Robert (2013), "The Quantitative Paradigm," in J. K. Chambers and Natalie Schilling (eds.), The Handbook of Language Variation and Change. 2nd edn. Oxford: Wiley-Blackwell, 83-107.

Bernhardt, Barbara, and Joseph P. Stemberger (2007), "Phonological Impairment in Children and Adults," in Paul de Lacy (ed.), The Cambridge Handbook of Phonology. New York: Cambridge University Press, 575-94.

Bromberger, Sylvain, and Morris Halle (1992), On What We Know We Don't Know: Explanation, Theory, Linguistics, and How Questions Shape Them. Chicago, IL: University of Chicago Press.

Chomsky, Noam (2000), New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press.

Chomsky, Noam (2000), "Language from an Internalist Perspective," in New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press, 134-63.

Chomsky, Noam (1995), The Minimalist Program. Cambridge, MA: MIT Press.

Chomsky, Noam (1982), Some Concepts and Consequences of the Theory of Government and Binding. Cambridge, MA: MIT Press.

Chomsky, Noam, and Morris Halle (1968), The Sound Pattern of English. New York: Harper & Row.

De Lacy, Paul (2007), The Cambridge Handbook of Phonology. Cambridge: Cambridge University Press.

Dooling, Robert J., and Stewart H. Hulse (1989), The Comparative Psychology of Audition: Perceiving Complex Sounds. Hillsdale, NJ: Lawrence Erlbaum Associates.

Dretske, Fred I. (1981), Knowledge and the Flow of Information. Oxford: Blackwell.

Durand, Jacques (1990), Generative and Non-linear Phonology. London: Longman.

Fodor, Jerry A. (1990), A Theory of Content and Other Essays. Cambridge, MA: MIT Press.

Fodor, Jerry A. (1975), The Language of Thought. New York: Crowell.

Fodor, Jerry A. (2008), LOT 2: The Language of Thought Revisited. Oxford: Clarendon.

Fodor, Jerry A., and Ernest LePore (1992), Holism: A Shopper's Guide. Oxford: Blackwell.

Frege, Gottlob, and Michael Beaney (1997), The Frege Reader. Oxford: Blackwell.

Frege, Gottlob (1956), "The Thought: A Logical Inquiry," Mind 65(259): 289-311.

Frege, Gottlob (1879/1967), "Begriffsschrift, a Formula Language, Modeled upon That of Arithmetic, for Pure Thought," in Jean Van Heijenoort (ed.), From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Cambridge, MA: Harvard University Press, 1-82.

Heyting, Arend (1983), "The Intuitionist Foundations of Mathematics," in Paul Benacerraf and Hilary Putnam (eds.), Philosophy of Mathematics: Selected Readings. Cambridge: Cambridge University Press, 91-121.

Hoffman, Michol F., and James A. Walker (2010), "Ethnolects and the City: Ethnic Orientation and Linguistic Variation in Toronto English," Language Variation and Change 22(1): 37-67.

Isac, Daniela, and Charles Reiss (2008), I-language: An Introduction to Linguistics as Cognitive Science. Oxford: Oxford University Press.

Jackendoff, Ray (2002), Foundations of Language: Brain, Meaning, Grammar, and Evolution. Oxford: Oxford University Press.

Kager, René (1999), Optimality Theory. Cambridge: Cambridge University Press.

Katz, Jerrold J. (1985), The Philosophy of Linguistics. Oxford: Oxford University Press.

Katz, Jerrold J. (1985), "An Outline of Platonist Grammar," The Philosophy of Linguistics. Oxford: Oxford University Press, 172-203.

Katz, Jerrold J. (1981), Language and Other Abstract Objects. Totowa, NJ: Rowman and Littlefield.

Kingston, John (2007), "The Phonetics-Phonology Interface," in Paul de Lacy (ed.), The Cambridge Handbook of Phonology. Cambridge: Cambridge University Press, 401-34.

Kripke, Saul A. (2011), "Vacuous Names and Fictional Entities," Philosophical Troubles. Collected Papers. Vol. 1. Oxford: Oxford University Press, 52-74.

Kripke, Saul A. (1980), Naming and Necessity. Cambridge, MA: Harvard University Press.

Kripke, Saul A. (1973), Reference and Existence: The John Locke Lectures for 1973. Oxford: Oxford University Press.

Labov, William (1982), "Objectivity and Commitment in Linguistic Science: The Case of the Black English Trial in Ann Arbor," Language in Society 11(2): 165-201.

Labov, William (1969), "The Logic of Nonstandard English," paper presented at the Georgetown University 20th Round Table, Washington, D.C., March.

Lieberman, Philip, and Sheila Blumstein (1988), Speech Physiology, Speech Perception, and Acoustic Phonetics. Cambridge: Cambridge University Press.

Lewis, David (1975), "Languages and Language," in K. Gunderson (ed.), Minnesota Studies in the Philosophy of Science. Vol. 7. Minneapolis, MN: University of Minnesota Press, 3-35.

Lewis, David (1972), "General Semantics," Synthese 22: 18-67.

Martinich, Aloysius (2010), The Philosophy of Language. New York: Oxford University Press.

Miller, Alexander (2003), An Introduction to Contemporary Metaethics. Cambridge: Polity.

Nordlinger, Rachel (1997), "Morphology Building Syntax: Constructive Case in Australian Languages," Proceedings of the LFG97 Conference, University of California, San Diego. Stanford, CA: CSLI Publications.

Pinker, Steven (2000), The Language Instinct: How the Mind Creates Language. New York: Perennial Classics.

Priest, Graham (2001), An Introduction to Non-classical Logic. Cambridge: Cambridge University Press.

Quine, W. V. (1951), "Main Trends in Recent Philosophy: Two Dogmas of Empiricism," The Philosophical Review 60(1): 20-43.

Russell, Bertrand (1957), "Mr. Strawson on Referring," Mind 66(263): 385-89.

Russell, Bertrand (1948), "Analogy," in Human Knowledge: Its Scope and Limits. New York: Simon and Schuster, 482-486.

Russell, Bertrand (1959), My Philosophical Development. New York: Simon and Schuster.

Sandler, Wendy (2003), "Sign Language: Phonology," in W. Frawley (ed.), International Encyclopedia of Linguistics. Vol. 4. 2nd edn. Oxford: Oxford University Press, 57-60.

Sellars, Wilfrid (1963), Empiricism and the Philosophy of Mind. London: Routledge & Kegan Paul, 1-40.

Soames, Scott (1991), "The Necessity Argument," Linguistics and Philosophy 14(5): 575-80.

Soames, Scott (1984), "Linguistics and Psychology," Linguistics and Philosophy 7(2): 155-79.

Soames, Scott (1985), "Semantics and Psychology," in J. J. Katz (ed.), The Philosophy of Linguistics. Oxford: Oxford University Press, 204-226.

Stich, Stephen (1985), "Grammar, Psychology and Indeterminacy," in J. J. Katz (ed.), The Philosophy of Linguistics. Oxford: Oxford University Press, 126-145.

Truckenbrodt, Hubert (2007), "The Syntax-Phonology Interface," in Paul de Lacy (ed.), The Cambridge Handbook of Phonology. Cambridge: Cambridge University Press, 435-56.

Ussishkin, Adam (2007), "Morpheme Position," in Paul de Lacy (ed.), The Cambridge Handbook of Phonology. Cambridge: Cambridge University Press, 457-472.

Wallace, David Foster (2005), "Authority and American Usage," in Consider the Lobster and Other Essays. New York: Little, Brown, 66-127.

JONATHAN J. LIFE

jonathanjameslife@hotmail.com

Ph.D., The University of Western Ontario
