
The word in sign language: empirical evidence and theoretical controversies*.

Abstract

This article is concerned with the "word" in sign language, the "grammatical" and especially the "prosodic word". Both notions of the "word" are central in sign language linguistics and psycholinguistics. Converging evidence for the size and complexity of the prosodic word is reviewed, stemming from morphological processes such as compounding, derivation, and classification, as well as from phonological processes such as coalescence, epenthesis, and deletion. Additional evidence from slips of the hand and their repairs is presented, showing that (i) both grammatical and prosodic words are involved in slips and that (ii) slip-repair sequences may keep within the limit of the prosodic word. The distinctive morphological typology and the canonical word shape pattern of sign languages are explained by modality differences which act on the Phonetic Form (PF) interface. Sign languages are processed more on the vertical axis, i.e., simultaneously, whereas spoken languages are processed more on the horizontal axis, i.e., sequentially. As a corollary, the information packaging in the two language modalities differs while processing is basically the same. Controversial theoretical topics around the notion of the "word" in sign language, such as iconicity, and notoriously recalcitrant constructions, such as classifier predicates, are discussed.

1. Introduction

Over the past forty years--beginning with the seminal work of Stokoe (1960)--research on sign languages has revealed that they are natural languages on a par with spoken languages, having the same universal linguistic properties and being composed of the same modules, i.e., syntax, morphology, and phonology, as well as a lexicon. Therefore, they can (and in fact must) be analyzed with the same theoretical tools as spoken languages.

Since then, sign language research has attracted growing interest from linguists and psychologists, and an impressive body of research has accumulated on the grammar, processing, and acquisition of the various sign languages of the world, along with research on the relation between sign language and other cognitive domains, e.g., spatial processing and memory (Emmorey 2002).

The standard rationale for sign language research, found in many publications, holds that it provides a unique opportunity to study the human mind and human language because sign languages are processed in a different modality. In fact, the modality difference is highly suited to putting claims about the universality of language (as in generative grammar) to the test (van der Hulst and Mills 1996; Crain and Lillo-Martin 1999; Sandler and Lillo-Martin 2006). Before the rise of sign language research, linguistic and psycholinguistic theories had abstracted away from the modality issue. This neglect has largely been overcome by now. Today, linguists and psycholinguists increasingly call on sign language as an arbiter for or against various hypotheses on the nature of the human language faculty and the human mind. If identity of patterning is found in signed and spoken languages, this has the twofold implication that (i) sign languages are shown to have the status of natural languages and (ii) the universalist claim is confirmed.

Sign language research has gone through phases of assimilation to and dissimilation from spoken languages, as research emancipated itself over the decades (Woll 2003). The modality issue tends to level intramodal, i.e., crosslinguistic, differences between sign languages and to highlight intermodal differences between signed and spoken languages. Obviously, both kinds of differences exist--inter- and intramodal ones (Meier et al. 2002; Sandler and Lillo-Martin 2006; Hohenberger 2007). With the growing range of analyzed sign languages, these crosslinguistic differences have become better understood (van der Hulst and Mills 1996; Baker et al. 2003; Perniss et al. 2007). For processing, however, the intermodal differences between signed and spoken languages are more relevant. This is because all sign languages are subject to the same constraints imposed on them by the visual-gestural mode of processing, and all spoken languages to those of the aural-oral mode, respectively. In this paper, I treat sign languages as a whole group and contrast them with different typological groups of spoken languages. This procedure is legitimate for typological and modality reasons (Hohenberger 2007). Morphological typology captures differences between languages in general (signed and spoken). In the case of sign languages, their similar typological behavior can be traced back to a modality difference in processing. Evidence, however, will come from various sign languages, among them American Sign Language (ASL), Israeli Sign Language (ISL), British Sign Language (BSL), and German Sign Language (Deutsche Gebärdensprache, DGS).

2. Contrasting spoken and sign language processing

In Table 1, I give a summary of the relevant processing differences between signed and spoken languages, as we have characterized them for our analysis of language production differences between DGS and spoken German (Hohenberger et al. 2002; Leuninger et al. 2004; Leuninger et al. 2005). (1) The first two differences--the processing modality and the articulators--pertain to the Phonetic Form (PF) level. They are objectively assessable and can be characterized by physical and physiological/anatomical parameters. The latter three are corollaries of the former two, in a nontrivial way: they do not necessarily have to be the way they are; however, they fall out naturally, as will be shown.

The most obvious way in which signed and spoken languages differ pertains to modality, i.e., the different information channels and articulators they use to process language. Sign languages are processed in the visual-gestural modality; spoken languages in the aural-oral modality.

The articulators in spoken language, the speech organs, jointly produce single speech units, the phonemes, from which morphemes and words are built. In sign languages, there are multiple articulators, manual and nonmanual ones. The manual articulators, the two hands, come as a pair, unlike the speech organs; this pairedness offers additional expressive possibilities in phonology and morphology. The range of articulators is even broader: facial and bodily expressions add morphosyntactic information. Signing, more obviously than speaking, appears like the orchestration of various instruments which play their tunes and from which, through synergy, a silent language symphony emerges. With their multiple tiers of representation, glosses of sign language actually look like scores (Hohenberger and Happ 2001).

Not by logical necessity but by processing economy, signed and spoken languages fall into two broad classes of processing type. Sign languages are characterized by vertical processing; spoken languages by horizontal processing (Brentari 1998, 2002; Leuninger et al. 2004; Keller et al. 2003). Due to the high spatial resolution of the visual system and the co-temporal articulation by the multiple articulators in sign language, linguistic information can be processed simultaneously in space and time. Due to the high temporal resolution of the auditory system and the rapid temporal articulation by the speech organs in spoken language, linguistic information is preferably processed sequentially in time. Of course, in both systems, vertical and horizontal processing takes place (with prosody being a long-neglected vertical aspect of linguistic information processing in spoken language). However, there is a clear preference for one or the other in both modalities. The strengths and weaknesses of processing in the two modalities are complementarily distributed. High spatial resolution in sign language goes along with poor temporal resolution and slow production time (on the word level) due to the gross-motor articulators (hands, arms, body). High temporal resolution in spoken language goes along with poor spatial resolution and fast production time. In total, both modalities bring an equally well-adapted processing system to the task of communication: conveying linguistic information in real time (Slobin 1977; Gee and Goodhart 1988; Hohenberger et al. 2002). Note that whereas on the word level, speaking is twice as fast as signing, on the propositional (sentence) level, signing is as fast as speaking (Klima and Bellugi 1979). This is because morphosyntactic information can be expressed simultaneously in sign language, due to vertical processing. Thus, there is an adaptive trade-off between production time and processing dimension.
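This trade-off can be made concrete with a toy calculation. The following Python sketch is purely illustrative: the rates and morpheme counts are invented for exposition and are not the measurements reported by Klima and Bellugi (1979).

    # Toy model of the rate trade-off: slow, morphologically rich signed
    # words vs. fast, morphologically lean spoken words. All numbers are
    # invented for illustration.

    def propositional_rate(units_per_second, morphemes_per_unit,
                           morphemes_per_proposition):
        """Propositions conveyed per second."""
        return units_per_second * morphemes_per_unit / morphemes_per_proposition

    # Spoken language: many small chunks, produced fast.
    spoken = propositional_rate(4.0, 1.5, 6.0)
    # Sign language: fewer, bigger chunks, produced more slowly but layered
    # vertically (several morphemes per monosyllabic word).
    signed = propositional_rate(2.0, 3.0, 6.0)

    print(spoken, signed)  # 1.0 1.0: parity at the proposition level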

Typologically, the different processing types correspond to different morphological classes. Vertical processing corresponds to a fusional/simultaneous type of language; horizontal processing to a (predominantly) concatenative type of language. Of course, spoken languages are not all of the concatenative type. Spoken German with its abundant use of Umlaut and Ablaut is a case in point (cf. Hohenberger et al. 2002, Note 1). Yet, overall, the degree of concatenativity in the sense of seriality or linearity is higher in spoken languages as compared to sign languages (Aronoff et al. 2005). Theoretically it does not even make sense to limit "concatenativity" to the horizontal dimension. As Akinlabi (1999) has convincingly argued, so-called "floating" morphemes, i.e., featural affixes such as tones or nasalization, palatalization, etc., are no less concatenative than the classical concatenative morphemes. Only the dimension of concatenation, or alignment, differs, in the sense of a parameter (Leuninger et al. 2005). In the case of sign languages, the dimension of concatenation/alignment is vertical; in spoken languages it is more often horizontal and linear. From a universalist perspective, what is common to all languages is the compositional nature of language. How compositionality is brought about is secondary. (2) In Section 8.2, I will also discuss alternative, gestural accounts of sign language.

In her typology of canonical "word shape patterns", Brentari (2002) classifies sign languages as a specific typological class which has polymorphemic but monosyllabic words, whereas spoken languages can fall into one of the three remaining classes. This classification, which is based upon the number of syllables and morphemes in a word, is given in Table 2.

In this table, (intramodal) typological differences between spoken languages are retained. Note that a classification of signed and spoken languages merely on (morphological) typological grounds (such as concatenative, agglutinating, isolating, polysynthetic, or incorporating morphology) is not sufficient, since nonconcatenative spoken languages differ from nonconcatenative sign languages in a crucial way. Brentari comments that

spoken languages tend to create polymorphemic words by adding sequential material in the form of segments or syllables. Even in Semitic languages, which utilize nonconcatenative morphology, lexical roots and grammatical vocalisms alternate with one another in time; they are not layered onto the same segments used for the root as they are in sign languages. This difference is a remarkable one; in this regard, sign languages constitute a typological class unto themselves. No spoken language has been found that is both as polysynthetic as sign languages and yet makes the morphological distinctions primarily in monosyllabic forms. (Brentari 2002: 57)

As Table 2 shows, spoken languages seem to be freer to vary with respect to canonical word shape than sign languages, which occupy a single typological subspace as a whole group. There seems to be a single stable solution for all sign languages to the problem of communicating effectively in real time. This hints at strong modality-specific constraints which all sign languages have to cope with and which discourage any other solution. For spoken languages, there seem to be more degrees of freedom, allowing them to choose more freely among the various alternatives, i.e., the various morphological types. Languages have to adapt to their respective processing constraints and do so in dynamical ways by exploiting the strengths and respecting the weaknesses of their systems. The typological patterns in Table 2 can be understood as emergent solutions to the various constraints operative in different language processing systems. Modality is one of the major constraints, in this respect.

As a result, quite different production characteristics ensue for the two modalities. If one looks at the specific information packaged onto units of processing, in sign languages there is a high information load on few big chunks, whereas in spoken languages there is a (relatively) low information load on many small chunks. In sign languages, big chunks (slowly produced monosyllabic but polymorphemic words) carry relatively more information; in spoken languages, many small chunks (the morphemes and syllables in a word) carry less information.

As an example, consider the polymorphemic and monosyllabic sign in DGS in (1) (cf. Leuninger et al. 2004: 236-237), meaning "Animate beings approaching each other slowly, reluctantly, hostile":

(1) 6. Adverb          (body lean)             exclusion: 'hostile'
    5. Adverb          (face)                  'reluctantly'
    4. Aktionsart      (movement)              'slowly'
    3. Derivation      (2-hand configuration)  AUFEINANDER-ZU-GEH-CL:+ANIMATE 'to approach each other'
    2. Classification  (handshape)             GEH-CL:+ANIMATE 'to go'
    1. Root                                    GEH- (abstract) 'to go'


(1) is a morphologically complex classifier predicate (Emmorey 2003 and articles cited therein; see Section 8.2). Its lexical root (GEH- 'to go') is at first completely abstract, still underspecified for handshape (→ 1. Root in [1]). The handshape must, however, be specified for the sign to be phonologically well formed. It is specified as the "G-handshape" (extended index finger) by introducing a classifying morpheme for a [+animate] subject. Crucially, morphological information is expressed by phonological means, here, handshape specification (→ 2. Classification in [1]). This stem is now phonologically well formed. The classified stem then undergoes derivation and acquires the meaning 'to approach each other'. This reciprocal morpheme is expressed by the introduction of the second hand and the overall hand configuration (→ 3. Derivation in [1]). The verbal aspect 'slowly' is expressed by a movement alternation (slowing down), again by a phonological means (→ 4. Aktionsart in [1]). The adverb 'reluctantly' is conveyed by the facial expression, i.e., on a nonmanual articulator (→ 5. Adverb in [1]). Eventually, the adverb 'hostile' is expressed by the body lean (backwards), again a nonmanual articulator (→ 6. Adverb in [1]). Actually, the semantics of 'hostile' is derived from a more abstract feature "exclusion" which is conveyed by the body lean; it is the product of compositional semantic calculation rather than a pre-given meaning. This example impressively shows the wealth of morphological information that can be hosted within a single sign (see also Brentari 1998: 21, for an analogous example in ASL; and Sandler and Lillo-Martin 2006, for ISL). The signer orchestrates various morphological information units on different manual and nonmanual articulators simultaneously. In the spoken language translation, each morpheme is expressed by a separate word. This raises the question whether classifier predicates are words or sentences or some other unit. Obviously, they carry propositional content. It also shows that there is a non-isomorphism between morphosyntactic and prosodic units: while the classifier predicate in (1) hosts six morphemes, it consists of just a single syllable. Besides the question into which units they are to be parsed, there is a heated debate about whether or not such classifier constructions should be regarded as fully linguistic or rather gestural (cf. Emmorey 2003; Liddell 2003; Cogill-Koez 2000a, 2000b). This controversial topic will be further discussed in Section 8.2. Example (1) is presented here in order to introduce pertinent research questions around the "word" in sign language. They are all related to the striking difference in information packaging between signed and spoken languages. This difference is also evidenced by their different production behavior, as I will show by comparing slips of the hand and tongue in DGS and spoken German (see Section 7).
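For expository purposes, the layered structure of (1) can be modeled as a simple data structure. The following Python sketch is my own simplification, not a formal representation from the literature; the tier names and values merely recapitulate the example.

    # The classifier predicate in (1) as one monosyllabic word whose six
    # morphemes are layered onto parallel articulator tiers ("vertical"
    # packaging). Tier names and values are simplified from the example.

    sign = {"root": "GEH- ('to go', abstract)", "tiers": {}}

    def layer(sign, process, articulator, value):
        """Add one morpheme by specifying an articulator tier."""
        sign["tiers"][articulator] = (process, value)

    layer(sign, "classification", "handshape", "G (CL:+animate)")      # 2.
    layer(sign, "derivation", "2-hand configuration", "reciprocal")    # 3.
    layer(sign, "Aktionsart", "movement", "slowed down = 'slowly'")    # 4.
    layer(sign, "adverb", "face", "'reluctantly'")                     # 5.
    layer(sign, "adverb", "body lean", "exclusion -> 'hostile'")       # 6.

    # Six morphemes, one syllable: all tiers are articulated co-temporally.
    print(1 + len(sign["tiers"]), "morphemes on", len(sign["tiers"]), "tiers")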

[FIGURE 1 OMITTED]

3. The "word" in spoken and signed language

The "word" is the basic unit of language whose existence and psychological reality (Sapir 1921) is traditionally taken for granted by most but not all--researchers (for a comprehensive overview, see Dixon and Aikhenvald 2003). Despite notorious problems with defining "word", it is usually held that all languages have words. Words are organized in the mental lexicon (Aitchison 2003). Phonological and morphosyntactic rules operate on them and they are the building blocks of clauses, sentences and discourse.

The word, most clearly since Saussure's (1986 [1901]) distinction between signifiant and signifie, has been identified as the central unit with a form and a meaning part. In contemporary terms, words are "programs of mental computation" (Bierwisch 1999) which coordinate the two interfaces, the Articulatory-Perceptual (AP) system and the Conceptual-Intentional (CI) system. Mediating between the signifie (meaning) and the signifiant (form) is language-systematic, i.e., grammatical, information (Saussure's langue). That is, each word is an ordered triple <a, σ, γ> of articulatory/perceptual (a) and semantic (σ) information within the lexicon, coordinated by formal principles of grammar (γ). The obvious incompatibility of the two domains--form and meaning--calls for such abstract principles of grammar (γ) to control their relation. For a single sign, they guarantee the symbol function and allow for an arbitrary mapping between form and meaning (I will come back to this issue in Section 8). Furthermore, they regulate the composition of meaning from morphemes in the case of polymorphemic words. Between signs, they guarantee the syntactic combination of signs to form phrases, clauses, and sentences. Still, both domains--form and meaning--are subject to the same abstract constraints, e.g., that their representation be discrete and recursive (Bierwisch 1999). Whereas the AP domain is much more obviously subject to constraints on temporal linearity, the CI system is not. However, as Bierwisch argues, both systems are organized in the same abstract way. With respect to the AP domain, sign languages with their lower degree of linearity and their higher degree of iconicity seem to be a good counterexample, at first sight. Signs had long been taken for holistic pantomimes without any linguistic structure at all until, with the beginning of contemporary sign language research (Stokoe 1960), this myth was eventually abandoned. Discreteness of sign languages is evidenced at every level of the grammar. On the phonological level, signed words are composed of discrete phonological features--handshape, hand orientation, movement, and place of articulation--which can be shown to act as discrete units in processing, hence the existence of phonological slips of the hand (Klima and Bellugi 1979; Newkirk et al. 1980; Hohenberger et al. 2002; Leuninger et al. 2004; see Section 4). Signs are also composed of morphemes, and processes of word formation (derivation, compounding) are widely attested (Liddell 2003; Sandler and Lillo-Martin 2006; among many others).

Saussure considered the relation between the two parts of a word, the "signifiant" and the "signifie" to be arbitrary, i.e., the form usually has no resemblance to the object referred to by the word. This may be different in sign language with its rich potential for iconicity. Despite the higher degree of iconicity, sign languages show the same "duality of patterning" (van der Hulst and Mills 1996) of form and meaning and the same abstract character of discrete, combinatorial representation and recursive combination as any natural language. The topic of iconicity therefore has to be treated very cautiously. It will be discussed in more detail in Section 8.1.

3.1. What's in a sign?

3.1.1. Form: the phonological base. Form--in Saussure's terms, the "signifiant", in Bierwisch's terms, the articulatory-perceptual features making up a--is what makes signs communicable, in the aural-oral or visual-gestural modality. Form is one organizing principle of the mental lexicon. This principle holds in sign languages as well. In spoken languages the form of words can be described in terms of their phonological features. In sign languages, too, there are phonological feature classes; only, they are phonetically grounded in the visual-gestural modality.

Signs (in the narrow sense of sign language words) are composed of elements from four phonological feature classes, namely handshape, hand orientation, movement, and place of articulation (POA), (3) the alternation of one of which creates a minimal pair, as in spoken language (for examples, see Newkirk et al. 1980 for ASL, Leuninger et al. 2004 for DGS, and van der Hulst and Mills 1996 for Sign Language of the Netherlands, SLN). The psychological reality of these features is evidenced nicely in slips of the hand where each single feature can be affected independently of the others. The first demonstrations of phonological slips of the hand proved that there was indeed a sublexical level of representation and processing that could be described with the same notions as in spoken language. As for the organization of syllables, Perlmutter (1992) showed that the very same notions of "sonority", syllable structure, and the vowel/consonant distinction apply to signed as well as to spoken language syllables.

In the meantime, sign language phonology has become a major topic of research and shows an impressive multitude of theoretical approaches (as briefly reviewed in Brentari 1998: 83-92), ranging from the cheremic model (Stokoe 1960) through the hold-movement model (Liddell 1993), the hand tier model (Sandler 1989; Sandler and Lillo-Martin 2006), the moraic model (Perlmutter 1992), the dependency phonology model (van der Hulst 1996), and the visual phonology model (Uyechi 1995), to Brentari's (1998) prosodic model. The latter is a feature-geometric account which will be presented in some more detail in the following paragraphs.

Brentari (1998, 2002, 2006) distinguishes two broad feature classes, namely inherent features (IF), which are static and have consonantal qualities, and prosodic features (ProsF), (4) which are dynamic and have vocalic qualities. (5) Of the four feature classes introduced above, handshape, hand orientation, and POA belong to the IF, whereas movement is a ProsF. A coarse-grained feature-geometric representation of a sign is given in Figure 2.

This feature tree expands into many binary-branching subtrees (only the most important of which are shown in Figure 2). The articulator node, for example, expands into a manual and a nonmanual node, the manual node into hand 1 and hand 2, etc. The POA node expands into specifications of the sign in the three spatial planes, etc. The prosodic branch hosts the kinds of movement (straight, circle, trilled, alternating, etc.) and handshape changes, among others (see Brentari 1998).
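As a reading aid, the tree just described can be sketched as a nested structure. The following Python rendering is a simplification based solely on the prose above; the node inventory is incomplete and the notation is not Brentari's.

    # Coarse sketch of the feature geometry described above: inherent
    # features (IF: articulator, POA) vs. prosodic features (ProsF:
    # movement). Node inventory is simplified and incomplete.

    feature_tree = {
        "inherent features (IF)": {
            "articulator": {
                "manual": ["hand 1", "hand 2"],
                "nonmanual": ["face", "body"],     # assumed expansion
            },
            "place of articulation (POA)": [
                "horizontal plane", "vertical plane", "mid-sagittal plane",
            ],
        },
        "prosodic features (ProsF)": {
            "movement": ["straight", "circle", "trilled", "alternating"],
            "handshape change": ["e.g., aperture open-close"],
        },
    }

    # IF are static/consonant-like; ProsF are dynamic/vowel-like (Table 3).
    print(sorted(feature_tree))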

Like Perlmutter (1992), who draws a parallel between the static features of holds and consonants on the one hand and the dynamic features of movements and vowels on the other hand, claiming that movements in sign language usually constitute the syllable nucleus as vowels do in spoken language, Brentari, too, parallels IFs with consonants and ProsFs with vowels. The arguments for the presumed C:IF/V:ProsF analogy are given in Table 3, presupposing that consonants and vowels contrast with respect to the dimensions listed there.

[FIGURE 2 OMITTED]

The analogy is not complete, though. (6) In sign languages, IF and ProsF are realized at the same time, whereas in spoken language they alternate (see Section 2). Importantly, in sign language features dominate segments; in spoken language segments dominate features (Brentari 2002: 55, 2006). Note that in sign language a feature such as handshape is present throughout the articulation of the whole sign, i.e., during its movement (whether there is a single handshape or a handshape change during the movement); the movement thus defines a segment or, in Brentari's terms, rather an x-slot, in the sense of a "minimum concatenative unit referred to by the grammar" (Brentari 1998: 183). In spoken language, however, a segment dominates a set of phonological features. In spoken language, the default ratio of roots to x-slots is 1:1, i.e., one root (traditionally, one segment) is assigned one timing slot. In sign language, the default ratio of roots to x-slots is 1:2, i.e., one root typically receives two timing (x-)slots (that span from the start to the end of the movement). (7) The number of units in a signed word will be discussed in Section 5.

3.1.2. Meaning: the conceptual-semantic base. Meaning--in Saussure's terms, the "signifie", in Bierwisch's terms, the semantic features that make up σ--is what makes signs worth communicating. (8) Words are organized in the mental lexicon in terms of meaning. Since the C-I system is not modality specific, the simplest assumption is that the semantic component in sign languages is organized in the same way as in spoken languages. The conceptual-semantic space is not parameterized as the articulatory-perceptual interface is. Humans can entertain the same kinds of concepts irrespective of their formal expression. This is the idea behind "concept nativism" (Fodor 1975, 1981). In Laurence and Margolis' (2002) interpretation, concept nativism holds that humans have the innate capacity to acquire new primitive semantic concepts by relating perceptual information about a kind to an innately available conceptual representational form via a sustaining mechanism. This sustaining mechanism links the perception with the concept and can be identified with our capacity to form and use symbols. Concept formation and concept possession are neither modality- nor language-specific. However, the question of in what units--linguistic or nonlinguistic--conceptual meaning is predominantly contained has become a major controversy in sign language research, which will be discussed in detail in Section 8.

I will now elaborate on the two traditional notions of the word--the grammatical and the phonological word. (9)

4. The grammatical word in sign language

Dixon and Aikhenvald (2003) offer a number of (types of) criteria for the grammatical word:

A grammatical word consists of a number of grammatical elements that

(2) a. always occur together, rather than scattered through the clause (the criterion of cohesiveness);

b. occur in a fixed order;

c. have a conventionalized coherence and meaning. (Dixon and Aikhenvald 2003: 19)

Zeshan (2003) discusses whether these three criteria can be properly applied to sign languages as well.

Cohesiveness: In sign languages, a basic form (stem, root) and various morphological affixes often occur simultaneously. This is the case when the affix is bound and cannot occur alone, in a sequence. As an example, she mentions numeral incorporation and the "gradual Aktionsart" derivation. Since the simultaneity of the various morphemes is triggered phonetically, their co-occurrence cannot be taken as an unambiguous indicator of a grammatical word.

Order: For the same reason, it is hard to argue for any fixed order of elements in a signed word that contains simultaneously realized morphemes. However, for the example in (1), an order at least among the first three morphological processes (from the root [1] to the classified stem [2] to the derived reciprocal meaning [3]) can be assumed (see Section 2).

Coherence and meaning: The last criterion fully applies to signed words. Signers, like speakers, "think of a word as having its own coherence and meaning. That is, they may talk about a word (but are unlikely to talk about a morpheme)." (Dixon and Aikhenvald 2003: 20).

Grammatical, or morphosyntactic, words comprise both lexical and functional categories. The former comprise nouns (N), verbs (V), adjectives (A), and (some semantic) prepositions (P); the latter, grammatical words such as auxiliaries (AUX), determiners (D), complementizers (C), pronouns (Pron), and others. In the spoken sentence I read as well as in the signed sentence "I READ", both kinds of grammatical words combine (example purposefully taken from Sandler and Lillo-Martin 2006: 248). While, grammatically speaking, function words are equal to lexical words, they are unequal, prosodically speaking. Function words are often a-tonic, weak, and unstressed and cannot stand alone. Therefore they have to combine with a lexical host, as in the French sentence Je t'aime, where both the a-tonic subject pronoun (je) and the object pronoun (te) cliticize onto the verb. The same is true in the sign language sentence, where the subject pronoun 'I' may cliticize onto the verb 'READ'. Sandler and Lillo-Martin (2006: 248) show that the handshape of the pronoun 'I'--the unmarked G-handshape--assimilates to the handshape of the verb 'READ'--the U-handshape. (10) Thus, grammatical words are not necessarily isomorphic with prosodic words, to which I will turn in the following section. This non-isomorphism has been the major argument for postulating an independent prosodic level apart from the morphosyntactic level.

5. The prosodic word in sign language

What is a prosodic or phonological word in sign language? According to Sandler (1999, 2000) (see also Sandler and Lillo-Martin 2006 and Zeshan 2003: 159), the phonological word in sign language is constrained by

(3) a. Monosyllabicity: ideally, a phonological word consists of only one syllable.

b. Selected Finger Constraint: within the prosodic word, there is only one set of selected fingers.

c. Place Constraint: for the entire prosodic word, only one major POA is selected.

d. Symmetry Constraint: for two-handed signs, the two hands must move symmetrically within the phonological word.

In order to approximate a definition of the phonological word in sign language and before providing evidence for constraints on its complexity, several terms have to be clarified, namely the notion of a sign language syllable, the role of the prosodic features, especially movement in constituting a syllable, a sign language morpheme, and the minimal word. Besides, the "tendency for monosyllabicity" (Brentari 1998: 6) or "monosyllabic conspiracy", as Sandler and Lillo-Martin (2006: 228-231) call it, will be discussed.

5.1. Constraints on the syllable in sign language

The notion of the syllable is hotly debated in sign language research and there exists a multitude of accounts for it (Pfau 1997). As Perlmutter (1992) has convincingly shown, there are phenomena which cannot be explained without recourse to the notion of a syllable, e.g., the distribution of secondary movement and handshape changes. Both are only licensed on the syllable nucleus. (11) As in spoken language, sonority is a central concept in the domain of the syllable which determines the prosodic structure of a sign.

It has been argued, though, that sign language syllables are often coextensive with morphemes/words and therefore redundant. Recall that the typological uniqueness of sign languages with respect to canonical word shape stems from their being monosyllabic but polymorphemic (Section 2). Nevertheless, there are distinct well-formedness constraints on the syllable and on the word (Brentari 1998: 70; Sandler and Lillo-Martin 2006). In the following, the prosodic domain of the sign language syllable will be clarified.

What counts as a syllable in sign language? According to Brentari (1998: 205), there are two definitions, a functional one and a formal one. On the functional account, roughly, "the number of sequential phonological dynamic units in a string equals the number of syllables in that string" (Brentari 1998: 6; see also p. 250). A sign language syllable is only well-formed if there is some kind of coherent movement (primary or secondary). In short, sequential movements are syllables. On the formal account, "a syllable must contain at least one weight unit" (1998: 205). A single weight unit corresponds to a simple movement, e.g., a single local or path movement; two weight units correspond to a complex movement, e.g., a co-occurring local or path movement (1998: 237). As the functional account is much closer to processing considerations, I will mainly rely on it in the following. Note that Brentari functionally defines syllables in terms of sequential movements but dispenses with the notion of a segment (traditionally, movements and holds). Instead, she draws on x-slots, i.e., abstract timing units, in order to determine the extension of the syllable and the prosodic word. Prosodic features are aligned with these timing slots from left to right, by a simple association convention (Brentari 1998: 184). Minimally, a syllable must contain one x-slot; maximally, it may contain two (Brentari 1998: 205; see also p. 227). This is the basic demarcation of a syllable in sign language in Brentari's prosodic model of sign language phonology.
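The functional definition lends itself to a simple procedural restatement. The following Python sketch is my own illustration, under the assumption that a sign can be represented as a list of movement events with start and end points on an abstract timeline; it is not Brentari's formalism.

    # Functional syllable count: sequential movements each contribute a
    # syllable; co-temporal movements (e.g., path + trill) fuse into one
    # complex movement. Movements are (start, end) pairs on a toy timeline.

    def count_syllables(movements):
        syllables, last_end = 0, None
        for start, end in sorted(movements):
            if last_end is None or start >= last_end:  # sequential: new syllable
                syllables += 1
                last_end = end
            else:                                      # co-temporal: same syllable
                last_end = max(last_end, end)
        return syllables

    print(count_syllables([(0, 2)]))          # one path movement: 1 syllable
    print(count_syllables([(0, 2), (0, 2)]))  # co-occurring path + trill: 1
    print(count_syllables([(0, 2), (2, 4)]))  # reduplicated movement: 2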

Sandler and Lillo-Martin (2006) point out the specific phonological constraints the syllable is subject to. First, the "Syllable-level Hand Configuration Binary Branching Constraint" (SHCC) requires that the hand configuration (HC) within a single syllable dominate only one binary-branching constituent. In Sandler's hand-tier model this would be either the constituent "finger position" or "orientation". Second, the "Syllable Timing Constraint" (STC) requires that the branches of the above constituent be aligned with the edges of the syllable. The same holds for lexically specified nonmanual movements (Sandler and Lillo-Martin 2006: 232).

Not only the syllable but also other relevant units in this context--the morpheme and the morphosyntactic word (see Sandler and Lillo-Martin 2006: 222)--have their specific characteristics and obey specific constraints.

5.2. Constraints on the prosodic word in sign language--core lexemes

The next step is to relate the syllable to the prosodic word which is the next higher and the most central unit in the prosodic hierarchy of Nespor and Vogel (1986). As we know already, most signs are monosyllabic. Thus, in the default case, the syllable, the prosodic word, and the morphosyntactic word are co-extensional. How many syllables may a well-formed prosodic word contain? Again, the answer is two. There are no well-formed core lexemes which exceed two sequential movements. (12) Brentari (1998: 295) states this restriction in terms of an Optimality-theoretic (OT) constraint:

(4) PROSODIC WORD = 1(≤2)σ (abbreviated PWD = 1(≤2)σ)

Core lexemes consist of at least one syllable and not more than two.

The range of well-formed core lexemes thus comprises four cases, two of which are monosyllabic and two of which are disyllabic (for examples, see Brentari 1998):

(5) a. One x-slot = 1 syllable: signs with no primary (path) movement but only a secondary (e.g., trilled) one

b. Two x-slots = 1 syllable: signs with a path movement (both simple and complex), the most common case

c. Three x-slots = 2 syllables: two movements with a deleted (inner) segment

d. Four x-slots = 2 syllables: two (identical) movements, e.g., reduplication

Note that this restriction holds for core lexemes and not for derived signs. Derived signs may well exceed this upper limit, but without such morphological licensing, PWD = 1(≤2)σ applies. Note also that despite the upper boundary of two syllables for a single sign, the canonical form--the monosyllabic sign--rarely exploits this full potential. In Sandler's hand-tier model, it takes the following form:

(6)

[ILLUSTRATION OMITTED]

A single hand configuration (HC) dominates an initial location (L), a movement (M) and a subsequent location (L). Sandler, like Brentari, assumes that the canonical monosyllable in sign language results from an output constraint, in the sense of OT (Sandler and Lillo-Martin 2006: 231).
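Stated procedurally, (4) amounts to a simple well-formedness check. The following Python predicate is a toy restatement of the constraint together with the licensing caveat just mentioned; the function name and arguments are my own.

    # The constraint PWD = 1(<=2)sigma as a predicate: core lexemes span
    # one or two syllables; morphological derivation may license more.

    def satisfies_pwd(num_syllables, morphologically_licensed=False):
        if morphologically_licensed:      # derived signs may exceed the limit
            return num_syllables >= 1
        return 1 <= num_syllables <= 2

    assert satisfies_pwd(1)               # canonical monosyllable
    assert satisfies_pwd(2)               # e.g., reduplicated movement (5d)
    assert not satisfies_pwd(3)           # ill-formed as a core lexeme
    assert satisfies_pwd(3, morphologically_licensed=True)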

6. Evidence for the prosodic word in sign language

The PWD = 1(≤2)σ constraint on the prosodic word in sign language acts as a powerful restriction on the extension and prosodic complexity of signs. In this section, I adduce external evidence for this constraint by looking at cases in which it is fulfilled or challenged, either by under- or overshooting. I will consider cases of movement epenthesis, movement deletion, cliticization, compound formation, classifier constructions, derivation, and nonmanual prosodic features.

6.1. When the input is not enough: movement epenthesis

Lexemes may or may not be specified for a movement in the underlying representation. Usually they are, but there are cases in which such a specification is missing. The resulting output form would then be ill-formed. (13) In order to fix this defect, an epenthetic movement is inserted, either a primary (path) movement or a secondary (e.g., a trilled) movement.

[FIGURE 3 OMITTED]

Insertion of a movement path applies to number signs, fingerspelling, or certain lexemes without any underlying movement. In DGS, for the number signs from 1-10, the number of fingers corresponds to the number itself. In ASL, this is only the case for the numbers from 1-5. (14) Numbers are produced laterally, at some distance from the signer's torso. The crucial point is that a small epenthetic movement forward on the mid-sagittal z-plane is added which satisfies the phonological requirement of having a syllable nucleus (see Figure 3a). (15) For numbers higher than 10, other movements (primary and secondary) can serve the same function (see Figure 3b).

Epenthesis is also common in fingerspelling. Only a few fingerspelled forms have an underlying movement; in DGS, these are "J", "CH", "Z", and double letters such as "NN" or "TT". In all other "citation" forms, movement epenthesis is triggered.

Fingerspelling is the common strategy for introducing novel or unfamiliar words: when a word is fingerspelled for the first time, each letter is spelled out by an individual handshape. For frequent fingerspelled words, a different strategy has emerged, which will be discussed in the next section (6.2). Finally, lexemes which do not have an underlying lexical movement, such as GOTT ('god') in DGS, are also completed with an epenthetic movement.

It is important to note that epenthetic movements are minimal movements whose form is distinct from that of regular primary or secondary movements, which makes it easy to distinguish them from true underlying movement specifications. Just as there exists a least marked default handshape, the flat hand, there also exists a least marked movement, the small frontal movement on the z-plane. Why the movement is on the z-axis (a small straight movement forward) remains to be explained.
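The repair itself can be stated as a one-line rule: if a sign reaches the surface without a movement, supply the least marked one. The Python sketch below uses an invented dictionary representation of signs for illustration.

    # Movement epenthesis as a repair: signs without an underlying
    # movement (number signs, most fingerspelled letters, DGS GOTT)
    # receive the least marked movement as a syllable nucleus.

    DEFAULT_EPENTHETIC = "small straight movement forward (z-axis)"

    def repair_with_epenthesis(sign):
        if not sign.get("movements"):              # no underlying movement
            sign = dict(sign, movements=[DEFAULT_EPENTHETIC], epenthetic=True)
        return sign

    gott = {"gloss": "GOTT", "movements": []}
    print(repair_with_epenthesis(gott)["movements"])  # epenthetic nucleus added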

6.2. When the input is too much: movement deletion

The reverse case of movement epenthesis is movement deletion. This strategy occurs when more than two syllables would have to be integrated into a single prosodic word, so that PWD = 1(≤2)σ would be violated. This frequently happens when fingerspelled forms become assimilated to the sign language lexicon as proper signs, a process called "local lexicalization" (Brentari 1998: 208-211). The neat 1:1 match from single fingerspelled forms to letters of the spoken language is broken up and the fingerspelled form becomes a native sign for the concept in question. Most often, the number of letters exceeds PWD = 1(≤2)σ. Therefore the number of fingerspelled forms has to be reduced. This is brought about by integrating as many of them into a single movement path as possible and dropping those that cannot be accommodated.

For example, the input string M-O-R-P-H-E-M-E is abbreviated to the locally lexicalized sign M-P-H-E, which is now only disyllabic. The first syllable comprises M-P-H (the movement allows for a handshape change from M to P to H); the second syllable comprises the handshape change from H to E (see Brentari 1998: 209). The deletion of superfluous fingerspelled forms can best be illustrated with an original fingerspelled sign in which only one letter has to be dispensed with, as in J-O-B. Since the J already has a movement to it, integrating the other two letters into the same sign would violate PWD = 1(≤2)σ. Therefore, the medial letter O is dropped altogether, and the initial letter J and the final letter B come to share the original single syllable of J along with a handshape change from J to B. Thus, a single syllable results, a highly desirable output, because it not only keeps within the upper boundary of PWD = 1(≤2)σ, but even achieves monosyllabicity (see Emmorey 2002: 20).
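The size limit, though not the conventionalized choice of surviving letters, can be emulated mechanically. In the Python sketch below, the reduction greedily drops medial letters until the form fits an assumed maximum number of handshapes; note that it reproduces J-B but not the attested M-P-H-E, which underlines that which letters survive is a matter of lexical convention.

    # Local lexicalization as size reduction: drop medial letters until the
    # fingerspelled form fits the prosodic word frame. The cap of two
    # handshape changes per syllable is an assumption for illustration.

    def locally_lexicalize(letters, max_handshapes):
        kept = list(letters)
        while len(kept) > max_handshapes:
            kept.pop(1)                   # greedily drop a medial letter
        return kept

    print(locally_lexicalize("JOB", 2))       # ['J', 'B'] -- attested form
    print(locally_lexicalize("MORPHEME", 4))  # ['M', 'E', 'M', 'E'], whereas
    # the attested sign is M-P-H-E: the surviving letters are conventionalized.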

Local lexicalization or routinized sequences are subject to a whole set of constraints, summarized in Brentari (1998: 231f), among which PWD = 1(≤2)σ is only one, though a very prominent one. They can be understood as competing demands on the well-formedness of core lexemes in an OT framework. They interact dynamically and create, over time, forms which are assimilated to the native sign language phonology. This process of assimilation provides evidence for the constraints on core lexical signs. The following cases (6.3-6.6) also involve movement deletion; however, they involve multiple words or morphemes: either function words (as in cliticization, 6.3), lexical words (as in compounding, 6.4), classifier morphemes (6.5), or derivational morphemes (as in negative incorporation, 6.6).

[FIGURE 4 OMITTED]

6.3. Cliticization

Cliticization may occur when two independent adjacent signs, one being a lexical category, the other being a functional category (e.g., a pronoun), co-occur in a syntactic phrase. In Sandler's example below (Sandler 2005: 66; see also Sandler 1999, 2006; Sandler and Lillo-Martin 2006), the two Israeli signs for SHOP and THERE (Figures 4a and 4b) undergo coalescence and form a clitic group (Figure 4c).

Sandler describes the process of cliticization through coalescence as follows:

When a symmetrical two-handed sign (the host) is followed by a pronoun, the dominant hand begins the host sign together with the nondominant hand, but halfway through production of the host sign, the dominant hand signs the pronoun, while the nondominant hand simultaneously completes the host sign. (Sandler 2005: 65)

Sandler and Lillo-Martin (2006: 7) point out that "... what was originally two signs, each with its own movement, has become a cliticized form with a single movement." Since "..., a single movement is considered by many researchers to define a syllable, and has been argued to be the optimal prosodic form of a word" (2006: 7, see also Sandler 1999), the above example can be considered as a particularly clear example that the syntactic process of cliticization operates within the prosodic domain of the phonological word and results in an optimal form, in the sense of monosyllabicity. Sandler adduces further evidence from mouthing that the resulting clitic group forms a single prosodic constituent. Only the host sign SHOP is mouthed and, crucially, it spans the entire clitic group.

Besides coalescence, which occurs in prominent phrase-final position and results in a single syllable, Sandler discusses handshape assimilation, which occurs in weak phrase-initial position (Sandler 2005: 68). In this case, the handshape of the a-tonic clitic pronoun 'I' is assimilated to that of the following verb, e.g.:

(7) I (G-Hand) READ (V-Hand) → I (V-Hand) READ (V-Hand)

While coalescence relates to the monosyllabicity constraint, assimilation relates to the selected finger constraint. Both constraints operate within the phonological word and therefore yield evidence for this prosodic domain.
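Sandler's description of coalescence can be visualized as a pair of articulator timelines. The Python sketch below is an invented representation for illustration: spans are (onset, offset, content) triples on a normalized timeline, and the mouthing tier reflects the observation that the host's mouthing spans the whole clitic group.

    # Coalescence: halfway through a symmetrical two-handed host sign, the
    # dominant hand switches to the pronoun while the nondominant hand
    # completes the host -- one movement, hence one syllable.

    def coalesce(host, pronoun):
        return {
            "dominant hand":    [(0.0, 0.5, host), (0.5, 1.0, pronoun)],
            "nondominant hand": [(0.0, 1.0, host)],
            "mouthing":         [(0.0, 1.0, host)],   # spans the clitic group
        }

    for tier, spans in coalesce("SHOP", "THERE").items():
        print(tier, spans)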

6.4. Composition

Compounding is a challenging morphological process, as putting together two signs potentially exceeds PWD = 1(≤2)σ. Therefore it is interesting to see through what mechanisms such overshooting is reduced and the limit of two syllables, or even monosyllabicity, is retained. It has long been noticed that in sign language specific compound formation rules operate which guarantee a well-formed output with respect to the phonological word (Klima and Bellugi 1979; Liddell and Johnson 1986; Brennan 1990; Emmorey 2002; Leuninger 2001; Happ and Hohenberger 2001; Wallin 1983). There are three such compound formation rules (adapted from Brennan 1990):

(8) Compound rule

The first part of the compound is shortened. It may lose its movement and reduce to a hold.

The second part of the compound loses its movement repetition.

The handshape of the second sign assimilates to that of the first sign. If one of the signs is a two-handed sign, the nondominant hand is already in place or remains in place.

Example (DGS): GOTT#WARTEN (god#to-wait) → 'advent' (Leuninger 2001)

In the DGS compound GOTT#WARTEN ('god'#'to wait'), the epenthetic movement of the first sign, GOTT ('god'), is cancelled and the sign is reduced to a hold at its place of articulation. The second sign, WARTEN ('to wait'), starts out at the POA of the first sign and loses its movement repetition.

(9) Hierarchy rule

The sign which is placed higher in signing space precedes the one that is placed lower.

Example (DGS): WEIN#ROT (wine#red) → 'red wine'

In the DGS compound WEIN#ROT ('red wine'), the sign WEIN ('wine') is produced first, as it is positioned higher in signing space (POA in front of the nose) than ROT ('red', POA at the lips). Both signs lose their movement repetition and are fused into a single movement.

(10) Rule of identical movement direction

The direction of the movement may not change within the compound. If there is a clash in the movement direction of both signs, one part of the compound will adapt to the direction of the other part.

Example (DGS): MÖNCH#CHEF (monk#chief) → 'abbot' (Leuninger 2001)

In the DGS compound MÖNCH#CHEF ('abbot'), the first sign, MÖNCH ('monk'), is placed higher in signing space (POA above the head) than the second sign, CHEF ('chief', POA lateral to the body). According to the hierarchy rule (9), it is produced first. Crucially, the second sign, CHEF, has an upward movement which would require the movement from MÖNCH to CHEF to go down first before going up again. Such a movement change is illicit and is therefore replaced by a single downward movement from the higher to the lower place of articulation. The result is a smooth single movement with a handshape change (from 'G' to 'A' for MÖNCH and CHEF, respectively) on the nucleus of the single syllable.

The result of the joint operation of these rules is that compounds generally do not take longer in production than monomorphemic signs (Wallin 1983) and are often monosyllabic as well. This reduction is achieved through a process of fusion in the course of which the two independent grammatical words are integrated into the prosodic contour of a single word. This process is comparable to, and the regular counterpart of, what happens in spontaneous slips of the hand in the case of fusions (see Section 7). In both cases, neighboring signs share a single prosodic word frame. The input conditions, however, differ. In compounding, the juxtaposition is due to the morphological process of word formation; in fusions as a slip category, it is due to the contingency of syntactic adjacency. Nevertheless, the operation as such is the same.
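To see how the three rules conspire toward a single movement, consider the following Python sketch. The sign representation (height in signing space, POA, handshape) is invented for exposition, and the handshape assimilation of rule (8) is deliberately simplified; this is not a full model of DGS compounding.

    # Rules (8)-(10) as one toy pipeline producing a single-movement,
    # single-syllable compound from two input signs.

    def compound(a, b):
        # (9) Hierarchy rule: the higher-placed sign comes first.
        first, second = (a, b) if a["height"] >= b["height"] else (b, a)
        # (8) Compound rule: the first part reduces toward a hold, the
        #     second part loses its repetition; both collapse into one
        #     transition from the first POA to the second.
        # (10) Identical movement direction: one smooth movement, no reversal.
        movement = {"from": first["poa"], "to": second["poa"], "single": True}
        return {"gloss": first["gloss"] + "#" + second["gloss"],
                "movements": [movement], "repetition": False,
                "handshape change": (first["handshape"], second["handshape"])}

    moench = {"gloss": "MÖNCH", "height": 2, "poa": "above head", "handshape": "G"}
    chef = {"gloss": "CHEF", "height": 1, "poa": "lateral to body", "handshape": "A"}
    print(compound(chef, moench))  # MÖNCH#CHEF: one downward movement, G -> A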

6.5. Lexicalization of classifier constructions

When classifier constructions become lexicalized, handshape, location, and movement lose their morphological status and retain only phonological status, as in regular signs. This process is called "freezing". One effect of freezing is that the lexicalized sign becomes shorter so as to conform to the prosodic word frame of core lexemes. Aronoff et al. (2003) (see also Sandler and Lillo-Martin 2006: 97-98) give an illuminating example of quasi-online lexicalization of a classifier construction designating a "ligament", in the laboratory. An ISL signer had been asked to express how bones are attached at the joints. In his first attempt he produced a sequence of classifiers which spanned 1,440 ms. Subsequently, he spontaneously lexicalized a part of this description and came up with a single classifier construction whose length, 280 ms, was in the range of a normal sign. Such a temporal reduction to a regular prosodic word happens spontaneously and fast in normal discourse. It also operates on a large time scale, since many frozen signs have originated from classifier predicates along these lines.

6.6. Derivation

In some sign languages, modals like CAN, MUST, SHOULD, NEED, WANT, etc., can incorporate a negation (for ASL, Sandler and Lillo-Martin 2006: 229). The input to this derivation is the positive form plus a separate negative morpheme NOT, i.e., two independent signs, the latter of which is a function word. The resulting derived forms CAN-NOT, MUST-NOT, SHOULD-NOT, NEED-NOT, WANT-NOT, etc., conform to the prosodic word template either in that the first location of the base sign is lost to accommodate the negation or in that the additional syllable becomes integrated into the movement of the base sign. In DGS, this happens in the form of an "alpha"-like movement. Therefore, this kind of negation is called "alpha-negation". Figure 5a shows the positive base form KANN ('can'), Figure 5b the negative particle NICHT ('not') and Figure 5c the (alpha-)negated form KANN-NICHT ('cannot'). (16)

6.7. Nonmanual prosodic features

Apart from manual prosodic features, namely movement, there are also nonmanual prosodic features (Brentari 1998: 173ff.) such as facial expressions, mouth gestures, mouthings, and body leans. As pointed out in Section 2, the diversity of articulators enhances the information load on the vertical dimension of processing through their co-temporal production. Nonmanual features on the lexical level can have morphological status (see example [1]) or serve as a device for phonetic enhancement. Their simultaneous production requires a process of synchronization so that they can be construed as belonging together and their scope can be read off from their temporal characteristics. Typically, these nonmanual expressions mark the word's prosodic frame under which one or more manual signs are integrated. (17) It has been emphasized, though, that the nonmanual tier is dependent on the manual tier (Boyes Braem and Sutton-Spence 2001). The morphosyntactic word, which is expressed manually, licenses the prosodic word in the first place. Nonmanual, dependent features align with this prosodic word template. It may therefore seem that the nonmanual component is primary; however, the head of the entire sign is the grammatical word (compare also the discussion in 8.2).

[FIGURE 5 OMITTED]

Some lexical signs in DGS obligatorily go along with specific facial expressions, such as

(11) smiling

GLÜCKLICH

'happy'

In (11) and related cases, the facial expression is part of the underlying lexical representation. Thus, the sign GLÜCKLICH would be incomplete without the facial expression 'smiling'. It is not an independent morpheme, though. In (12), the facial expression expresses an adverbial whose meaning is construed independently of the meaning of the manual sign; hence, it has proper morphological status:

(12) smiling

TANZEN

'to dance happily'

In either case, the nonmanual expression is synchronized with the manual sign. In structural terms, the nonmanual tier is aligned with the manual tier and the nonmanual expression spreads over the prosodic domain.

Mouth gestures can also be obligatory nonmanual specifications of signs which have to be present in order for the sign to be complete. The three items in (13) are examples of this kind of manual-oral coordination:
(13) a. sch b. pf c. sss
 BESITZEN ES-WAR-EINMAL GENAU
 'to possess' 'once upon a time' 'exactly'


Among the set of mouth gestures, there is a subset which has particularly interesting prosodic features with respect to those of the manual base. They mimic or agree with the phonological features of the manual base in terms of movement direction and aperture change, a phenomenon called "echo phonology" (Woll 2001). Thus, if the manual sign shows a distal-proximal movement and an open-close aperture change, as in the BSL item 'WON' (14), the mouth gesture shows the same prosodic features. If the aperture change is from close to open, the mouth gesture mimics that change, too, as in 'SUCCEED' (15):
(14) <thp> (15) <pa>
 WON SUCCEED


Echo phonology provides evidence for the strong tendency of cross-modal synchrony between different articulators, manual and nonmanual. In addition, this phenomenon highlights the dependency of nonmanual components on the manual component, in the linguistic domain.

Mouthings are oral components in sign languages mimicking the respective spoken language words. They, too, are co-temporal with and dependent on the manual component (Boyes Braem and Sutton-Spence 2001, and articles therein). Interestingly, when the prosodic extension of the mouthing does not match that of the manual sign, mouthings are either stretched or reduced to that of the manual sign. Most importantly, if mouthings exceed PWD = 1(≤2)σ, they tend, in the course of time, to be reduced to the admissible two syllables. (18)
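Both adjustments, i.e., temporal alignment and diachronic truncation, can be summarized in a small sketch. The Python function below is an invented illustration; the syllabified mouthing and the duration value are hypothetical.

    # Mouthings align with the manual prosodic frame: truncate to at most
    # two syllables (PWD = 1(<=2)sigma) and stretch/compress the rest to
    # the duration of the manual sign.

    def align_mouthing(mouthing_syllables, manual_duration):
        syllables = mouthing_syllables[:2]            # diachronic reduction
        per_syllable = manual_duration / len(syllables)
        return [(s, round(per_syllable, 3)) for s in syllables]

    # Hypothetical five-syllable German mouthing on a 0.4 s manual sign:
    print(align_mouthing(["U", "ni", "ver", "si", "tät"], 0.4))
    # [('U', 0.2), ('ni', 0.2)]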

Summarizing, in all cases reviewed here, the prosodic features, manual and nonmanual, delimit the minimal and maximal size of signs. The empirical evidence provided in this section shows that the prosodic word plays a key role in sign language phonology.

7. Evidence for sign language words from processing: production and perception

In this section, I will mainly present evidence for the word in sign language from production (Sections 7.1-7.3), but perceptual studies will also be briefly reviewed (Section 7.4).

Evidence for words as units in sign language comes from comparative studies on signed and spoken language production, specifically from the comparison of slips of the hand and tongue (Hohenberger et al. 2002; Keller et al. 2003; Leuninger et al. 2004; Hohenberger and Waleschkowski 2005). (19) Slip research has always been conducted with the goal of providing external evidence for language production processes and models of language production. Comparative slip research adds to this the objective of assessing the impact of modality on processing. By comparing two extensive databases of elicited slips, one for DGS and one for spoken German, we could directly compare the distribution of errors with respect to slip categories and affected units. The slips were elicited in a picture-storytelling task that yielded natural language data. The slips were categorized according to major parameters such as slip category, affected unit, and others (see Table 4 below). (20) Slip categories, namely paradigmatic slips such as substitutions (semantic and formal) and blends, as well as syntagmatic slips such as anticipations, perseverations, exchanges, and fusions, indicate processes and levels of production. Affected units, namely phrases, words, morphemes, and segments/phonological features, indicate major processing units. In a nutshell, we found evidence for all major slip categories and units in DGS as well, but different quantitative distributions of errors in spoken German and DGS, especially for the affected units. We conclude from the overall similarity between our two corpora that language processing is modality independent in general but that there are a few distinctive differences grounded in the modality. These differences mainly have to do with the information packaging onto different units of processing, i.e., with the linguistic information flow. Table 4 displays the percentage of errors with respect to the four major processing units in both languages and gives brief explanations in terms of structure, processing, and typology.

As can be seen from Table 4, the most obvious differences in the distribution of errors over affected units pertain to words, morphemes, and phrases. There are more word errors in DGS but far fewer morphological and phrasal errors. Behind these differences lies a common denominator: the degree of concatenativity in morphology and syntax. This typological difference goes back to a processing difference due to modality, namely the different load on the horizontal or vertical axis of processing (see Section 1). Morphemes are less affected in sign language as they are much harder to separate from each other and therefore cannot be manipulated as independent units in the course of the planning process. Morphemes which are either expressed by phonological changes (e.g., handshape changes for classifiers, or movement changes, as in example [1]) or by strongly synchronized nonmanual expressions (again as in [1]) will not engage in morphological slips as easily as concatenative morphemes, which are exposed to the language processor linearly and can easily be cut off at their edges. A sign word--although morphologically as rich as a spoken word--has a much stronger cohesion between its morphemes on the horizontal and vertical axes. In an experimental study on the explicit decomposition of polymorphemic words in spoken German and DGS, we could confirm this difference in terms of the separability of concatenative vs. nonconcatenative morphemes (Leuninger et al. 2004; Leuninger et al. 2005; Hohenberger and Waleschkowski 2005).

Phrasal blends, as in (16), also hardly ever occur in DGS:

(16) Die Katze mag Sandkuchen nämlich auch sehr lecker

The cat likes pound cake namely also very tasty

← mag Sandkuchen x findet Sandkuchen lecker

← likes pound cake x finds pound cake tasty

'The cat also finds pound cake very tasty, indeed.'

In (16), a phrasal predicate, lecker finden 'to find tasty', is blended with a lexical predicate, mögen 'to like'. Such alternations are abundant in spoken German. They are rarely encountered in DGS, though. The same holds true of blends of idiomatic expressions. Typically, in DGS, idiomatic phrases would be expressed by a single sign, as in (17):

(17) 1ÜBEREINSTIMMEN2ÜBEREINSTIMMEN1

1agree2agree1

'We both agree (with one another).'

[FIGURE 6 OMITTED]

As in example (17), a whole phrase, here 'we both agree (with each other)', can be expressed by a single sign word. In English, the reciprocal morpheme is expressed by a lexical phrase one another (which calls for a case-marking preposition); in DGS, it is expressed by reduplication of the movement between spatial referential loci (from locus1 to locus2 and back to locus1).

So far, the greater incidence of word errors is due to the different information packaging and not to a difference in processing, as would be evidenced by a difference in slip categories. Slip categories show similar frequencies in DGS and spoken German. (21) There is, however, one slip category which is almost absent in spoken language but has a moderate frequency of 8% in DGS, namely fusions. (22) In fusions, two adjacent words are squeezed into a single prosodic word frame, leaving out material from both input signs. Fusions therefore attest the psychological reality of the prosodic word (for an example, see Section 7.2).

The fusional-simultaneous typological character of sign language (see Section 1) constrains the linguistic information packaging in important ways, both in representation, i.e., in the grammar and in processing. It is through these typological constraints that the signed word receives its characteristic shape and complexity. In the next sections, I will provide evidence for the importance of the morphosyntactic word (7.1) and for the prosodic word (7.2) in sign language as it is revealed in sign language production errors and repairs (7.3).

7.1. Evidence for the grammatical word in sign language from slips of the hand in DGS

In this section, I adduce evidence for the morphosyntactic word from slips of the hand (Leuninger et al. 2004; Hohenberger et al. 2002; Keller et al. 2003). As pointed out above, our slips of the hand corpus contains many semantic substitutions (16.5%). In a semantic substitution, a word which is close in meaning to the target word is more highly activated than the target word itself and therefore replaces it. Semantic substitutions occur among members of the same grammatical category, i.e., there are noun-noun, verb-verb, and adjective-adjective interactions. This strong generalization has been dubbed the "grammatical category constraint" (for an overview, see Poulisse 1999: 22 and the literature cited therein). Example (18) shows a semantic substitution in DGS:

(18) -- hn

HOCHZEIT PAAR SITZ- // STEH-DUAL.

wedding couple sits // stands-DUAL.

'The wedding couple is standing (there).'

In this slip, the signer substitutes SITZ- 'to sit' for STEH- 'to stand' and corrects herself. The correction is accompanied by a head nod (hn) so as to confirm the correctness of the repaired utterance. SITZ- and STEH- are in a close semantic relationship and belong to the same word class--verbs. Note that the inflection for 'dual', expressed by classifier handshapes, is not affected by the slip but accommodated to the lexical semantics of the two signs, respectively (SITZ- has a bent V-handshape, STEH- a straight V-handshape).

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

Besides semantic substitutions, there are also formal substitutions, where the two interacting words are related by their phonological similarity. Formal substitutions are rarer in both signed and spoken language slips. They reveal the organization of the mental lexicon in terms of form. They also respect the "grammatical category constraint". In Example (19), the signer substitutes FENSTER 'window' for ZEITUNG 'newspaper'.

(19) F(ENSTER)//ZEITUNG

'window' // 'newspaper'

The signs FENSTER 'window' (Figure 8a) and ZEITUNG 'newspaper' (Figure 8b) are formally very similar; they differ only with respect to one phonological feature, hand orientation: in the former the hands are oriented on the horizontal axis, in the latter on the vertical axis.

7.2. Evidence for the prosodic word in sign language from slips of the hand in DGS

The semantic and formal substitutions in the previous section yield evidence for the morphosyntactic word. At the same time, indirectly, they are also evidence for the prosodic word, since in each case, the prosodic domain of the substitution is the prosodic word. In this section, more direct evidence for the prosodic word is adduced, following the same logic as in Section 6, where I showed how imminent violations of the prosodic word constraint are avoided by conforming the overgenerating input to the prosodic word template. While the evidence in Section 6 was mostly drawn from synchronic processes (with the exception of online lexicalization of classifiers in 6.5), the evidence here will be drawn from online language production.

As pointed out above, fusions are a moderately frequent slip category in DGS, with an incidence of 8%, while they are practically absent in spoken language. A spoken language fusion is given in (20):

(20) Gib mir den Stulrich ← Stuhl, Ulrich

Give me the chulrich ← chair, Ulrich

'Give me the chair, Ulrich.'

(Leuninger et al. 2004: 253)

Fusions are word errors resulting from the blending of two neighboring words into a single prosodic word frame. In the example above, the words Stuhl ('chair') and Ulrich yield "Stulrich" ('chulrich'). In sign language fusions, two adjacent signs are compressed into a single prosodic word frame, whereby each input sign contributes some phonological features to the output sign, as in (21):

(21) HALB-ACHT ← HALB ACHT

half-eight ← half eight

'half past seven'

In Example (21) and Figure 9, the signer intends HALB ACHT 'half eight', which requires two two-handed signs, HALB and ACHT. However, she fuses both signs into a single prosodic frame by conveying each sign with only one hand: the sign ACHT 'eight' with the dominant hand only (thus showing only 'three') and the sign HALB 'half' with the non-dominant hand only.

Fusion is a mechanism employed in regular morphological processes in sign languages, too, namely in compound formation (see Section 6.4). In compounding, two input signs are compressed into a single prosodic word frame, which is achieved by fusion. Both phenomena, fusion as a spontaneous production error and regular compound formation, make use of the same process, and this process generalizes to the other cases in Section 6 as well: cliticization (6.3), lexicalization of classifiers (6.5), and derivation (6.6). Thus, a widely attested, regular morphological process in sign language receives a processing account revealed by external evidence from slip research.

[FIGURE 9 OMITTED]
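The compression at work in fusions can be sketched schematically. The following toy model (my own illustration, not an implemented phonological theory) represents signs with a handful of invented features and fuses two of them into one prosodic word frame along the lines of the HALB ACHT example: each input contributes one hand, and the output respects the two-syllable ceiling on prosodic words.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    gloss: str
    handshape: str
    syllables: int  # number of movement units

MAX_SYLLABLES = 2  # prosodic word constraint: PW_D = 1 <= 2 syllables

def fuse(first: Sign, second: Sign) -> Sign:
    """Squeeze two adjacent signs into a single prosodic word frame.
    As in the HALB ACHT example, the non-dominant hand (H2) realizes the
    first sign and the dominant hand (H1) the second, so material from
    both inputs is left out."""
    fused = Sign(
        gloss=f"{first.gloss}-{second.gloss}",
        handshape=f"H2:{first.handshape} + H1:{second.handshape}",
        syllables=1,  # both signs share one simultaneous movement frame
    )
    assert 1 <= fused.syllables <= MAX_SYLLABLES  # output obeys PW_D
    return fused

print(fuse(Sign("HALB", "flat", 1), Sign("ACHT", "3", 1)))
```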

Parallel to the spreading of nonmanual phonological features such as facial expressions and mouth gestures (6.7), the role of nonmanual prosodic features, which act like a prosodic contour aligned with that of the manual base, also becomes manifest in slips of the hand. We found some rare examples of "stranding" of nonmanual prosodic features (e.g., facial expressions and mouth gestures; see Leuninger et al. 2004; Leuninger et al. 2005). Typically, the nonmanual feature is correct whereas the manual morpheme is erroneous (e.g., a substitution or anticipation). In the case of a subsequent correction, the prosodic contour conveyed by the nonmanual prosodic feature may integrate both error and repair under its still enduring prosodic curve. In (22), the substitution of SUCHEN 'to search' for ÜBERLEGEN 'to ponder' does not affect the facial expression, which is the one that goes along with überlegen 'to ponder', the target expression:

(22) -- 'überlegen'

SU(CHEN) // ÜBERLEGEN

'to search' // 'to ponder'

For the language production process, this means that the two parts of the target word ÜBERLEGEN, the manual morpheme and the nonmanual facial expression, can become separated during the planning process. It has to be assumed, though, that they start out together in the planning process, since the nonmanual part is licensed by the manual sign (see Section 6.7). (23) When it comes to inserting the manual morpheme into the prosodic frame--to which the nonmanual facial expression has already spread--a semantic substitution occurs, replacing the target sign ÜBERLEGEN with SUCHEN.

[FIGURE 10 OMITTED]

7.3. Evidence for the prosodic word in sign language from monitoring of slips of the hand in DGS

In Section 6.4 I argued that the compound rules adhere to the constraints on the prosodic word. In this section, I will adduce further evidence from monitoring of slips of the hand that the constraints on the prosodic word in general and the compound rules in particular are psychologically real. In order to see how the compound rules are obeyed during online production and monitoring, let us turn to the following slip and its online repair:

(23) VA-TO-BUB

Fa(ther)-daugh(ter)-boy

'boy'

In this semantic substitution, the German signer intended to sign BUB 'boy' (see Figure 11c). However, he first produced the onset of the sign VA(TER) 'father' (see Figure 11a). In the following step-by-step repair sequence, he (partially) produced the sign TO(CHTER) 'daughter' (see Figure 11b). Finally, at the end of the same overall movement path, he converges on the target sign BUB 'boy' (see Figure 11c). What is of particular interest in this slip-repair sequence is the fact that all three (parts of) signs are produced within the same dynamic movement unit, which constitutes a phonological word. Compared with the regular morphological process of compounding, it can be considered a 3-tuple compound. There are, however, differences between a regular compound and this "slip compound". Those differences concern (i) the motivation for the compound, (ii) the time scale involved and, as a consequence, the stability of its representation, and (iii) the agent(s) involved. First, while regular compounds are motivated by lexical forces of word formation, the slip compound is motivated by the need to repair a spontaneous production error. Second, while regular compounds emerge in the course of language change, on a medium to macro time scale, the slip compound occurred on a micro time scale, during online production. While word formation leads to new stable entries in the lexicon, the slip compound has only an ephemeral existence--it is unique and unreproducible. Third, while regular compounds are shared and agreed upon by all members of a language community, the slip compound is the private creation of a single subject, neither intended nor appropriate for wider use. Apart from these differences, the very process involved in both kinds of compounds is the same. In both, the compound rules have been applied equally (see Section 6.4), (24) and, most importantly in the present context, the prosodic domain is the same--the prosodic word.

[FIGURE 11 OMITTED]

In (23) the repair occurred within the syllable, on the movement segment of the sign(s). In (24), the repair occurs between two short reduplicated syllables of the sign:

(24) HOLZ-1SYLL // GLAS-1SYLL ('wood' // 'glass')

[FIGURE 12 OMITTED]

In this slip, the target sign is GLAS 'glass', which is a disyllabic sign with a short reduplicated movement; the erroneous sign is HOLZ 'wood', which has the same reduplicated syllable structure. In the semantic substitution in (24), the signer erroneously starts out with HOLZ 'wood' (Figure 12a). However, he realizes his slip and, after the first short syllable, repairs it to the intended sign GLAS 'glass' (Figure 12b), of which he delivers the second syllable only. Overall, he has thus kept constant the common prosodic structure of the intended and the intruding sign during the online repair. Now the interesting question arises as to which unit constitutes the phonological word in this case. Is it the single syllable between which the repair takes place or the pair of syllables within which the repair takes place? If we look at spoken languages, reduplication, along with compounding, is the typical case in which a grammatical word consists of more than one phonological word, e.g., ilo.ilo 'glass' in Fijian (Dixon and Aikhenvald 2003: 29). While each single part already constitutes a phonological word, its reduplication recursively leads to a higher unit which, however, is still a phonological word. Applying this reasoning to the sign language example in (24), we face the same ambiguous situation: we can conceive of the single base unit as a phonological word as well as of the reduplicated base. While a single short syllable satisfies the formal requirement of a phonological word, namely PW_D = 1 ≤ 2σ, it may not be an optimal one. A single short movement may not be sonorant enough to be properly perceived by the interlocutor; a reduplicated base, however, will be. Much like movement epenthesis in the case of signs without underlying movement (see Section 6.1), reduplication may be needed to optimize signs with only a short movement. In any case, this operation respects the phonological word as its proper domain of application.

Summarizing, there is evidence from slips of the hand and their repairs that the prosodic word is involved in sign language production. However, not all slips or repairs involve the prosodic word. Furthermore, it needs to be clarified when prosodic units are planned in the course of language production. Since slips occur during the planning of grammatical linguistic units (phonological features/segments, morphemes, words) in the first place, the initial error does not yet involve any prosody. The error and its repair (if there is one) are realized as prosodic units "on the fly" (Levelt et al. 1999). In Berg's (2003) terminology, slips occur when "content" units are mis-selected or wrongly serialized. These content units have a "structural"/grammatical and a prosodic domain (e.g., a syllable for a phonological slip, a prosodic word for a word slip, or a phrase for a word or phrasal slip), i.e., they are realized in prosodified form. Of course, there are phonological, morphological, and phrasal slips that involve smaller or bigger prosodic units (syllable, intonational phrase), just as there are repairs that either undercut the prosodic word or involve larger prosodic units. Since repairs follow Levelt's "Main Interruption Rule": "Stop the flow of speech immediately upon detecting trouble" (Levelt 1989: 478), they may interrupt the production of the erroneous sign at any point in the articulation. They do not generally respect structural, prosodic boundaries (Leuninger et al. 2004). However, they may do so if it is possible, as in online repairs where the repair may occur on the syllable nucleus, i.e., during the movement (as in [23]), or between two parts of a reduplicated sign (as in [24]).

7.4. Evidence for the prosodic word from perception

Further evidence for the prosodic word in sign language comes from a study on word segmentation in ASL by Brentari (2006). She shows that signers indeed use the word as a major perceptual unit when asked to judge the extension of presented nonwords with changes in the major phonological parameters handshape, movement, and place of articulation (POA), or combinations thereof. More specifically, signers adhere to a "1 value = 1 word" strategy, i.e., they assume a new word to start whenever the value of any of those three parameters, or a combination of them, changes. Brentari interprets this effect as operative on the word level and not on the syllable level, since it pertains not only (though predominantly) to movement, which is most closely related to the syllable, but also to the other two parameters, handshape and POA. In order to capture this generalization, the word is more appropriate as a unit than the syllable. Furthermore, the subjects' judgments did not rely on an alternating pattern, which would be characteristic of syllables. Instead, their word segmentation relied on every change in value, which indicated to them the possible beginning of a new word, much as in spoken languages with vowel harmony such as Finnish or Turkish, where a change in vowel quality indicates a new word onset. Interestingly, Brentari also found a modality effect: to deaf signers, more than to hearing subjects, a handshape change indicated a new word. This finding underscores the special sensitivity signers have towards handshape, which they perceive as discrete, categorical information.
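The "1 value = 1 word" strategy can be stated procedurally: scan a stream of parameter bundles and posit a word boundary whenever handshape, movement, or POA changes its value. The sketch below is a schematic rendering under that reading, not Brentari's experimental procedure; the input format is invented.

```python
def segment(stream):
    """Posit a word boundary whenever any of the three major parameters
    (handshape, movement, POA) changes its value. `stream` is a list of
    dicts, one per timing slot."""
    words, current = [], [stream[0]]
    for prev, slot in zip(stream, stream[1:]):
        if any(slot[p] != prev[p] for p in ("handshape", "movement", "poa")):
            words.append(current)  # a new word starts here
            current = []
        current.append(slot)
    words.append(current)
    return words

# Toy input: a handshape change after the second slot signals a new word.
stream = [
    {"handshape": "B", "movement": "arc", "poa": "chest"},
    {"handshape": "B", "movement": "arc", "poa": "chest"},
    {"handshape": "5", "movement": "arc", "poa": "chest"},
]
print(len(segment(stream)))  # -> 2 words
```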

8. Theoretical controversies around the "word" in sign language

After having reviewed the empirical evidence for the "word" in sign language--for the grammatical and even more for the prosodic word--I will now address some theoretical controversies related to it, some of which have already been mentioned in the preceding sections. Whenever possible, I will adduce suitable empirical evidence.

8.1. Iconicity

Words are semiotic units and are to be framed in a semiotic theory, as in Peirce's seminal theory of signs (Peirce 1931-1958). Peirce holds that the relation between a sign and its referent can be of one of the following three kinds: iconic, indexical, and symbolic (cf. Merrell 2006). In an iconic relation, the sign resembles the referent it stands for, like a triangle that stands for a mountain. In an indexical relation, the relation is a physical matter of causality, contiguity, container-contained, or part-whole, like smoke that indicates fire (Merrell 2006: 475). In a symbolic relation, the relation between the sign and the referent is mediated by convention, like the French word arbre that stands for a tree. It is easy to see that this latter relation is what corresponds to Saussure's notion of the "arbitrariness of the sign" (see Section 3).

The issue of iconicity has attracted much attention since the very beginnings of sign language research. (25) It is widely acknowledged that the visual-gestural modality lends itself more naturally to iconicity than the aural-oral modality (van der Hulst and Mills 1996; Goldin-Meadow 2003). Iconicity is naturally related to mimetics. According to Goldin-Meadow (2003), any natural language has to fulfill two quite different functions, the systemic and the imagistic/mimetic function. Signed and spoken languages can both fulfill the systemic function equally well, but spoken language cannot compete with sign language as regards the imagistic function. Therefore, in spoken language, the division of labor is straightforward: speech (mainly) takes over the systemic function, and gestures (mainly) serve the imagistic function. Only when just one channel is available, as in deafness, "... gesture must assume the full burden of communication in order to abandon its global and imagistic form and take on the linear and segmented form associated with natural language." (Goldin-Meadow 2003: 211). After this functional bifurcation, the systemic part of sign language will thus acquire more digital, discrete properties while the imagistic part may preserve its original analogue, iconic qualities. The sign language lexicon is the domain where both kinds of qualities interface: "So it seems that morphology is the most readily observable meeting place for the iconically motivated forms of sign language and linguistic structuring." (Sandler and Lillo-Martin 2006: 21). Yet Sandler and Lillo-Martin conclude that sign languages form morphologically complex words in the same way as spoken languages: "... universal principles of organization and structure that constrain spoken language morphology are active in sign languages as well, despite their iconic base." (Sandler and Lillo-Martin 2006: 21).

Taking a historical perspective, it has often been observed that iconicity is strong initially but may lose strength in the course of language change (Frishberg 1975; Klima and Bellugi 1979: 79), when other constraints related to the wellformedness of the phonological form of signs, systematicity in a paradigm, discreteness, and recursion take over. (26) In the early days of sign language research, iconicity was suspected of indicating a defective linguistic status of sign language. Therefore, researchers rather deemphasized it. After sign languages had been granted equal linguistic status, interest in iconicity was renewed (Cuxac 1999; Sallandre and Cuxac 2002; Taub 2001; Wilcox 2004, 2006). Cuxac (1999) divides the lexicon into two subparts, one composed of signs with "iconic intent", the other composed of signs without iconic intent. The former he also calls "highly iconic structures". They conserve a demonstrative "like this" dimension of the experience they render. Highly iconic structures comprise various kinds of transfer, among them transfer of size and/or form, situational transfer, and personal transfer, as in role taking.

According to Merrell (2006) and many others, iconicity precedes arbitrary signs--a topic already debated in the Platonic dialogue Cratylus (Wilcox 2006: 472) and also in line with Goldin-Meadow's, Frishberg's, and Klima and Bellugi's accounts (see above). On the one hand, this precedence relation is intuitive, since in an iconic relation the link between word and referent is externally motivated (Croft 1990, in Wilcox 2006) and thus is established more easily. In functional or cognitive linguistics, iconic relations are supposed to help in the storage, retrieval, and communication of events (for a comprehensive overview of the functionalist view on iconicity, cf. Wilcox 2006; cf. also Note 25). On the other hand, arbitrary relations between words and referents are the rule and not the exception in lexicons.

Gasser (2004) tries to explain this apparent contradiction by invoking two factors--the number of items in the lexicon and the number of dimensions along which forms and meanings can vary. He argues that iconicity is only advantageous for small lexicons, as in language acquisition, but is superseded by arbitrariness as the lexicon grows. Gasser formalizes iconicity and arbitrariness in terms of shared dimensions between form and meaning in a connectionist network simulating the competitive learning of words. Given a restricted number of those form and meaning dimensions, the majority of which have identical values in iconic form-meaning pairs, it becomes increasingly hard to distinguish between the various words. However, unambiguous access to words in the lexicon, as is easily achieved for arbitrary form-meaning pairs, is essential for both language comprehension and production. Gasser also addresses the question of why there are more iconic signed words than iconic spoken words. Within his computational model, he argues that there may be more iconic dimensions along which signed words can vary, due to the greater iconic capacity of the visual-gestural modality. Gasser further invokes two different learning mechanisms for iconic and arbitrary form-meaning relations--iconic relations are learned through associative learning and arbitrary relations through competitive categorical learning. Associative learning is simple, fast, and available from early on if there is regularity in the form-meaning relationship, as in iconic form-meaning pairs. Categorical learning is somewhat harder and slower and appears later in acquisition, but it is more effective if there is no regularity in the form-meaning relationship, as in arbitrary form-meaning pairs. Words as categories mediate between form and meaning so that a stable mapping between both spaces can be achieved. In this sense, words emerge as "local representations that result from the competitive learning of mainly arbitrary form-meaning associations." (Gasser 2004: 6). The mediating function of words in the sense of categories is reminiscent of Peirce's tripartite symbol function of the sign, where a sign is related to its object not through similarity (sign as icon) or real-world connections (sign as index) but through the meaning (in terms of categorical dimensions) that is given to it conventionally. (27)
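Gasser's argument can be made tangible with a toy numerical illustration (my own construction, not his connectionist model). Suppose meanings fall into a fixed set of semantic neighborhoods; iconic forms copy their meanings (plus noise), while arbitrary forms are drawn independently. The mean nearest-neighbor distance between forms then serves as a crude proxy for their discriminability: as the lexicon grows, iconic forms crowd together, whereas arbitrary forms remain better separated.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLUSTERS, DIMS = 10, 3  # assumed numbers, for illustration only
centers = rng.normal(size=(N_CLUSTERS, DIMS))  # semantic neighborhoods

def mean_nn_distance(n_words: int, iconic: bool) -> float:
    """Mean distance of each word form to its nearest neighbor."""
    meanings = (centers[rng.integers(N_CLUSTERS, size=n_words)]
                + 0.1 * rng.normal(size=(n_words, DIMS)))
    if iconic:
        forms = meanings + 0.05 * rng.normal(size=(n_words, DIMS))
    else:
        forms = rng.normal(size=(n_words, DIMS))
    dist = np.linalg.norm(forms[:, None] - forms[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)  # ignore self-distances
    return dist.min(axis=1).mean()

for n in (10, 50, 250):
    print(n, round(mean_nn_distance(n, iconic=True), 3),
          round(mean_nn_distance(n, iconic=False), 3))
```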

Gasser et al. (in press) pinpoint regions in the lexicon where iconic relations hold and others where arbitrary relations hold. Iconic relations hold for so-called "expressives" (also called "ideophones" or "mimetics" in spoken languages such as Japanese or Tamil), which occupy restricted regions in the lexicon. Arbitrary relations hold for categories of basic concrete nouns, which occupy larger regions in the lexicon. Crucially, for bigger categories such as "vegetables", the members (carrots, celery, broccoli, etc.) have to be readily distinguishable, which is problematic for iconic relations if they vary in only a few dimensions. In such a case, arbitrary relations are more successful for unambiguous word access. The authors conclude that humans obviously have the capacity to form both kinds of relations, through different learning mechanisms, and to make use of them flexibly, as is most appropriate for a particular subpart of the lexicon.

So far we have only looked at the role of iconicity in the semiotic function. Next, I will discuss the role of iconicity in the morphological composition of signs. In one area of sign language morphology the validity of the universal principles of organization and structure invoked by Sandler and Lillo-Martin (2006) has been challenged: classifiers.

8.2. Meaning by morphemes or by gestures: the debate on classifier predicates

Despite the overall identical conceptual-semantic space of all languages, it has been doubted whether the role of semantics in the formation of signs and the principles of morphological combination in polymorphemic signs are identical for signed and spoken languages. As indicated above, the more pronounced role of iconicity in sign language has led researchers to posit a stronger semantic boundedness of signs. (28) One group of signs in particular, so-called classifier predicates (CPs), also called classifier constructions (CCs), has attracted a great deal of attention. Classifier signs visually depict aspects of their real-world referents in their handshapes, movements, orientations, and positions. As may be recalled from the example in (1), the G-handshape represents the [+animate] agents of the predicate, the two hands the number of subjects, their orientation the facing of the two agents, and the symmetric movement the approach of the subjects. Such classifier predicates have posed a puzzle to sign language researchers: "These forms are anomalous at every level of analysis. They are iconic yet conventionalized, at once mimetic and linguistic." (Sandler and Lillo-Martin 2006: 17). This inherent ambiguity is mirrored in the various approaches that have been put forward to capture their liminal nature. (29)

8.2.1. Alternative accounts of classifier predicates. The first approach was a classical linguistic analysis (Supalla 1982, 1986): all information in such classifier predicates was thought to be represented by discrete linguistic morphemes. The overall meaning of the classifier predicate was compositionally computed from those morphemes. Consequently, a large number of morphemes had to be assumed, and signs became polysynthetic. Since there is no upper limit on the number of proposed morphemes in a predicate, the proliferation of morphemes and their semantic fuzziness cast doubt on this approach.

Alternative accounts were therefore proposed which (partly) shifted the focus from the linguistic to the gestural level (Liddell 1995, 2003; Cogill-Koez 2000a, 2000b; Schembri et al. 2005, among others). Now, some, if not all, of the information in classifier predicates was accounted for in terms of the gradient, analogue character of those "depicting" verbs, as Liddell (2003) now calls them. Although Liddell is cautious not to abandon a morphemic analysis altogether, he clearly opts for a gradient view:

Supalla's proposal was based on the idea that all meaning must come from morphemes. I suggest an approach in which some meaning comes from identifiable morphemes, some meaning is associated with the full lexical unit itself, and meaning is also constructed by means of mental space mappings motivated by the variable and gradient ways that the hand is located and oriented. (Liddell 2003:273-274)

If not through a morphemic representation, through what other kind of representation should the iconic information in classifier predicates be accounted for? Cogill-Koez (2000a, 2000b) suggests Templated Visual Representation (TVR). She claims that in sign language two fully equal channels for representing propositional information are used, the linguistic and the visual. Classifier predicates (CPs) are not linguistic but highly abstract, schematic visual representations built up, in a combinatorial fashion, from discrete parts which she calls "templates". A template is a form that occurs repeatedly, over various contexts, with the same physical realization. Some of these templates are discrete but not digital. They allow for "elastic" analogue depictions of handshapes, orientations, movements, and locations. Representing CPs by TVRs avoids the trouble a fully linguistic analysis encounters. It integrates the iconic, analogue, and gestural mode, yet it maintains the idea of a combinatorial, compositional construal of meaning. By claiming that TVRs are not language but "intriguingly language-like", Cogill-Koez underlines their liminal character. In this approach, language is no longer a homogeneous mode of mental representation or processing, as was traditionally claimed for spoken languages, but a heterogeneous, multimodal communication system. The crucial question at stake here is whether, and if so to what extent, signed and spoken languages are similar and, furthermore, how the human language faculty is to be described after acknowledging alternate pathways of meaning generation:

It is possible, of course, that ASL in particular, and signed languages more generally, are organized differently than vocally produced languages. (...) This is a highly unlikely result since the human brain with all its conceptualizing power creates and drives both signed and spoken languages. It is much more likely that spoken and signed languages both make use of multiple types of semiotic elements in the language signal, but that our understanding of what constitutes language has been much too narrow. (Liddell 2003: 362)

Liddell recognizes the overall similarity of signed and spoken languages on the grounds of a common neural organization of the brain. This noteworthy quotation points to the fact that the current controversy is largely an empirical one that will be decided on the basis of evidence from psycho- and neurolinguistics. If we know how classifier predicates are represented and processed (in the brain), we are in a better position to come up with an appropriate model not only of these constructions but also of what constitutes language in general.

8.2.2. What do we know about the processing of classifier predicates? Emmorey and Herzig (2003) explicitly address the question whether classifier predicates have categorical ("linguistic") or gradient ("analogue") properties. They obtained mixed results. Perceptually given gradient values of size are projected onto a small number of discrete values, as in the case of classifiers for object size: deaf signers use only three size morphemes--small, medium, and big--to refer to a range of objects of continuously increasing size (e.g., a medallion). Locations, however, are not represented in a digital, linguistic way but rather as nonlinguistic, analogue representations of physical space. The latter result contradicts Supalla's (1982) notion of "placement morphemes". Emmorey and Herzig's mixed results show how complex the issue really is, with linguistic and gestural components within the same category of classifier predicates.
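Emmorey and Herzig's finding for size classifiers amounts to a many-to-few mapping: a continuum of perceived sizes is quantized into three morphemic values. A minimal sketch follows, with invented cut-off points; only the three-way split itself is reported in the study.

```python
def size_classifier(diameter_cm: float) -> str:
    """Map a continuously varying object size onto one of three discrete
    size morphemes. The thresholds are invented for illustration."""
    if diameter_cm < 3:
        return "CL_small"
    if diameter_cm < 8:
        return "CL_medium"
    return "CL_big"

# A continuum of medallion sizes collapses onto just three values.
print({d: size_classifier(d) for d in (1, 2.5, 4, 7, 9, 15)})
```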

A PET study on the processing of English locative prepositions and ASL locative classifiers in hearing bilinguals with native competence in both English and ASL revealed modality-independent as well as modality-dependent aspects (Emmorey et al. 2005). Parietal cortex was bilaterally active during the processing of both prepositions and classifiers, indicating that in this area, especially in the left supramarginal gyrus, modality-independent spatial information is linguistically encoded. Right parietal cortex was more active in the processing of classifiers. This area supports the transformation of visual information in a scene into locations of the hands in signing space. Classifier handshapes are presumably processed in the left inferior temporal cortex, which is exclusively active during the processing of classifiers but not prepositions. Finally, Broca's area is not engaged in the processing of classifiers, presumably because no lexical words are accessed; rather, classifiers refer to a whole class of objects. The authors conclude that "the neural correlates of spatial language in English and American Sign Language are nonidentical and reflect linguistic and modality-specific processing requirements." (Emmorey et al. 2005: 839).

8.2.3. Classifier predicates in sign language production. In our slips of the hand corpus, classifiers (CLs) were found to be involved in errors, too (Leuninger et al. 2004). We had predicted that "Size and Shape Classifiers" (SASS classifiers) are readily available for slips, due to their status as free morphemes. In (25), the signer wants to sign the complex N-CL expression KÜHL CL_SCHRANK (literally 'cool cupboard', meaning 'refrigerator') but substitutes the classifier CL_ZIMMER 'room'.

[FIGURE 13 OMITTED]

(25) Substitution of a SASS-classifier

-- t *

PAPA, KÜHL CL_ZIMMER <headshake> KÜHL

Daddy cool room cool

CL_SCHRANK ÖFFNET.

cupboard opens.

'As for Daddy, he opens the refrigerator.'

*(-- 't' denotes a facial expression for topicalization)

We also found evidence for handle classifiers, which are bound morphemes expressed by specific handshapes and realized on transitive verbs. The handle classifier expresses the way objects belonging to its class are handled. In (26), the signer wants to sign that the bottle of milk is placed on the table. Instead, he perseverates the classifier morpheme 'CL_egg' of the noun EI 'egg', which had preceded the noun MILCH 'milk', and spells it out on the predicate denoting the placing of the 'CL_bottle' on the table. The predicate 'to put on' is expressed only as a movement path in DGS which, depending on the CL it is combined with, is translated as 'to place onto (the table)' (e.g., for 'bottle') or 'to lay on' (e.g., for 'egg').

(26) Perseveration of a handle classifier

SCHÜSSEL, EI-AUFSCHLAGEN, MILCH

bowl to crack an egg milk

VERB-CL_EI // VERB-CL_FLASCHE

verb-CL_EGG // verb-CL_BOTTLE

'(The woman) cracks an egg on the rim of the bowl and places the milk bottle on the table.'

The slip in (26) is in accordance with the findings of Emmorey and Herzig (2003) in that a classifier handshape behaves like a discrete morpheme that can be substituted by another classifier handshape. Last, we also found errors where the loci of two signs were exchanged (see Leuninger et al. 2004).

[FIGURE 14 OMITTED]

These findings from sign language production yield evidence for a morphemic interpretation of SASS and handle classifiers (handshape) as well as of (some) locative morphemes. The classifiers behave like morphemes in that they form a closed class of elements which stand in a systematic paradigmatic relation to each other and which compete during language production. If an incorrect member of the class is activated more strongly than the intended one, a slip occurs. This process is the very same as in spoken language morpheme errors. It would be rather odd to postulate two different processes--one linguistic for spoken language, one templatic or gestural for sign language--which nevertheless yield the same result. The most parsimonious conclusion from these observed classifier errors is that the processor for signed and spoken language works modality-independently.

8.2.4. Classifier constructions as different from ordinary lexical items. Sandler and Lillo-Martin (2006: 271-272) have argued for a separation of classifier predicates from ordinary words on grammatical grounds. While ordinary words belong to the "lexical" level, classifier predicates belong to the "nonlexical" level. They provide evidence that classifiers do not adhere to phonological constraints on regular words. Thus, they may violate the Dominance and Symmetry conditions on two-handed signs by realizing two different handshapes on the two hands, or they may show otherwise illicit movement patterns. While the authors do allow for a morphemic analysis of productive classifier predicates, they do not grant them full linguistic status. Classifiers function rather in the service of creative language, metaphor, storytelling, and poetry. However, Sandler and Lillo-Martin (2006) find a beneficial contrasting role in this very anomaly, as opposed to ordinary words:

The existence of the word is a necessary condition for any language, and its identification is essential for acquisition and for processing. In these languages that also have a nonlexical component, the formal constraints on words, and by extension the clear distinction between words and nonwords, provide a significant advantage for acquisition and processing of the words and sentences of sign languages. (Sandler and Lillo-Martin 2006: 272)

This difference, however, may be blurred by the fact that productive classifier constructions tend to lexicalize. As a result of this "drift towards lexicalization" (Sandler and Lillo-Martin 2006: 99), they become frozen and their original morphemic componential structure becomes opaque (for an example, see Section 6.5). Cogill-Koez (2000b) also notes "freezing" and "melting" of classifier predicates along the vertical axis that separates the linguistic "frozen sign system" (the lexicon) from the "Templatic Visual Representational system". In the case of "melting", a hitherto frozen classifier predicate is reanalyzed into its original components, which may become malleable according to an individual signer's current communicative purposes. (30) As a consequence of recurrent "melting" and "freezing", "dual listing" of the individual components along with the frozen form ensues (Sandler and Lillo-Martin 2006: 102; Cogill-Koez 2000b). Signers obviously have no problem switching levels smoothly during discourse. The same holds in language acquisition: the deaf child is obviously able to acquire both uses of signs, as frozen forms and as productive classifiers. (31)

This scenario is compatible with a flexible conception of linguistic items as proposed by Jackendoff (1997). In his conception, a full-blown lexical item is a triple <PS, SS, CS> of a piece of phonetic structure (PS), syntactic structure (SS), and conceptual structure (CS) (compare Bierwisch's [1999] layout in Section 3). However, there are also "defective" lexical items such as "hello" which clearly have a phonological form, PS, and transport some meaning, CS, but which lack SS; hence, their lexical structure would be <PS, Ø, CS>. (32) This lack of syntactic structure prevents them from being inserted into a syntactic derivation which would properly mediate the relation between form and meaning. How, then, could the PS of "hello" possibly map onto its CS? In Jackendoff's lexical licensing theory, a direct mapping between the phonology of a word and its meaning, bypassing syntax, is possible since all interface conditions of the Phonetic Interface Level (PIL) and the Conceptual Interface Level (CIL) can be applied at once (Jackendoff 1997: 94f). Jackendoff's idea of a direct PIL-CIL mapping is also able to account for even more important topics such as language acquisition: while infants initially construe a form-meaning relationship without syntax, i.e., they only have a lexicon with items that have a <PS, Ø, CS> structure, they gradually grow into syntax and add SS to their lexical items. If we apply this line of reasoning to classifier constructions, they may also circumvent syntax and map directly from their PS to their CS. The mapping between PIL and CIL would be motivated by iconicity rather than being established through formal principles. Importantly, Jackendoff does not see any problem conceiving of those defective lexical items as "language": "They are indeed outside the combinatorial system of (narrow) syntax, but not outside language" (Jackendoff 1997: 94). With "narrow syntax" he means the syntactic calculus proper which is at the core of the Human Language Faculty (HLF), in the sense of Hauser et al. (2002). Language, however, is much more than just narrow syntax. It can, in Jackendoff's conception, also accommodate other elements and other relations between those elements. Classifier predicates may be another category in this broader area of language and could be treated under a more flexible account of lexical licensing.
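Jackendoff's triple can be rendered schematically as a record with an optional syntactic slot: items lacking SS are barred from syntactic derivation but can still pair PS with CS directly. The representation below is my own shorthand, not Jackendoff's formalism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LexicalItem:
    ps: str            # phonetic structure
    ss: Optional[str]  # syntactic structure; None for "defective" items
    cs: str            # conceptual structure

def can_enter_syntax(item: LexicalItem) -> bool:
    """Only items with an SS can be inserted into a (narrow) syntactic
    derivation; the others map PS directly onto CS."""
    return item.ss is not None

tree = LexicalItem(ps="/tri:/", ss="N", cs="TREE")         # <PS, SS, CS>
hello = LexicalItem(ps="/helou/", ss=None, cs="GREETING")  # <PS, Ø, CS>

print(can_enter_syntax(tree), can_enter_syntax(hello))  # True False
```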

8.2.5. Classifier constructions as similar to lexical items: evidence from handshape features and prosody. As classifier constructions have complex morphosyntax and semantics, a question arises as to what prosodic domain they have--a more wordlike (prosodic word) or a more phraselike one (intonational phrase)? In a crosslinguistic study on ASL, Hong Kong Sign Language (HKSL) and Swiss German Sign Language (Deutsch-Schweizer Gebardensprache, DSGS), Eccarius and Brentari (2007) compared two-handed type 3 (33) classifier constructions (CCs) to two-handed type 3 lexical items (simple signs where handshape and movement do not carry morphological information) in terms of (i) complexity of phonological features and (ii) prosodic domain. One argument for the supposed "abnormality" of CCs had been that they violated Battison's (1978) Dominance and Symmetry condition while lexical items adhered to them (Aronoff et al. 2003; Sandler and Lillo-Martin 2006; see 8.2.4). However, upon reanalyzing the phonological forms of those type 3 CCs in terms of the phonological features "selected fingers" and "joint" specification, instead of using Battison's original handshape analysis, the authors could show that in the overwhelming majority of cases, the CCs adhered to the Dominance and Symmetry condition. Moreover, single lexical items as well as CCs regulate complexity within the sign in identical ways: "Maximize symmetry and restrict complexity in the handshape features of the two hands." (Eccarius and Brentari 2007: 1198). Given these empirical results it seems that CCs are phonologically much better behaved than previously thought.

More relevant to the present topic is the timing behavior of those two-handed type 3 CCs. The authors examined whether they showed the timing characteristics of prosodic words or of larger prosodic units such as intonational phrases. They classified CCs according to four phonological properties that are relevant for prosodic units, namely (i) the phonological handshape of the two hands, in particular whether the nondominant hand H2 had an unmarked handshape, (ii) simultaneous vs. sequential articulation of the two hands, (iii) the type of movement of H2 (existential vs. contact), and (iv) the presence of an eyeblink. Two-handed signs in which (i) H2 has an unmarked handshape, (ii) H1 and H2 are articulated simultaneously, (iii) H2 shows an existential movement, and (iv) the H2 and H1 movements are not interrupted by an eyeblink were considered prosodic words. CCs not meeting these criteria, especially those that showed an eyeblink between the articulation of H2 and H1, were considered intonational phrases. The authors found that across all three sign languages, 60% of all CCs fell into the former category (group A), that is, they behaved like prosodic words, comparable to simple lexical items. Of the 40% of CCs that did not behave like prosodic words, one group (group B, which accounted for 25%) was in complementary distribution to the first one. In particular, H2 could have any handshape and was signed first; it would stay in signing space while H1 would come in and act on H2 in some way. An optional but reliable eyeblink would separate the two articulations from each other. In terms of the four above-mentioned criteria, H2 could (i) have a marked handshape, (ii) the two hands could be articulated sequentially, with H2 coming in first, followed by H1, (iii) H2 could move independently of H1, and (iv) the articulation of the two hands could be punctuated by an optional but reliable eyeblink. Thus, CCs fell into two broad groups with respect to their prosodic behavior, one that behaved like prosodic words and one that behaved like intonational phrases. Besides the four phonological criteria, there was a fifth criterion that captured the morphosyntactic type of the classifier on H1 and H2, namely (v) whether it was a whole entity classifier (as in intransitive structures with one argument) or a handling classifier (as in transitive structures with two arguments). With this criterion, the authors were able to address the phonology-syntax interface in CCs. It turned out that this morphosyntactic criterion, too, distinguished between the two groups, A and B, in that in group B the nondominant H2 could be a handling classifier (thus implying a transitive structure, such as a verbal phrase, VP), whereas in group A it had to be a whole entity classifier.
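The four phonological criteria, plus the morphosyntactic fifth, can be read as a decision procedure. The sketch below encodes them as boolean checks; the attribute names are my own shorthand for the properties listed above, not the authors' coding scheme.

```python
from dataclasses import dataclass

@dataclass
class CC:
    h2_unmarked_handshape: bool    # criterion (i)
    simultaneous: bool             # criterion (ii)
    h2_existential_movement: bool  # criterion (iii)
    eyeblink_between_hands: bool   # criterion (iv)
    h2_handling_classifier: bool   # criterion (v), morphosyntactic

def prosodic_domain(cc: CC) -> str:
    """Group A (prosodic word) iff all four phonological criteria are met;
    otherwise the CC patterns with intonational phrases (group B).
    Criterion (v) correlates with the split: a handling classifier on H2
    implies a transitive structure and hence the larger domain."""
    if (cc.h2_unmarked_handshape and cc.simultaneous
            and cc.h2_existential_movement
            and not cc.eyeblink_between_hands):
        return "prosodic word (group A)"
    return "intonational phrase (group B)"

print(prosodic_domain(CC(True, True, True, False, False)))   # group A
print(prosodic_domain(CC(False, False, False, True, True)))  # group B
```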

One of the most striking results of Eccarius and Brentari's study is the way sign languages regulate grammatical complexity within and across particular prosodic domains, here the prosodic word and the intonational phrase. Whereas in two-handed lexical signs the nondominant hand H2 has no morphological status but is rather a dependent unit (having no independent movement and being confined to an unmarked handshape), H2 in two-handed type 3 CCs can overcome these phonological constraints and have an independent movement or a marked handshape, both leading to branching of the phonological feature tree. As long as H2 does not exceed the limits of a single word phonologically, it may be realized with word-like prosody, that is, simultaneously with H1 and without an eyeblink. While for phonological reasons a two-handed type 3 word with a handling classifier on H2 may well be a prosodic word, for morphosyntactic reasons H2 is confined to being a (simple) whole entity classifier (in an intransitive structure) if it is to act as an independent morpheme. If H2 is a handling classifier (as in a transitive structure), sequential articulation is automatically triggered and the prosodic domain is enlarged to an intonational phrase. This makes sense since a transitive structure is a syntactic domain (e.g., a VP). (34) Morphosyntax thus takes precedence over phonology and imposes a restriction on the nondominant hand that prohibits the trade-off in phonological complexity between the two hands. Instead, it enforces an enlargement of the prosodic domain which would otherwise have been unnecessary, as can be seen from two-handed type 3 lexical signs that can still be prosodic words and from two-handed type 3 CCs that have a whole entity classifier on H2. This is the exact parting line between the lexicon and syntax. As we can see, CCs cut across this divide, and morphosyntax (argument structure) decides on which side they fall. A different prosodic structure is assigned accordingly.

8.3. Sign language prosody: an embodied modality effect?

This survey has shown that in sign language the prosodic domain of the word is quite strictly constrained by PW_D = 1 ≤ 2σ. Moreover, due to the tendency or conspiracy of monosyllabicity, signs typically consist of only a single syllable. Why preferably a single syllable, and why maximally two syllables? As pointed out above, PW_D = 1 ≤ 2σ is an output constraint. Whereas Optimality Theory usually does not explain the motivations for such constraints, the notion of "user optimality" has been invoked by Haspelmath (1999) in an attempt to explain phonological constraints in terms of what is optimal for the producer of the language. If we adopt this line of reasoning, we may wonder whether the monosyllable is the optimal output form in the gestural modality and whether two syllables are the upper boundary of what can still be construed as a convenient prosodic unit of medium size. However, MacNeilage (1998) has claimed that there is no single oscillator in sign language as there is in spoken language, namely the mandibular oscillator, which creates the prosodic frame into which the segmental content is inserted. In sign language there are various oscillators related to the various articulators: the hands, the forearm, the upper arm, the torso, and the head. Obviously, there is no single bodily bound oscillator in which the sign language's prosodic frame is grounded and which might account for the monosyllable as the optimal prosodic output unit. However, those various oscillators are far from being uncoordinated in sign language production. On the contrary, there is a precise synchronization between them through smart coupling processes (Wilson 1998). The monosyllable might then be the optimal prosodic unit emerging from the dynamic interplay of the various interrelated oscillators. MacNeilage could assume a stable oscillator for spoken language and infer from it the prosodic domains that are used for spoken language, most importantly the syllable. For sign language, we might go the other way round and infer from the stable output pattern, the recurring monosyllabic pattern, a compound but nevertheless stable oscillator underlying it. As is typical of sign language, there is not only a layered, tiered representation of signs but also a tiered oscillator producing rhythmic, prosodic units onto which grammatical words are released like boats floating on a wavy current, carrying the meaning of the signs. The sign language as well as the spoken language oscillators are embodied, since the articulators that produce them are given to us and have their own motor dynamics, in the sense of Kelso (1995). They differ in their timing behavior and in the modality they use. What is the same, however, is the supervenience of language over these embodied articulators. Language rides those rhythms. It may even be the case that--in the absence of any single major oscillator--the influence of language on the bodily articulators in sign language is much greater than in spoken language and works much more top-down than bottom-up. Cognitive constraints such as working memory (Baddeley 2003) and spatial and visual cognitive capacities, rather than motor constraints, might restrict the prosodic domain of words. Irrespective of how the relation between language and articulation is construed, there is no reason to believe that these interface processes are somehow "easier" or "more direct", as suggested by gestural accounts.

8.4. Representation and processing

In this paper, I have discussed the "word" in sign language as a content unit (the grammatical word) and as a structural unit (the prosodic or phonological word). Grammatical words have entries in the mental lexicon and are specified for lexical, grammatical, and form information. In language production and perception, they are the content units that are accessed in the mental lexicon (for a language production model, see Levelt et al. 1999; for a language comprehension model, see Cutler and Clifton Jr. 1999). Prosodic words are processed online during language perception and production; in production, they are created "on the fly" (Levelt et al. 1999; see Section 7.3). As we have seen in Section 6, during language change, sign combinations that initially violate constraints on the prosodic word come, over time, to obey them, as in compound (6.4) or classifier formation (6.5). This can sometimes happen very fast, as with the latter. Those prosodic words have lexicalized and received stable entries in the mental lexicon; hence, they have become content units. In this respect, one can readily speak of "lexicon optimization" in the sense of Optimality Theory. A hitherto fluid form crystallizes. Cogill-Koez (2000b) uses similar metaphors, namely "melting" and "freezing". The relation between fluid and crystallized forms also demonstrates the relation between processing and representation. Representations follow from processing; they are the same substance, only in a different aggregate state. Such a view is more in the spirit of dynamical approaches to language, which emphasize the process over the structure (van Gelder 1998; Elman 1990; among many others). Data from online processing show the interaction between lexical representations and processing constraints in vivo. The prosodic unit of the phonological word acts as a processing (output) constraint on content units that are to be produced online. I have presented ample evidence for this process, mainly from slips of the hand and monitoring in DGS (Section 7). It is through such data that the spontaneous emergence of word units under the prosodic constraints can be demonstrated most convincingly. Furthermore, in sign languages we witness the situation that in parts of the lexicon there are more fluid forms than in the spoken language lexicon, due to the higher degree of iconicity of signs. Therefore, a process perspective on sign language is most warranted.

9. Conclusion

This study set out to investigate the linguistic unit "word" in sign language, especially under a processing account. From a semiotic point of view, the duality of patterning holds for signed and spoken words alike, despite obvious differences in iconicity, in the degrees of analogue and digital content, and in the dimension of processing. Controversial constructions such as classifier constructions, which may have gestural aspects besides linguistic ones and which may exceed the word unit by conveying phrasal content, are current topics in the discussion of whether signed and spoken languages can be characterized by a common linguistic theory. After having gone through phases of purely linguistic and mainly gestural interpretations, contemporary sign language linguistics accepts hybrid accounts which are supported by psycholinguistic or neuroscientific evidence (Emmorey and Herzig 2003; Emmorey et al. 2005).

Further evidence for the psychological reality of the grammatical as well as the prosodic word in sign language was provided by slips of the hand and their repairs. Words are the most common error units in spontaneous production errors. The results from the comparison of slips of the hand and tongue with respect to the word provide strong evidence for a universalist account which embraces all natural languages regardless of modality (Chomsky 1995; Lillo-Martin 1986; Crain and Lillo-Martin 1999; Sandler and Lillo-Martin 2006; Leuninger et al. 2004). From such an account follows a line of research which seeks similarity on the representational and processing level behind modality-specific particulars. It is natural, on such an account, to draw parallels between central phonological concepts, such as the vowel/consonant contrast or syllable structure, on the representational side, and to assume the same basic language processes in perception and production for both modalities. At the same time, modality-specific differences are acknowledged as necessitating adaptations to the primary channel of processing--aural-oral for spoken language and visual-gestural for sign language. Those differences are most pronounced with respect to the information flow, the sequencing, and the dimension of processing. This has profound implications for the complexity of words with respect to morphology, phonology, and prosody. In obeying the specific modality-dependent interface constraints on their size and complexity, words in sign language turn out to carry more information vertically, distributed over various articulators simultaneously, than words in spoken language. In the same vein, sign language morphology and syntax are less concatenative and more fusional/simultaneous. There is ample evidence for restrictions on the complexity of syllables and the prosodic word in sign language: each syllable may contain at most two x-slots, and each prosodic word may contain at most two syllables. Evidence was reviewed for the reality of the PW_D = 1 ≤ 2σ constraint from various cases of exceeding or falling below these limits, from regular phonological and morphological processes, lexicalization, and processing.

The convergent results establish the notion of the prosodic word in sign language as a central psycholinguistic unit in processing, in perception (Brentari 2006) as well as in production. The greater homogeneity of sign languages with respect to processing and morphological type, as compared to the greater heterogeneity of spoken languages, hints at important differences in the pervasiveness of interface constraints within the two modalities. The pressure on sign languages to convey linguistic information in real time favors a nonconcatenative, more fusional/simultaneous morphology to a much stronger degree than in spoken languages. It is well in the spirit of the minimalist program (Chomsky 1995) to consider the form of the grammar as an emergent result of the interface constraints, with modality being an important parameter. Nevertheless, universal conditions such as lexical foundation, discrete representation, and recursive combination (Bierwisch 1999) hold equally for all natural languages. In this sense, sign language research yields crucial evidence for the view of Universal Grammar not as providing a set of substantive primitives but as providing a set of formal constraints which derives such primitives by interacting with the interface domains of the conceptual-intentional system on the one hand and the articulatory-perceptual system on the other (Bierwisch 2001; Hohenberger 2007). Besides this general claim, sign languages also provide evidence that those universal wellformedness conditions may not pertain uniformly to all phonological parameters of a sign. While handshape is discrete, binary, hierarchically organized, and categorically perceived, movement and place of articulation may not be so to the same extent (Brentari 2006). Likewise, the greater preponderance of iconicity in certain parts of the sign language lexicon influences the processing of those signs and word formation processes such as classifier constructions. It must be stressed, however, that a wide notion of language in the sense of Jackendoff (1997) embraces these modality effects. Under a more restricted notion of language in the sense of "narrow syntax", they can be accounted for interactively, namely in terms of interface conditions imposing modality-specific constraints of the perceptual and articulatory systems on the representational language system. Interface conditions are not generally modality-specific, though. The conceptual-intentional system is not affected by modality; hence, speakers and signers can entertain the same concepts and have the same beliefs, desires, doubts, etc. "about" those concepts. Where language does have an impact on thought, in the sense of Slobin's "thinking for speaking" (1987, 2003), it affects spoken and signed languages alike.

Middle East Technical University, Ankara

Received 2 December 2004
Revised version received 3 November 2007

References

Aitchison, Jean (2003). Words in the Mind: An Introduction to the Mental Lexicon. Malden, MA: Blackwell.

Akinlabi, Akinbiyi (1996). Featural affixation. Journal of Linguistics 32, 239-289.

Aronoff, Mark; Meir, Irit; Padden, Carol; and Sandler, Wendy (2003). Classifier constructions and morphology in two sign languages. In Perspectives on Classifier Constructions in Sign Languages, Karen Emmorey (ed.), 53-84. Mahwah, NJ: Lawrence Erlbaum.

Baddeley, Alan D. (2003). Looking back and looking forward. Nature Reviews Neuroscience 4, 829-839.

Baker, Anne; van den Bogaerde, Beppie; and Crasborn, Onno (eds.) (2003). Cross-Linguistic Perspectives in Sign Language Research: Selected Papers from TISLR 2000. Hamburg: Signum Press.

Battison, Robbin M. (2003 [1978]). Lexical Borrowing in American Sign Language. Silver Spring, MD: Linstok Press.

Berg, Thomas (2003). Die Analyse von Versprechern. In Enzyklopadie der Psychologie: Themenbereich C. Theorie und Forschung: Serie III: Sprache: Band 1: Sprachproduktion, Theo Herrmann and Joachim Grabowski (eds.), 247-264. Gottingen: Hogrefe.

Bierwisch, Manfred (1999). Words as programs of mental computation. In Learning. Rule Extraction and Representation, Angela D. Friederici and Randolf Menzel (eds.), 3-35. Berlin and New York: Walter de Gruyter.

--(2001). Repertoires of primitive elements: prerequisite or result of acquisition? In Approaches to Bootstrapping: Phonological, Lexical, Syntactic and Neurophysiological Aspects of Early Language Acquisition 2, Jurgen Weissenborn and Barbara Hohle (eds.), 281-307. Amsterdam and Philadelphia: Benjamins.

Boyes Braem, Penny and Sutton-Spence, Rachel (eds.) (2001). The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Language. Hamburg: Signum Press.

Brennan, Mary (1990). Word formation in British Sign Language. Unpublished doctoral dissertation, University of Stockholm.

Brentari, Diane (1995). Sign language phonology: ASL. In A Handbook of Phonological Theory, John Goldsmith (ed.), 615-639. Cambridge, MA: Basil Blackwell.

--(1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.

--(2002). Modality differences in sign language phonology and morphophonemics. In Modality and Structure in Signed and Spoken Languages, Richard P. Meier, Kearsy Cormier, and David Quinto-Pozos (eds.), 35-64. Cambridge: Cambridge University Press.

--(2006). Effects of language modality on word segmentation: an experimental study of phonological factors in a sign language. In Papers in Laboratory Phonology VIII, Louis Goldstein, Douglas H. Whalen, and Catherine T. Best (eds.), 155-164. Berlin and New York: Mouton de Gruyter.

Chomsky, Noam (1995). The Minimalist Program. Cambridge, MA: MIT Press.

Cogill-Koez, Dorothea (2000a). Signed language classifier predicates: linguistic structures or schematic visual representation? Sign Language and Linguistics 3 (2), 153-207.

--(2000b). A model of signed language 'classifier predicates' as templated visual representation. Sign Language and Linguistics 3 (2), 209-236.

Cormier, Kearsy A. (2004). Grammaticization of indexic signs: how American Sign Language expresses numerosity. Unpublished doctoral dissertation, University of Texas at Austin.

Crain, Steven and Lillo-Martin, Diane (1999). An Introduction to Linguistic Theory and Language Acquisition. Malden, MA: Blackwell.

Croft, William (1990). Typology and Universals. Cambridge: Cambridge University Press.

Cutler, Anne and Clifton, Charles Jr. (1999). Comprehending spoken language: a blueprint of the listener. In The Neurocognition of Language, Colin M. Brown and Peter Hagoort (eds.), 123-165. Oxford: Oxford University Press.

Cuxac, Christian (1999). French Sign Language: proposition of a structural explanation by iconicity. In Gesture-Based Communication in Human-Computer Interaction: International Gesture Workshop, LNAI 1739, Annelies Braffort, James Richardson, Rachid Gherbi, Sylvie Gibet, and Daniel Teil (eds.), 165-184. Berlin and New York: Springer.

Dixon, Robert M. W. and Aikhenvald, Alexandra Y. (2003). Word: a typological framework. In Word: A Cross-Linguistic Typology, Robert M. W. Dixon and Alexandra Y. Aikhenvald (eds.), 1-41. Cambridge: Cambridge University Press.

Eccarius, Petra and Brentari, Diane (2007). Symmetry and dominance: a cross-linguistic study of signs and classifier constructions. Lingua 117, 1169-1201.

Elman, Jeffrey L. (1990). Finding structure in time. Cognitive Science 14, 179-211.

Emmorey, Karen (2002). Language, Cognition, and the Brain: Insights from Sign Language Research. Mahwah, NJ: Lawrence Erlbaum Associates.

--(ed.) (2003). Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Lawrence Erlbaum.

Emmorey, Karen and Herzig, Melissa (2003). Categorical versus gradient properties of classifier constructions in ASL. In Perspectives on Classifier Constructions in Sign Languages, Karen Emmorey (ed.), 222-246. Mahwah, NJ: Lawrence Erlbaum.

Emmorey, Karen; Grabowski, Thomas; McCullough, Stephen; Ponto, Laura L. B.; Hichwa, Richard D.; and Damasio, Hanna (2005). The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. NeuroImage 24, 832-840.

Fodor, Jerry A. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.

--(1981). The present status of the innateness controversy. In Representations: Philosophical Essays on the Foundations of Cognitive Science, 257-316. Cambridge, MA: MIT Press.

Frishberg, Nancy (1975). Arbitrariness and iconicity. Language 51, 696-719.

Gasser, Michael (2004). The origins of arbitrariness in language. In Proceedings of the Annual Conference of the Cognitive Science Society, 26, August 4-7, 2004, Chicago, IL, Kenneth Forbus, Dedre Gentner, and Terry Regier (eds.), 434-439. Cognitive Science Society.

Gasser, Michael; Sethuraman, Nitya; and Hockema, Stephen (in press). Iconicity in expressives: an empirical investigation. In Empirical and Experimental Methods in Cognitive/Functional Research, John Newman and Sally Rice (eds.). Conceptual Structure, Discourse, and Language 7. Stanford: CSLI.

Gee, James P., and Goodhart, Wendy (1988). American Sign Language and the human biological capacity for language. In Language Learning and Deafness, Michael Strong (ed.), 49-74. Cambridge: Cambridge University Press.

van Gelder, Timothy (1990). Compositionality: A connectionist variation on a classical theme. Cognitive Science 14, 355-384.

--(1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences 21, 615-665.

Givon, Talmy (1989). Mind, Code and Context: Essays in Pragmatics. Hillsdale, NJ: Lawrence Erlbaum.

Goldin-Meadow, Susan (2003). The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York and Hove: Psychology Press.

Haiman, John (ed.) (1985). Iconicity in Syntax. Amsterdam: Benjamins.

Happ, Daniela and Hohenberger, Annette (2001). DFG-Projekt: Vergebardler: phonologische und morphologische Aspekte der Sprachproduktion in Deutscher Gebardensprache (DGS). In Gebardensprachlinguistik 2000--Theorie und Anwendung, Helen Leuninger and Karin Wempe (eds.), 217-240. Internationale Arbeiten zur Gebardensprache und Kommunikation Gehorloser 37. Hamburg: Signum Press.

Haspelmath, Martin (1999). Optimality and diachronic adaptation. Zeitschrift fur Sprachwissenschaft 18, 180-205.

--(2006). Frequency vs. iconicity in explaining grammatical asymmetries. Unpublished manuscript, Max Planck Institute for Evolutionary Anthropology, Leipzig.

Hauser, Marc D.; Chomsky, Noam; and Fitch, W. Tecumseh (2002). The faculty of language: what is it, who has it, and how did it evolve? Science 298, 1569-1579.

Hildebrandt, Ursula and Corina, David (2002). Phonological similarity in American Sign Language. Language and Cognitive Processes 17, 593-612.

Hohenberger, Annette (2007). The possible range of variation between sign languages: UG, modality, and typological aspects. In Visible Variation: Comparative Studies on Sign Language Structure, Pamela Perniss, Roland Pfau, and Markus Steinbach (eds.), 341-383. Berlin: Mouton de Gruyter.

Hohenberger, Annette and Happ, Daniela (2001). The linguistic primacy of signs and mouth gestures over mouthing: Evidence from language production in German Sign Language (DGS). In The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Language, Penny Boyes Braem and Rachel Sutton-Spence (eds.), 153-189. Hamburg: Signum.

Hohenberger, Annette; Happ, Daniela; and Leuninger, Helen (2002). Modality-dependent aspects of sign language production: evidence from slips of the hands and their repairs in German Sign Language (Deutsche Gebardensprache (DGS)). In Modality and Structure in Signed and Spoken Languages, Richard P. Meier, Kearsy Cormier, and David Quinto-Pozos (eds.), 112-142. Cambridge: Cambridge University Press.

Hohenberger, Annette and Waleschkowski, Eva (2005). Language production errors as evidence for language production processes: the Frankfurt corpora. In Linguistic Evidence: Empirical, Theoretical and Computational Perspectives, Stephan Kepser and Marga Reis (eds.), 285-305. Berlin: Mouton de Gruyter.

van der Hulst, Harry (1996). On the other hand. Lingua 98, 121-143.

van der Hulst, Harry and Mills, Anne (1996). Issues in sign linguistics: Phonetics, phonology and morpho-syntax. Lingua 98, 3-17.

Jackendoff, Ray S. (1997). The Architecture of the Language Faculty. Linguistic Inquiry Monographs 28. Cambridge, MA: MIT Press.

Keller, Jorg; Hohenberger, Annette; and Leuninger, Helen (2003). Sign language production: slips of the hand and their repairs in German Sign Language. In Cross-Linguistic Perspectives in Sign Language Research: Selected Papers from TISLR 2000, Anne Baker, Beppie van den Bogaerde, and Onno Crasborn (eds.), 307-333. Hamburg: Signum.

Kelso, Scott (1995). Dynamic Patterns: The Self-Organization of Brain and Behavior. Cambridge, MA: MIT Press.

Klima, Edward S. and Bellugi, Ursula (1979). The Signs of Language. Cambridge, MA: Harvard University Press.

Langacker, Ronald W. (1978). Foundations of Cognitive Grammar: Volume 1: Theoretical Foundations. Stanford: Stanford University Press.

Laurence, Stephen and Margolis, Eric (2002). Radical concept nativism. Cognition 86, 25-55.

Leuninger, Helen (2001). Das Projekt RELEX: ein okumenisches Lexikon religioser Gebarden. In Gebardensprachlinguistik 2000--Theorie und Anwendung, Helen Leuninger and Karin Wempe (eds.), 171-192. Internationale Arbeiten zur Gebardensprache und Kommunikation Gehorloser 37. Hamburg: Signum.

Leuninger, Helen; Hohenberger, Annette; Waleschkowski, Eva; Menges, Elke; and Happ, Daniela (2004). The impact of modality on language production: evidence from slips of the tongue and hand. In Interdisciplinary Approaches to Language Production, Thomas Pechmann and Christopher Habel (eds.), 219-277. Berlin and New York: Mouton de Gruyter.

Leuninger, Helen; Hohenberger, Annette; and Menges, Elke (2005). Zur Verarbeitung morphologischer Informationen in der Deutschen Gebardensprache (DGS). Linguistische Berichte 13, 325-358.

Leuninger, Helen; Hohenberger, Annette; and Waleschkowski, Eva (2007). Sign language: typology vs. modality. MIT Working Papers in Linguistics 53, 317-345.

Levelt, Willem J. M. (1989). Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.

Levelt, Willem J. M.; Roelofs, Ardi; and Meyer, Antje S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences 22, 1-45.

Liddell, Scott K. (1993). Holds and positions: comparing two models of segmentation in ASL. In Phonetics and Phonology, Vol. 3: Current Issues in ASL Phonology, Geoffrey R. Coulter (ed.), 189-211. San Diego, CA: Academic Press.

--(1995). Real, surrogate, and token space: grammatical consequences in ASL. In Language, Gesture, and Space, Karen Emmorey and Judy S. Reilly (eds.), 19-41. Hillsdale, NJ: Lawrence Erlbaum.

--(2003). Grammar, Gesture and Meaning in American Sign Language. Cambridge: Cambridge University Press.

Liddell, Scott K. and Johnson, Robert E. (1986). American Sign Language compound formation processes, lexicalization, and phonological remnants. Natural Language and Linguistic Theory 4, 445-513.

Lillo-Martin, Diane (1986). Two kinds of null arguments in American Sign Language. Natural Language and Linguistic Theory 4, 415-444.

MacNeilage, Peter F. (1998). The frame/content theory of evolution of speech production. Behavioral and Brain Sciences 21, 499-546.

McNeill, David (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.

Merrell, Floyd (2006). Iconicity: Theory. In Encyclopedia of Language and Linguistics, 2nd edition, Keith Brown (ed.), 475-482. Amsterdam: Elsevier.

Nespor, Marina and Vogel, Irene (1986). Prosodic Phonology. Dordrecht: Foris.

Newkirk, Don; Klima, Edward S.; Pedersen, Carlene C.; and Bellugi, Ursula (1980). Linguistic evidence from slips of the hand. In Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand, Victoria A. Fromkin (ed.), 165-198. New York: Academic Press.

Newmeyer, Frederick J. (1992). Iconicity and generative grammar. Language 68, 756-796.

Padden, Carol (1998). The ASL lexicon. Sign Language and Linguistics 1, 39-64.

Peirce, Charles S. (1931-1958). Collected Papers, 8 vols. Charles Hartshorne and Paul Weiss (Vols. 1-6) (eds.) and Arthur W. Burks (Vols. 7-8) (ed.). Cambridge, MA: Harvard University Press.

Perlmutter, David (1992). Sonority and syllable structure in American Sign Language. Linguistic Inquiry 23, 407-442.

Perniss, Pamela; Pfau, Roland; and Steinbach, Markus (eds.) (2007). Visible Variation: Comparative Studies on Sign Language Structure. Berlin: Mouton de Gruyter.

Pfau, Roland (1997). Zur phonologischen Komponente der Deutschen Gebardensprache: Segmente und Silben. Frankfurter Linguistische Forschungen FLF 20, 1-29.

Poulisse, Nanda (1999). Slips of the Tongue: Speech Errors in First and Second Language Production. Amsterdam: John Benjamins.

Sallandre, Marie-Anne and Cuxac, Christian (2002). Iconicity in sign language: a theoretical and methodological point of view. In Gesture and Sign Languages in Human-Computer Interaction, Imke Wachsmuth and Timo Sowa (eds.), 173-180. Lecture notes in computer science 2298. Berlin and New York: Springer.

Sandler, Wendy (1989). Phonological Representation of the Sign. Dordrecht: Foris.

--(1999). Cliticization and prosodic words in a sign language. In Studies on the Phonological Word, Tracy A. Hall and Ursula Kleinhenz (eds.), 223-254. Amsterdam: Benjamins.

--(2000). The medium and the message: prosodic interpretation of linguistic content in Israeli Sign Language. Sign Language and Linguistics 2, 187-215.

--(2005). Prosodic constituency and intonation in a sign language. Linguistische Berichte 13, 59-86.

--(2006). Phonology, phonetics, and the nondominant hand. In Papers in Laboratory Phonology 8: Varieties of Phonological Competence, Louis Goldstein, Douglas H. Whalen, and Catherine T. Best (eds.), 185-212. Berlin and New York: Mouton de Gruyter.

Sandler, Wendy and Lillo-Martin, Diane (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.

Sapir, Edward (1921). Language: An Introduction to the Study of Speech. New York: Harcourt Brace.

Saussure, Ferdinand de (1986 [1916]). Cours de linguistique generale. Paris: Payot.

Schembri, Adam; Jones, Caroline; and Burnham, Denis (2005). Comparing action gestures and classifier verbs of motion: evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners' gestures without speech. Journal of Deaf Studies and Deaf Education 10, 272-290.

Slobin, Dan I. (1977). Language change in childhood and in history. In Language Learning and Thought, J. MacNamara (ed.), 185-214. New York: Academic Press.

--(1987). Thinking for speaking. Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society, Jon Aske, Natasha Beery, Laura A. Michaelis, and Hana Filip (eds.), 435-444. Berkeley: Berkeley Linguistics Society.

--(2003). Language and thought online: cognitive consequences of linguistic relativity. In Language in Mind: Advances in the Study of Language and Thought, Dedre Gentner and Susan Goldin-Meadow (eds.), 157-192. Cambridge, MA: MIT Press.

Stokoe, William (1960). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Studies in Linguistics, occasional papers 8. Buffalo, NY: University of Buffalo.

Supalla, Ted (1982). Structure and acquisition of verbs of motion and location in American Sign Language. Unpublished doctoral dissertation, University of California, San Diego.

--(1986). The classifier system in American Sign Language. In Noun Classes and Categorization, Colette Craig (ed.), 181-214. Amsterdam: John Benjamins.

Sutton-Spence, Rachel (2005). Analyzing Sign Language Poetry. New York: Palgrave Macmillan.

Taub, Sarah (2001). Language in the Body: Iconicity and Metaphor in American Sign Language. Cambridge: Cambridge University Press.

Uyechi, Linda (1995). The geometry of visual phonology. Unpublished doctoral dissertation, Stanford University.

Voeltz, F. K. Erhard and Kilian-Hatz, Christa (eds.) (2001). Ideophones. Typological Studies in Language 44. Amsterdam: Benjamins.

Wallin, Lars (1983). Compounds in Swedish Sign Language in historical perspective. In Language in Sign: An International Perspective on Sign Language (Proceedings of the Second International Symposium of Sign Language Research, Bristol, UK, July 1981), Jim G. Kyle and Bencie Woll (eds.), 56-68. London: Croom Helm.

Wilcox, Sherman (2004). Gesture and language: cross-linguistic and historical data from signed languages. Gesture 4(1), 43-73.

--(2006). Iconicity: Sign Language. In Encyclopedia of Language and Linguistics, 2nd edition, Keith Brown (ed.), 472-475. Amsterdam: Elsevier Science.

Wilson, Frank (1998). The Hand: How Its Use Shapes the Brain, Language, and Human Culture. New York: Pantheon Books.

Woll, Bencie (2001). The sign that dares to speak its name: echo phonology in British Sign Language. In The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Language, Penny Boyes Braem and Rachel Sutton-Spence (eds.), 87-98. Hamburg: Signum.

--(2003). Modality, universality and the similarities among sign languages: an historical perspective. In Cross-Linguistic Perspectives in Sign Language Research: Selected Papers from TISLR 2000, Anne Baker, Beppie van den Bogaerde, and Onno Crasborn (eds.), 17-27. International Studies on Sign Language and Communication of the Deaf, Vol. 41. Hamburg: Signum.

Zeshan, Ulrike (2003). Towards a notion of 'word' in sign languages. In Word: A Cross-Linguistic Typology, Robert M. W. Dixon and Alexandra Y. Aikhenvald (eds.), 153-179. Cambridge: Cambridge University Press.

Notes

* Correspondence address: Informatics Institute, Middle East Technical University, Inonu Bulvari, 06531 Ankara, Turkey. E-mail: hohenberger@ii.metu.edu.tr.

(1.) The project Sprachliche Fehlleistungen und ihre Korrekturen in Abhangigkeit von der Modalitat: Deutsche Lautsprache vs. Deutsche Gebardensprache (DGS). Eine experimentelle psycholinguistische Studie (Language production errors and their repairs in connection with modality: Spoken German vs. German Sign Language (DGS). An experimental psycholinguistic study) was part of the focal program Language Production of the German Research Foundation (DFG). Three project grants within this program were awarded to Prof. Leuninger, University of Frankfurt (LE 596/6, 1-3).

(2.) van Gelder (1990), in a confrontation of the classical and connectionist views on compositionality, also argues for multiple ways of implementing compositionality: strictly concatenative or merely functionally compositional. Interestingly, classical concatenativity is a more iconic implementation of compositionality as compared to functional compositionality, where it is not required that a complex representation be literally composed of its constituent parts. Comparing signed and spoken languages, compositionality can be achieved simultaneously (through distribution of information in space) or sequentially (through distribution of information in time). Both seem equally good solutions in their respective modalities.

(3.) In the literature on ASL, hand orientation is often subsumed under handshape and not credited an independent dimension. There are cases, however, in which hand orientation is contrastive.

(4.) I use the abbreviation "ProsF" for "Prosodic Feature" here, since "PF" is already used for "Phonetic Form".

(5.) For a definition of IF and ProsF, see Brentari (1998: 22). Basically, IFs do not change while ProsFs can change during the lexeme's production.

(6.) Sandler and Lillo-Martin (2006: 234-235) doubt the analogy between static elements ≈ consonants and dynamic elements ≈ vowels. They object that (i) while in spoken languages consonant clusters are frequent, in sign language location clusters are absent, (ii) in the typical spoken language CVC syllable there is no similarity between the first and last consonant while in sign language they typically share the same POA, (iii) there are no spoken languages whose vowel inventory consists of mainly default vowels whereas in sign language the straight path movement is most common, and (iv) spoken languages, crosslinguistically, have varying syllable structures whereas sign languages universally have the same syllable structure.

(7.) Two x-slots are equally assigned to signs which are featurally described as having a setting change (two movements), a direction of the movement, the shape of the movement, or an orientation change (Brentari 1998: 183). The movement itself does not have segmental status anymore but is granted exactly two timing slots.

(8.) In normal discourse, at least, the transfer of meaning is primary and the form is secondary. Form, however, can become primary in communication, as in poetry, humor, or storytelling (cf. Sutton-Spence 2005).

(9.) For a comprehensive overview over the lexicon in sign language (in ASL), the reader is referred to Padden (1998). For a comprehensive set of constraints on signs in the ASL lexicon, see Brentari (1998: 292-303).

(10.) I use the names of the handshapes informally here, according to the fingerspelled alphabet. Note that Sandler and Lillo-Martin speak of "hand configuration", which comprises handshape and hand orientation.

(11.) The syllable nucleus is constituted by a movement segment, per default. Hold segments can exceptionally fulfill this function when no movement is present. Only in this case can holds have handshape changes and secondary movement (cf. Perlmutter 1992).

(12.) Transitory movements have no representational but only phonetic status (Brentari 1998: 226). Reduplications of more than two times are not represented in the lexical specification of the sign, either.

(13.) As a consequence, the sign would hardly be pronounceable. I will not go into a discussion of whether considerations of production exert a pressure on representations. Such claims have recently been made, e.g., in terms of "user optimality", by Haspelmath (1999), see Section 8.3.

(14.) Although the numbers from 1-10 are a candidate domain for iconicity, sign languages differ as to which fingers are selected (which is also culturally dependent) and as to the range of iconic number signs. Number signs have also been studied with respect to language change and grammaticization (Cormier 2004).

(15.) The pictures are taken from the DGS Allgemeine Gebardenlexikon (General sign lexicon), available at http://www.sign-lang.uni-hamburg.de/ALex/Start.htm, at the University of Hamburg, Germany. The epenthetic movements have been added by the author.

(16.) The pictures are taken from the DGS Gebardenlexikon, Hamburg (see previous footnote).

(17.) Nonmanual prosodic features on the word level have to be distinguished from non-manual markings on the syntactic level, e.g., prosodic marking of sentential mode (yes-no questions, wh-questions, etc.). The scope of syntactic prosody is, of course, much wider than that of lexical prosody.

(18.) Note, however, that there are considerable language-specific differences as to the extent of mouthings between sign languages. The scenario described in the text is at best a possible scenario, which is compatible with the general trend of assimilating non-native language input to the native sign language phonological constraints.

(19.) This section provides results on sign language processing within the research project introduced in Section 1, see footnote 1.

(20.) For more details on the task and the results, the reader is referred to Hohenberger et al. (2002) and Leuninger et al. (2004).

(21.) For this reason, a detailed comparison for the slip categories was dispensed with here (but see Leuninger et al. 2004).

(22.) Fusions differ from blends in that blends require a paradigmatic relation between the two words and involve two words of the same grammatical category, as in

(i) die ganze linkere Ha// linke Halfte des Bildes ← linke x hintere
    the whole left-rear ha// left half of-the picture ← left x rear

Fusions do not require such a relation. They simply require adjacency. For an in-depth analysis of sign language blends and fusions, see Leuninger et al. (2007).

(23.) This scenario depicts the case of a lexical nonmanual component. Alternatively, the prosodic contour uberlegen 'to ponder' is not lexical but adverbial, as in example (1), where the adverb reluctantly is a separate nonmanual morpheme which adds to the manual classifier predicate. If this is the case, no further assumptions have to be made for the sequential planning of manual and nonmanual features within a lexical sign.

(24.) Note that one of the compound rules, the identical movement constraint (10), seems to have been applied in the 'conduit' part of the slip. While both the target word BUB and the semantic substitution VATER share the feature [male], the intermediate conduit TOCHTER has the feature [female]. This gender mismatch gives rise to the speculation that what was actually planned at this stage was the sign SOHN ('son'). The only relevant phonological difference between TOCHTER and SOHN in DGS is the direction of movement: upwards for SOHN and downwards for TOCHTER. Under the influence of the identical movement constraint, the planned sign SOHN might have surfaced as TOCHTER, as an epiphenomenon.

(25.) Iconicity is also a highly controversial topic in spoken language research. Traditionally, functional linguists are pro iconicity (Givon 1989; Croft 1990; Haiman 1985; Langacker 1978; among others) and generativist linguists are contra iconicity. For a critical survey of structural (or diagrammatical) iconicity, see Newmeyer (1992); for a confrontation of iconicity with frequency, see Haspelmath (2006). On ideophones, see Voeltz and Kilian-Hatz (2001, and articles therein).

(26.) For more examples on the superiority of discreteness over iconicity in sign language morphology, see Emmorey (2002: 81-85).

(27.) Note that in Peirce's semiotic triangle, the sign (representamen) is related to its object through the meaning (interpretant). There is no primary relation between form and meaning, as traditionally assumed, but between the sign and the object it stands for. In an iconic relation there is no need for mediation between sign and object, due to their inherent similarity. Neither is there in an indexical relation, due to the real-world connection between sign and object. Only in symbolic (arbitrary) relations is there a need for mediation, and this is brought about by the meaning that is invoked by the sign in the subject's mind. This meaning is categorical, i.e., it refers to conceptual dimensions. In this respect, Peirce's theory does not exactly match Saussure's or any other semiotic system.

(28.) Strictly speaking, in iconicity, the form of the sign mimics aspects of the outer appearance of the referent, e.g., the roof of a house, which can be thought of as salient semantic features of the word house.

(29.) "Liminal" is used in the sense of being a borderline phenomenon between a purely linguistic and a purely gestural analysis.

The three approaches also mirror the changing pressures sign language researchers were exposed to in the course of the last 50 years (Woll 2003). Initially, sign language linguists were eager to prove that sign languages were equivalent to spoken languages in every respect; therefore, it was politically correct to say that signs are composed of discrete morphemes as words are. When the status of sign languages as natural languages on a par with spoken languages was accepted, the pendulum swung back and differences between the two came increasingly to the fore in sign language research. Today, it seems, we are in a phase of reconciliation, characterized by the joint attention to both sides, the linguistic and the gestural. This historical evaluation notwithstanding, the various approaches have their own merits and demerits.

(30.) Sandler and Lillo-Martin (2006: 96) give an illuminating example of such "backformation" for the frozen Israeli sign WRITE, where the place of articulation (the "flat hand") of the nondominant hand is re-analyzed as a classifier, namely as a letter the writer reconsiders critically.

(31.) As Cogill-Koez points out with recourse to McNeill (1992), in a similar vein, the hearing child has to relegate gestures and speech to different levels. Initially, both the manual and the speech channel alternatingly convey propositional information. Later on, however, the speech channel becomes dominant and the manual channel becomes integrated into and simultaneous with the linguistic channel.

(32.) There are even more defective lexical items, e.g., "tra-la-la", which have only a phonological structure (PS), but neither a syntactic structure (SS) nor a conceptual structure (CS). Their lexical structure is simply <PS, ∅, ∅> (cf. Jackendoff 1997: 94).

(33.) Battison (1978) classified two-handed signs into three groups, the last of which, type 3, comprises two-handed signs where both hands, H1 (dominant hand) and H2 (nondominant hand), have different handshapes and H1 is active while H2 is passive. H2 is restricted to a set of unmarked handshapes (B, S, 5, 1, and A with abducted thumb, for ASL) (Eccarius and Brentari 2007: 1181).
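
For concreteness, the restriction on type-3 signs described in this note can be rendered as a toy check (again in Python; the feature encoding is a hypothetical simplification introduced here, not Battison's own notation):

    # Unmarked handshapes available to the passive H2 in ASL type-3 signs (see above)
    UNMARKED_H2 = {"B", "S", "5", "1", "A"}

    def satisfies_type3_restriction(h1_shape: str, h2_shape: str, h2_moves: bool) -> bool:
        if h1_shape == h2_shape:
            return True  # not a type-3 sign; the restriction does not apply
        # type 3: different handshapes, so H2 must be passive and unmarked
        return (not h2_moves) and h2_shape in UNMARKED_H2

    assert satisfies_type3_restriction("F", "B", False)       # passive, unmarked H2: fine
    assert not satisfies_type3_restriction("F", "R", False)   # marked H2: ruled out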

(34.) In spoken languages, transitive structures can exceptionally be lexical rather than syntactic units, as in phrasal words such as "forget-me-not" or "touch-me-not". The newly formed lexical word is co-extensional with the prosodic word. It would be interesting to see whether such phrasal words derived from transitive structures also exist in sign language and whether they would have prosodic word status rather than intonational phrase status.

Table 1. Contrasting German Sign Language (DGS) and spoken German

                        DGS                           Spoken German
                        (as a specimen of             (as a specimen of a
                        sign language)                concatenative spoken language)

Modality                visual-gestural               aural-oral

Articulators            manual (both hands),          speech organs
                        non-manual (face, body)

Processing type         vertical processing,          horizontal processing,
                        high spatial resolution       high temporal resolution

Typology/word           fusional/simultaneous;        concatenative;
shape pattern           monosyllabic/polymorphemic    (predominantly) polysyllabic/
                                                      polymorphemic

Production              high information load         low information load
characteristics         on few big chunks             on many small chunks

Table 2. Canonical word shape according to the number of syllables and morphemes per word (Brentari 2002: 57; originally proposed in Brentari 1995)

Word shape       Monosyllabic        Polysyllabic

Monomorphemic    Chinese             English
Polymorphemic    Sign languages      West Greenlandic

Table 3. The C : IF / V : ProsF analogy justified (cf. Brentari 2002: 45)

Consonants                                  Vowels

IF branch carries more lexical contrasts    ProsF branch carries fewer lexical contrasts

Sign word can be parsed with only the IF    ProsFs function as the medium of the signal
                                            (perception, sonority)

                                            Movement functions as syllable nucleus

Table 4. Differences in the distribution of slip units in spoken German and German Sign Language (DGS)

Units                   Spoken German                 German Sign Language (DGS)

Segments/features       rich phonological             rich phonological
(30% vs. 41%)           structure                     structure

Words                   rich lexical                  rich lexical structure;
(35% vs. 50%)           structure                     words are more prominent

Morphemes               concatenative morphology;     fusional/simultaneous morphology;
(18% vs. 6%)            morphemes easily separable    morphemes hard to separate
                        in processing                 during processing

Phrases                 concatenative syntax          syntax less concatenative
(16% vs. 0%)