The effect of gesture on speech production and comprehension.

INTRODUCTION

Cairncross (1997) coined the phrase "the death of distance," suggesting that distance may no longer be a limiting factor in people's ability to communicate. In fact, telecommunications systems are becoming ubiquitous in business and the military, allowing expert-novice, instructor-learner, or peer-peer communication to occur among interactants who are spatially separated. Various terms are used to describe this type of communications environment, including video-mediated communication, remote collaboration, and point-to-point teleconferencing; however, the core features include two or more remotely located people who send and receive audio, video, and data via a desktop computer or teleconferencing system. Although telecommunications systems can be used to support almost any type of communication, the emphasis in many applied settings is on technical communications such as training, job aiding and support, and information exchange.

Fussell and Benimoff (1995) viewed teleconferencing as an extension of normal communication and argued that the design of effective telecommunications systems is based on an understanding of the processes that facilitate face-to-face communication. Given the importance of gesture to communication, it is surprising that there is little consensus on the role of gesture in telecommunications. Fussell and Benimoff concluded that "the best video field for a desktop video system is one that communicates information conveyed by facial expressions and most gestures" (p. 244). Hayne, Pendergast, and Greenberg (1994) and Isaacs and Tang (1997) concurred that telecommunications systems that do not support gesture provide an impoverished communication environment. However, Doherty-Sneddon et al. (1997) reported that visual access to the upper body and gestural information was not critical to users in video-mediated communications.

At least in part, this disagreement regarding the value of gesture in telecommunications stems from the lack of a fundamental understanding of the role that gesture plays in normal communication. To what extent does gesture aid the listener in comprehending speech? To what extent does gesture aid the speaker in formulating speech? Is gesture more valuable for some types of speech content than for others? Whittaker and O'Conaill (1997) concluded that "We need more detailed understanding of the precise functions that visual information plays in communication" (p. 24), noting that prior work in designing telecommunications systems was based on the intuition that visual information would benefit interaction, without an understanding of how these benefits will come about. Before the role of gesture in design questions such as field of view and quality and size of the video image required for teleconferencing can be effectively discussed, the precise role that gestures play in communication needs to be more fully understood. Therefore, the purpose of this study was to examine the role that gesture plays in aiding the speaker and the listener in communication. More specifically, we examine the relationships among gesture, speech production, and listener comprehension.

GESTURES, SPEECH PRODUCTION, AND COMPREHENSION

Hand gestures play an integral role in communication. For example, even in brief conversation, gestures may be observed that are used to point out objects, coordinate speech, and express emotions; that serve as symbols (e.g., a thumbs-up sign); and that elaborate speech. A number of typologies have been offered to capture these various functions of gesture in conversation (Efron, 1941/1972; Ekman & Friesen, 1972; McNeill, 1985). Common to all of these classifications is the category of conversational hand gestures. Conversational hand gestures are hand movements that accompany speech. Although distinctions can be made among different types of conversational gestures, our concern is with that class of gestures that have been termed iconic, illustrative, or lexical (see Krauss, Chen, & Chawla, 1996; McNeill, 1985; Rime & Schiaratura, 1991). When a speaker cups his or her hands together when saying the word "globe," this represents a conversational gesture. Krauss et al. (1996) described three characteristics of conversational gestures: (a) they accompany speech, (b) they are temporally coordinated with speech, and (c) they express a meaning related to the semantic content of the speech they accompany. In this paper, we will refer to this class of iconic or lexical gestures by the broader term of conversational gestures.

There is considerable debate regarding the primary role of conversational gestures in communication. The traditional view is that gestures enhance communication by complementing speech, conveying information that augments the information provided by the speech channel. This view holds that gesture and speech combine to more fully convey the meaning intended by the speaker (see Beattie & Shovelton, 1999; Langton, O'Malley, & Bruce, 1996). For example, Kendon (1983) claimed that gestures represent aspects of the intended utterance that are not represented in speech. According to this perspective, gestures have a direct communicative function of conveying information to a listener, thus enhancing listener comprehension.

An alternative perspective is that gestures have little direct communicative function but, instead, that gesturing aids the speaker in producing or formulating speech (see Krauss, 1998; Krauss et al., 1996). Rauscher, Krauss, and Chen (1996) concluded that gestures assist speakers in formulating speech by aiding the retrieval of elusive words from lexical memory. Thus, according to this perspective, gestures have a primarily noncommunicative function, enhancing speaker effectiveness. Moreover, these researchers suggested that speech production plays a mediating role in the observed relationship between gesture and comprehension. That is, Krauss et al. argued that those studies finding that gestures enhance communication may have obtained this result because the researchers did not control for the possibility that speakers who are allowed to gesture produce more effective speech.

Therefore, one primary objective of this study was to examine the relationships among gesture, speech production, and listener comprehension. In doing so, we address two questions: First, do gestures enhance listener comprehension? Second, if gesture enhances comprehension, how does it do so? Does gesture have a direct effect on listener comprehension, or does gesture enhance listener comprehension only because it aids the speaker in producing more effective speech? Thus our first objective in this study was to examine the extent to which gesture enhances listener comprehension and the extent to which this relationship is mediated by the effect of gesture on speech production.

Gesture and Speech Content

Some studies have shown that when speakers use gestures, they gesture more on certain types of words or phrases. For example, Rauscher et al. (1996) found that gesturing was nearly five times more frequent on "spatial content phrases" (phrases containing spatial prepositions such as "under" and "on") than on nonspatial phrases. Moreover, they found that not being able to gesture was more damaging when the speaker attempted to convey spatial content. Therefore, a second objective of this study was to examine whether gesture (or not being able to gesture) is more important for some types of speech than for others.

There are a number of ways to describe spatial properties of objects, including their size, shape, orientation, position, and movement in space. Krauss (1998) reported a high rate of gesturing activity on spatial prepositions such as "under" and "adjacent" but also on terms such as "spin" or "cube." We distinguished among four different types of terms or referents: (a) spatial location terms, which describe the orientation or topography of an object in space, such as "under" and "on"; (b) spatial property terms, which describe the shape or form of an object, such as "square" and "short"; (c) manipulation/movement terms, which describe the motion or manipulation of an object, such as "open" and "hold"; and (d) nonspatial terms, which describe nondynamic characteristics of an object, such as "color" and "warm."

To the extent that gesturing aids in conveying information to the listener by supplementing speech, we would expect gestural movements to be relatively more helpful in describing the orientation, shape, or movement of an object than in describing nonspatial terms. Alternatively, to the extent that gestures aid in the process of speech production by providing an additional means of retrieving spatial/dynamic content from lexical memory (Krauss et al., 1996), we would expect gesturing to be relatively more useful in the access of spatial content, compared with nonspatial content. Thus a second objective of this study was to examine whether the value of gesture varied as a function of the type of speech content.

EMPIRICAL STUDY

The key characteristics of technical communication, as compared with more informal or social communication, are that it is problem oriented, undertaken for a specific purpose, with a key objective of conveying information. We designed an experiment as a basic analog of technical communications--to represent a setting in which the speaker knows something and is trying to convey it to the listener. The procedure called for the speaker to be presented with a target word, such as "cube," and attempt to convey this target word to the listener in as few clues as possible without using that exact word. In one condition of the study speakers were allowed to gesture, and in the other condition they were not. We obtained a measure of speaker effectiveness or speech production (the effectiveness of the clues given by the speaker) and a measure of listener comprehension (the number of attempts required by the listener to obtain the correct solution).

METHOD

Participants

Participants in this study were 80 U.S. Naval Reserve military personnel (both men and women) who volunteered to take part in a study of training effectiveness. The study was a 2 (speech condition: gesture vs. no gesture) x 4 (type of content) design. Participants were randomly assigned to the experimental conditions and to the speaker and listener roles.

Procedure

Two persons took part in the study at a time, one playing the role of the speaker and the other playing the role of the listener. The participants were seated across from each other, and a low table between them held a series of 20 flip cards facing the speaker, each containing a target word. The speaker's task was to attempt to convey the target word to the listener (with the restriction that he or she could not speak the actual word) in as few attempts as possible. The listener's task was to guess the exact word that the speaker was attempting to communicate in as few attempts as possible. They were instructed to speak in turn: The speaker was to give a one-word clue, and then the listener gave a one-word response. If that answer was incorrect, the speaker would provide another clue, followed by another listener response, until the correct answer was obtained.

Experimental Manipulations

Speech condition: Gesture versus no gesture. There were two speech conditions. In the gesture condition, the speakers were told to gesture freely in communicating the clues. In the no-gesture condition, the speakers were instructed that gesturing was not allowed on this task and that their hands must be placed in their lap. The experimenter monitored the procedure to ensure that these instructions were followed.

Speech content. To examine type of speech content, we distinguished among four types of terms or referents, as described previously.

The first task in developing a stimulus list was to generate a list of candidate words within each category. Second, in order to develop a relatively homogeneous set of stimulus terms, we identified those words that were similar in terms of word frequency. Using the word frequency norms provided in Kucera and Francis (1967), we selected only those terms that were relatively common, with a usage rating of at least 50 occurrences per million.

Third, we wanted to ensure that the words in our stimulus set did not differ in terms of the strength of the primary associate. Word association norms provide a measure of the strength of associative response to a stimulus word. For example, given the target word "table," the most popular response, or primary associate, is "chair," given by 83.25% of respondents (Jenkins, 1970). However, the primary associate of the word "trouble" is "bad," yet this response was given by only 8.83% of respondents. Therefore, our goal was to choose a set of stimulus words that were similar in their strength of primary associate. Using the word association norms published in Keppel and Strand (1970) and Palermo and Jenkins (1964), from the list of relatively common terms obtained previously, we selected only those terms with a strength of primary response of 27% to 47%. This range was selected based on the overall mean associative response of 37.5% reported by Jenkins (1970). Based on this rigorous procedure, we obtained the stimulus set (N = 20) of terms shown in Table 1.
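The two-stage screening described above (a word-frequency floor, then a band on the strength of the primary associate) can be sketched as a simple filter. The candidate records below are illustrative only; the real values come from the Kucera and Francis (1967) frequency norms and the word-association norms cited in the text.

```python
# Hypothetical candidate records; frequencies and association percentages
# are made up for illustration, not taken from the published norms.
CANDIDATES = [
    {"word": "under",   "freq_per_million": 120, "primary_assoc_pct": 34.0},
    {"word": "trouble", "freq_per_million": 134, "primary_assoc_pct": 8.8},
    {"word": "table",   "freq_per_million": 198, "primary_assoc_pct": 83.3},
]

def select_stimuli(candidates, min_freq=50, assoc_range=(27.0, 47.0)):
    """Keep words that are common enough and whose primary-associate
    strength falls in the target band around the overall mean of 37.5%."""
    lo, hi = assoc_range
    return [c["word"] for c in candidates
            if c["freq_per_million"] >= min_freq
            and lo <= c["primary_assoc_pct"] <= hi]

selected = select_stimuli(CANDIDATES)  # only "under" survives both criteria
```

Under these made-up values, "trouble" is excluded for a weak primary associate and "table" for one that is too strong, mirroring the examples discussed in the text.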

Measures

Comprehension. Our measure of communication effectiveness, or listener comprehension, was the number of responses required to obtain the correct solution for each word. Therefore, for each target word, we recorded the number of responses given by the listener before the correct response was elicited.

Speech production. Speaker effectiveness has been assessed in previous studies by measures such as speech rate or the extent of verbal dysfluencies (Krauss, 1998; Rauscher et al., 1996). However, the design of the current study allowed us to use a more direct measure of speaker effectiveness by the use of word association norms. Consider that the speaker is given the target word "crowd." According to the norms in Keppel and Strand (1970), the primary response or primary associate for the word "crowd" is "people," given by 35.7% of respondents. The next most popular response is "mob," given by 12.1% of respondents. Further down the response hierarchy is the response "mass," given by 1.6% of respondents, and so on. Thus if the target word that the speaker attempts to communicate is "crowd," then providing the clue "people" is more effective than providing the word "mob," which is more effective than the word "mass." Moreover, the associative strength of each clue to the target word provides a direct quantitative measure of speech effectiveness. Thus if gesturing benefits communication by enhancing speech production, then the clues generated by the speaker when allowed to gesture should be more closely associated with the target word. To assess speaker effectiveness, we assigned an associative strength score drawn from Keppel and Strand (1970) and Palermo and Jenkins (1964) for each clue given by the speaker and then averaged the scores for each target word.
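The scoring rule just described (look up each clue's associative strength to the target, then average over the clues given for that target) can be sketched as follows. The norms table is a stand-in; the actual strengths come from Keppel and Strand (1970) and Palermo and Jenkins (1964), and clues absent from the norms are scored zero here as a simplifying assumption.

```python
# Illustrative fragment of a word-association norms table, keyed by
# (target, response); values are proportions of respondents.
NORMS = {
    ("crowd", "people"): 0.357,
    ("crowd", "mob"):    0.121,
    ("crowd", "mass"):   0.016,
}

def speaker_effectiveness(target, clues, norms=NORMS):
    """Mean associative strength of the clues offered for one target word.
    Clues not listed in the norms are treated as strength 0 (an assumption)."""
    scores = [norms.get((target, clue), 0.0) for clue in clues]
    return sum(scores) / len(scores)

score = speaker_effectiveness("crowd", ["people", "mob"])  # (0.357 + 0.121) / 2
```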

RESULTS

We first examined whether there were significant differences in speech production and comprehension across the four speech content areas. Accordingly, separate 2 (condition: gesture vs. no gesture) x 4 (type of speech content) analyses of variance were conducted on both listener comprehension and speech production, with repeated measures on the second factor. Significant condition main effects emerged for both listener comprehension, F(1, 38) = 12.32, p < .001, and speech production, F(1, 37) = 4.20, p < .05. Thus, overall, gesture aided both listener comprehension and speech production. Further, significant speech content main effects emerged for both listener comprehension, F(3, 114) = 4.85, p = .003, and speech production, F(3, 111) = 11.17, p < .001. The Condition x Content interaction was not significant for either listener comprehension, F(3, 114) = 0.967, p = .41, or speech production, F(3, 111) = 2.04, p = .11. These preliminary analyses reveal that there were significant effects of gesture on listener comprehension and on speech production and significant differences across speech content areas. Therefore, in the analyses to be reported, the effects of gesture on listener comprehension and speech production were examined separately within each content area (see Table 2).

Effects of Gesture on Listener Comprehension

For spatial location terms, there was a significant main effect of gesture on listener comprehension, F(1, 38) = 6.15, p = .009. When speakers were allowed to gesture, listeners reached the correct solution in fewer attempts (M = 3.26) than when speakers were not allowed to gesture (M = 5.03). For spatial property terms, there was a significant main effect of gesture on listener comprehension, F(1, 38) = 3.95, p = .027 (for gesture, M = 2.47; for nongesture, M = 3.23). For manipulation/movement terms, there was a significant main effect of gesture on listener comprehension, F(1, 38) = 5.32, p = .013 (for gesture, M = 2.91; for nongesture, M = 4.24). For nonspatial terms, there was a marginally significant main effect of gesture on listener comprehension, F(1, 38) = 1.83, p = .09 (for gesture, M = 2.74; for nongesture, M = 3.41). These analyses reveal that allowing the speaker to gesture did indeed improve listener comprehension. Not being able to gesture was most damaging to listener comprehension when the speaker was trying to convey terms related to spatial location and manipulation/movement. Gesture was also important, although somewhat less so, for spatial property terms and for nonspatial terms.

Effects of Gesture on Speech Production

For spatial location terms, there was a significant main effect of gesture on speech production, F(1, 37) = 4.56, p = .02. Speakers who were allowed to gesture conveyed higher-quality clues (M = .085) than did those who were not allowed to gesture (M = .049). There was a nonsignificant trend for gesture to improve speech production for spatial property terms, F(1, 37) = 1.47, p = .12 (for gesture, M = .061; for nongesture, M = .042), and for manipulation/movement terms, F(1, 37) = 0.72, p = .20 (for gesture, M = .058; for nongesture, M = .046). For nonspatial terms, there was a significant main effect of gesture on speech production, F(1, 37) = 5.44, p = .01 (for gesture, M = .114; for nongesture, M = .067). These analyses reveal that allowing the speaker to gesture did indeed improve the quality of the clues generated by the speaker, most notably for spatial location terms and for nonspatial terms. The effects of gesture on speech production were somewhat less evident for spatial property terms and for manipulation/movement terms.

Mediating Role of Speech Production

We adopted the mediation model of Baron and Kenny (1986) as an analytic strategy to test the relationships among gesture, speech production, and comprehension. According to Baron and Kenny (1986), several conditions must be met to document evidence of a mediated relationship. First, there must be a significant relationship between the independent variable and the dependent variable. Second, there must be a significant relationship between the independent variable and the proposed mediator. Third, there must be a significant relationship between the mediator and the dependent variable, controlling for the independent variable. Finally, if these three conditions all hold, then the initial relationship between the independent variable and the dependent variable should be reduced.
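The Baron and Kenny (1986) steps amount to three regressions. A minimal sketch, using synthetic data in place of the study's measures (all variable names and effect sizes below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's variables:
# x = condition (1 = gesture, 0 = no gesture), m = speech production score,
# y = listener comprehension (attempts to solution; lower is better).
n = 40
x = rng.integers(0, 2, size=n).astype(float)
m = 0.5 * x + rng.normal(0.0, 1.0, size=n)             # gesture raises clue quality
y = -0.6 * x - 0.4 * m + rng.normal(0.0, 1.0, size=n)  # both reduce attempts

def ols(dep, *predictors):
    """Least-squares coefficients: intercept first, then one per predictor."""
    X = np.column_stack([np.ones_like(dep)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta

c = ols(y, x)[1]           # Step 1: total effect of condition on comprehension
a = ols(m, x)[1]           # Step 2: effect of condition on the mediator
step3 = ols(y, x, m)       # Step 3: both predictors entered together
c_prime, b = step3[1], step3[2]

# Mediation is indicated when a and b are nonzero and |c_prime| < |c|;
# c_prime near zero suggests full mediation, otherwise partial mediation.
```

The study reports standardized betas and t tests for step 3; this sketch omits the significance tests and shows only the coefficient logic.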

The foregoing analyses reveal that gesture exerts predictable effects on listener comprehension and on speech production. A critical question is the extent to which the effects of gesture on listener comprehension are attributable to the more fundamental effects of gesture on speech production. In order to gauge the mediating role played by speech production, listener comprehension was regressed on both condition (gesture vs. no gesture) and speech production within each content area. For spatial location terms, there was a marginally significant effect of speech production on listener comprehension after the effects of gesture were partialled out, standardized beta = -.215, t(36) = 1.33, p = .09. There was still a significant, albeit reduced, effect of gesture on listener comprehension after the effects of speech production were partialled out, standardized beta = -.291, t(36) = 1.81, p = .04. The results of this regression analysis are presented in the path analysis shown in Figure 1a.

[FIGURE 1 OMITTED]

For spatial property terms, there was no significant effect of speech production on listener comprehension after the effects of gesture were partialled out, standardized beta = .013, t(37) = 0.08, p = .46. There was still a significant, albeit reduced, effect of gesture on listener comprehension after the effects of speech production were partialled out, standardized beta = -.309, t(37) = 1.94, p = .05 (see Figure 1b).

For manipulation/movement terms, there was no significant effect of speech production on listener comprehension after the effects of gesture were partialled out, standardized beta = .058, t(36) = 0.24, p = .40. There was still a significant, albeit reduced, effect of gesture on listener comprehension after the effects of speech production were partialled out, standardized beta = -.348, t(36) = 2.20, p = .02 (see Figure 1c).

For nonspatial terms, there was no significant effect of speech production on listener comprehension after the effects of gesture were partialled out, standardized beta = -.001, t(37) = 0.01, p = .50. There was a reduced, nonsignificant effect of gesture on listener comprehension after the effects of speech production were partialled out, standardized beta = -.215, t(37) = 1.26, p = .11 (see Figure 1d). These analyses reveal that speaker gesturing tended to directly affect listener comprehension. This general summary is qualified somewhat by the finding that speaker gesturing tended also to indirectly affect listener comprehension through its effect on speech production for spatial location terms.

SUMMARY

The purpose of this study was to examine the fundamental roles that gestures play in communication, in order to provide a clearer understanding of how gestural information may benefit telecommunications. The results of this study have both theoretical and practical implications. This study sheds new light on the controversy over the communicative and noncommunicative functions that gestures play in communication. First, our results clearly indicate that gesture has a significant positive impact on listener comprehension: Comprehension was enhanced when the listener had access to both gesture and speech. Second, our results show that gesture has a significant positive impact on speech production: When speakers are allowed to gesture, they produce more effective speech. Third, some have questioned whether the effects of gesture on enhancing listener comprehension are largely attributable to the effects of gesture on aiding speech production. Our results provide only limited support for this mediation hypothesis. Overall, the effect of gesture on listener comprehension was reduced but remained significant when the effects of speech production were held constant. Thus gestures seem to have a direct communicative effect on listener comprehension, regardless of the impact that gesture has on speech.

However, some evidence supporting mediation was found for spatial location terms. For spatial location terms, gesture directly affected listener comprehension and also indirectly affected listener comprehension through its effects on speech production. This suggests that for spatial location terms, speech production serves as a mediator of the relationship between gesture and listener comprehension. This further indicates that this mediator is indeed potent but is neither necessary nor sufficient for this effect to occur (Baron & Kenny, 1986).

Overall, these results indicate that gesture plays both a direct communicative role in providing useful information to the listener and a noncommunicative role in assisting the speaker in formulating effective speech. In brief, the results indicate that gesture benefits the listener as well as the speaker and that gesture has a positive effect on listener comprehension, independent of the effects gesture has on speaker effectiveness.

Our results further illustrate the extent to which the effects of gesture vary as a function of the type of speech content. We argued that gestures, which are movements of the hands in space that accompany speech, would be most useful in conveying content that was itself spatial. Our data provide some support for this position. Although gesture aided listener comprehension across all types of speech content, the greatest impact of gesturing was observed when the speaker was trying to convey terms related to spatial location (e.g., "on," "under," "over") and manipulation/movement (e.g., "open," "hold," "sit"). Thus speech that involves spatial location and manipulation/movement may be more reliant on gesture to convey successfully. We also argued that gestures would aid speech production to a greater extent for spatial terms than for nonspatial terms. We found little support for this hypothesis. Gesture aided speech production to a similar degree for both spatial location terms and nonspatial terms. These data do not support the claim that the adverse effects on speech production of not being able to gesture are limited to speech with spatial content (Rauscher et al., 1996).

From a practical standpoint, our results have clear implications for further research on telecommunications. First, these results shed light on one question that we posed at the beginning: Should researchers be concerned about capturing gestural information in computer-mediated communication when some argue that gesture provides little communicative information in the first place? Our results indicate that gestures communicate useful information to the listener that aids comprehension. This supports Fussell and Benimoff's (1995) contention that the ideal visual field for a desktop video system should include information communicated via gesture. However, we have also found that gesture is more valuable for some types of speech content than for others. Thus gesture may be particularly valuable when conveying information related to spatial location and manipulation/ movement, and gesture may be less valuable for nonspatial content.

However, we note that many researchers have questioned whether the visual information provided in computer-mediated communications systems adds appreciably to simple audio-only communication (Anderson et al., 1997; Doherty-Sneddon et al., 1997; Sellen, 1995). In a recent review of this literature Whittaker and O'Conaill (1997) concluded, "Laboratory studies to demonstrate the benefits of adding a visual communication modality to voice have in general shown few objective improvements" (p. 24). Indeed, Heath and Luff (1992) argued that gestures may lose their interactional significance when abstracted from the environment in which they are produced and presented via a restricted video image. Further research is needed to examine conditions under which gesture contributes to listener comprehension in telecommunications.

Second, we have examined one topic that is not often considered in telecommunications research--that gestures also aid the speaker in producing more effective speech. To the extent that gesture enhances speaker effectiveness, it may be of value to support hands-free communication so that speakers are able to gesture. Moreover, one question that becomes relevant is whether speakers gesture less in computer-mediated communication, in which they are removed from the immediate presence of the listener, than in normal communication. On one hand, Cohen and Harrison (1973) found that speakers gestured less when speaking over an intercom than in face-to-face communications. On the other hand, Rime (1982) found few differences in nonverbal behavior whether people interacted face to face or were separated from one another. If, as some claim, gesturing is a function of general arousal level, then speakers may gesture less in computer-mediated environments, where the other is not immediately present, than in face-to-face communication. Our findings suggest that this may impair speaker effectiveness. Further research is needed to explore these questions.

Finally, it is prudent to note several limitations of this study. In order to examine the relationships among gesture, speech production, and listener comprehension, we designed an experimental procedure that maximized the precision with which we could measure these variables, at the cost of naturalness of conversation. Clearly, the constrained and patterned type of communication we examined differs considerably from normal, spontaneous conversation. Moreover, the gestures participants displayed when attempting to convey information in this manner may be more conscious and deliberate than in normal conversation. Further research is needed to examine the patterns observed in the laboratory in a more naturalistic setting.

It is also worthwhile to note that we examined one specific type of gesture: conversational hand gestures that co-occur with speech. Gestures can also serve to point out objects, to assist in conversational turn taking, and to provide evidence of the emotional state of the speaker. The current study does not address the role of these other types of gesture in communication.

The promise of telecommunications systems is to produce an environment that captures the richness of face-to-face interaction, but bandwidth limitations and cost make it difficult to transmit all sources of information available in normal communication. Thus trade-offs are required between communication fidelity and system characteristics such as field of view and the size and quality of the video image. To the extent that gesture aids communication, it should be incorporated within the visual field. Our results suggest that gesture plays an important role in communication and that in exploring design trade-offs in telecommunications systems, researchers should consider the value of gesture for the listener, for the speaker, and for specific types of speech content.

TABLE 1: Speech Content Categories and Terms

Type of Speech Content   Terms

Spatial location         Under, on, closer, over, near
Spatial property         Square, short, soft, circle, heavy
Manipulation/movement    Open, hold, stand, sit, look
Nonspatial               Color, sweet, dark, warm, dry

TABLE 2: Effects of Gesture on Listener Comprehension
and Speech Production by Type of Speech Content

                         Listener Comprehension

Speech Content           No Gesture    Gesture

Spatial location
  M                      5.03 (a)      3.26 (b)
  SD                      (2.48)        (2.01)
Spatial property
  M                      3.23 (a)      2.47 (b)
  SD                      (1.26)        (1.14)
Manipulation/movement
  M                      4.24 (a)      2.91 (b)
  SD                      (2.17)        (1.38)
Nonspatial
  M                      3.41 (a)      2.74 (a)
  SD                      (1.84)        (1.27)

                         Speech Production

Speech Content           No Gesture    Gesture

Spatial location
  M                      .049 (a)      .085 (b)
  SD                      (.027)        (.071)
Spatial property
  M                      .042 (a)      .061 (a)
  SD                      (.029)        (.058)
Manipulation/movement
  M                      .046 (a)      .058 (a)
  SD                      (.021)        (.063)
Nonspatial
  M                      .067 (a)      .114 (b)
  SD                      (.037)        (.079)

Note. Within each row, means with different letters (a, b) differ
significantly at p < .05.


REFERENCES

Anderson, A. H., O'Malley, C., Doherty-Sneddon, G., Langton, S., Newlands, A., Mullin, J., et al. (1997). The impact of VMC on collaborative problem solving: An analysis of task performance, communicative process, and user satisfaction. In K. E. Finn, A. J. Sellen, & S. B. Wilbur (Eds.), Video-mediated communication (pp. 133-155). Mahwah, NJ: Erlbaum.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.

Beattie, G., & Shovelton, H. (1999). Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology, 18, 438-462.

Cairncross, F. (1997). The death of distance: The trendspotter's guide to new communications. Boston, MA: Harvard Business School Press.

Cohen, A. A., & Harrison, R. P. (1973). Intentionality in the use of hand illustrators in face-to-face communication situations. Journal of Personality and Social Psychology, 28, 276-279.

Doherty-Sneddon, G., Anderson, A., O'Malley, C., Langton, S., Garrod, S., & Bruce, V. (1997). Face-to-face and video-mediated communication: A comparison of dialogue structure and task performance. Journal of Experimental Psychology: Applied, 3, 105-125.

Efron, D. (1972). Gesture, race and culture. The Hague: Mouton. (Original work, titled Gesture and environment, published 1941)

Ekman, P., & Friesen, W. (1972). Hand movements. Journal of Communication, 22, 353-374.

Fussell, S. R., & Benimoff, N. I. (1995). Social and cognitive processes in interpersonal communication: Implications for advanced telecommunications technologies. Human Factors, 37, 228-250.

Hayne, S., Pendergast, M., & Greenberg, S. (1994). Implementing gesturing with cursors in group support systems. Journal of Management Information Systems, 10, 43-61.

Heath, C., & Luff, P. (1992). Media space and communicative asymmetries: Preliminary observations of video-mediated interaction. Human-Computer Interaction, 7, 315-346.

Isaacs, E. A., & Tang, J. C. (1997). Studying video-based collaboration in context: From small workgroups to large organizations. In K. E. Finn, A. J. Sellen, & S. B. Wilbur (Eds.), Video-mediated communication (pp. 173-197). Mahwah, NJ: Erlbaum.

Jenkins, J. J. (1970). The 1952 Minnesota word association norms. In L. Postman & G. Keppel (Eds.), Norms of word association (pp. 1-8). New York: Academic.

Kendon, A. (1983). Gesture and speech: How they interact. In J. M. Wiemann & R. P. Harrison (Eds.), Nonverbal interaction (pp. 13-45). Beverly Hills, CA: Sage.

Keppel, G., & Strand, B. Z. (1970). Free-association responses to the primary responses and other responses selected from the Palermo-Jenkins norms. In L. Postman & G. Keppel (Eds.), Norms of word association (pp. 177-187). New York: Academic.

Krauss, R. M. (1998). Why do we gesture when we speak? Current Directions in Psychological Science, 7, 54-60.

Krauss, R. M., Chen, Y., & Chawla, P. (1996). Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us? Advances in Experimental Social Psychology, 28, 389-450.

Kucera, H., & Francis, W. N. (1967). Computational analysis of present-day American English. Providence, RI: Brown University Press.

Langton, S. R. H., O'Malley, C., & Bruce, V. (1996). Actions speak no louder than words: Symmetrical cross-modal interference effects in the processing of verbal and gestural information. Journal of Experimental Psychology: Human Perception and Performance, 22, 1357-1375.

McNeill, D. (1985). So you think gestures are nonverbal? Psychological Review, 92, 350-371.

Palermo, D. S., & Jenkins, J. J. (1964). Word association norms: Grade school through college. Minneapolis: University of Minnesota Press.

Rauscher, F. H., Krauss, R. M., & Chen, Y. (1996). Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science, 7, 226-231.

Rime, B. (1982). The elimination of visible behaviour from social interactions: Effects on verbal, nonverbal, and interpersonal variables. European Journal of Social Psychology, 12, 113-129.

Rime, B., & Schiaratura, L. (1991). Gesture and speech. In R. S. Feldman & B. Rime (Eds.), Fundamentals of nonverbal behavior (pp. 239-281). New York: Cambridge University Press.

Sellen, A. J. (1995). Remote conversations: The effects of mediating talk with technology. Human-Computer Interaction, 10, 401-444.

Whittaker, S., & O'Conaill, B. (1997). The role of vision in face-to-face and mediated communication. In K. E. Finn, A. J. Sellen, & S. B. Wilbur (Eds.), Video-mediated communication (pp. 23-49). Mahwah, NJ: Erlbaum.

James E. Driskell is president and senior scientist of Florida Maxima Corporation, Winter Park, Florida. He received his Ph.D. from the University of South Carolina in 1981.

Paul H. Radtke is a research psychologist at the Naval Air Warfare Center Training Systems Division, Orlando, Florida. He received his M.A. in political science from Northern Illinois University in 1972.

Address correspondence to James E. Driskell, Florida Maxima Corp., 507 N. New York Ave., R-1, Winter Park, FL 32789; james.driskell@rollins.edu.

Date received: July 25, 2001

Date accepted: December 16, 2002
COPYRIGHT 2003 Human Factors and Ergonomics Society

Author: Driskell, James E.; Radtke, Paul H.
Publication: Human Factors
Date: Sep 22, 2003