Team learning in technology-mediated distributed teams.

1. INTRODUCTION

Technology-mediated learning (TML) has been defined as a learning process (i.e., information exchange, interpretation, and encoding into a mental model) among peers and/or instructors that is mediated through the use of advanced information and communications technologies (ICT) (Alavi & Leidner, 2001). ICT has been used to support core teaching and learning activities in universities implementing distance learning (Chang, 2004; Saw et al., 2008) and in organizations conducting projects via geographically dispersed teams (Gibson & Gibbs, 2006; Nunamaker, Reinig, & Briggs, 2009). In both contexts, technology-mediated teams engage in team learning that gives rise to knowledge that is subsequently used in task execution.

Realizing the role of learning in collaborative team work, Wilson, Goodman, and Cronin (2007) called for a process analysis approach (i.e., assessment of discourse and behavior) in investigating team learning. By directly observing team learning behaviors, one could identify exactly how the team learning process is either enhanced or constrained. The purpose of this study is to extend current research on technology-mediated team performance by examining the effects of collaboration mode (collocated or technology-mediated non-collocated) on team interactions during project-based teamwork. The following research questions are addressed in this study:

(1) How does technology-mediated collaboration impact team learning behaviors?

(2) Does team learning involve both technical and social processes?

To answer these questions, an empirical interpretive research approach based on direct observation (Bakeman, 2000) is used to interpret, evaluate, and rate observable manifested behaviors and qualitative content (i.e., discussions) associated with team learning. Direct observation can provide more accurate descriptions of actual behavior at the time and place of its natural occurrence. In addition, direct observation can provide measures of responses that most subjects cannot accurately describe or recall, such as behavior rates, intensity of behavioral responses, and some thoughts that subjects may be unwilling to report or may distort. Ratings of task-related and affect-related communication exchanges by three trained observers are used.

The remainder of the paper is organized as follows. Section 2 presents a review of the relevant literature on team learning and potential influences on team member behavior. This is followed with a discussion of the research model and hypotheses in section 3. Section 4 and section 5 follow with a discussion of the methodology and results, respectively. Next, section 6 presents key findings, theoretical and methodological contributions, and implications. Concluding remarks follow in section 7.

2. LITERATURE REVIEW

Execution of project-based tasks requires information exchange, team learning, and collaborative interactions to identify and assess the problem domain and to generate alternative solutions. Team learning is thus essential to improved problem solving, decision making, and task performance. Kirschner et al.'s (2004) adaptation of Gibson's (1977) theory of affordances for the learning context, along with social impact theory (Latane, 1981), provides a framework that can be used to explain how facilitative and motivational aspects of collaboration mode (collocated versus technology-mediated non-collocated) can either constrain or enhance team learning behaviors and the social context in which team learning and problem solving take place.

2.1 Affordances for Technology-Mediated Learning

Using Gibson's (1977) theory of affordances, Kirschner et al. (2004) suggested that the effectiveness of a collaborative learning process is contingent upon the technological, educational (or learning), and social affordances present in the task environment. According to Kirschner et al. (2004), affordances are those artifacts of an environment that determine if and how the environment can be appropriated to successfully complete a learning task. The technological affordances of the environment must facilitate the task execution and social interactions typical of collaborative learning and problem solving. Technological affordances refer to the presence of specific tools and artifacts (e.g., videoconferencing, workgroup support systems, shared whiteboards, etc.) that support task accomplishment. Educational (or learning) affordance refers to the task environment's ability to stimulate, facilitate, and maintain collaborative participation, information exchanges, and interactions typical of the team learning process. For example, educational/learning affordance is realized through the use of collaborative tools such as videoconferencing and shared whiteboards that enable geographically dispersed team members to exchange and experiment with ideas during the design of a database. In other words, educational/learning affordance is the ability to 'derive utility' (i.e., adaptive structuration and appropriation) from a technology or procedure in learning to execute a specific task. Social affordance refers to the ability of a collaborative environment to allow the occurrence of the social dynamics (e.g., trust, mutual support, cohesion, cooperative goals, etc.) and collaborative interactions (e.g., team-wide participation, information exchange, joint problem solving, etc.) needed for collaborative problem solving. Collectively, these three affordances are properties of the task/learning environment that structure and determine the effectiveness of team learning processes.

2.2 Social Impact Theory

Social Impact Theory (SIT) defines social impact as changes in feelings, motivations, and behavior that occur in an individual as a result of the real, implied, or imagined presence or actions of other individuals (Latane, 1981). In a group context, SIT suggests that team members' affect and behavioral outcomes would be a function of three dimensions that define interpersonal interaction sessions--strength, immediacy, and number. Strength refers to the influence that one or more members can exert on others. The strength dimension is contingent upon the importance attributed to one or more group members and their ability to induce either positive (e.g., increased motivation, conformance, cooperation) or negative (e.g., disincentive, noncompliance, shirking) outcomes in others. For example, team members tend to accept suggestions supported by an accepted expert within a group, whereas an inept manager can induce diminished motivation in employees. Immediacy refers to the influence of time lapse between team member exchanges or spatial proximity (i.e., physical distance between team members). Immediacy suggests that group member contributions and affective responses will take on increasingly less importance as greater time, spatial, or interpersonal distance is experienced (Chidambaram & Tung, 2005). Immediacy also suggests that information provided by collocated team members would be more influential than information obtained from more distant sources (e.g., dispersed virtual team members). Consequently, there would be a tendency to readily consider local ideas and contributions and to question the credibility of ideas and contributions from distant sources (Latane, 1981). Further, non-collocated members could experience less motivation to participate in the task execution process, thereby reducing the number and quality of alternative solutions that could be assessed for relevancy (Blaskovich, 2008; Chidambaram & Tung, 2005).
The numbers dimension expresses the quantity of influential sources. For example, as the number of individuals adopting a specific opinion or perspective greatly increases, others will be influenced to assimilate to the majority consensus.

3. RESEARCH MODEL AND HYPOTHESES

The research model is depicted in Figure 1 below. The model draws on the theory of affordances and the social impact theoretical framework to explain the effects of collaboration mode on team learning behaviors and their subsequent impact on team performance and interaction quality.

3.1 Collaboration Mode and Team Learning

Dennis, Fuller, and Valacich (2008) noted that face-to-face collaboration can be superior to technology-mediated collaboration because of its ability to facilitate the construction of verbal information or messages that are supplemented with physical gestures or nonverbal cues (e.g., postures, facial expression, eye gaze, tone of voice, and conversation pauses). These nonverbal cues function as a form of synchronizing feedback that confirms or disconfirms understanding and controls turn-taking. For example, verbal information combined with nonverbal cues inherent in face-to-face collaboration helps to facilitate efficient turn-taking, questioning, and feedback sequences among team members.

[FIGURE 1 OMITTED]

SIT suggests that a collaboration mode will impact a team's ability to create and maintain a shared focus and sense of awareness of team members. For example, high numbers of participating members exhibiting unwillingness and limited motivation to engage in the learning process can further reinforce patterns of diminished participation among remaining team members and ultimately lead to poor task process execution. Indeed, Kapur and Kinzer (2007) observed that during collaboration, inequities in member participation patterns lead to tendencies to suffer from 'groupthink' or get "locked-in" early in the discussion, ultimately lowering the quality of discussion and, in turn, group performance. Recent research has also shown that relative to collocated teams, non-collocated technology-mediated teams inherently exhibit lower immediacy (i.e., lower spatial and psychological proximity) and therefore encounter more negative team process behaviors such as withdrawal from participation (e.g., Bonito, 2004; Chidambaram & Tung, 2005), diminished communication/information exchange (e.g., DeLuca & Valacich, 2006), lack of shared understanding (e.g., Cramton, 2001; Miranda & Saunders, 2003), and intra-team conflict (e.g., Hinds & Mortensen, 2005; Kankanhalli, Tan & Wei, 2007). Given the relation of these processes to effective team learning, the following hypothesis is proposed:

H1: Groups working in a face-to-face collaboration setting will exhibit more effective team learning behaviors than groups in a technology-mediated setting.

3.2 Team Learning and Performance

An important outcome of team learning behaviors is the construction of a team mental model (i.e., shared understanding of task requirements, solution content, team member roles, task execution strategy, etc.; see Miranda & Saunders, 2003; Rico, Sanchez-Manzanares, Gil, & Gibson, 2008) and a team situation model (i.e., shared understanding of task status, task environment constraints, team member affective state, etc.; see Cooke et al., 2003). Accurate shared team mental models provide declarative knowledge that facilitates task execution, minimizes duplication of effort, and promotes synergy and efficiency, which in turn yields greater productivity (Rico et al., 2008). Team situation models provide a form of implicit coordination (i.e., not verbally or explicitly stated) that allows team members to anticipate other members' needs and the appropriate future task state. This ability to anticipate appropriate action and the needs of others reduces task interdependency delays, minimizes the communication overhead typical of explicit coordination efforts, and promotes greater productivity. Thus, the following hypothesis is proposed.

H2: Team learning behaviors will be positively associated with team productivity.

Teams that do not acquire an adequate team situation model during the team learning process are likely to experience process losses, frustration, conflict, and distrust (Bain, Mann & Pirola-Merlo, 2001; Hoegl, Weinkauf, & Gemuenden, 2004). In contrast, the ability to accurately learn both task-related content and assess the psychological climate of a team setting is likely to induce team efficacy, process satisfaction, and improved coordination. Klein and Kleinhanns (2003) noted that the ability to learn new approaches to work helps to create a collective orientation and attentiveness that gives rise to mutual support. Here, it is argued that collocated team settings, by virtue of a better team learning process, should experience a better social climate (e.g., encouragement, positive feedback, mutual respect, consideration of ideas offered) that produces higher quality team member interactions as compared to technology-mediated non-collocated teams. On the basis of this argument, the following is proposed.

H3: Team learning behaviors will be positively associated with team interaction quality.

4. RESEARCH METHODOLOGY

4.1 Experimental Design

To test the research model and hypotheses, a laboratory experiment was conducted to examine the effects of two different modes of team collaboration--face-to-face and technology-mediated collaboration. Four-person teams were used in both conditions. The technology-mediated setting was configured as two collocated dyads at separate locations; each dyad was seated at a table and communicated with the other dyad via a videoconferencing system (i.e., a fully integrated microphone, speaker, and large video display system). In the face-to-face collaboration mode, all four subjects sat across from each other at a conference table. No content-sharing technology (e.g., shared whiteboard or shared desktop) was needed to complete the experimental task in either collaboration mode.

4.2 Participants

In this study, the 48 participants were drawn from a population of Management Information Systems undergraduate students familiar with the Systems Development Life Cycle approach to software design and with knowledge of structured programming. Previous research has noted that novice programmers exhibit skills that are comparable to those of expert programmers when the program functional requirements are of moderate complexity and the problem domain is well understood (Balijepally et al., 2009; Yoo & Alavi, 2001). For their participation, extra credit was awarded, and each design team was eligible to receive a $100 award for the highest team productivity score under each of the experimental conditions (i.e., face-to-face and videoconferencing).

4.3 Task and Procedure

The teams were required to enhance the functionality of a hypothetical university information system. The experimental task required each team to construct software design documentation that included (1) a hierarchy chart, (2) a list of function prototypes, and (3) pseudocode for each function identified as part of a solution to the problem. These activities are typical of software design and coding activities conducted within organizations and exhibit the same form of team collaboration, communication, and decision-making requirements (Khatri et al., 2006; Kumar & Benbasat, 2004). The experimental task duration was 2.5 hours. The teams were given a handout detailing the task objective and required design deliverables and were instructed to complete the task in a timely manner. To create a demand for team-wide communication, each subteam possessed a unique half of the task instructions and was required to share its instruction set with the other subteam.
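To make the three deliverables concrete, the sketch below shows what a small fragment of one team's documentation might look like. The module and function names (e.g., `validate_student_id`, `add_course_record`) are invented for illustration and do not come from the actual experimental materials; the pseudocode is expanded into runnable stubs.

```python
# Hypothetical fragment of the three required deliverables for one
# small piece of the task (names invented for illustration).

# (1) Hierarchy chart, shown as an indented outline:
#     enroll_student
#     |-- validate_student_id
#     |-- add_course_record

# (2) Function prototypes (name, parameters, return type):
#     validate_student_id(student_id) -> bool
#     add_course_record(student_id, course_code) -> dict

# (3) Pseudocode for each function, here expanded to runnable stubs:
def validate_student_id(student_id: str) -> bool:
    # Assume a student ID is valid if it is exactly 8 digits.
    return len(student_id) == 8 and student_id.isdigit()

def add_course_record(student_id: str, course_code: str) -> dict:
    # Reject invalid IDs before creating the enrollment record.
    if not validate_student_id(student_id):
        raise ValueError("invalid student id")
    return {"student": student_id, "course": course_code}
```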

4.4 Measures

The behavioral observation approach was used in assessing team learning, with three trained observers rating task-related and affect-related behaviors. The observers/raters underwent a 2-hour training session that reviewed construct definitions and the relevant behavioral indicators used to provide a rating (Bakeman, 2000). Observer training concluded with two practice rating sessions. On completion of the practice sessions, the raters discussed their rationale so that they could gain a full understanding of the construct definitions and the relevant behavioral indicators related to team learning. Overall team ratings comprised the sum of ratings of one twenty-minute interval at the midpoint and the last twenty-minute interval of the overall 2.5-hour session.

Mitchell and James (2001) noted that decisions about when to measure and how frequently to measure a phenomenon of interest require consideration of when events occur, when they change, and how quickly they change. The two intervals were chosen to assess team learning during initial interactions and during what a previous study (Andres, 2006) revealed to be typical peak activity. Further, more than two assessment intervals were deemed unnecessary because rate of change was not of interest (Mitchell & James, 2001). Finally, to control for rater drift (i.e., tendency for change in interpretation of constructs and behavioral indicator identification), constructs and relevant behaviors were reviewed between rating sessions. Given that the final ratings were averaged among the raters, the a_wg(j) interrater agreement index was used to assess inter-rater reliability (Brown & Hauenstein, 2005). The data was analyzed using PLS (partial least squares) because of its suitability to causal path modeling and analysis of data from small samples (Chin, 1998; Chin & Newsted, 1999).
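The a_wg agreement index can be sketched as follows. This follows the commonly cited single-item formulation attributed to Brown and Hauenstein (2005), where observed sample variance is compared against the maximum variance possible for a group with the same mean on the same scale; the exact computational form used in the study is not reproduced in the paper, so treat this as an illustrative implementation. For multiple items, a_wg(j) is taken as the mean of the per-item values.

```python
import statistics

def a_wg(ratings, low=1, high=7):
    """Single-item a_wg agreement index (Brown & Hauenstein, 2005 style).

    ratings: one item's scores from k raters on a [low, high] scale.
    Returns 1.0 for perfect agreement; values near 0 indicate agreement
    no better than maximal possible disagreement for this mean.
    """
    k = len(ratings)
    m = statistics.mean(ratings)
    s2 = statistics.variance(ratings)  # sample variance (k - 1 divisor)
    if s2 == 0:
        return 1.0  # perfect agreement (also avoids 0/0 at scale ends)
    # Maximum possible sample variance for a group with mean m:
    max_s2 = ((high + low) * m - m**2 - high * low) * (k / (k - 1))
    return 1 - (2 * s2) / max_s2

def a_wg_j(item_ratings, low=1, high=7):
    # Multi-item index: average of the per-item a_wg values.
    return statistics.mean(a_wg(r, low, high) for r in item_ratings)
```

For example, three raters scoring an item 6, 7, and 6 on the study's 7-point scale yield a_wg = 0.875 under this formulation.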

4.4.1 Treatment Variable--Collaboration Mode: The collaboration modes utilized to form the experimental conditions were face-to-face collaboration and technology-mediated collaboration via videoconferencing. Recent research has shown that both collaboration modes differ in the capacity to which they impact communication efficiency, shared understanding and team interactions (Blaskovich, 2008; Chidambaram & Tung, 2005; Furumo, 2009).

4.4.2 Team Learning: According to Slavin (1996) and Edmondson (1999), team learning behaviors are an iterative cycle of information exchange, asking questions, seeking feedback, and elaboration via experimentation with alternative ideas. Based on the team learning literature, the team learning rating scale developed for this study comprised five items that reflected the degree of 1) team-wide information exchange, 2) progressive elaboration on ideas, 3) usefulness of ideas proposed, 4) experimentation and evaluation of alternatives, and 5) confirmed consensus on proposed ideas. Three trained raters assessed relevant behavioral indicators of these items in real time to provide scale item ratings using a 7-point scale ranging from 1 (very low) to 7 (very high). The interrater agreement index for the team learning scale was a_wg(j) = 0.98, indicating very good interrater agreement.

4.4.3 Team Productivity: The team productivity measure was determined by assessing the completeness of the required design documentation against a defined rubric. A research assistant, unaware of the study's objectives, computed team productivity as a combined score on the completeness of file design (i.e., appropriate data fields), specification of function prototypes (i.e., function name, parameters, and return type), and pseudocode for each function. A point was awarded for each correctly specified data value of a specific data file, each correct output and input data value of a program module (i.e., function or subroutine), and each correctly specified program statement needed in a specific program module.
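The point-per-element scoring described above can be sketched as a simple completeness count. The rubric elements below are invented placeholders; the paper does not reproduce the actual rubric, so this only illustrates the scoring mechanics.

```python
def score_deliverable(submitted, rubric):
    """Completeness score for one deliverable: one point per rubric
    element (data field, prototype component, or pseudocode statement)
    present in the team's submission. Rubric contents are hypothetical."""
    return sum(1 for element in rubric if element in submitted)

def team_productivity(submission, rubrics):
    # Combined score across file design, prototypes, and pseudocode.
    return sum(score_deliverable(submission[part], rubrics[part])
               for part in rubrics)

# Illustrative rubric and submission (not from the study):
rubrics = {
    "file_design": ["student_id", "course_code"],
    "prototypes": ["enroll(student_id, course_code) -> record"],
    "pseudocode": ["if id valid then insert record"],
}
submission = {
    "file_design": ["student_id"],          # 1 of 2 fields present
    "prototypes": ["enroll(student_id, course_code) -> record"],  # 1 of 1
    "pseudocode": [],                       # 0 of 1
}
```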

4.4.4 Team Interaction Quality: After task completion, a team interaction quality questionnaire elicited individual team member responses regarding the extent to which, while executing the task, anyone 1) felt frustrated or tense about the behavior of another member, 2) personally expressed negative opinions of another member, and 3) personally received negative opinions from other members. This scale was adapted from the set of questionnaire items used by Green and Tabor (1980). Because our model focuses on the team level, aggregation of individual interaction quality responses required demonstration of within-group agreement. To justify aggregation, the a_wg(j) interrater agreement index was computed to assess the convergence of responses among team members. The interrater agreement index for the interaction quality scale was a_wg(j) = 0.85, indicating good interrater agreement.

4.4.5 Control Variables: Programming ability is an important variable that can influence a participant's performance. To minimize the influence of this variable on performance, programming ability was measured and used as a covariate in the analysis. The index of programming ability was determined for each subject by grade received in an upper level programming course (Balijepally et al., 2009; Quigley, Tekleab & Tesluk, 2007).

4.5 Analysis

Measurement model validation and structural model testing were conducted using PLS-Graph version 3.00, in which regression is performed on only a portion of the model at any one time (Chin, 1998). The research model has no more than one structural path leading into any one construct. Thus, the sample size of 12 four-person teams conforms to the sample size recommendation of 5 to 10 times the largest number of structural paths leading into any one construct, given that the construct is measured with reflective indicators (Chin & Newsted, 1999). Recent studies have attested to the ability of PLS to obtain robust estimates where small samples were used (Belanger & Allport, 2008; Majchrzak, Beath, Lim & Chin, 2005).

5. RESULTS

5.1 Measurement Model

To assess internal consistency reliability, convergent validity, and discriminant validity of the construct measurements, the constructs' composite reliabilities (CR) and the average variance extracted (AVE) were calculated. Regarding internal consistency (reliability), composite reliability scores for both constructs (0.871 and 0.975), as shown in Table 1 below, are well above 0.70, the suggested benchmark for acceptable reliability (Chin, 1998; Majchrzak et al., 2005). Table 1 also indicates that, with the exception of one item-to-construct loading of 0.641, all items have loadings of 0.700 or above, and the t-statistics for the item-to-construct loadings are all significant at p ≤ 0.01. These results indicate that the measurement model displays both item internal reliability and item convergent validity.

Discriminant validity is evidenced when all the loadings of the scale items on their assigned latent variables or constructs are larger than their loadings on any other latent variable. Table 2 below provides the correlations of each item to its intended latent variable (i.e., loadings) and to all other constructs (i.e., cross-loadings). Although there is some cross-loading, all items load more highly on their own construct than on other constructs, and all constructs share more variance with their measures than with other constructs. It is not uncommon to encounter cross-loading among constructs because behavior and affective responses are rarely partitioned into neatly packaged units that function fully independently of one another (Majchrzak et al., 2005; Schaupp, Belanger, & Weiguo, 2009).

The second procedure necessary to show discriminant validity is the AVE analysis. The square root of the AVE of each construct must be larger than any correlation between that construct and any other construct (Gefen & Straub, 2005). In Table 3, all the AVE square roots that appear in the diagonal are larger than the correlation between the team learning and interaction quality latent variables. This AVE analysis result and the item-to-latent-variable loadings suggest that the measurement model displays discriminant validity.
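The CR and AVE quantities above can be computed directly from standardized indicator loadings. The sketch below uses the standard formulas (CR as the squared sum of loadings over that plus summed error variances, AVE as the mean squared loading) with invented loadings and an invented inter-construct correlation; the study's actual values appear in its Tables 1-3.

```python
import math

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    # variances), where each error variance is 1 - loading^2 for
    # standardized reflective indicators.
    s = sum(loadings)
    err = sum(1 - l**2 for l in loadings)
    return s**2 / (s**2 + err)

def ave(loadings):
    # Average variance extracted: mean of the squared loadings.
    return sum(l**2 for l in loadings) / len(loadings)

# Illustrative five-item loadings (not the study's actual values):
team_learning_loadings = [0.82, 0.88, 0.79, 0.91, 0.85]

# Fornell-Larcker style discriminant check: sqrt(AVE) must exceed the
# correlation with the other construct (correlation is illustrative).
corr_tl_iq = 0.72
discriminant_ok = math.sqrt(ave(team_learning_loadings)) > corr_tl_iq
```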

[FIGURE 2 OMITTED]

5.2 Structural Model

In PLS analysis, a structural model can be evaluated on the basis of strong indicator-to-construct loadings, R² values, and significance of the structural path coefficients (Chin, 1998). Figure 2 above shows that all of the paths are significant at the 0.01 level. In addition, the model accounts for 52 to 67 percent of the variances (R² scores). Tenenhaus et al. (2005) suggested that the geometric mean of the average communality (i.e., variable variance explained by a factor structure) and the average R² (variable variance explained by the model) of the latent variables could provide a global fit measure (called GoF) for PLS path modeling. Three GoF cut-offs in line with effect sizes have been defined as GoF-small = 0.1, GoF-medium = 0.25, and GoF-large = 0.36 (Wetzels, Odekerken-Schroder, & van Oppen, 2009). For this study's model, a GoF value of 0.62 was obtained, exceeding the cut-off value of 0.36 for large effect sizes of R². The resulting R² and GoF values suggest that overall the data provide a good fit to the model and also indicate good model predictability. In addition, the hypothesized model provided a better fit to the data (i.e., largest explained variance) than all alternative causal path configurations among the variables.
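The GoF measure described above is a one-line computation once the communalities and R² values are known. In the sketch below, the R² values are the endpoints of the range the study reports (0.52 to 0.67), while the communalities are invented placeholders, so the result only approximates the study's reported GoF of 0.62.

```python
import math

def gof(communalities, r_squares):
    # Global fit for PLS path models (Tenenhaus et al., 2005):
    # geometric mean of average communality and average R^2.
    avg_comm = sum(communalities) / len(communalities)
    avg_r2 = sum(r_squares) / len(r_squares)
    return math.sqrt(avg_comm * avg_r2)

# R^2 endpoints from the study; communalities are illustrative.
fit = gof([0.66, 0.70], [0.52, 0.67])
```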

The PLS analysis results (Figure 2 above) show that all the hypotheses are supported. Collaboration mode was shown to increase team learning (b = 0.781, t = 10.634, p ≤ .01), thereby supporting hypothesis 1. Team learning led to increases in both productivity (b = 0.820, t = 9.980, p ≤ .01) and interaction quality (b = 0.720, t = 7.656, p ≤ .01), indicating support for hypothesis 2 and hypothesis 3, respectively.

Mediation was assessed following the procedure recommended by Baron and Kenny (1986). To establish mediation, three conditions must hold. First, the independent variable must affect the dependent variable; second, the independent variable must affect the intervening variable; and third, the intervening variable must affect the dependent variable. All of the mediation paths in the model were shown to conform to the requirements indicating that team learning mediated the impact of collaboration mode on productivity and interaction quality.
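The three Baron and Kenny conditions can be illustrated on simulated data with a causal chain matching the model (collaboration mode → team learning → productivity). This is a didactic sketch only: it uses simple OLS slopes on generated data rather than the study's PLS estimates, and it omits the significance tests that a real mediation analysis requires.

```python
import random

def slope(x, y):
    # OLS slope of y on x (simple bivariate regression).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

random.seed(1)
# Simulated chain: mode (0/1) -> team learning -> productivity.
mode = [random.choice([0, 1]) for _ in range(200)]
learning = [2.0 * m + random.gauss(0, 0.5) for m in mode]
productivity = [1.5 * l + random.gauss(0, 0.5) for l in learning]

# Baron & Kenny's three conditions (each path should be non-zero):
c_path = slope(mode, productivity)      # X -> Y (total effect)
a_path = slope(mode, learning)          # X -> M
b_path = slope(learning, productivity)  # M -> Y
```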

6. DISCUSSION

6.1 Key Findings

The main objective of this research was to extend prior work on technology-mediated distributed teams by using a behavioral observation research approach to assess the role of team learning behaviors in task outcomes. Results demonstrated that technology-mediated teams experienced more communication breakdowns, misunderstandings, and difficulty moving forward with task execution than their face-to-face counterparts. Apparently, the lower levels of team learning affordances (i.e., technological, educational/learning, and social) structured team processes through mechanisms such as 1) extent of team-wide participation, 2) clarity of information exchanges, and 3) ability to maintain a persistent and coherent shared focus.

Results and observer notes also indicated that diminished immediacy (i.e. greater spatial distance between team members) associated with technology-mediated distributed teams exhibited a form of 'psychological distance' that limited the teams' ability to assess the social context (e.g., awareness of team member need for encouragement, confirmed or disconfirmed understanding, level of solution satisfaction etc.). In contrast, the greater inherent immediacy associated with face-to-face settings made it easier to provide clarification during questioning and feedback during peer-to-peer exchanges. This likely enabled the face-to-face teams to clear up any misunderstandings and to develop more alternative ideas that could be evaluated more quickly and thoroughly so that the team could readily move forward onto the next task activity. In summary, active participation and a collaboration mode that provides technological, educational (or learning), and social affordance are essential to the team learning process and ultimately greater productivity and interaction quality during project-based teamwork.

6.2 Contributions and Implications

The study offers a methodological contribution to the technology-mediated learning research stream by addressing an observed gap in the literature. The present study fills a void by analyzing observable behaviors of the learning process in action to identify specific mechanisms that enhance or constrain these specific behaviors during technology-mediated collaboration. The use of alternate methodologies can help to validate a research stream by demonstrating similar findings that are not subject to a 'method' bias (Patton, 1990; Yin, 2003). A second contribution lies in the integration of Latane's (1981) social impact theory (SIT) and the affordance framework of Kirschner et al. (2004) to shed light on technological, educational, and social factors that are simultaneously at work during technology-mediated collaboration.

Results of the study suggest that managers and educators should be aware of not only the communication efficiency issues that may arise but must also consider potential motivational and efficacy issues that may diminish participation critical to team-based problem solving. Proactive interventions could involve selective team configuration aimed at assuring a sense of outcome interdependence which has been shown to encourage teamwide participation and maintenance of shared focus (De Dreu, 2007; Hollingshead, 2001; Okhuysen & Eisenhardt, 2002). In addition, training to create greater awareness of collaboration can promote individual understanding of principles and behaviors associated with effective teamwork (Heath et al., 2002). Such training should promote understanding of the need for shared goal adoption and awareness of collaborative work processes (e.g., communication, interpersonal interactions, negotiated decision making, coordination, and adaptability).

6.3 Limitations and Future Research

One limitation of the study lies in the use of 'newly formed' teams. Consequently, generalization to other contexts, such as ongoing or longer-term geographically dispersed teams, should be done with caution. The use of a laboratory experiment also presents a limitation with regard to generalization to actual field settings. However, an experiment is an appropriate research methodology when establishing internal validity is a critical issue and when testing theories in a new context (Wilson & Sheetz, 2008). The use of student subjects presents a further potential limitation. However, previous studies have demonstrated that there is little difference between using students and using professionals in decision-making situations and problem-solving tasks such as software development (Balijepally et al., 2009; Yoo & Alavi, 2001). Finally, future research should incorporate methodological triangulation (i.e., multimethods) within a single study as a form of additional validity for the study's findings. Future studies should also examine team learning differences associated with different team types and configurations, such as temporary versus long-standing intact teams, team size, or different task types.

7. CONCLUSION

The overall finding obtained from the study is that technology-mediated collaboration is comprised of both a technology and social context. The technology context refers to the facilitation of task execution at a procedural or technical level (e.g., communication exchanges, evaluation of alternative ideas, creation of relevant documents or other artifacts via videoconferencing, workgroup support system, shared whiteboard etc.). The social context refers to facilitation of task execution at a psychological level (e.g., motivation to participate, mutual support, negotiation of consensus, cohesion). Consequently, when planning assignments and team projects, educators and organizations should view team learning as not just the technical or procedural acquisition of knowledge but also as a social process requiring team-wide participation, cooperation, and reflection.

8. REFERENCES

Alavi, M., & Leidner, D. E. (2001). "Technology Mediated Learning: A Call for Greater Depth and Breadth of Research." Information Systems Research, Vol. 12, No. 1, pp.1-10.

Andres, H. P. (2006). "The Impact of Communication Medium on Virtual Team Group Process." Information Resources Management Journal, Vol. 19, No. 2, pp. 1-17.

Bain, P. G., Mann, L., & Pirola-Merlo, A. (2001). "The Innovation Imperative: The Relationship Between Team Climate, Innovation, and Performance in Research and Development Teams." Small Group Research, Vol. 32, No. 1, pp. 55-73.

Bakeman, R. (2000). "Behavioral Observations and Coding." In H. T. Reis & C. K. Judd (Eds.), Handbook of Research Methods in Social Psychology (pp. 138-159). Cambridge University Press, New York.

Balijepally, V. G., Mahapatra, R. K., Nerur, S., & Price, K. H. (2009). "Are Two Heads Better Than One for Software Development? The Productivity Paradox of Pair Programming." MIS Quarterly, Vol. 33, No. 1, pp. 91-118.

Baron, R. M., & Kenny, D. A. (1986). "The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic and Statistical Considerations." Journal of Personality and Social Psychology, Vol. 51, No. 5, pp. 1173-1182.

Belanger, F., & Allport, C. D. (2008). "Collaborative Technologies in Knowledge Telework: An Exploratory Study." Information Systems Journal, Vol. 18, No. 1, pp. 101-121.

Blaskovich, J. L. (2008). "Exploring the Effect of Distance: An Experimental Investigation of Virtual Collaboration, Social Loafing, and Group Decisions." Journal of Information Systems, Vol. 22, No. 1, pp. 27-46.

Bonito, J. A. (2004). "Shared Cognition and Participation in Small Groups: Similarity of Member Prototypes." Communication Research, Vol. 31, No. 6, pp. 704-730.

Brown, R. D., & Hauenstein, N. M. A. (2005). "Interrater Agreement Reconsidered: An Alternative to the rWG Indices." Organizational Research Methods, Vol. 8, No. 2, pp. 165-184.

Chang, S. (2004). "High Tech vs. High Touch in Distance Education." International Journal of Distance Education Technologies, Vol. 2, No. 2, pp. i-iii.

Chidambaram, L., & Tung, L.L. (2005). "Is Out of Sight, Out of Mind? An Empirical Study of Social Loafing in Technology-Supported Groups." Information Systems Research, Vol. 16, No. 2, pp. 149-170.

Chin, W. W. (1998). "Issues and Opinions on Structural Equation Modeling." MIS Quarterly, Vol. 22, No. 1, pp. 7-16.

Chin, W. W., & Newsted, P. R. (1999). "Structural Equation Modeling Analysis with Small Samples Using Partial Least Squares." In R. Hoyle (Ed.), Statistical Strategies for Small Sample Research (pp. 307-341). Sage Publications, Thousand Oaks, CA.

Cooke, N. J., Kiekel, P. A., Salas, E., Stout, R., Bowers, C., & Cannon-Bowers, J. (2003). "Measuring Team Knowledge: A Window to the Cognitive Underpinnings of Team Performance." Group Dynamics: Theory, Research, and Practice, Vol. 7, No. 3, pp. 179-199.

Cramton, C. (2001). "The Mutual Knowledge Problem and its Consequences for Dispersed Collaboration." Organization Science, Vol. 12, No. 3, pp. 346-371.

De Dreu, C. K. W. (2007). "Cooperative Outcome Interdependence, Task Reflexivity and Team Effectiveness: A Motivated Information Processing Approach." Journal of Applied Psychology, Vol. 92, No. 3, pp. 628-638.

DeLuca, D., & Valacich, J. S. (2006). "Virtual Teams in and Out of Synchronicity." Information Technology & People, Vol. 19, No. 4, pp. 323-344.

Dennis, A., Fuller, R. M., & Valacich, J. S. (2008). "Media, Tasks and Communication Processes: A Theory of Media Synchronicity." MIS Quarterly, Vol. 32, No. 3, pp. 575-600.

Edmondson, A. (1999). "Psychological Safety and Learning Behavior in Work Teams." Administrative Science Quarterly, Vol. 44, No. 2, pp. 350-383.

Furumo, K. (2009). "The Impact of Conflict and Conflict Management Style on Deadbeats and Deserters in Virtual Teams." Journal of Computer Information Systems, Vol. 49, No. 4, pp. 66-73.

Gefen, D., & Straub, D. W. (2005). "A Practical Guide to Factorial Validity Using PLS-Graph: Tutorial and Annotated Example." Communications of the Association for Information Systems, Vol. 16, No. 5, pp. 91-109.

Gibson, C. B., & Gibbs, J. L. (2006). "Unpacking the Concept of Virtuality: The Effects of Geographic Dispersion, Electronic Dependence, Dynamic Structure, and National Diversity on Team Innovation." Administrative Science Quarterly, Vol. 51, No. 3, pp. 451-495.

Gibson, J. J. (1977). "The Theory of Affordances." In R. Shaw & J. Bransford (Eds.), Perceiving, Acting and Knowing (pp. 67-82). Erlbaum, Hillsdale, NJ.

Green, S. & Taber, T. (1980). "The Effects of Three Social Decision Schemes in Decision Group Performance." Organizational Behavior and Human Performance, Vol. 25, No. 1, pp. 97-106.

Heath, C., Svensson, M. S., Hindmarsh, J., Luff, P., & vom Lehn, D. (2002). "Configuring Awareness." Computer Supported Cooperative Work: Journal of Collaborative Computing, Vol. 11, No. 3-4, pp. 317-347.

Hinds, P. J., & Mortensen, M. (2005). "Understanding Conflict in Geographically Distributed Teams: The Moderating Effects of Shared Identity, Shared Context, and Spontaneous Communication." Organization Science, Vol. 16, No. 3, pp. 290-309.

Hoegl, M., Weinkauf, K., & Gemuenden, H. G. (2004). "Interteam Coordination, Project Commitment, and Teamwork in Multiteam R&D Projects: A Longitudinal Study." Organization Science, Vol. 15, No. 1, pp. 38-55.

Hollingshead, A. B. (2001). "Cognitive Interdependence and Convergent Expectations in Transactive Memory." Journal of Personality and Social Psychology, Vol. 81, No. 6, pp. 1-10.

Kankanhalli, A., Tan, B. C. Y., & Wei, K. (2007). "Conflict and Performance in Global Virtual Teams." Journal of Management Information Systems, Vol. 23, No. 3, pp. 237-274.

Kapur, M. & Kinzer, C. (2007). "Examining the Effect of Problem Type in a Synchronous Computer-Supported Collaborative Learning (CSCL) Environment." Educational Technology Research & Development, Vol. 55, No. 5, pp. 439-459.

Khatri, V., Vessey, I., Ram, S., & Ramesh, V. (2006). "Cognitive Fit Between Conceptual Schemas and Internal Problem Representations: The Case of Geospatio-Temporal Conceptual Schema Comprehension." IEEE Transactions on Professional Communication, Vol. 49, No. 2, pp. 109-127.

Kirschner, P., Strijbos, J., Kreijns, K., & Beers, P. J. (2004). "Designing Electronic Collaborative Learning Environments." Educational Technology Research & Development, Vol. 52, No. 3, pp. 47-66.

Klein, J., & Kleinhanns, A. (2003). "Closing the Team Gap in Virtual Teams." In C. B. Gibson & S. G. Cohen (Eds.), Virtual Teams that Work: Creating Conditions for Virtual Team Effectiveness (pp. 381-400). Jossey-Bass, San Francisco.

Kumar, N., & Benbasat, I. (2004). "The Effect of Relationship Encoding, Task Type, and Complexity on Information Representation: An Empirical Evaluation of 2D and 3D Line Graphs." MIS Quarterly, Vol. 28, No. 2, pp. 255-281.

Latane, B. (1981). "The Psychology of Social Impact." American Psychologist, Vol. 36, No. 4, pp. 343-356.

Majchrzak, A., Beath, C., Lim, R., & Chin, W. (2005). "Managing Client Dialogues during Information Systems Design to Facilitate Client Learning." MIS Quarterly, Vol. 29, No. 4, pp. 653-672.

Miranda, S. M., & Saunders, C. S. (2003). "The Social Construction of Meaning: An Alternative Perspective on Information Sharing." Information Systems Research, Vol. 14, No. 1, pp. 87-107.

Mitchell, T. R., & James, L. R. (2001). "Building Better Theory: Time and the Specification of When Things Happen." Academy of Management Review, Vol. 26, No. 4, pp. 530-547.

Nunamaker, J. F., Reinig, B. A., & Briggs, R. O. (2009). "Principles for Effective Virtual Teamwork." Communications of the ACM, Vol. 52, No. 4, pp. 113-117.

Okhuysen, G. A., & Eisenhardt, K. M. (2002). "Integrating Knowledge in Groups: How Formal Interventions Enable Flexibility." Organization Science, Vol. 13, No. 4, pp. 370-386.

Patton, M. Q. (1990). Qualitative Evaluation and Research Methods. SAGE Publications, Newbury Park, CA.

Quigley, N. R., Tekleab, A. G., & Tesluk, P. E. (2007). "Comparing Consensus- and Aggregation-Based Methods of Measuring Team-Level Variables: The Role of Relationship Conflict and Conflict Management Processes." Organizational Research Methods, Vol. 10, No. 4, pp. 589-606.

Rico, R., Sanchez-Manzanares, M., Gil, F., & Gibson, C. (2008). "Team Implicit Coordination Processes: A Team Knowledge-Based Approach." Academy of Management Review, Vol. 33, No. 1, pp. 163-184.

Saw, K. G., Majid, O., Abdul Ghani, N., Atan, H., Idrus, R. M., Rahman, Z. A., & Tan, K. E. (2008). "The Videoconferencing Learning Environment: Technology, Interaction and Learning Intersect." British Journal of Educational Technology, Vol. 39, No. 3, pp. 475-485.

Schaupp, L. C., Belanger, F., & Weiguo, F. (2009). "Examining the Success of Websites beyond E-Commerce: An Extension of the IS Success Model." Journal of Computer Information Systems, Vol. 49, No. 4, pp. 42-52.

Slavin, R. E. (1996). "Research on Cooperative Learning and Achievement: What We Know, What We Need to Know." Contemporary Educational Psychology, Vol. 21, No. 1, pp. 43-69.

Tenenhaus, M., Vinzi, V. E., Chatelin, Y.-M., & Lauro, C. (2005). "PLS Path Modeling." Computational Statistics and Data Analysis, Vol. 48, No. 1, pp. 159-205.

Wetzels, M., Odekerken-Schroder, G., & van Oppen, C. (2009). "Using PLS Path Modeling for Assessing Hierarchical Construct Models: Guidelines and Empirical Illustration." MIS Quarterly, Vol. 33, No. 1, pp. 177-195.

Wilson, J. M., Goodman, P. S., & Cronin, M. A. (2007). "Group Learning." Academy of Management Review, Vol. 32, No. 4, pp. 1041-1059.

Wilson, V. E., & Sheetz, S. D. (2008). "Context Counts: Effects of Work versus Non-Work Context on Participants' Perceptions of Fit in E-Mail versus Face-to-Face Communication." Communications of AIS, Vol. 22, No. 1, pp. 311-338.

Yin, R. K. (2003). Case Study Research: Design and Methods (3rd ed.). Sage Publications, Thousand Oaks, CA.

Yoo, Y., & Alavi, M. (2001). "Media and Group Cohesion: Relative Influences on Social Presence, Task Participation, and Group Consensus." MIS Quarterly, Vol. 25, No. 3, pp. 371-390.

Hayward P. Andres

Belinda P. Shipps

School of Business & Economics

North Carolina A&T State University

Greensboro, North Carolina, 27411

hpandres@ncat.edu bpshipps@ncat.edu

AUTHOR BIOGRAPHIES

Hayward Andres is an Associate Professor of Management Information Systems in the Department of Management at North Carolina A&T State University. He earned his Ph.D. in Management Information Systems at Florida State University. His current research focuses on virtual teams, technology-mediated collaboration, human-computer interaction, organizational computing, and project management. His research has been published in Journal of Management Information Systems, the Information Resources Management Journal, Team Performance Management, and the Journal of Educational Technology Systems.

Belinda Shipps is an Assistant Professor in the area of Management Information Systems at North Carolina A&T State University. She earned a Ph.D. from the University of Wisconsin-Milwaukee. Her current research interests include: agile IT staffing strategies, health care and IT, multimedia, social networking and education. Her research has been published in the Journal of STEM Education.
Table 1: Composite Reliability, AVE, and Indicator Loadings

Construct and Item                                                            Loading

Team Learning (Composite Reliability = 0.975; AVE = 0.885)
TeamLearn1   Some team members were just listening without providing           0.924
             any verbal input
TeamLearn2   Ideas were easily developed and improved through team-wide        0.913
             discussion
TeamLearn3   All team members provided useful verbal input                     0.976
TeamLearn4   Ideas were thoroughly discussed and evaluated among all           0.925
             team members
TeamLearn5   Team-wide consensus was confirmed before moving forward           0.964
             with an idea

Interaction Quality (Composite Reliability = 0.871; AVE = 0.697)
IntQual1     Felt frustrated or tense about another team member's behavior     0.641
IntQual2     Expressed negative opinion about another team member's behavior   0.897
IntQual3     Observed others express a negative opinion about your behavior    0.936

Table 2: Indicator Loadings

Item               Team Learning   Interaction Quality

TeamLearn1         .924            .639
TeamLearn2         .913            .646
TeamLearn3         .976            .727
TeamLearn4         .964            .664
TeamLearn5         .925            .712
InteractionQual1   .612            .897
InteractionQual2   .369            .641
InteractionQual3   .748            .936

Table 3: Latent Variable Correlations and Square Root of AVE

                      Team Learning   Interaction Quality

Team Learning         .941
Interaction Quality   .720            .835

Note: The square root of each construct's AVE appears on the diagonal.
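The composite reliability and AVE values reported above can be reproduced from the indicator loadings using the standard formulas (AVE is the mean of the squared loadings; composite reliability is the squared sum of loadings over that quantity plus the summed error variances). The following Python sketch is illustrative only and is not the authors' analysis code; the loadings are taken from Table 1:

```python
def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    num = sum(loadings) ** 2
    err = sum(1 - l ** 2 for l in loadings)  # error variance of each indicator = 1 - loading^2
    return num / (num + err)

def ave(loadings):
    """Average variance extracted: mean of the squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Indicator loadings reported in Table 1
team_learning = [0.924, 0.913, 0.976, 0.925, 0.964]
interaction_quality = [0.641, 0.897, 0.936]

print(round(composite_reliability(team_learning), 3))       # 0.975, as reported
print(round(ave(team_learning), 3))                         # 0.885, as reported
print(round(composite_reliability(interaction_quality), 3)) # 0.871, as reported
print(round(ave(interaction_quality), 3))                   # 0.697, as reported
```

The square roots of the two AVE values (0.941 and 0.835) match the diagonal of Table 3, and, being larger than the inter-construct correlation of .720, support discriminant validity.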
COPYRIGHT 2010 Journal of Information Systems Education