Team task analysis: identifying tasks and jobs that are team based.

INTRODUCTION

The primary objective of the present paper is to describe the development of three team task analysis scales. A goal was to develop these scales in such a manner that they could be easily incorporated into existing job and task analysis systems to identify team-based tasks and quantify the degree of team interdependency. The scales also allow for a quantitative assessment of the degree to which a job is team based. Initial validation data for the scales, obtained from a lab study, are presented. These lab results are supplemented with descriptions of applications of the scales in operational field settings.

Job and Task Analysis Scales

Task analysis entails the description of jobs in terms of identifiable units of activities. Although the level of specificity of analysis and description may vary, job and task analysis techniques are typically focused at the task level. Hence, job and task analysis is the process by which the major work behaviors and associated knowledge, skills, and abilities (KSAs) that are required for successful job or task performance are identified. Thus it is recognized, from both a professional and legal perspective, that job analysis is the critical foundation for most, if not all, human resource functions (Binning & Barrett, 1989; Equal Employment Opportunity Commission et al., 1978).

Procedurally, at some point in the job analysis process, task ratings on a number of dimensions are obtained from subject matter experts. These dimensions or scales typically encompass but are not limited to importance, frequency, time spent, criticality, difficulty of performing, difficulty of learning, time to proficiency, and consequences of errors (Arthur, Doverspike, & Barrett, 1996). Although these scales have generally been studied in the context of tasks performed by individuals (e.g., Sanchez & Fraser, 1992; Sanchez & Levine, 1989), they are clearly applicable to tasks performed by teams. However, because teams consist of two or more individuals who have specific role assignments, perform specific tasks, and must interact and coordinate to successfully achieve common goals or objectives (Baker & Salas, 1997), team tasks have an additional element of complexity that is not present in the analysis of individual tasks. These differences between individual and team tasks and the resultant need for additional task analysis scales to describe and obtain information about team tasks are described in the next section.

Overview and Summary of the Team Task Analysis Literature

Although there has been an increased amount of attention paid to teams in recent years (e.g., Artman, 2000; Bartone, Johnsen, Eid, Brun, & Laberg, 2002; Brannick, Prince, & Salas, 1997; Dyer, 1984; Hackman, 1987; Rasker, Post, & Schraagen, 2000; Salas, Burke, Bowers, & Wilson, 2001), team task analysis has received very little of this attention. For instance, a comprehensive search of the published literature identified only a small number of team task analysis papers, such as Bowers, Baker, and Salas (1994), Bowers, Morgan, Salas, and Prince (1993), Dieterly (1988), and Swezey, Owens, Bergondy, and Salas (1998; see also Baker, Salas, & Cannon-Bowers, 1998). A review of this literature highlights the differences and commonalities between individual and team tasks and, subsequently, the need for task analysis scales to describe team tasks.

First, as noted, there are some commonalities between team task analysis and individual job and task analysis. Thus rating scales such as importance, frequency, time spent, and time to proficiency are equally applicable and relevant to both individual and team tasks. In addition, individual and team task analysis share a commonality of data collection methods, such as the use of questionnaires, critical incident techniques, observation, interviews, expert judgments, and archival data. Also, as with individual job and task analysis, the use of multiple methods is strongly recommended in the implementation of team task analyses along with the use of multiple rating sources, including incumbents and supervisors, who should be selected to ensure a representative sample.

Second, in spite of these commonalities, there are important differences between individual and team task analysis. For instance, the team performance literature (e.g., Glickman et al., 1987; Morgan, Glickman, Woodward, Blaiwes, & Salas, 1986) draws a strong distinction between taskwork and teamwork--one that is by definition not germane to individual tasks. Teamwork refers to the team's efforts to facilitate interaction among team members in the accomplishment of team tasks. In addition, the associated team process KSAs are generally generic rather than task or job specific. Taskwork, however, refers to the team's efforts to understand and perform the requirements of the job, tasks, and equipment to be used. So, unlike Bowers et al. (1994), who focused on task analysis indices for coordination, a teamwork variable, we present scales focused on taskwork variables.

Third, taskwork or team tasks can vary in terms of their degree of team interdependency or "teamness." In the team task analysis approach that we present, team interdependency is operationalized in terms of team relatedness and team workflow pattern. Team relatedness represents the extent to which tasks cannot be performed by any one individual alone, and team workflow represents the paths by which work or information flows through the team in order to allow the team to complete the task. Both of these metrics can be used to empirically and quantitatively represent the extent to which a task (or job) is team based. There would also appear to be different ways of operationalizing or describing the interdependency of a team's taskwork.

An implication of the varying levels of task-level team interdependency is that in the context of team-based tasks or jobs, a distinction can be made between team performance and individual performance. Thus it is important to distinguish tasks and task elements that are dependent on more than one individual for their successful performance from those that are not. Both are essential to the success of a team, and omitting either class will result in an incomplete and deficient analysis of the team. Information about the nature of team interdependency is important because the level of interdependency at which the team is operating has implications for its selection, training, composition, work design, motivation, compensation, and leadership needs (Tesluk, Mathieu, Zaccaro, & Marks, 1997). For instance, individual-based rewards may be most appropriate when the level of interdependency is low; in contrast, team-based rewards may be most appropriate when the level of team interdependency is high. Likewise, high levels of interdependency might increase the criticality of team-based KSAs, such as teamwork knowledge and skills in selection (Stevens & Campion, 1994). Finally, remedial and developmental interventions directed at team performance, such as team building and process consultation, may be misplaced if the level of interdependency is so low that team performance resides primarily at the individual and not the team level. Thus an effective team task analysis guides researchers and practitioners to the critical team tasks and associated behavioral requirements. So, like traditional individual job and task analysis, team task analysis can and should serve as the foundation for pertinent human resource functions.

In summary, the critical issue is that in the context of teams and team taskwork, team members have some specified level of interdependency. The centrality of interdependence in distinguishing individual from team tasks is highlighted by the fact that it is the feature that is used to distinguish teams from groups (e.g., Morgan et al., 1986). Consequently, describing the nature of this interdependence is essential to an effective team task analysis. So, in the context of this framework, the objective of the present paper was to develop and provide initial validation data on new task analysis scales for assessing the team interdependency or "teamness" of tasks and jobs. Team interdependency was operationalized using metrics of team relatedness and team workflow.

Developing the Team Task Analysis Scales

The team task analysis approach presented in this paper is based on the premise that in a population of tasks that constitute a job, the percentage of tasks that are team based can range from 0% to 100%. Furthermore, the percentage of team-based tasks proportionately covaries with the extent to which the job can be described as team based. Two examples of jobs that might fall at the ends of this continuum are an avionics troubleshooting technician and a C-5 (or C-130) transport flight crew, with the former having few or no team-based tasks and the latter actually having distributed teams at two levels: the flight crew (operators--i.e., pilot, copilot, and navigator) and other aircrew (e.g., loadmaster, air medical technicians, pararescuers [PJs], and flight nurses). Other jobs, such as medical technicians and crew chiefs, may fall in the midrange of this continuum. Specifically, medical technicians function autonomously and individually when performing intake tasks such as taking temperature and blood pressure readings, but they are members of triage or operating room teams when performing those specified tasks. The same is true for crew chiefs, who may launch aircraft by themselves but lead teams in the processing of incoming and landing aircraft. The advantage of the team task analysis scales presented here is that instead of relying on judgmental, rational estimates, they can be used to quantify the degree to which a job is team based, thus allowing empirical comparisons across jobs.

We had two main goals in the development of the team task analysis scales. First, they had to lend themselves to being easily incorporated into existing job and task analysis systems to readily and effectively identify team-based tasks and also quantify their degree of team interdependency. Second, in addition to providing task-level information, they had to also permit a quantitative assessment of the extent to which a job is team based. Based on reviews of the traditional job analysis, task analysis, and team task analysis literatures, our approach focused on scales of team relatedness and team workflow.

Ratings of team relatedness and team workflow. Information about the team relatedness of tasks can be obtained by asking respondents (e.g., incumbents and supervisors) to rate tasks and activities on the extent to which they can be performed alone. This allows one to identify, assess, and quantify the team relatedness of tasks and, ultimately, the job. Sample task analysis items for assessing team relatedness for our lab task are presented in Table 1.

In addition to team relatedness, ratings of team workflow or information flow can also be obtained at the task level. One model of workflow variation is Tesluk et al.'s (1997) dimensions of interdependence, which posit that the degree of team interdependency for specified tasks (or jobs) can differ along four levels: pooled/additive interdependence, sequential interdependence, reciprocal interdependence, and intensive interdependence. The dimensions are listed from lowest to highest in terms of the integration and interdependence of team members. Tesluk et al.'s (1997) four patterns of team taskwork processes (i.e., work team arrangements or workflow) serve as the basis for the workflow scale presented here. Specifically, the four levels of team interdependency are used as response options to obtain ratings describing the level of interdependency. We added a fifth level to capture tasks that are performed by the individual alone, outside the context of a team. Sample task analysis items from our lab task for team workflow are presented in Table 2. In validating the team task analysis scales, we assessed the relationship between team relatedness and team workflow ratings for the same items to investigate their convergence and address the extent to which they could be used interchangeably. We also investigated their comparative relationship with team performance.
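
The five workflow response options can be represented directly as an ordered categorical variable. The following minimal Python sketch is illustrative only (the class and constant names are our own; the levels and their ordering come from the scale described above):

```python
from enum import IntEnum

class TeamWorkflowPattern(IntEnum):
    """Five-point team workflow response scale: Tesluk et al.'s (1997)
    four interdependence patterns plus the added individual-task level."""
    NOT_A_TEAM_TASK = 1  # performed alone, outside the context of the team
    POOLED_ADDITIVE = 2  # performed separately; no work flows between members
    SEQUENTIAL = 3       # work flows member to member, mostly in one direction
    RECIPROCAL = 4       # work flows back and forth between members
    INTENSIVE = 5        # team diagnoses, problem solves, and collaborates

# Higher values reflect greater integration and interdependence of members:
assert TeamWorkflowPattern.INTENSIVE > TeamWorkflowPattern.SEQUENTIAL
```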

Assessing the degree of team relatedness and team workflow of jobs. As noted, the team task analysis approach presented here is based on the premise that in the population of tasks constituting a job, the proportion of tasks that are team based can range from 0% to 100%. This range represents the extent to which a job can be described as team based. Therefore, at one extreme, if 0% of the tasks are team based, then the job in question is not a team-based job. In its simplest form, assessing the extent to which a job is team based can be accomplished by asking ratees to indicate the team workflow pattern that is required for the effective performance of the job as a whole. Table 3 presents an example of a rating form used to assess the workflow pattern of a job. This indicator serves as one metric of the degree of team interdependency or the extent to which a job is team based.

Alternatively, a job-level team-relatedness rating could be obtained to describe or represent the extent to which a job is team based. We chose not to obtain this rating in the present study because, as described in the Method section, the nature of our tasks and team combat mission was such that it precluded all ratings except "I can definitely not perform this task alone," and so a job-level team-relatedness rating would not have had any relevance for the research participants. However, the implication of this omission for future research is noted in the Discussion section.

The extent to which a job is team based can also be determined by using a composite of the task-level ratings. Thus means of either the task-level team workflow or team-relatedness ratings could be used as indicators of the "teamness" of a job. In addition, the extent to which a job is team based can be operationalized as the ratio of the number of tasks that cannot be performed alone to the total number of tasks that constitute the job (Dieterly, 1988), with higher scores indicating that the job is more team based; in the present paper, we describe this metric of teamness as the team-task ratio. Our analyses focused on a comparative evaluation of these three operationalizations of teamness.
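
To illustrate how these three operationalizations of teamness could be computed from task-level ratings, consider the following minimal Python sketch. The ratings are made-up values for a hypothetical 10-task job, and the rule treating a team-relatedness rating of 3 or greater as "cannot be performed alone" anticipates the scoring rule described in the Results section:

```python
import numpy as np

# Hypothetical task-level ratings for a 10-task job (values are made up).
# Team relatedness: 1 = "can easily perform alone" ... 5 = "definitely cannot"
# Team workflow:    1 = "not a team task" ... 5 = "intensive interdependence"
relatedness = np.array([1, 2, 2, 4, 5, 3, 1, 2, 5, 4])
workflow    = np.array([1, 1, 2, 4, 5, 3, 1, 2, 5, 4])

# Teamness metric 1: mean task-level team-relatedness rating.
mean_relatedness = relatedness.mean()

# Teamness metric 2: mean task-level team workflow rating.
mean_workflow = workflow.mean()

# Teamness metric 3: team-task ratio -- the proportion of tasks that cannot
# be performed alone (a relatedness rating of 3 or greater counts as
# "team based," following the scoring rule described in the Results section).
team_task_ratio = (relatedness >= 3).mean()

print(f"Mean relatedness: {mean_relatedness:.2f}")  # 2.90
print(f"Mean workflow:    {mean_workflow:.2f}")     # 2.80
print(f"Team-task ratio:  {team_task_ratio:.2f}")   # 0.50
```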

Initial Validation of the Team Task Analysis Scales

To permit an evaluation of the accuracy with which the team task analysis scales can be used to differentiate individual from team tasks, specific individual tasks (which team members could effectively and indeed were required to perform alone within the context of the team) and team tasks (those that team members could not effectively perform alone) were developed and incorporated into the combat mission and associated tasks used in the present lab study. These predetermined task categories served as "true scores" against which the ratings obtained from the research participants could be compared.

In addition, we investigated the relationship between the team task analysis scale ratings and team performance. Consistent with the conceptual and theoretical basis for the positive relationship between team performance and shared mental models or knowledge structures (Day, Arthur, & Gettman, 2001; Edwards, Day, Arthur, & Bell, in press; Klimoski & Mohammed, 1994; Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000), the posited relationship was based on the reasoning that team members assigned to a highly interdependent mission, who recognized and subsequently rated the mission as such, would perform better on the mission because their approach would be compatible with the task demands and requirements needed for successful performance. In contrast, a team that rated the mission as low in team interdependency would perform poorly because such a rating would reflect a failure to recognize the team-related demands of the mission, and the team's approach would consequently be incompatible with those demands.

In an effort to provide initial validation data for the team task analysis scales presented here, we sought to answer the following questions:

* Research Question 1: To what extent do individual team members agree about the team interdependency of the tasks as operationalized by the team task analysis scales?

* Research Question 2: What is the relationship between task-level ratings of team relatedness and team workflow ratings of the same tasks? Can these two scales be used interchangeably as measures of team interdependency?

* Research Question 3: Do the team-relatedness scale, team workflow scale, and the team-task ratio operationalization (number of tasks reported as team based divided by the total number of tasks) differentiate between predetermined individual and team tasks?

* Research Question 4: What is the relationship between job-level ratings of the degree to which a job is team based and composites of task-level scale ratings?

* Research Question 5: Are team members' perceptions of team interdependency, as operationalized by the team task analysis scale ratings, related to team performance?

In summary, we present new scales for assessing the team interdependency of tasks and jobs. Team interdependency was operationalized in terms of team relatedness and team workflow. In addition, we present initial data on the validity and efficacy of these scales. The preceding research questions were answered using the team task analysis scale ratings and performance of 13 4-person teams on Steel Beasts, a PC-based, highly interdependent tank simulation requiring complex information processing. Thus this is a small-sample study with the primary objective of introducing three new team task analysis scales, and our use of correlations in the Results section should be interpreted descriptively rather than inferentially (Jaccard & Becker, 2001; Rosenberg, 1990).

METHOD

Participants

An initial sample of 100 male volunteers from a large southwestern university was recruited (from the undergraduate psychology subject pool and upper-level psychology courses) to attend a screening and scheduling session. Of this initial sample, 60 men were selected and assigned to 15 4-person teams based on their availability to participate in a 2-week training protocol. Data for each team were collected over the 2-week period, and participants were paid $90 to attend 10 sessions, held Monday through Friday for 2 consecutive weeks. Participants also had the opportunity to receive an individual bonus of $50 or $20 if their team was the highest or second highest performing team, respectively (i.e., based on the sum of all test games across the 10 sessions). Because of missing data on some variables, the final data presented here are limited to the 52 participants (mean age = 19.63, SD = 1.12) for whom complete data were available on all variables. Team-level analyses were consequently based on 13 teams.

Measures

Mission team task analysis questionnaire. The mission team task analysis questionnaire consisted of two parts. Part 1 asked participants to rate the team relatedness (1 = I can easily perform this task alone; 5 = I can definitely not perform this task alone) of 34 mission tasks and activities. Sample task statements are presented in Table 1. In Part 2, participants indicated which of five workflow patterns (i.e., not a team task/activity, pooled/additive interdependence, sequential interdependence, reciprocal interdependence, and intensive interdependence) best characterized the performance of the 34 mission tasks and activities. Sample task statements are presented in Table 2. Although there were no such tasks in our task list, the "not a team task/activity" response option, which participants could use to indicate that the specified task was an individual task performed outside the context of the team, was included for the sake of completeness because such tasks are a distinct possibility in some situations and settings.

The list of 34 tasks and activities was generated using a cognitive task analysis (Arthur, 1998) conducted by two members of the research team who were very familiar with Steel Beasts, the performance task used in the study. These individuals also developed the combat scenario used in the study. As noted, in designing the combat scenario, we developed two types of tasks--specifically, individual tasks (which team members could effectively and indeed were required to perform alone; n = 18) and team tasks (those that team members could not effectively perform alone; n = 16)--and incorporated them into the combat mission.

Feedback and reaction questionnaire. This measure was constructed to assess participants' perceptions and reactions to certain characteristics of the lab task and protocol. The only data from this measure used in the present paper were from the job-level team workflow item (see Table 3).

Steel Beasts. The team performance task was Steel Beasts (eSim Games, 2000), a PC-based, highly interdependent tank simulation requiring complex information processing. The simulation allows several players to be networked together and cooperatively work on the same mission. The simulator uses highly accurate replicas of U.S. M1A1 and German Leopard 2A4 tanks in a modern warfare environment. At our request, the program developers made some modifications to the Steel Beasts simulation to facilitate its use in the study.

We created a mission to meet specified criteria for the study. The game scenario consisted of a two-tank platoon of U.S. M1A1 tanks controlled by the participants working interdependently to destroy 13 enemy German Leopard 2A4 tanks. In terms of the lab setup, the two-tank platoon was represented by four networked computers, with two computers (tank commander/driver and gunner) representing each tank. The participants' mission was to protect a small farming village from attack by enemy forces. Participants were instructed to follow a road north and engage the enemy platoon. The mission objectives were to destroy all enemy tanks and protect their own tanks from being destroyed. The scenario ended when (a) all enemy tanks were destroyed (i.e., the mission was completed), (b) the participants' tanks were destroyed, or (c) a 10-min time limit was reached. Training sessions, which were 1 hr long, consisted of one practice mission and two test missions.

For each of the two tanks, 1 participant was assigned the role of gunner and the other was assigned the role of tank commander/driver. Team interdependency existed at two levels: the 2 participants working together to operate a single tank and the two tanks working together as a platoon to complete the mission objectives. The tank commander/driver was responsible for driving the tank, creating and following routes, identifying enemy tanks for the gunner, and strategically positioning the tank (e.g., using the terrain to protect the tank from enemy fire). The gunner was responsible for identifying, aiming, lasing, and firing at enemy tanks. Therefore the tank could not be operated successfully without the combined effort of both participants. The mission difficulty level was such that a single tank could not complete the mission (i.e., 1 tank vs. 13 enemy tanks) without the assistance of the other tank. Participants in each tank were encouraged to communicate verbally to facilitate the team's performance of the mission. In addition, the actions of the two tanks had to be coordinated to avoid friendly fire and the inadvertent destruction of each other on the battlefield.

Performance scores were obtained at both the level of the 2-person tank and the 4-person team (platoon). Participants received scores for the gunners' hit percentage, number of enemy tanks destroyed (maximum of 13), number of friendly tanks destroyed by the enemy (maximum of 2) and by each other (maximum of 1), and whether or not the mission objectives were completed (scored 0 or 1) within the 10-min time limit. Each of these performance scores for each game was scaled to 100 and summed to obtain the 4-person team's total score, which could range from 0 to 500.
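
The sketch below (Python) illustrates one way such a 0-to-500 composite could be computed from the five components. The exact 0-100 scaling of each component, including the reverse scoring of friendly-tank losses, is not spelled out in the text, so the scalings shown are our assumptions:

```python
def team_total_score(hit_pct, enemy_destroyed, friendly_lost_enemy,
                     friendly_lost_fratricide, mission_complete):
    """Scale each of the five performance components to 0-100 and sum,
    yielding a team total between 0 and 500 (scalings are assumptions)."""
    components = [
        hit_pct,                                   # already on a 0-100 metric
        100 * enemy_destroyed / 13,                # 13 enemy tanks maximum
        100 * (1 - friendly_lost_enemy / 2),       # 2 friendly tanks at risk
        100 * (1 - friendly_lost_fratricide / 1),  # 1 possible fratricide loss
        100 * mission_complete,                    # mission completed: 0 or 1
    ]
    return sum(components)

# Example: 60% gunner hit rate, 10 enemy kills, 1 tank lost to the enemy,
# no fratricide, mission not completed within the time limit.
print(team_total_score(60, 10, 1, 0, 0))  # -> about 286.92
```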

Design and Procedure

The initial sample of 100 men volunteered to attend a 1-hr screening and scheduling session, during which they were given additional information about the study, including details about the task and protocol. Participants were asked whether they had previous experience with Steel Beasts, with the intention of eliminating anyone who reported prior use of the PC simulation; however, no one reported ever having used Steel Beasts, and therefore no one was eliminated from the sample for this reason. Based on their availability, 60 individuals were randomly selected and assigned to one of 15 4-person teams. Assignment of tank partners and teams was random within the context of the participants' availability, and participants retained the same partner across all 10 sessions.

The laboratory consisted of four networked computer stations positioned against a wall, each consisting of a desktop computer, keyboard, mouse, and joystick. Although each participant was assigned to his own computer station, a pair of computers (tank commander/driver and gunner) represented a tank. If one tank of the two-tank platoon was destroyed during a mission, participants in the destroyed tank had an external view of the operational tank, which continued play until the game was over. A detailed breakdown of the research protocol is presented in Table 4. During all test and practice missions, participants had access to a brief written description of the mission scenario and objectives.

RESULTS

Research Question 1

The first research question pertained to the extent to which team members agreed about the team interdependency of the tasks, as reflected in their ratings on the team task analysis scales. This was assessed by computing intraclass correlations for each rating scale for both individual and team tasks. Overall, the results, which are presented in Table 5, indicate reasonably high levels of interrater agreement for all scales. The lowest level of interrater agreement, .80, was obtained for team-relatedness ratings for individual tasks. No interrater agreement is presented for the team-task ratio because this operationalization is computed from ratings obtained from the team-relatedness scale. Specifically, for individual tasks, the team-task ratio was computed as the number of "true" individual task statements that received a team-relatedness rating of 3 ("I can perform this task alone with some difficulty") or greater, divided by the total number of "true" individual task statements (i.e., 18). Likewise, for team tasks, the team-task ratio was computed as the number of "true" team task items that received a team-relatedness rating of 3 or greater, divided by the total number of "true" team task statements (i.e., 16).
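
For readers who wish to reproduce this kind of agreement analysis, one possibility is the intraclass_corr function from the third-party pingouin Python package (assuming it is installed). The data below are made up, and because the text does not state which ICC form was used, the sketch simply prints all of the variants that pingouin reports:

```python
import pandas as pd
import pingouin as pg  # assumes the pingouin package is installed

# Illustrative long-format data: 3 raters each rating 4 tasks on the
# 1-5 team-relatedness scale (values are made up for the example).
df = pd.DataFrame({
    "task":   ["t1", "t2", "t3", "t4"] * 3,
    "rater":  ["r1"] * 4 + ["r2"] * 4 + ["r3"] * 4,
    "rating": [1, 4, 5, 2,  2, 4, 5, 1,  1, 5, 4, 2],
})

# pingouin reports the common ICC variants (single- and average-measure
# forms); the paper does not state which form was used in the study.
icc = pg.intraclass_corr(data=df, targets="task", raters="rater",
                         ratings="rating")
print(icc[["Type", "ICC"]])
```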

Research Question 2

The second research question pertained to the relationship between the task-level ratings of team relatedness and team workflow. This was accomplished by computing the correlation between the team-relatedness ratings (see Table 1) and the team workflow ratings (see Table 2) for each task statement. The mean of these 34 correlations was .25 (SD = .15, minimum = -.11, maximum = .64). This moderate correlation between the team relatedness and team workflow ratings suggests that they may represent different facets of team interdependency and therefore should not be used interchangeably as measures of team interdependency.
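
A minimal Python sketch of this analysis follows. Random stand-in ratings replace the actual data, so the summary statistics it prints will not match those reported above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data: 52 raters x 34 tasks for each scale
# (the study used participants' actual 1-5 ratings).
relatedness = rng.integers(1, 6, size=(52, 34))
workflow = rng.integers(1, 6, size=(52, 34))

# One correlation per task statement, computed across the 52 raters.
task_rs = np.array([
    np.corrcoef(relatedness[:, j], workflow[:, j])[0, 1]
    for j in range(34)
])

print(f"mean r = {task_rs.mean():.2f}, SD = {task_rs.std(ddof=1):.2f}, "
      f"min = {task_rs.min():.2f}, max = {task_rs.max():.2f}")
```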

Research Question 3

The third research question pertained to the extent to which the team task analysis scales accurately differentiate between predetermined individual and team tasks. Table 5 presents the means, standard deviations, and standardized mean differences (ds) for the two types of tasks. These results show that team tasks were rated higher than individual tasks in terms of both team relatedness (d = 1.08) and team workflow (d = 1.71). Also, for the team-task ratio operationalization, the proportion of team-based tasks that were rated as team based was higher than the proportion of individual-based tasks that were rated as team based (d = .98). For comparative purposes, Cohen (1992) described ds of .20, .50, and .80 as small, medium, and large effect sizes, respectively. Thus the results indicate that both the team relatedness and team workflow scales, along with the team-task ratio operationalization, effectively differentiated individual from team tasks, with the team workflow scale appearing to be the most effective of the three.
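
For reference, a standardized mean difference is the mean difference divided by a pooled standard deviation. The Python sketch below uses one common pooling formula; the paper does not state which variant was used, and the rating vectors are illustrative:

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation
    (one common formula; the paper does not specify its exact variant)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Example with made-up mean ratings for team vs. individual tasks:
team = np.array([3.1, 2.9, 3.4, 2.8])
individual = np.array([2.2, 2.1, 2.4, 2.0])
print(f"d = {cohens_d(team, individual):.2f}")
```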

Research Question 4

The fourth research question pertained to the relationship between the job-level rating and composites of the task-level ratings of team interdependency. Table 6 presents the correlations between the job-level rating of the degree to which the mission was team based (i.e., the job-level team workflow rating) and composites of the task-level ratings of team relatedness and team workflow, along with the team-task ratio. Composites were computed for both individual and team tasks. The results indicate that the task-level composites displayed weak to moderate relationships with the job-level rating of mission team workflow. Consistent with the results reported for Research Question 3, the results in Table 6 also provide some additional preliminary convergent and discriminant information for the team task analysis scales. Specifically, as would be expected, the team task composites were generally moderately and positively related to the job-level interdependency rating, whereas the relationships with the individual task composites were negative to zero.

Research Question 5

The fifth research question pertained to the relationship between team members' perceptions of team interdependency and team performance. To investigate this relationship, we averaged the task scale ratings across the four team members to obtain a team-level rating (Day et al., 2004), which was subsequently correlated with team performance. As posited, the results presented in Table 7 generally indicate that teams that rated the individual tasks as team based performed worse than those that rated them as individual based. Conversely, teams that rated the team tasks as team based performed better than those that rated them as individual based. So, in summary, our results generally suggest that teams that recognized the team-based demands of the mission and subsequently performed the tasks and mission as such (as reflected in their team task analysis ratings) were more effective (as reflected in their higher team performance scores) than those that did not. In addition, the strongest effects were obtained for the job-level team workflow scale. (An exception to this general finding is the team-task ratio result for team tasks.)
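
The aggregation-and-correlation step can be sketched as follows (Python, with random stand-in data; in the study, the ratings were the actual task scale composites and the performance scores were the Steel Beasts totals):

```python
import numpy as np

rng = np.random.default_rng(1)

n_teams = 13
# Illustrative stand-in data: a task-scale composite for each of the
# 52 participants (4 per team) and one total score per team.
member_ratings = rng.uniform(1, 5, size=(n_teams, 4))
team_performance = rng.uniform(0, 500, size=n_teams)

# Aggregate to the team level by averaging across the four members,
# then correlate the team-level rating with team performance.
team_ratings = member_ratings.mean(axis=1)
r = np.corrcoef(team_ratings, team_performance)[0, 1]
print(f"r = {r:.2f}")
```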

DISCUSSION

The objective of the present study was to introduce and present initial data on the validity and efficacy of three team task analysis scales. Our results generally indicate that (a) these scales demonstrate high levels of interrater agreement; (b) the team relatedness and team workflow scales, along with the team-task ratio operationalization, appear to represent different facets of team interdependency and should therefore not be used interchangeably, although they can and probably should be used in conjunction; (c) the scales appear to be very effective at differentiating individual-based tasks from team-based tasks, with the team workflow scale being the most effective; (d) the relationship between job-level and composite task-level ratings of teamness is at best moderate; and (e) teams that correctly rated the tasks and the mission as being more team based performed better on the team task than did teams that rated the mission and tasks as being lower in terms of team interdependency.

The efficacy of these scales and the approach to team task analysis are reflected by their use in applied operational field settings. For instance, the team task analysis scales are being used successfully in two U.S. Air Force operational environments. The first involves assessing the team requirements associated with a set of training tasks for undergraduate pilots. The scales help to identify those tasks with greater and lesser interdependence for training and performance. This information is critical to the eventual evaluation of alternative training technologies and collaborative tools to improve the quality of individual and team training.

The second operational setting involves using the scales to identify the interdependence associated with a set of knowledge, skills, and experiences that expert pilots have identified as important for combat mission performance. The scales provide researchers with indicators of "teamness," which are then used to identify the most logical opportunities for connecting distributed teams together to achieve common training objectives. For example, if it is known that a skill such as airspace management has strong interdependence in that it requires a fighter pilot to interact with other aircraft and with an airborne air traffic controller, and if it is also known that other fighter pilots and the air traffic controllers need to develop the same skill, one can then develop a common training scenario that permits all of the constituent trainees to get training on the same skill at the same time. This provides significant economies of scale in terms of common training across multiple operators. It further provides an additional rationale for establishing connectivity among the training environments in order to achieve training success and realize the savings resulting from the economies of scale.

The flexibility and transportability of the scales to various settings, such as the two described here, are a result of their "generic" nature: they are fundamentally response options that can be used with preexisting task statements, as illustrated by the sample task statements for general air traffic control presented in Table 8. This flexibility also means that the scales and approach can be easily incorporated into existing occupational, job, and task analysis systems.

Another advantage of this approach is its potential use as a criterion measure of team training effectiveness. Specifically, the strong relationship between the team task analysis scale ratings and team performance indicates that higher performing teams were those that recognized the team-based nature of the task (as reflected in their ratings) and subsequently performed the task accordingly. Consequently, if this finding is replicated in future studies, it would suggest that these team task analysis scales could potentially serve as another operationalization or criterion measure of team training effectiveness. Specifically, better-trained teams are those that will recognize the interdependencies of their tasks, with said recognition being reflected in their team task analysis ratings. Related to this, the team task analysis scale ratings could also be used as predictors of team performance.

Our findings also have additional implications for team-level interventions. First, the low correlations between team relatedness and team workflow suggest that they should be used in conjunction, instead of interchangeably, to obtain a more complete assessment of the teamness of specified tasks. Thus, although a number of tasks could share the same level of team relatedness (e.g., "I can perform this task alone only with great difficulty"), the workflow patterns required to perform these tasks could vary substantially (e.g., could range from sequential to intensive interdependence). Second, in the context of training, with the establishment of specified cutoffs, information from the team task analysis scales could be used in a needs analysis framework to identify when and which teams may need training. A related issue, again within the context of specified cutoff scores, is that the team task analysis scales can provide guidance on when team rather than individual training is most appropriate. An important empirical question is, at what point should the cutoff scores be set to trigger these team-level interventions? As with the different approaches to aggregating and weighting importance and frequency ratings to arrive at specified task criticality cutoffs, the specific cutoff will, we think, probably be a function of the particular situation. Also, as with any cutoff score, its development and justification should be logically and rationally defensible.
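
As a purely illustrative example of such a cutoff-based decision rule (the function and the cutoff value are hypothetical, not recommendations from the present paper):

```python
def recommend_training(mean_workflow, cutoff=3.0):
    """Toy needs-analysis rule: route a team to team-level training when
    its mean workflow rating meets an illustrative cutoff; the paper
    deliberately leaves the cutoff to the particular situation."""
    return "team training" if mean_workflow >= cutoff else "individual training"

print(recommend_training(3.6))  # -> team training
print(recommend_training(2.1))  # -> individual training
```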

Limitations and Suggestions for Future Research

A limitation of the present study is the small sample size for the team-level performance analyses. Consequently, our results are best viewed as descriptive data that provide a preliminary assessment of the efficacy of the team task analysis scales presented here. Studies with larger team-level samples are therefore needed to permit more robust conclusions regarding the generalizability of our findings. On a related note, the sample demographics were artificially constrained by the research design, such that only male participants were included in the study. Although this controls for sex effects a priori, it may further limit the generalizability of the results, although we think it is unlikely that the effects obtained here are male specific (Sanchez-Ku & Arthur, 2000). In addition, we acknowledge limitations associated with the fact that the present study was laboratory based. Thus, given the complexity of real-world tasks, it is plausible that different results might have been obtained had this study been conducted in a field setting. Nevertheless, the advantage of "true" scores in the lab, in terms of predetermined individual and team tasks, makes this an informative introductory study of the team task analysis scales.

The present paper is intended to provide a descriptive account of our initial development and validation of the team task analysis scales presented here, and to this end, the noted limitations also provide insight into potentially fruitful areas of research. For instance, because of the level of specificity, the number of ratings required, and the increased opportunity for information distortion (Pine, 1995), the moderate relationship between the job-level and the composite task-level ratings is not particularly surprising. However, there are clearly instances in which the task-level ratings may be of more informational value. Thus additional research investigating the boundary conditions under which one set of ratings may be more advantageous than the other is warranted.

The present study also assessed the comparative efficacy of two operationalizations of team interdependency--namely, team relatedness and team workflow--and the extent to which they could be used interchangeably. Our results suggest that team relatedness and team workflow may represent different facets of interdependence, as reflected in their conceptualization. Specifically, team relatedness represents the extent to which tasks cannot be performed by any one individual alone, whereas team workflow represents the paths by which work or information flows through the team in order to allow the team to complete the task. Thus, with the exception of the high and low extremes of both scales, it is conceivable that the levels of these conceptualizations of team interdependency do not necessarily proportionally covary--as was reflected in our relatively low task-level correlations--so they may be best used in conjunction instead of interchangeably. However, because we did not collect job-level ratings of team relatedness, our data do not speak to the relationship between these two measures of interdependency in terms of job-level ratings, so this issue calls for additional future research.

In conclusion, although the observed effects were consistent with our expectations, we acknowledge some potential limitations with the study. Nevertheless, we believe this paper introduces a sound methodology for quantifying the extent to which a group of tasks or job is team based, and we encourage researchers and practitioners to engage in additional tests of the efficacy of the scales and approach.

ACKNOWLEDGMENTS

This research was sponsored under Contract Numbers F41624-97-C-5000 and F41624-97-C-5030 to Winfred Arthur, Jr., from the U.S. Air Force Research Laboratory, Human Effectiveness Directorate, Warfighter Training Research Division, Mesa, Arizona. The views expressed herein are those of the authors and do not necessarily reflect the official position or opinion of their respective organizations.

REFERENCES

Arthur, W., Jr. (1998). Cognitive task analysis: Can we increase the efficiency of current methods? (INTTECH Contract F41624-97-C-5030). San Antonio, TX: Metrica.

Arthur, W., Jr., Doverspike, D., & Barrett, G. V. (1996). Development of a job analysis-based procedure for weighting and combining content-related tests into a single test battery score. Personnel Psychology, 49, 971-985.

Arthur, W., Jr., Villado, A. J., & Bennett, W., Jr. (in press). Innovations in team task analysis: Identifying task elements, tasks, and jobs that are team-based. In W. Bennett, Jr. (Ed.), The future of job analysis. Mahwah, NJ: Erlbaum.

Artman, H. (2000). Team situation assessment and information distribution. Ergonomics, 43, 1111-1128.

Baker, D. P., & Salas, E. (1997). Principles for measuring teamwork: A summary and look toward the future. In M. T. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory, methods, and applications (pp. 351-355). Mahwah, NJ: Erlbaum.

Baker, D. P., Salas, E., & Cannon-Bowers, J. (1998). Team task analysis: Lost but hopefully not forgotten. Industrial and Organizational Psychologist, 35(3), 79-83.

Bartone, P. T., Johnsen, B. H., Eid, J., Brun, W., & Laberg, J. C. (2002). Factors influencing small-unit cohesion in Norwegian Navy officer cadets. Military Psychology, 14, 1-22.

Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478-494.

Bowers, C. A., Baker, D. P., & Salas, E. (1994). Measuring the importance of teamwork: The reliability and validity of job/task analysis indices for team-training design. Military Psychology, 6, 205-214.

Bowers, C. A., Morgan, B. B., Salas, E., & Prince, C. (1993). Assessment of coordination demand for aircrew coordination training. Military Psychology, 5, 95-112.

Brannick, M. T., Prince, C., & Salas, E. (1997). Team performance assessment and measurement: Theory, methods, and applications. Mahwah, NJ: Erlbaum.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Day, E. A., Arthur, W., Jr., Edwards, B. D., Miyashiro, B., Tubre, T. C., & Tubre, A. H. (2004). Criterion-related validity of different operationalizations of group ability as a function of task type: Comparing the mean, maximum, and minimum. Journal of Applied Social Psychology, 34, 1521-1549.

Day, E. A., Arthur, W., Jr., & Gettman, D. (2001). Knowledge structures and the acquisition of a complex skill. Journal of Applied Psychology, 86, 1022-1033.

Dieterly, D. L. (1988). Team performance requirements. In S. Gael (Ed.), The job analysis handbook for business, industry, and government (pp. 766-777). New York: Wiley.

Dyer, J. L. (1984). Team research and training: A state of the art review. In F. A. Muckler (Ed.), Human factors review (pp. 285-323). Santa Monica, CA: Human Factors and Ergonomics Society.

Edwards, B. D., Day, E. A., Arthur, W., Jr., & Bell, S. T. (in press). Relationships among team ability composition, team mental models, and team performance. Journal of Applied Psychology.

Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice. (1978). Adoption by four agencies of uniform guidelines on employee selection procedures. Federal Register, 43, 38290-38315.

eSim Games. (2000). Steel Beasts [Computer software]. Mountain View, CA: Author.

Glickman, A. S., Zimmer, S., Montero, R. C., Guerette, P. J., Campbell, W. J., Morgan, B. B., et al. (1987). The evolution of team skills: An empirical assessment with implications for training (NTSC Tech. Rep. No. 87-016). Arlington, VA: Office of Naval Research.

Hackman, J. R. (1987). The design of work teams. In J. W. Lorsch (Ed.), Handbook of organizational behavior (pp. 315-342). Englewood Cliffs, NJ: Prentice-Hall.

Jaccard, J., & Becker, M. A. (2001). Statistics for the behavioral sciences (4th ed.). Belmont, CA: Wadsworth.

Klimoski, R. J., & Mohammed, S. (1994). Team mental model: Construct or metaphor? Journal of Management, 20, 403-437.

Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85, 275-285.

Morgan, B. B., Glickman, A. S., Woodward, E. A., Blaiwes, A. S., & Salas, E. (1986). Measurement of team behaviors in a Navy environment (NTSC Tech. Rep. No. 86-014). Orlando, FL: Naval Training Systems Center.

Pine, D. E. (1995). Assessing the validity of job ratings: An empirical study of false reporting in task inventories. Public Personnel Management, 24, 451-460.

Rasker, P. C., Post, W. M., & Schraagen, J. M. C. (2000). Effects of two types of intra-team feedback on developing a shared mental model in command and control teams. Ergonomics, 43, 1167-1189.

Rosenberg, K. M. (1990). Statistics for behavioral sciences. Dubuque, IA: William C. Brown.

Salas, E., Burke, C. S., Bowers, C. A., & Wilson, K. A. (2001). Team training in the skies: Does crew resource management (CRM) training work? Human Factors, 43, 641-674.

Sanchez, J. I., & Fraser, S. L. (1992). On the choice of scales for task analysis. Journal of Applied Psychology, 77, 545-553.

Sanchez, J. I., & Levine, E. L. (1989). Determining important tasks within jobs: A policy-capturing approach. Journal of Applied Psychology, 74, 336-342.

Sanchez-Ku, M. L., & Arthur, W., Jr. (2000). A dyadic protocol for training complex skills: A replication using female participants. Human Factors, 42, 512-520.

Stevens, M. J., & Campion, M. A. (1994). The knowledge, skill, and ability requirements for teamwork: Implications for human resource management. Journal of Management, 20, 503-530.

Swezey, R. W., Owens, J. M., Bergondy, M. L., & Salas, E. (1998). Task and training requirements analysis methodology (TTRAM): An analytic methodology for identifying potential training uses of simulator networks in teamwork-intensive task environments. Ergonomics, 41, 1678-1697.

Tesluk, P., Mathieu, J. E., Zaccaro, S. J., & Marks, M. (1997). Task and aggregation issues in the analysis and assessment of team performance. In M. T. Brannick, C. Prince, & E. Salas (Eds.), Team performance assessment and measurement: Theory, methods, and applications (pp. 197-224). Mahwah, NJ: Erlbaum.

Winfred Arthur, Jr., is a professor in the Psychology Department at Texas A&M University. He received his Ph.D. in industrial/organizational psychology in 1988 from the University of Akron.

Bryan D. Edwards is an assistant professor in the Psychology Department at Tulane University. He received his Ph.D. in industrial/organizational psychology in 2003 from Texas A&M University.

Suzanne T. Bell is an assistant professor in the Psychology Department at DePaul University. She received her Ph.D. in industrial/organizational psychology in 2004 from Texas A&M University.

Anton J. Villado is a Ph.D. student and research assistant in the Psychology Department at Texas A&M University. He received his M.S. in industrial/organizational psychology in 2001 at California State University, San Bernardino.

Winston Bennett, Jr., is senior research psychologist for training systems and performance at the U.S. Air Force Research Laboratory, Human Effectiveness Directorate, Mesa Research Site, Mesa, Arizona. He received his Ph.D. in industrial/organizational psychology in 1995 from Texas A&M University.

Date received: May 19, 2003

Date accepted: October 15, 2004

Winfred Arthur, Jr., Bryan D. Edwards, Suzanne T. Bell, and Anton J. Villado, Texas A&M University, College Station, Texas, and Winston Bennett, Jr., U.S. Air Force Research Laboratory, Mesa, Arizona

Address correspondence to Winfred Arthur, Jr., Department of Psychology, Texas A&M University, College Station, TX 77843-4235; wea@psyc.tamu.edu. HUMAN FACTORS, Vol. 47, No. 3, Fall 2005, pp. 654-669. Copyright © 2005, Human Factors and Ergonomics Society. All rights reserved.
TABLE 1: Example of Team-Related Task Analysis Rating Form

For each task/activity presented below, please shade the number
corresponding to the:

(a) IMPORTANCE of the task/activity to the performance of your job, and

(b) TEAM RELATEDNESS of the task/activity = The extent to which
    successful team performance requires you to work with members of
    the team in order to optimally perform the specified task.

IMPORTANCE

(1) = Not at all important

(2) = Of little importance

(3) = Somewhat important

(4) = Very important

(5) = Of highest importance

TEAM RELATEDNESS

(1) = Not required to work with team members at all for optimal
      performance

(2) = Required to work with team members very little for optimal
      performance

(3) = Somewhat required to work with team members for optimal
      performance

(4) = Required to work with team members quite a bit for optimal
      performance

(5) = Very much required to work with team members for optimal
      performance

Tasks/Activities                                  Importance

 1. Aiming                           (1)    (2)    (3)    (4)    (5)
 2. Lasing                           (1)    (2)    (3)    (4)    (5)
 3. Buttoning the tank               (1)    (2)    (3)    (4)    (5)
 4. Visually locating targets        (1)    (2)    (3)    (4)    (5)
 5. Timing tank movement with the
    gunner's shots                   (1)    (2)    (3)    (4)    (5)
 6. Entering battle position         (1)    (2)    (3)    (4)    (5)
 7. Emitting smoke screen            (1)    (2)    (3)    (4)    (5)
 8. Slewing the turret               (1)    (2)    (3)    (4)    (5)
 9. Destroying enemy tanks           (1)    (2)    (3)    (4)    (5)
10. Completing the mission           (1)    (2)    (3)    (4)    (5)

Tasks/Activities                               Team Relatedness

 1. Aiming                           (1)    (2)    (3)    (4)    (5)
 2. Lasing                           (1)    (2)    (3)    (4)    (5)
 3. Buttoning the tank               (1)    (2)    (3)    (4)    (5)
 4. Visually locating targets        (1)    (2)    (3)    (4)    (5)
 5. Timing tank movement with the
    gunner's shots                   (1)    (2)    (3)    (4)    (5)
 6. Entering battle position         (1)    (2)    (3)    (4)    (5)
 7. Emitting smoke screen            (1)    (2)    (3)    (4)    (5)
 8. Slewing the turret               (1)    (2)    (3)    (4)    (5)
 9. Destroying enemy tanks           (1)    (2)    (3)    (4)    (5)
10. Completing the mission           (1)    (2)    (3)    (4)    (5)

Note. This is a team relatedness scale for a sample of the Steel Beasts
combat mission task statements and activities. Items 1, 2, 3, 7, and 8
are individual tasks, and the rest are team tasks. The team relatedness
options presented here reflect the most current generation of the team
relatedness scale (Arthur, Villado, & Bennett, in press). For the data
presented in this article, the team-relatedness response option anchor
labels were an earlier generation in which respondents rated the extent
to which they could perform the task alone (1) to the extent to which
they could definitely not perform the task alone (5).

TABLE 2: Example of Team Workflow Rating Form

The chart below [note, in the operational measure, the chart from Table
3 was inserted below] presents five TEAM WORKFLOW PATTERNS as well as a
description and illustration of each pattern. For each mission
task/activity, please indicate in the WORKFLOW PATTERN column (by
shading in the appropriate response) the workflow pattern that best
characterizes the performance of the task/activity.

TEAM WORKFLOW PATTERN

(1) = NOT a team task/activity

(2) = Pooled/Additive Interdependence

(3) = Sequential Interdependence

(4) = Reciprocal Interdependence

(5) = Intensive Interdependence

Tasks/Activities                            Team Workflow Pattern

 1. Aiming                           (1)    (2)    (3)    (4)    (5)
 2. Lasing                           (1)    (2)    (3)    (4)    (5)
 3. Buttoning the tank               (1)    (2)    (3)    (4)    (5)
 4. Visually locating targets        (1)    (2)    (3)    (4)    (5)
 5. Timing tank movement with the
      gunner's shots                 (1)    (2)    (3)    (4)    (5)
 6. Entering battle position         (1)    (2)    (3)    (4)    (5)
 7. Emitting smoke screen            (1)    (2)    (3)    (4)    (5)
 8. Slewing the turret               (1)    (2)    (3)    (4)    (5)
 9. Destroying enemy tanks           (1)    (2)    (3)    (4)    (5)
10. Completing the mission           (1)    (2)    (3)    (4)    (5)

Note. This is a team workflow scale for a sample of Steel Beasts combat
mission task statements and activities. Items 1, 2, 3, 7, and 8 are
individual tasks, and the rest are team tasks.

TABLE 3: Example of Rating Form Assessing Team Workflow at the Level of
the Job

The chart below presents five TEAM WORKFLOW PATTERNS as well as a
description and illustration of each pattern. Please indicate in the
RESPONSE column (by shading in the appropriate response) the workflow
pattern that is most descriptive of the work activities in your job.

   Team Workflow
      Pattern                     Description                  Response

1. Not a team          Work and activities are NOT per-          (1)
   task/activity       formed as a member of a team; they
                       are performed by an individual
                       working ALONE, outside the context
                       of the team.
2. Pooled/additive     Work and activities are performed         (2)
   interdependence     separately by all team members,
                       and work does not flow between
                       members of the team.
3. Sequential          Work and activities flow from one         (3)
   interdependence     member to another in the team, but
                       mostly in one direction.
4. Reciprocal          Work and activities flow between          (4)
   interdependence     team members in a back-and-forth
                       manner over a period of time.
5. Intensive           Work and activities come into the         (5)
   interdependence     team, and members must diagnose,
                       problem solve, and/or collaborate
                       as a team in order to accomplish
                       the team's task.

[The illustration accompanying each workflow pattern is omitted.]


Note. Alternatively, respondents could be asked to rank the five
patterns. Under certain conditions, ranking may be more informative
because it allows respondents to indicate the existence of both
individual and team tasks, which are present in most jobs. If ranks
are used, then the instruction set should read "The chart below
presents five team workflow patterns as well as a description and
illustration of each pattern. Please rank order the five patterns
(1 = high, 5 = low) in terms of the extent to which they are
descriptive of the work activities in your job. That is, the pattern
most descriptive of the work activities in your job would be ranked
number 1, and the pattern least descriptive of your work activities
would be ranked number 5." Adapted from "Task and Aggregation Issues
in the Analysis and Assessment of Team Performance" (p. 201), by P.
Tesluk, J. E. Mathieu, S. J. Zaccaro, & M. Marks, in M. T. Brannick,
E. Salas, & C. Prince (Eds.), Team Performance Assessment and
Measurement: Theory, Methods, and Applications, 1997, Mahwah, NJ:
Erlbaum. Copyright 1997 by Lawrence Erlbaum Associates. Adapted with
permission.

TABLE 4: Overview of Training and Data Collection Procedures

Day                      Activity

Screening/scheduling     Consent forms
                         Contact and demographic form
                         Video/computer game experience measure

First Week of Training

Session 1 (Monday)       Introduction
                         Assignment to 4-person teams
                         3 gunner tutorials (all trainees)
                         2 driving tutorials (all trainees)
Session 2 (Tuesday)      2 tank commander tutorials (all trainees)
Assignment to tank commander and gunner roles
                         2 team test games
Session 3 (Wednesday)    Review of role-specific tutorials
                         2 team test games
Session 4 (Thursday)     1 team practice game
                         2 team test games
Session 5 (Friday)       1 team practice game
                         2 team test games

Second Week of Training

Session 6 (Monday)       1 team practice game
                         2 team test games
Session 7 (Tuesday)      1 team practice game
                         2 team test games
Session 8 (Wednesday)    1 team practice game
                         2 team test games
Session 9 (Thursday)     1 team practice game
                         2 team test games
Session 10 (Friday)      1 team practice game
                         2 team test games
                         Team task analysis questionnaire and feedback
                         questionnaire

Note. All sessions were 1 hr long.

TABLE 5: Interrater Agreement and Descriptive Statistics for Team Task
Analysis Scale Ratings

                          Individual Tasks

Team Task           Intraclass
Analysis Scale      Correlation    Mean     SD

Team relatedness        .80        2.17    0.67
Team workflow           .92        2.70    0.54
Team-task ratio          --        0.34    0.22

                            Team Tasks

Team Task           Intraclass
Analysis Scale      Correlation    Mean     SD       d

Team relatedness        .92        2.97    0.81    1.08 *
Team workflow           .96        3.57    0.49    1.71 *
Team-task ratio          --        0.57    0.25    0.98 *

Note. N (number of raters) = 52. There were 18 individual task
statements and 16 team task statements. d is the standardized mean
difference between the individual and team task ratings.

* p < .001.
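For readers who want to reproduce indices like those above on their own
ratings, the sketch below illustrates one way to obtain intraclass
correlations and a standardized mean difference (d) from rater-level
data in Python. The pingouin call is a real library function, but which
ICC form to report and the pooled-SD formula for d are assumptions on
our part; the article's exact computational choices may differ.

import pandas as pd
import pingouin as pg  # for intraclass correlations
import statistics

# Hypothetical ratings in long format: one row per (rater, task) pair,
# holding that rater's team workflow rating (1-5 scale).
df = pd.DataFrame({
    "rater":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "task":   ["A", "B", "C"] * 3,
    "rating": [2, 4, 5, 1, 4, 4, 2, 5, 5],
})

# Interrater agreement: pingouin reports several ICC forms (ICC1-ICC3k);
# which form the article used is left as an assumption.
icc = pg.intraclass_corr(data=df, targets="task", raters="rater",
                         ratings="rating")
print(icc[["Type", "ICC"]])

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(y) - statistics.mean(x)) / pooled_var ** 0.5

# e.g., each rater's mean rating of the individual vs. the team tasks
individual = [2.1, 2.3, 2.0, 2.4]
team = [3.0, 2.9, 3.1, 2.8]
print(round(cohens_d(individual, team), 2))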

TABLE 6: Correlation Between Job-Level Team Workflow and Specified Team
Interdependency Variables

                                         Individual Tasks:
                                       Task-Level Composite

                                  Team          Team      Team-Task
                               Relatedness    Workflow      Ratio

Job-level team workflow (a)       -.12          .09         -.09

                                            Team Tasks:
                                       Task-Level Composite

                                  Team          Team      Team-Task
                               Relatedness    Workflow      Ratio

Job-level team workflow (a)        .17         .35 *         .20

Note. N = 52. Task-level composites are the means of the 18 individual
and 16 team task ratings, respectively.

(a) Job-level team workflow mean = 4.25 (SD = 0.54).

* p < .01.

TABLE 7: Relationship Between Team Task Analysis Scale Ratings and Team
Performance

                          Individual Tasks

                   Team          Team       Team-Task
                Relatedness    Workflow       Ratio

Team
performance (a)    -.59 *        .02          -.29
Mean               2.18          2.72          .34
SD                 0.44          0.34          .10

                             Team Tasks                    Job-
                                                          Level
                   Team          Team       Team-Task      Team
                Relatedness    Workflow       Ratio      Workflow

Team
performance (a)    -.14         .62 *        -.54 *      .74 **
Mean               2.97          3.57          .58        4.25
SD                 0.37          0.24          .13        0.54

Note. N = 13 (teams).

(a) Team performance mean = 286.57 (SD = 41.92).

* p < .05. ** p < .01.
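The entries in Tables 6 and 7 are Pearson correlations; a minimal
sketch of how such coefficients and their p values might be computed is
given below. The data are hypothetical team-level scores invented for
illustration (Table 7 used N = 13 teams), not the study's actual
values.

from scipy.stats import pearsonr

# Hypothetical team-level scores, one value per team (N = 13 here to
# mirror Table 7); these numbers are invented for illustration only.
job_level_workflow = [4.1, 4.3, 3.9, 4.8, 4.5, 4.0, 4.6, 3.8, 4.9,
                      4.2, 4.4, 4.7, 3.7]
team_performance   = [260, 281, 248, 330, 301, 255, 312, 241, 346,
                      272, 290, 323, 236]

# Pearson correlation and its two-sided p value
r, p = pearsonr(job_level_workflow, team_performance)
print(f"r = {r:.2f}, p = {p:.4f}")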

TABLE 8: Example of a Task Analysis Rating Form Incorporating
Importance, Time Spent, and Team Workflow Scales for a Sample of
General Air Traffic Control Activities

The chart below [note: in the operational measure, the chart from
Table 3 would be inserted here] presents five TEAM WORKFLOW PATTERNS
as well as a description and illustration of each pattern. For each
task/activity, please:

(a) shade in the number corresponding to the importance of the
    task/activity to the performance of your job,

(b) record the amount of time in hours and minutes that you spend
    performing the task/activity in your typical work period (e.g.,
    shift or work day), and

(c) shade in the workflow pattern that best characterizes the
    performance of the task/activity.

IMPORTANCE

(1) = Not at all important

(2) = Of little importance

(3) = Somewhat important

(4) = Very important

(5) = Of highest importance

TEAM WORKFLOW PATTERN

(1) = NOT a team task/activity
(2) = Pooled/Additive Interdependence
(3) = Sequential Interdependence
(4) = Reciprocal Interdependence
(5) = Intensive Interdependence

Tasks/Activities                                 Importance

 1. Activate backup communica-      (1)    (2)    (3)    (4)    (5)
    tions systems.
 2. Adjust radar scopes.            (1)    (2)    (3)    (4)    (5)
 3. Coordinate search and rescue    (1)    (2)    (3)    (4)    (5)
    operations with appropriate
    agencies.
 4. Coordinate use of airspace      (1)    (2)    (3)    (4)    (5)
    with other agencies or
    facilities.
 5. Coordinate or control           (1)    (2)    (3)    (4)    (5)
    aircraft surge launch and
    recovery (ASLAR) operations.
 6. Monitor aircraft operations     (1)    (2)    (3)    (4)    (5)
    in Class G airspace.
 7. Participate in simulated        (1)    (2)    (3)    (4)    (5)
    crash, alert, or disaster
    control exercises.
 8. Perform facility evacuation     (1)    (2)    (3)    (4)    (5)
    procedures.
 9. Provide special handling for    (1)    (2)    (3)    (4)    (5)
    special operations aircraft.
10. Relay aircraft emergency        (1)    (2)    (3)    (4)    (5)
    information.

Tasks/Activities                       Time Spent

 1. Activate backup communica-      -- hrs. -- min.
    tions systems.
 2. Adjust radar scopes.            -- hrs. -- min.
 3. Coordinate search and rescue    -- hrs. -- min.
    operations with appropriate
    agencies.
 4. Coordinate use of airspace      -- hrs. -- min.
    with other agencies or
    facilities.
 5. Coordinate or control           -- hrs. -- min.
    aircraft surge launch and
    recovery (ASLAR) operations.
 6. Monitor aircraft operations     -- hrs. -- min.
    in Class G airspace.
 7. Participate in simulated        -- hrs. -- min.
    crash, alert, or disaster
    control exercises.
 8. Perform facility evacuation     -- hrs. -- min.
    procedures.
 9. Provide special handling for    -- hrs. -- min.
    special operations aircraft.
10. Relay aircraft emergency        -- hrs. -- min.
    information.

                                               Team Workflow
Tasks/Activities                                  Pattern

 1. Activate backup communica-      (1)    (2)    (3)    (4)    (5)
    tions systems.
 2. Adjust radar scopes.            (1)    (2)    (3)    (4)    (5)
 3. Coordinate search and rescue    (1)    (2)    (3)    (4)    (5)
    operations with appropriate
    agencies.
 4. Coordinate use of airspace      (1)    (2)    (3)    (4)    (5)
    with other agencies or
    facilities.
 5. Coordinate or control           (1)    (2)    (3)    (4)    (5)
    aircraft surge launch and
    recovery (ASLAR) operations.
 6. Monitor aircraft operations     (1)    (2)    (3)    (4)    (5)
    in Class G airspace.
 7. Participate in simulated        (1)    (2)    (3)    (4)    (5)
    crash, alert, or disaster
    control exercises.
 8. Perform facility evacuation     (1)    (2)    (3)    (4)    (5)
    procedures.
 9. Provide special handling for    (1)    (2)    (3)    (4)    (5)
    special operations aircraft.
10. Relay aircraft emergency        (1)    (2)    (3)    (4)    (5)
    information.