Steps toward building mathematical and computer models from cognitive task analyses. (Special Section).

INTRODUCTION

To understand the role of a human in a task, one frequently performs thorough task analyses. In contrast, mathematical and computer models are constructed infrequently and incompletely, perhaps because the perceived benefit of a model does not exceed the perceived cost of constructing it. We propose that task analyses often contain most of the information needed to construct a working model of a task, which can then be used to investigate task completion time. Some task analysis techniques, such as CPM-GOMS (John, 1990; John & Gray, 1992), are so well developed as techniques that they directly result in computer models. Other techniques, such as hierarchical task analysis, produce detailed diagrams of operations and their relations but usually stop short of making predictions about task completion time. More can be accomplished with a little extra effort, because much of the work in making a model has already been done when a task analysis is completed.

Kieras and Meyer (2000) noted that there is great difficulty in using cognitive modeling in human factors to make predictions. Specifically, in basic cognitive modeling research, a model is tuned to fit data from successive experiments in an iterative process, but for engineering applications the current version of the model must make useful predictions about performance of a target system, which may not be built yet. This led Kieras and Meyer to say, "What is missing, and badly needed, is a demonstration that one can start with a conventional task analysis such as HTA [hierarchical task analysis] and then proceed systematically to a usefully accurate computational cognitive model, with no 'hand-waving' in between" (Kieras & Meyer, 2000, p. 258). In the present study, we propose a method that does just this. We show how a conventional task analysis can readily be extended, leading to the construction of a model suitable for computing the task completion time.

TASK ANALYSIS

Task analysis refers to a set of techniques for describing and evaluating the actions that people engage in for satisfactory performance of a task. According to Kirwan and Ainsworth (1992), "Task analysis can be defined as the study of what an operator (or team of operators) is required to do, in terms of actions and/or cognitive processes, to achieve a system goal" (p. 1). Task analysis is used, among other things, to allocate functions to humans and machines, to specify characteristics of personnel, to design the task and interfaces, and to determine training requirements.

The result of a task analysis is a detailed description of all aspects of the task, based on observation, interviews, and inference. The purpose of this article is to show how to use this description to construct a model useful for finding the components of the task most critical for determining the completion time of the task. If a system is being designed, simulations with the model can provide estimates of what the task completion times will be with various proposed designs. Users are sensitive to small differences produced by different systems and will modify the organization of their behavior to take advantage of features of a system that shorten task completion time (Gray & Boehm-Davis, 2000).

One major technique for describing a system is hierarchical task analysis (Shepherd, 2001), developed initially by Annett and Duncan (1967). Because of its popularity and its handiness for determining task completion time, we will explain what we are doing in terms of hierarchical task analysis, but other forms of task analysis would be suitable starting points as well. Hierarchical task analysis is a specific method in which tasks are represented in terms of hierarchies of goals and subgoals. A very simple example is in Figure 1. The goal is to select an option from a pop-up menu that appears somewhere on a computer screen. The goal is achieved by carrying out operations according to plans. Operations are the actions people perform. Plans describe how the subgoals and operations are organized. The end result of a hierarchical task analysis is a diagram of the operations in which people must engage to attain specified goals, along with the plans specifying their relationships.

Performance of any task involves cognitive components that are hidden from view. The increasing dependence on automation to support human performance has resulted in a larger part of the task being unavailable to direct observation (e.g., Diaper, 1989). The term cognitive task analysis has come to be used to describe the extension of task analysis to provide explicit information about the knowledge, information processing, and goal structures that underlie task performance (Chipman, Schraagen, & Shalin, 2000).

Task analysis has its roots in the work of Gilbreth and Gilbreth (1921), who recorded elementary operations called therbligs. A related development from this early work is the representation of tasks in network models for purposes such as estimating the completion time of the task and determining the allocation of resources (e.g., Elmaghraby, 1977; Moder & Phillips, 1970; Pritsker, 1979; Wiest & Levy, 1977). A rough way to characterize the difference between the task analysis and network model approaches is to say that task analysis emphasizes analysis -- that is, a detailed description of the components of the task and their relationships. Network modeling emphasizes synthesis -- that is, calculations of how the components fit together to determine quantities such as task completion time. In that sense, this paper discusses going from analysis to synthesis.

NETWORK REPRESENTATIONS

We discuss two kinds of network models here: activity networks and order-of-processing diagrams. Further discussion and examples are available on our Web page, http://www.psych.Purdue.edu/Research/CognitiveTaskAnalysisNetworks/call.htm.

Activity Networks

Figure 2 gives an example of an activity network for the task of selecting an option from a menu. This task is the one illustrated with a hierarchical task analysis in Figure 1.

The terminology used to describe activity networks is slightly different from that used to describe hierarchical task analyses. The operations in hierarchical task analysis are often called activities in activity networks. In the network, each node represents an activity. The arrow drawn from "search menu" to "double click on chosen option" indicates that searching the menu must be completed before clicking on the chosen option. Likewise, the activity "move cursor toward menu" must be completed before clicking on the chosen option. Because searching the menu can be done concurrently with moving the cursor toward the menu, no arrow is drawn joining the nodes for these activities to each other.

In this network there are two paths from the start of the task to the end. The duration of a path is the sum of the durations of all the activities on it. Because all of the activities in the network must be completed for the task to be completed, the time required to complete the task equals the duration of the longest path through the network, called the critical path. The kind of network illustrated in Figure 2 is sometimes called a critical path network and sometimes a program evaluation and review technique (PERT) network. Further examples and useful discussion of critical path networks used to model human-computer interaction tasks are found in Baber and Mellor (2001), Gray and Boehm-Davis (2000), and John (1996).
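
The computation behind a critical path network is a longest-path calculation on a directed acyclic graph. As a minimal sketch, assuming fixed (deterministic) durations and using made-up numbers for the menu task of Figure 2, the completion time can be computed as follows (Python; the activity names and values are ours, purely for illustration):

    from functools import lru_cache

    # Illustrative durations (seconds) and precedence relations for the menu task.
    durations = {"search_menu": 1.2, "move_cursor": 0.6, "double_click": 0.3}
    predecessors = {"search_menu": (), "move_cursor": (),
                    "double_click": ("search_menu", "move_cursor")}

    @lru_cache(maxsize=None)
    def earliest_finish(activity):
        # Earliest finish = own duration plus the latest earliest finish of any predecessor.
        preds = predecessors[activity]
        return durations[activity] + (max(earliest_finish(p) for p in preds) if preds else 0.0)

    # Task completion time = duration of the longest (critical) path through the network.
    print(max(earliest_finish(a) for a in durations))   # 1.5: search menu, then double click

When activity durations are treated as random variables, the same calculation is simply repeated on every simulated trial, as discussed later.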

Order-of-Processing Diagrams

Here we show how to represent a simple series and parallel critical path network as an order-of-processing (OP) diagram. This makes clear the formal relation between the two representations. More information about OP diagrams will be given in later sections.

To begin, consider a simple serial network. To continue with the example in Figure 2, assume that by chance the cursor is already over the option. The task then consists of only two processes: say, process s (searching a menu) and process d (double clicking on an option). In this case, the activity network used to represent the task would have three nodes, the nodes along the top path of Figure 2 -- that is, those labeled "start" (a dummy state with no processes), "search menu," and "double click on chosen option."

An OP diagram can also be used to represent the same task. The OP diagram, like the activity network, consists of a set of directed arcs and nodes. Each node in an OP diagram represents a state -- that is, the set of processes that are currently active at some point in the execution of an activity network. So, for example, there are three states in the OP diagram used to represent the simple serial activity network. In one state a user is searching a menu (s is placed inside the state), in another state a user is double clicking on an option (d is placed inside the state), and in a final state all processes have completed. These three states are represented in the top portion of Figure 3. A directed arc is drawn between two states if the current set of processes in the successor state is consistent with the completion of exactly one of the processes in the current set of the predecessor state. So, for example, a user can double click on an option only after having searched the menu. Thus an arc connects the state in which searching the menu is the current process to the state in which double clicking on an option is the current process. The process that completes when a transition is made between two states is listed beside the arc connecting the pair of states. The last state is a dummy state and simply represents the completion of all processing.

Not much seems to have been gained in the foregoing example. In fact, the OP diagram and activity network used to represent a simple serial task are almost identical. However, let us now consider the OP diagram used to represent a simple parallel task -- in particular, the complete task diagrammed in the activity network in Figure 2. Specifically, we no longer assume that the cursor is located anywhere near the pop-up menu in which the target option is located. Thus there are now three processes, searching the menu (s), moving the cursor toward the menu (m), and double clicking on an option (d). The first two processes (s, m) can be executed in parallel. Only after both processes have completed can the user double click on the chosen option (d).

Unlike the first OP diagram, the bottom portion of Figure 3 shows that there are now two processes in the current set of the first state. Note the clear difference between the activity network and the OP diagram at this point. If the user finishes searching the menu before moving the cursor toward the menu, then a transition is made to the middle state along the bottom path through the OP diagram. If the user finishes moving the cursor toward the menu before searching the entire menu for the target option, then a transition is made to the middle state along the top path through the OP diagram. We assume that the activity durations are random variables. Thus on some trials the user will finish moving the cursor first and on other trials the user will finish searching the menu first. However, on any one trial only one path is taken through the OP diagram. The reader may ask why there is no path out of the first state in which both processes finish at the same time. The answer is that we assume here, and elsewhere, that the probability that any two or more processes will finish at exactly the same time is zero. This assumption need not be made, but it is reasonable for mental processes.
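
Because each state of an OP diagram is just the set of processes whose predecessors have all finished, the diagram can be enumerated mechanically from the precedence constraints of the activity network. The sketch below (Python) does this for the parallel menu example, using the process labels s, m, and d from above; it is a minimal illustration of the construction, not a general-purpose tool.

    # Enumerate the states and transitions of an OP diagram from precedence
    # constraints: s and m can run concurrently, d starts only after both finish.
    predecessors = {"s": set(), "m": set(), "d": {"s", "m"}}

    def active(done):
        # Processes currently underway: not yet finished, all predecessors finished.
        return frozenset(p for p in predecessors
                         if p not in done and predecessors[p] <= done)

    transitions, frontier, seen = [], [frozenset()], {frozenset()}
    while frontier:
        done = frontier.pop()
        for p in active(done):                 # exit the state by completing process p
            nxt = done | {p}
            transitions.append((active(done), p, active(nxt)))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

    for state, completed, successor in sorted(transitions, key=str):
        print(sorted(state), "--", completed, "->", sorted(successor) or "all done")

Run on this example, the enumeration reproduces the structure of the bottom portion of Figure 3: the first state {m, s} can be exited either by completing s or by completing m, and the two resulting middle states both lead to the state containing d.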

Building a Model

There are three steps in moving from a task analysis to a model. First, the task analysis is used to construct a network representation for the task. Second, estimates of the durations of the activities are found in the literature or, if they are unavailable, obtained through multidimensional scaling. Third, the network model is implemented with equations or a computer program is written for simulations. We now give more details about Step 1, constructing a network representation.

CONSTRUCTING A NETWORK REPRESENTATION FROM A TASK ANALYSIS

Almost any type of task analysis could be used as a starting point for a network model. Our explanations are based on hierarchical task analysis because it is one of the most familiar. In a hierarchical task analysis, operations can be related in several ways. The ways they are related are called plans. In this section we explain how to represent each plan in a network. The two kinds of network we consider are activity networks (with critical path networks as a special case) and order-of-processing networks.

Plans in Hierarchical Task Analysis

According to Shepherd (2001, p. 42), the operations in most hierarchical task analyses are organized with six basic types of plans: fixed sequences, concurrent operations, optional completion, cycles, choices, and contingent sequences.

Representing plans in activity networks. Here we discuss each of the six types of plan. In a fixed sequence of operations, each operation is performed following completion of the previous one. The sequence of operations is represented in a critical path network in the obvious way -- that is, a node is drawn to represent each operation in the hierarchical task analysis as an activity, and an arrow is drawn from each activity to its immediate successor. Concurrent operations can be performed at the same time. Again, concurrent operations are represented in a critical path network in the obvious way, as activities represented by nodes not connected by arrows.

Optional completion refers to plans in which the order for carrying out the operations is optional, the only constraint being that they all be completed. In principle, one could draw a separate critical path network for each possible order of the optional completion operations. With more than a few operations this is impractical. Fortunately, as Shepherd (2001, p. 49) pointed out, to reduce errors or simplify training it is often advantageous to prescribe a particular order for the operations, even if there is no logical constraint on the order. That is, often an optional completion plan is replaced by a fixed sequence. For the purpose of estimating the completion time of the task, the order of the operations in an optional completion plan does not ordinarily matter, and the operations can be represented as activities in the network in an arbitrary order. If the order of the operations affects the duration of the operations, one can represent the set of operations involved in the optional completion plan as a single node in the activity network. The duration of this single activity is a weighted sum, in which each term is the total duration of the operations in a particular order multiplied by the probability of that order occurring.
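
When the order matters, the weighted sum just described is a one-line calculation once the order probabilities and order-dependent durations have been estimated. A minimal sketch with placeholder numbers (none taken from a published analysis):

    # Expected duration of an optional-completion plan collapsed into one activity,
    # when the order of the operations affects their durations (illustrative values).
    orders = [
        {"prob": 0.7, "durations": [2.0, 1.5, 1.0]},   # e.g., operations done in order A-B-C
        {"prob": 0.3, "durations": [2.4, 1.5, 1.2]},   # e.g., operations done in order C-B-A
    ]
    expected = sum(o["prob"] * sum(o["durations"]) for o in orders)
    print(expected)   # 0.7 * 4.5 + 0.3 * 5.1 = 4.68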

Cycles refer to plans that require an operation or set of operations to be repeated until a certain condition is met. For the purpose of estimating the completion time of the task, the operations in a cycle can be represented as a single activity joined by arrows to its immediate predecessors and immediate successors. The difficulty introduced by operations in a cycle is not in the network representation but in estimating the duration of the activity representing the cycle. The duration of this activity is the duration of a single repetition of the activities in the cycle multiplied by the number of times the cycle is repeated. Cycles can be represented in more detail in activity networks that are more general than critical path networks (see, e.g., Elmaghraby, 1964, 1966, 1977; and Pritsker, 1979).

(The word cycle is sometimes used in network theory with another meaning, to refer to a path from a node back to itself. A cycle in this sense in an activity network would indicate that some activity cannot start until it is completed. This is impossible, so cycles of this kind cannot occur in either a hierarchical task analysis or an activity network.)

A choice in a plan specifies a point at which a decision-making operation must occur. A single critical path network cannot represent a choice, because only one of the options is used, not all of them. The way a choice is represented depends on the purpose of the model. If the purpose is to estimate task completion time separately for each option available in the choice, then a separate critical path network is drawn for each option. With several options, this is impractical. If the purpose is to estimate task completion time averaged over all the options that can be chosen, one approach is to use an order-of-processing diagram (Fisher & Goldstein, 1983), as described in the next section. Another approach is to use a general form of an activity network, which allows for choices. For this approach, see Elmaghraby (1964, 1966, 1977) and Pritsker (1979). With either approach, for calculations or simulations, an estimate of the probability that each option is chosen will be needed.
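
When the purpose is the average over options, the bookkeeping is again a probability-weighted sum: each option contributes its choice probability times the expected completion time of the network drawn for that option. A minimal sketch with placeholder probabilities and times:

    # Task completion time averaged over the options of a choice plan.
    # Each time would come from the separate network (or simulation) for that
    # option; the probabilities and times below are illustrative only.
    options = [("option_1", 0.6, 12.4), ("option_2", 0.3, 15.0), ("option_3", 0.1, 9.8)]
    expected_time = sum(p * t for _name, p, t in options)
    print(expected_time)   # 0.6*12.4 + 0.3*15.0 + 0.1*9.8 = 12.92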

A contingent sequence is one in which an operation is cued by something (e.g., an alarm) other than the finishing of the preceding operations. The situation is similar to that with choice. A single critical path network cannot represent a contingent sequence. If the purpose is to estimate completion time separately for each possible contingency, it may be possible to draw a separate critical path network for each contingency. If the purpose is to estimate completion time averaged over all contingencies, then often an order-of-processing diagram or a general form of an activity network can be used to represent the task. Again, to carry out calculations and simulations, one will need estimates of the probability that each contingency occurs.

Representing plans in order-of-processing diagrams. Order-of-processing diagrams are more general than critical path networks in the sense that if a critical path network has been constructed to represent the activities in a task, an OP diagram for the task can automatically be constructed. Four plan types -- fixed sequence, concurrent operation, optional completion, and cycles -- are all easily represented in critical path networks. Order-of-processing diagrams for these plans can be constructed based on our earlier discussion about forming an OP diagram from examination of a critical path network.

Some tasks that cannot be represented as critical path networks can be represented as OP diagrams. In particular, a single critical path network cannot be constructed from a hierarchical task analysis containing plans in the form of choices or contingent sequences. An OP diagram can be used in these situations.

A choice in a hierarchical task analysis is represented as a state in an OP diagram. The options from which the selection is to be made are listed as "activities" currently underway. We are assuming that the different options can take different amounts of time to evaluate and that these times reflect the probabilities that a particular choice is made. The option selected in the choice is considered to be the activity completed to exit the state. All other options are dropped at that point. That is, when the first option is completed, the system will enter a new state (the state entered may depend on the option selected), and none of the options that have yet to be completed will be carried forward. For each option an arrow is drawn leading to the corresponding succeeding state. The arrow is labeled with the option selected and is treated as an activity completed.

A plan in the form of a contingent sequence in a hierarchical task analysis indicates that some activity is started by something other than the completion of the preceding activity (or the completion of the last activity in a set of preceding activities). For example, a signal, such as an alarm, rather than the completion of some preceding activity, may be the stimulus for the initiation of some activity. Often, a task with this characteristic can be represented in an OP diagram by including an imaginary activity that generates the signal.

Consider, for example, an alarm that can go off at any time during performance of a task. We include an activity -- "alarm goes off" -- in the first state of the OP diagram. To this first state we add a transition to a state in which the alarm activity completes. We add to the current set of this new state whatever activities are instantiated once the alarm goes off. If the alarm activity does not complete in the first state, it is kept as a current activity in each succeeding state until it does complete, at which point a new transition must be added, just as was done in the first state when the alarm activity completed. In summary, one of the ways of exiting a state is for the alarm to go off -- that is, for the potential alarm activity to be "completed." For each state, this way of exiting is indicated by an arrow, labeled with the name of the potential alarm activity, leading to the state that would follow if the alarm went off.
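
Treating the cue as an imaginary activity also keeps simulation simple: on each trial, a time for the alarm is drawn alongside the ordinary activities, and whichever finishes first determines how the state is exited. A minimal sketch, with distributions and parameters chosen purely for illustration:

    # A contingent cue ("alarm goes off") modeled as an imaginary activity that
    # races against an ordinary activity. All values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 10_000
    alarm = rng.exponential(scale=4.0, size=n_trials)        # imaginary alarm activity (s)
    search = rng.gamma(shape=4.0, scale=0.5, size=n_trials)  # ordinary activity, mean 2 s
    # Proportion of trials on which the state is exited via the alarm "completing."
    print((alarm < search).mean())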

Fixed sequence and concurrent operation plans have exact counterparts in critical path networks. Optional completion and cycle plans do not have exact counterparts but nonetheless can easily be represented in critical path networks, albeit with some loss of detail. Choice and contingent sequence plans can be represented in general activity networks or order-of-processing diagrams. Practical difficulties may arise in the network representation of a task or in estimating the durations of the activities, so sometimes a model may be infeasible. We do not claim that every task can be represented in a network model. In many cases, however, a simple visual inspection of a hierarchical task analysis leads directly to a network for the task.

Some information provided in a hierarchical task analysis does not play a direct role in construction of a network to represent the task, but it may be useful indirectly. For example, the goal structure is not directly needed for estimating the task completion time, but it is important for understanding what the operator is doing or is likely to do in an emergency. This background information could be quite useful when the operator cannot be observed performing the task -- for example, when a new system is being designed. All the information needed for constructing a network model would be given in a procedural task analysis, which shows what physical and mental operations must be done to complete the task. The form of task analysis perhaps most easy to work with for investigating task completion time is CPM-GOMS, because it is already in the form of an activity network (see, e.g., John, 1990).

CPM-GOMS. The GOMS (goals, operators, methods, selection rules) methodology was developed for human-computer interaction tasks. By analyzing the organization of the task and specifying the times for the basic operations, a GOMS model can be used to predict times to execute the tasks. One successful member of the GOMS family of techniques is CPM-GOMS (John, 1990; John & Gray, 1992), a method for modeling concurrent activities. CPM stands for both cognitive-perceptual-motor and critical path method. When a CPM-GOMS analysis is carried out, the activities are represented in an activity network -- specifically, in a critical path network.

Gray, John, and Atwood (1993) used a CPM-GOMS analysis for real-world performance of telephone company toll and assistance operators. We discuss it in detail because it illustrates a situation in which task completion time is important. Also, in a later section, we use this study to test our multidimensional scaling approach to approximating activity durations.

Because the toll and assistance operators handle hundreds of calls each day, a major concern is the average work time per call. The telephone company was considering replacing the workstations currently used by the operators with new ones. The existing workstation used a 300-baud character-oriented display paired with a keyboard on which keys were grouped by function and color coded. The proposed new workstation was better designed in some respects from a human-computer interaction perspective. It used icons and windows in a high-resolution display operating at 1200 baud, and the keyboard was designed to minimize travel distance between the most frequently used keys and to replace common two-key sequences with a single key.

Gray et al. (1993) used CPM-GOMS to predict the work time with the alternative workstations. For each call, a toll and assistance operator must determine who should pay for the call, what billing rate is appropriate, and when the connection is sufficiently complete to terminate the interaction with the customer. This requires conversing with the customer, keying in information, and reading information from the display screen of the workstation, with many of the activities being performed concurrently. Benchmark tasks were selected, and CPM-GOMS models were built for all tasks for both workstations. For the current workstation, expert toll and assistance operators performed the benchmark tasks, and tapes of the performance of these tasks were used to determine the activities necessary to perform the tasks, the durations of these activities, and the dependencies between them. CPM-GOMS models were constructed using the actual durations of observable activities and duration estimates from the literature for unobservable activities. For the new workstation, CPM-GOMS models were constructed from the specifications of the new workstation and the analysis of the operators' actions using the old workstation. Activity durations were revised to use the estimates of system response time supplied by the manufacturer. Note that the CPM-GOMS models were not constructed by observing operators using the proposed new workstation.

Despite certain advantages of the new workstation, the prediction from the CPM-GOMS models was that the average time per call would increase, not decrease. Briefly, the problem was that although the new workstation sped up certain activities, these were not on the critical path, and certain other activities that were not on the critical path with the old workstation were added to the critical path with the new workstation. The net result was an increase in completion time for the calls. This prediction was borne out in field studies done with the new workstations. Out of 15 call categories, the models correctly predicted the direction of the difference in all but 3 cases.

Because the work of Gray et al. (1993) generated a lot of attention, we will briefly mention an alternative model. Kieras, Wood, and Meyer (1997) constructed models based on the executive process-interactive control (EPIC) architecture to predict the performance of the toll and assistance operators in the Gray et al. data set. The architecture incorporates both a cognitive processor based on production rules and perceptual-motor processors. Within this architecture, Kieras et al. constructed several models intended to represent different strategies that the toll and assistance operators might use to coordinate the activities involved in performing their job. The models differed in whether the goal and method structure was hierarchical (composed of multiple subgoals and procedures) or nonhierarchical (flat); in the policies by which the cognitive processor controlled single motor processors and coordinated different motor processors; and in the presence or absence of advance preparation of motor features. All of the EPIC models predicted the total task execution time adequately, with the worst-fitting model having an average absolute error of 14.3%. The smallest prediction errors were produced by the hierarchical motor-parallel model, which is a relatively simple model that allows parallel preparation and execution of the motor processors but does not allow advance preparation of features for new movements.

This work exemplifies one approach to modeling that could be roughly characterized as "top down." It begins with a general architecture, in this case EPIC. To apply the architecture to a particular task, one proposes specific hypotheses specifying strategies and boundary conditions. Another approach (one could say more "bottom up") is to construct a model from observation and conversation with the operator, together with informal reasoning about the constraints imposed by the task -- in short, from the information in a task analysis. In practice, of course, models are constructed using a combination of approaches. The reader is referred to Kieras et al. (1997) for more information about their approach, and we now return to constructing a model from a task analysis.

FINDING ACTIVITY DURATIONS AND DOING SIMULATIONS

After an activity network or OP diagram has been constructed by one means or another, the next step is to obtain means and standard deviations of activity durations. Sometimes estimates can be found through observation and in the literature. Because the CPM-GOMS model has been used for several years, tables with estimated durations of many common activities have been developed (see, e.g., Baber & Mellor, 2001; Card, Moran, & Newell, 1983; Gray et al., 1993; Kieras, 1988, 1996; Olson & Olson, 1990; Williams, 2000). We propose that in other cases, mean activity durations can also be found through multidimensional scaling.

Although means and standard deviations of activity durations must be obtained before simulations can be done, it will be easier to explain our multidimensional scaling approach to activity durations after we explain the simulations for which they will be used. Accordingly, we return to the second step in building a model (finding activity durations) after discussing the third step (simulations).

Using an Activity Network for Simulations

We demonstrate how to do simulations using a sample telephone call discussed at some length by Gray et al. (1993; see also Atwood, Gray, & John, 1996). For this call, the investigators found estimates of activity durations from observation and the literature. One of our purposes in carrying out a simulation of this phone call is to construct a test case that we will use later for evaluating our multidimensional scaling approximations for activity durations.

Figure 4 shows a bar chart of activities in an actual phone call, and Figure 5 shows the corresponding activity network. Activity durations obtained from videotapes of operators are listed in Figure 5 of Gray et al. (1993). (More details are in their appendixes A and B. Technically, means were estimated with medians, but this does not affect our conclusions.)

Figure 4 here is Figure 6 of Gray et al. (1993). In the figure, the shading and vertical location classify activities according to whether they are system activities (system RT), perceptual operators (listen-to-beep), motor operators (enter-command), and so on. These aspects are not directly relevant here, but we use the labels on the vertical axis as labels for the activities. The operator waits for the system to respond on two occasions, and we call these system-RT1 and system-RT2. The operator listens to part of the customer's utterance before entering a command, so we divide the listen-to-customer activity into two parts, one before the onset of the first enter-command and the second after. We call these listen-to-customer1 and listen-to-customer2. The operator reads the screen on two occasions. Gray et al. (1993) considered the average time to be the same for the two occasions, so we call these read-screen a and read-screen b (that is, we use letters a and b for the two occasions, rather than numbers 1 and 2). The operator carries out the enter-command four times, with the same mean duration each time, so we call these enter-command a, enter-command b, and so on.

The scheduling information in the figure is crucial for our purposes. The figure indicates, for example, that the activity listen-to-customer1 begins as soon as the activity greet-customer finishes. This relation is illustrated in the activity network of Figure 5 by an arrow from greet-customer to listen-to-customer1. Activities that overlap in time in the bar chart of Figure 4 are performed simultaneously. So, for example, the operator thanks the customer while system-RT2 goes on. In the activity network there is no directed path (i.e., no path following arrows from foot to head) from thank-customer to system-RT2, or the other way around. If two activities in the activity network are not joined by a directed path, neither is required to be finished before the other can start. They may be carried out literally simultaneously, although simultaneity is not necessary. The activity network does not indicate simultaneity, because activity durations are random, and when the task is carried out repeatedly, two activities not joined by a directed path might sometimes be simultaneous and sometimes not.

A quasi-realistic test case. As a test case, the activity network in Figure 5 is realistic in the sense that the activities, their arrangement, and their durations were determined from videotapes of actual phone calls. The probability distributions of the activity durations are needed in order to carry out simulations with the example, but these were not reported by Gray et al. (1993). Assumptions about the distributions must be made because the prediction of the expected task completion time depends crucially on the distributions of the activity durations as well as their means. (Put slightly differently, we would incorrectly predict task completion time if we assumed all activity durations were constant.) Because of these assumptions about the distributions of the activity durations, our test case is not completely realistic.

We used gamma distributions for the activity durations. The distribution function for response times is not yet known, nor is the distribution function for the durations of elementary mental activities. However, there is evidence that gamma distributions are often reasonable approximations (Ashby & Townsend, 1980; Kohfeld, Santee, & Wallace, 1981; for discussion, see Luce, 1986). The gamma distribution is typically skewed to the right, as are human response times, and takes as extreme forms the exponential distribution and the normal distribution. (We use a single gamma distribution for each activity. Actually, each person doing the task would have a particular distribution, and the distribution for the collection of people would be a mixture of the individual distributions. The additional detail would be useful in some situations but is beyond the scope of this paper.)

The coefficient of variation of a random variable is its standard deviation divided by its mean. For human response times, standard deviations are sometimes about 1/10 of the mean, but rarely smaller, and standard deviations are sometimes about equal to the mean, but rarely larger (see, e.g., Luce, 1986, for examples). In other words, the coefficients of variation range from about .1 to 1.0. We randomly assigned a different coefficient of variation to each different activity from the set {.1, .2,..., 1.0} (see Table 1). Note that some activities occur more than once, making 14 activities in all. Reoccurrences of the same activity with the same mean (e.g., enter-command, denoted EC a, EC b,...) are assumed to have the same coefficient of variation each time in the simulation that follows. The parameters alpha and beta of a gamma distribution are determined when the mean and coefficient of variation are specified (see Table 1).
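
In simulation, the mean and coefficient of variation determine the gamma parameters directly: with shape α and scale β, the mean is αβ and the variance is αβ², so α = 1/cv² and β = mean × cv². A minimal sketch in Python with NumPy, using the mean (1000 ms) and coefficient of variation (.8) that the text assigns to listen-to-customer1:

    # Derive gamma parameters from a mean and coefficient of variation (cv),
    # then sample durations. For shape alpha and scale beta:
    # mean = alpha * beta, variance = alpha * beta**2, so cv = 1 / sqrt(alpha).
    import numpy as np

    def gamma_params(mean, cv):
        alpha = 1.0 / cv ** 2
        beta = mean * cv ** 2
        return alpha, beta

    rng = np.random.default_rng(1)
    alpha, beta = gamma_params(mean=1000.0, cv=0.8)   # listen-to-customer1 (ms)
    samples = rng.gamma(shape=alpha, scale=beta, size=10_000)
    print(samples.mean(), samples.std() / samples.mean())   # roughly 1000 and 0.8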

Criticality. A simulation of the quasi-realistic activity network resulting from this procedure was carried out in Excel (details are in the Appendix). For 10 000 simulated trials, the mean task completion time was 13.027 s. (The actual mean is proprietary information not reported by Gray et al., 1993, but this value appears reasonable.) The task completion time on a given trial was calculated by finding the critical path on the trial. The duration of a path through the network is the sum of the durations of all the activities on the path. The longest path from the start of the task to the finish is the critical path. On any given trial, the time required to complete the task equals the duration of whatever is the longest path on that trial, the critical path for that trial. The mean task completion time of 13.027 s is the mean duration of the individual critical paths averaged over all the trials.
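
The per-trial computation is the longest-path calculation again, repeated with freshly drawn gamma durations. The sketch below (Python) shows the scheme on a small made-up series-parallel network; it is not the Gray et al. (1993) call network, and the means and coefficients of variation are placeholders rather than the values in Table 1.

    # Monte Carlo estimate of mean task completion time: on each trial, draw a
    # gamma duration for every activity and take the duration of the longest path.
    # The network and parameter values are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    activities = {"A": (700, 0.4), "B": (1000, 0.8),   # (mean ms, coefficient of variation)
                  "C": (300, 0.5), "D": (5300, 0.3)}
    paths = [["A", "B", "D"], ["A", "C", "D"]]         # B and C are concurrent

    def draw(mean, cv):
        return rng.gamma(shape=1.0 / cv ** 2, scale=mean * cv ** 2)

    n_trials = 10_000
    completion = np.empty(n_trials)
    for t in range(n_trials):
        d = {a: draw(m, cv) for a, (m, cv) in activities.items()}
        completion[t] = max(sum(d[a] for a in path) for path in paths)
    print(completion.mean() / 1000, "s")               # mean task completion time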

There are eight paths through the network, and for each path we can calculate the mean duration. The mean duration of a path is the sum of the mean durations of the activities on it. The path with the largest mean duration consists of the activities <SRT1, RS a, GC, LTC1, LTC2, EC c, SRT2, RS b, EC d>. The mean duration of this path is 11.900 s. Note that this is a little less than the mean task completion time from the simulation. The reason for the discrepancy is that the longest path through the network is not always the same path. That is, the path with the longest mean duration is not the critical path on every trial. When a path is the critical path on a trial, it is longer than the other seven paths and, hence, tends to be longer than it would usually be. In other words, the mean duration of a path is an underestimate of the time that path contributes to the average completion time of the task.

A concrete example may be helpful. Consider rolling a single die. Over many trials, the average score will be 3.5. Now consider rolling two dice and taking the larger number as the score. The average score will be greater than 3.5.

Let the durations of the eight paths through the activity network be denoted D1, ..., D8. The expected value of the task completion time satisfies

E[max{D1, D2, ..., D8}] ≥ max{E[D1], E[D2], ..., E[D8]},

where E[X] denotes the expected value (theoretical mean) of the random variable X. As a consequence, the maximum of the expected path durations (right side of the inequality) is an underestimate of the expected task completion time (left side of the inequality).
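
A quick numerical check of the inequality, using the two-dice example above:

    # E[max{D1, D2}] exceeds max{E[D1], E[D2]} whenever the identity of the maximum
    # can switch between trials: with two dice, the mean of the larger roll is
    # about 4.47, compared with 3.5 for either die alone.
    import numpy as np

    rng = np.random.default_rng(0)
    d1, d2 = rng.integers(1, 7, size=(2, 100_000))
    print(np.maximum(d1, d2).mean())   # approximately 4.47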

This discrepancy may be part of the explanation for an underestimation reported by Gray et al. (1993, p. 263). The predicted task completion times for all the call categories were slight underestimates of the observed mean completion times. The CPM-GOMS model prediction was based on the one path with maximum mean duration, which would be slightly less than the mean maximum duration over all the paths. If the variability were taken into account and a simulation carried out, it is likely that Gray et al. (1993) would have obtained different critical paths on each trial and, hence, larger predicted task completion times.

Because a particular activity is on the longest path through the network on some trials but not on others, a key quantity for an activity is its criticality -- that is, the probability that the activity is on the critical path. Activities with high criticality are important, but an activity with high criticality may be of short duration, moderating its overall importance (Elmaghraby, 1977). Another measure of the importance of an activity is its criticality times its mean duration.
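
Both quantities fall out of the same Monte Carlo scheme sketched earlier: on each trial, record which activities lie on that trial's critical path. The network and numbers below are again illustrative placeholders, not the sample phone call.

    # Estimate criticality (probability of being on the critical path) and
    # Criticality x Mean Duration by simulation. Illustrative network only.
    import numpy as np

    rng = np.random.default_rng(0)
    activities = {"A": (700, 0.4), "B": (1000, 0.8),
                  "C": (300, 0.5), "D": (5300, 0.3)}
    paths = [["A", "B", "D"], ["A", "C", "D"]]

    def draw(mean, cv):
        return rng.gamma(shape=1.0 / cv ** 2, scale=mean * cv ** 2)

    n_trials = 10_000
    on_critical = {a: 0 for a in activities}
    for _ in range(n_trials):
        d = {a: draw(m, cv) for a, (m, cv) in activities.items()}
        critical = max(paths, key=lambda p: sum(d[a] for a in p))
        for a in critical:
            on_critical[a] += 1

    for a, (mean, _cv) in activities.items():
        crit = on_critical[a] / n_trials
        print(a, round(crit, 3), round(crit * mean, 1))   # criticality, criticality x mean duration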

Some activities have criticality 1 by necessity because they are on every possible path through the network (see Figure 5). However, a simulation is not needed to learn this. The other activities in the sample phone call are listed in the first column of Table 2, ordered by their criticalities as determined from the simulation. (The second column of Table 2 will be discussed later.)

The first column of Table 3 lists activities in order according to the product of criticality and mean duration. (The second column of Table 3 will be discussed later.) This product is, of course, correlated with mean duration because of the way it is calculated. Having the ordered list would be useful for setting priorities in improving performance. The activity listen-to-customer2 has the highest mean duration and the highest product of criticality and mean duration. To improve performance, one would consider first whether an improvement could be made in this activity. Because the duration of this activity is mostly under the control of the customer, one would next consider how to improve performance on the activity system-RT2, and so on. The estimates in Tables 2 and 3 of criticality and Criticality x Duration for the test case will be used as standards for evaluating the results of the multidimensional scaling approach, to which we now turn.

Finding Approximations of Activity Durations through Multidimensional Scaling

We return now to the second step in building a model, finding estimates of the durations of the activities. This can be a hurdle. In a new application many activities will not be listed in the existing literature, and experts must estimate durations for these activities as best they can. An expert is given the task of estimating, say, the average number of seconds required to remove an ink cartridge from a printer or to highlight a block of text.

Here we consider asking the expert to make a different, easier kind of judgment, followed by multidimensional scaling to obtain activity duration estimates. We propose asking the expert to think about the differences in durations between pairs of activities and to rank order these differences. If the number of pairs is not too small, this rank ordering greatly constrains the possible duration values (Kruskal, 1964a, 1964b; Shepard, 1962a, 1962b, 1966). This rank order can then be input to a multidimensional scaling program to obtain scaled duration estimates. Finally, if estimates of absolute durations in seconds for a few activities are known, linear regression can be used to transform the scale values into estimates of absolute durations in seconds for the remaining activities.

Several aspects of this procedure will require further study. For example, there are various ways of obtaining the comparison judgments from the experts, and some may be more efficient than others. Before pursuing these questions, however, one would like to know whether, in principle, approximate durations obtained from multidimensional scaling would be sufficiently accurate to use. Would such approximate estimates be good enough to be employed in simulations to learn, for example, which activities are most likely to be on the critical path? That is what we consider here, leaving other questions for future research.

We will evaluate our approach by using multidimensional scaling to obtain approximate activity durations for the sample phone call. We will then repeat the simulations carried out earlier (in the section on criticality), using the approximate values. Then we will compare output based on the actual durations with output based on the approximate durations to see how closely they agree.

To use our approach, a mean and standard deviation for at least one activity must be available. The unknown durations of other activities are obtained with multidimensional scaling (MDS) (Cox & Cox, 1994; Kruskal, 1964a, 1964b; Shepard, 1962a, 1962b, 1966). To begin, the pairs of activities are listed. The null activity with duration zero is included with the other activities for reference. For the test case here, the list includes such pairs as (greet-customer, read-screen) and (enter-credit-card-no., null). In the test case there are 10 different activities plus the null activity, leading to 11!/(2!9!) = 55 pairs. The task we propose giving to an expert is to order the pairs according to the differences in durations between the two activities. For example, in the test case, the largest duration difference is for the pair (listen-to-customer2, null). We caution that at this point it is not known how accurate experts would be at making these judgments. Also, if the number of unknown activity durations is large, the number of pairs to be ordered will be very large. Note that the expert needs only to order the pairs of activities. The expert does not need, for example, to make second-order judgments about the differences between the differences.

We are not proposing that the ordering be done off the cuff; rather, literature should be consulted, interviews conducted, and observations made. However, the expert need only refine the estimations until the pairs are in order; there is no request for further, perhaps dubious, fine distinctions. The underlying principle of MDS is that the ordering of the pairs tremendously constrains the possible values the elements (durations) can have. The constraints are especially strong in this situation, in which the durations are to be ordered in one dimension, time.

The numerical values of the mean activity durations from Gray et al. (1993; see Table 1) lead to the rank orders in Table 4. This is the rank order that would be produced by an ideal expert making no errors in the test case. This matrix was input to the multidimensional scaling program ALSCAL in SPSS (SPSS Inc., Chicago, IL). (The program takes as input the complete symmetric matrix, so the triangle below the diagonal was filled in as the mirror image of the upper triangle.) In practice, an expert may have a difficult time rank ordering small differences. To take this into consideration, we supposed that only the largest half of the pairs could be ordered. That is, of the 55 ranks, all ranks of 27 or less were treated as missing. The scale values produced by SPSS are given in Table 5. These can be considered as relative durations, with unknown zero point and unknown unit of measurement.

To obtain absolute duration estimates, note that the scale values x from the MDS program will still be valid if they are all transformed as y = ax + b. Regression coefficients a and b can be obtained by regressing known activity durations against their corresponding multidimensional scale values. Then, by using the regression equation, one can obtain approximations for all the activity durations.
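
The whole pipeline -- rank the pairwise differences, run a one-dimensional nonmetric MDS, and calibrate with a regression line through the known durations -- can be sketched in a few lines. The article used ALSCAL in SPSS; the sketch below substitutes scikit-learn's nonmetric MDS and, for simplicity, uses all of the ranks rather than treating the smallest half as missing. The activity names and the "true" durations are placeholders used only to generate the ideal expert's rank ordering.

    # From an ideal expert's rank ordering of pairwise duration differences to
    # calibrated duration estimates (nonmetric MDS in one dimension + regression).
    import numpy as np
    from scipy.stats import rankdata
    from sklearn.manifold import MDS

    names = ["null", "act_1", "act_2", "act_3", "act_4", "act_5"]
    true_ms = np.array([0, 320, 730, 1000, 2200, 5280])    # hidden from the method

    # Ideal expert: rank all pairs by the absolute difference in duration.
    iu = np.triu_indices(len(names), k=1)
    diffs = np.abs(true_ms[:, None] - true_ms[None, :])
    dissim = np.zeros_like(diffs, dtype=float)
    dissim[iu] = rankdata(diffs[iu])
    dissim += dissim.T                                     # full symmetric rank matrix

    # Nonmetric MDS recovers relative durations (unknown origin, unit, and sign).
    mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
              n_init=10, random_state=0)
    scale = mds.fit_transform(dissim).ravel()

    # Calibrate y = a*x + b through the known points: the null activity (0 ms)
    # and one activity of known duration (here act_3 at 1000 ms).
    a, b = np.polyfit(scale[[0, 3]], [0.0, 1000.0], deg=1)
    for name, estimate in zip(names, a * scale + b):
        print(name, round(estimate))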

For the test case, we assumed that the duration of a single activity of medium duration, listen-to-customer1, was known. Its duration of 1000 ms, together with the duration of 0 ms for the null activity, determine the regression coefficients a and b. (We are considering the worst case. In practice, one would want to use more actual duration estimates in the regression.) The transformed MDS approximations are in Table 5.

We are now ready for the third step in building a model, to write equations for calculations or a program for simulations. At this point, we use the Excel (Microsoft, Redmond, WA) program developed before for simulations and described in the Appendix. To find approximations based on simulations, we assume the distributions of the activity durations can be approximated by independent gamma distributions. In an actual task, the various activities would not ordinarily have equal coefficients of variation. As a simplifying assumption, for our approximation we set the coefficient of variation to be the same for all activities. The value used was .8. This is the coefficient of variation for the one activity, listen-to-customer1, for which the mean and standard deviation were assumed to be known. (If means and standard deviations for more than one activity are known, linear regression could be used to estimate the coefficients of variation for the other activities.)

Evaluating the approximation. The question is, are these MDS duration estimates, together with the simplifying assumption that the coefficient of variation is the same for every activity, good enough to employ in simulations to obtain approximations useful for any purpose? One useful purpose of the simulations would be to sort the activities according to importance, defined in two ways. The first is in terms of criticality (the probability that a particular activity is on the critical path). The second is in terms of the product of criticality and mean duration.

As before, we used Excel to produce 10 000 simulated trials, using the MDS-approximated mean activity durations. We assumed each activity had a gamma distribution. With a gamma distribution, the mean is αβ and the variance is αβ². The parameters α and β are determined by the mean and coefficient of variation of .8. From the simulation, for each activity, we calculated the criticality and the product of criticality and mean duration.

Results for criticality are in Table 2. One can ignore activities on every path through the network because their criticality is automatically 1 in both simulations. (These activities are enter-command c, listen-to-customer1, and greet-customer.) For the remaining activities, agreement is quite good. A close look shows that the approximation is good enough to make a difficult discrimination. The longest activity, LTC2, is in parallel with a sequence made of the second longest activity, ECCN, plus two copies of one of the shortest activities, EC. These two path segments are quite close in actual mean duration: the former is 5280 ms, the latter 5110 ms (see Table 1 for activity durations). The MDS-approximated activity durations are good enough to produce the correct ordering for the approximate criticalities (i.e., a criticality of .538 vs. .462; see Table 2). In fact, the approximate criticalities are quite close to the target criticalities in absolute terms (e.g., compare .538, approximate, with .576, target). The r² between approximate criticalities and target criticalities is .996.

Results for Criticality x Mean Duration are in Table 3. Quantitatively, for one activity (system-RT1) the discrepancy between the target value and MDS approximation is considerable. Nonetheless, the rank ordering based on the MDS approximation is quite close to the target ordering. Some activities with slightly different actual mean durations are tied in their MDS-approximated means and, consequently, are incorrectly tied in Table 3. Otherwise, no activities are out of order. It may seem at first that nothing has been gained by calculating Criticality x Duration because it can be predicted quite well from mean duration itself. Indeed, the r² between Criticality x Mean Duration and mean duration is .85. The approximate values in Table 3 do better, however; the r² between the approximate values and the target values is .95.
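
The r² values reported here are squared correlations between the target values and the MDS-approximation values (equivalently, the proportion of variance accounted for by a simple linear regression of one on the other). A short illustration with made-up vectors:

    # Squared correlation between target and approximated values
    # (the vectors here are illustrative, not the entries of Table 2 or 3).
    import numpy as np

    target = np.array([0.58, 0.42, 0.31, 0.18, 0.07])
    approx = np.array([0.54, 0.46, 0.29, 0.20, 0.05])
    print(np.corrcoef(target, approx)[0, 1] ** 2)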

To calibrate the multidimensional scale values for mean durations, we assumed that the mean and standard deviation of an activity of medium duration were known. To see whether successful approximation depends on using an activity with medium mean duration, we carried out two further simulations with extreme values. In one we used an activity with short mean duration, and in the other we used an activity with long mean duration.

The activity with longest actual duration is listen-to-customer2. We used its actual mean duration of 5280 ms and the mean duration of 0 ms for the null activity to linearly transform the raw multidimensional scale values in Table 5 so as to obtain approximate mean durations. These approximate mean durations, together with the coefficient of variation for listen-to-customer2 (.3, Table 1), were used in a new simulation that was carried out in the same way as before. With the new simulation, the r² between target criticality and approximated criticality was .996 (activities on all critical paths through the network were omitted). The r² between target and approximate Criticality x Duration was .97. Clearly, calibrating with an activity of long mean duration leads to a good approximation.

The activity with the shortest actual mean duration is listen-to-beep. Our MDS procedure incorrectly produced a raw scale value for this activity equal to that for the null activity; in other words, our procedure produced a value of 0 ms for listen-to-beep (see Table 5). The value of 0 ms cannot be used to calibrate the other MDS scale values, so the attempt to use the activity with shortest mean duration fails. (Our MDS values for activities with short duration would be better if we did not handicap the procedure by treating the smallest half of the ranks in Table 4 as missing data.)

We turned to the activity with the second smallest actual mean duration, enter-command. Its actual mean duration of 320 ms and the mean duration of 0 ms for the null activity were used to obtain approximate mean durations by linearly transforming the raw MDS scale values in Table 5. These approximate mean durations, together with the coefficient of variation for enter-command (.5, Table 1), were used in a new simulation. With the new simulation, the r² between target criticality and approximated criticality was .9997 (activities on all critical paths through the network were omitted). The r² between target and approximate Criticality x Duration was .97. Evidently, except for very small durations, the duration of the activities used for calibration is not crucial for obtaining a good approximation. In practice, of course, one would want to know actual durations of several activities.

A second test case. There may be something unusual about the arrangement of the activities in the phone call task illustrated in Figures 4 and 5. We therefore considered a new randomly arranged network with the same 10 different activities to see whether the approximation would still be good.

The activity network for the phone call task, the first test case, is made of parts, with two paths in each part (see Figure 5). To produce a second test case with a different form, an activity network was constructed with half the pairs of activities concurrent and half sequential (see Figure 6). Subject to this constraint, choices of which pairs were concurrent and which were sequential were random, as were choices of order for sequential activities. In the activity network for the first test case, some of the 10 different activities occurred more than once, producing 14 nodes. With 14 nodes, the number of pairs of nodes is odd. To have an even number of pairs of nodes, 13 nodes were used in the second test case.

To assign activity durations to the nodes, each of the 10 activities with different means in the test case was assigned to a node at random. Then 3 of the 10 different activity durations were selected again at random, with replacement, to assign to the remaining three nodes.

In case there was something special about the assignment of coefficients of variation to the activities in the first test case, a new random assignment of the 10 coefficients of variation to the 10 different activities was made. For the activities as listed in Table 1, in order from smallest mean to largest, the new coefficients of variation were .5, .2, .9, .1, .8, .6, 1.0, .4, .3, and .7, respectively. As before, we assumed the mean and standard deviation of the activity listen-to-customer1 were known, with a mean of 1000 ms and, in this case, standard deviation of 600 ms (corresponding to coefficient of variation .6).

The simulation was run twice. The first used the actual activity durations from Table 1 as well as the newly assigned coefficients of variation. This was the target simulation. The second used the MDS approximations for activity durations from Table 5 and the same coefficient of variation throughout, .6. This was the MDS approximation simulation.

Results for criticality are in Table 6. The ordering of activities according to criticalities obtained from the target simulation is almost the same as the ordering according to criticalities from the MDS approximation simulation. A few activities are out of order, but these are all activities with criticality less than .1, not likely to be important in practice. The proportion of variance accounted for in predicting the target criticality values from the MDS approximation simulation is r² = .994.

Results for the product of criticality and duration are in Table 7. For three activities for which target Criticality x Duration is positive (read-screen, system-RT1, listen-to-beep), the value produced by the MDS approximation is zero. These activities are out of order when the activities are ordered by target Criticality x Duration. However, the target value of Criticality x Duration for each of these three activities is less than 100 ms, so the error is not likely to be important in practice. All other activities are in the proper target order.

At the start of this project, we thought the approximation would be useful if it succeeded in arranging the activities in the proper order. Not only does our method produce the correct order for Criticality x Duration, but also the MDS approximation values are generally close to their target values. We note that for one activity, system-RT1 b, the error is considerable. Overall, however, the proportion of variance accounted for in predicting the target product of criticality and duration from the MDS approximation simulation is [r.sup.2] = .97.

As with the first test case, one might object that the success of the MDS approximation depends on knowing the mean and standard deviation of an activity with medium mean duration. As before, we did two further simulations, one calibrated with an activity of long duration and one with an activity of short duration. The procedure was as described earlier for the first test case.

The activity with the longest actual mean duration is the same as before, listen-to-customer2. The MDS approximation simulation was run using MDS approximate means calibrated with linear regression using the actual mean duration of listen-to-customer2 (5280 ms). Its newly assigned coefficient of variation (.7) was used for every activity. The [r.sup.2] between criticalities obtained from the MDS approximation simulation and those from the target simulation was .994. The [r.sup.2] between Criticality x Duration obtained from the MDS approximation simulation and those from the target simulation was .97. Calibration based on an activity of long duration clearly leads to good approximations.

The MDS approximate means were calibrated using the actual mean duration of enter-command (320 ms). Its newly assigned coefficient of variation (.2) was used for every activity, and the MDS approximation simulation was run again. The [r.sup.2] between the target criticalities and those from the MDS approximation simulation was .96. The [r.sup.2] between target Criticality x Duration values and those from the MDS approximation simulation was .91. Clearly, calibrating with an activity with short mean duration leads to good approximations. As with the first test case, the MDS approximation does not require one to know the mean and standard deviation of an activity of medium duration.

Discussion. Why does the approximation work? The answer has two parts. First, the MDS estimates of the means are very good, and second, the coefficients of variation need not be very good.

As for multidimensional scaling, although it may have been surprising when first proposed, it is now well established that the numerical scale values assigned to objects are greatly constrained if the ordering of pairs of the objects according to their numerical differences is known. For the activities in the phone call task (Figure 5), the raw scale values obtained from MDS in Table 5 are excellent predictors of the actual durations in Table 1. The worst case is SRT1, for which the actual duration is 730 ms and the MDS approximation is 146 ms. Despite this worst-case discrepancy, the percentage of variance accounted for is [r.sup.2] = .98 (omitting the two activities with durations that were set a priori -- null and LTC1).
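
This constraint can be illustrated with a small sketch. The code below is our illustration using scikit-learn's nonmetric MDS; the software and settings behind Table 5 are not specified in this section, so the details here are assumptions. It takes only the rank order of the pairwise differences among the Table 1 durations and recovers scale values that correlate strongly with the actual durations.

# Illustration only: recover relative durations from the rank order of pairwise
# duration differences with one-dimensional nonmetric MDS (requires numpy,
# scipy, and scikit-learn).  All 55 ranks are used here; ties receive average
# ranks, as in Table 4.
import numpy as np
from scipy.stats import rankdata
from sklearn.manifold import MDS

# Actual mean durations (ms) from Table 1, plus the null activity at 0 ms.
durations = np.array([0, 100, 320, 340, 360, 730, 1000, 1570, 2000, 4470, 5280],
                     dtype=float)

# Dissimilarity for a pair of activities = rank of the absolute difference in
# their durations; only this ordering enters the scaling.
diff = np.abs(durations[:, None] - durations[None, :])
upper = np.triu_indices_from(diff, k=1)
dissim = np.zeros_like(diff)
dissim[upper] = rankdata(diff[upper])
dissim += dissim.T

mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
          n_init=10, random_state=0)
scale = mds.fit_transform(dissim).ravel()

# Recovered scale values are determined only up to a linear transform (and sign),
# so compare them with the actual durations by correlation.
print(abs(np.corrcoef(scale, durations)[0, 1]))   # should be close to 1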

The reason the coefficients of variation do not matter much is subtler. In the MDS approximation simulation, one coefficient of variation, obtained from an activity of medium duration, was used for all activities. As a consequence, in the MDS approximation simulation the standard deviations are perfectly positively correlated with the means.

In the two target test cases, each activity had a different coefficient of variation. Nonetheless, for the first target test case the correlation between the means and standard deviations was .85 and for the second it was .89. The correlation is not perfect, as it is in the MDS approximation simulation, but it is quite high. The result, in both the test cases and in the approximations, is that the standard deviation increases as the mean increases.

It is not an accident that the standard deviation increases with the mean in the test cases. Two facts are relevant. First, for most human activities, the standard deviation of the duration is less than or equal to the mean. In other words, there is an upper limit of about 1.0 on the coefficient of variation. Second, in practice, human information-processing tasks contain activities that vary considerably in duration -- here by a factor of 50, from 100 to 5280 ms. Suppose a low coefficient of variation, .1, occurs with the longest activity and a high coefficient of variation, 1.0, occurs with the shortest activity. The shortest activity would have mean 100 ms and standard deviation 100 ms. The longest activity would have mean 5280 ms and standard deviation 528 ms. Despite the coefficients of variation acting against it, the standard deviation increases with the mean. For determining critical paths, this seems to be the important aspect of the relation between the means and standard deviations, and the MDS approximation simulation gets it right.

The multidimensional scaling approach to approximating activity durations appears promising for use in network models. In particular, it appears it would work well with a CPM-GOMS analysis. For CPM-GOMS, we note that the activities are broken down into relatively small components. For example, in the phone call task, we considered enter-credit-card-number as a single activity, whereas the CPM-GOMS analysis of Gray et al. (1993) considered the individual keystrokes. How well the MDS approximation works with a more fine-grained breakdown is an open question.

USING ORDER-OF-PROCESSING DIAGRAMS

In this section, we give more information on how OP diagrams can be used to represent choices and contingent sequences, plans for operations that cannot easily be represented by critical path networks. We also show how analytic techniques can be used to derive the expected probability that a path is critical.

Representation: Reading Roadside Messages

To show how OP diagrams can easily be used to represent choices and contingent sequences, we diagram a simple task that drivers increasingly must perform -- namely, reading electronic signs alerting them to changing weather and construction conditions. These are called variable message signs. OP diagrams can be used to represent the processes drivers use to read such signs. The resulting model can assist in designing signs and in selecting the optimal times to display each word in a multiphase sign.

Examples of variable message signs (VMSs) can be found in tunnel sections of interstate highways where the ceiling height is limited. There is often room for only a single line of text, so it is not possible to display multiple words simultaneously, and the sign itself may be only 12 characters wide. Thus a two-word sign would require two phases (similar to the standard rapid serial visual presentation task used in word perception experiments). There is typically some period of time when each of the words in the message is individually displayed, then a blank interval, and then the words in the message are displayed again. Drivers will sometimes (though not always) wait for the blanking interval to end before beginning to read the words in the message. Ideally, the driver should spend as little time as possible reading the message.

An OP diagram can be used to represent processing in this task. Again, each node in the OP diagram represents a state of processing. However, now the activities currently being executed in the state are listed in the top part of the node representing the state. Activities waiting to be executed are listed below a horizontal line through the middle of each state. Because we are going to discuss choices and contingent sequences, we will need labels not only for the processes but also for their durations. First, let [W.sub.i] represent the time that word [w.sub.i] is displayed on the VMS. This time is controlled by the traffic engineer and can be made to vary arbitrarily in length. It could be represented either as a constant or as a random variable, although its representation as a random variable better reflects the reality of electronic VMSs. Second, let [E.sub.i] represent the time that it takes a driver to encode word [w.sub.i]. Let [e.sub.i] represent the encoding of word [w.sub.i]. Third, let [C.sub.i] represent the time that it takes the driver to comprehend a word once it is encoded. Let [c.sub.i] represent the comprehending of word [w.sub.i]. Finally, let [D.sub.i] represent the time that it takes a driver to decide that it is not worth continuing to read the second word in a message if he or she has already missed the first word. Let [d.sub.i] represent this decision process.

A contingent sequence. We can now talk about one of the contingent sequences in the task. In particular, if the presentation time for a word ([W.sub.i]) is less than the encoding time ([E.sub.i]), then we assume the word is not encoded (essentially, the word is temporally masked by the next word or blanking interval) and therefore is never understood and entered into short-term memory. However, if presentation time is greater than encoding time, then we assume that the word is encoded. This creates a contingent sequence because the encoding ([e.sub.2]) of the second word begins after presentation ([w.sub.1]) of the first word ends only if the encoding ([e.sub.1]) of the first word has itself ended. However, the encoding of the second word is never instantiated if the presentation of the first word ends before the encoding of the first word. Thus we cannot simply decide to start the encoding of the second word as soon as the (undifferentiated) longest of a set of processes (e.g., [w.sub.1] and [e.sub.1]) has ended. Instead, we need to know the order of the completion times of the set of processes.

This contingent sequence can easily be represented as follows in the OP diagram. At the start of processing, the first word ([w.sub.1]) is displayed and the encoding ([e.sub.1]) of this word begins (State 1 in Figure 7). One of two things can happen to end processing in this state. To begin, assume that the first word is encoded before it is removed from the display. Then a transition is made from State 1 to State 2. The arc joining the two states is labeled with activity [e.sub.1] to represent the fact that the encoding activity completes when the transition is made from State 1 to State 2. In State 2, an attempt is made to understand ([c.sub.1]) the first word. The first word continues to be displayed, and so both the word display process, [w.sub.1], and the comprehension activity, [c.sub.1], are represented in State 2.

We just traced an initial transition from State 1 to State 2. Now let us follow a transition from State 1 to State 3. Specifically, suppose the first word is removed from the variable message sign before it is encoded. The arc joining the two states is now labeled with two activities because if the first word is no longer displayed on the variable message sign, the encoding process must also stop (the process [e.sub.1] is put in parentheses because it ends prematurely). In State 3, the second word ([w.sub.2]) is now displayed and encoding ([e.sub.2]) of this word begins. Note that if the transition from State 1 to State 3 is made, then the first word will not be recalled, given that it was neither encoded nor, by implication, understood. Additionally, in State 3 the driver must decide whether to continue reading the message, given that he or she has already missed the first word. Thus we place [d.sub.1] in the current set as well.

A choice. We can also talk now about the representation of choices. In particular, consider State 3 in the OP diagram. The driver has failed to encode the first word and therefore may or may not decide to encode the second word. If the decision not to encode the second word completes before the second word has been encoded, then we imagine that the driver does not continue reading the message. This is shown in Path 8 of the OP diagram by placing [w.sub.2] and [e.sub.2] in parentheses to indicate that the driver stops attending to the display of the second word and stops encoding it. However, if the second word is encoded before the decision to stop processing it has been made, then we assume that the driver continues in his or her attempt to understand the word.

Another contingent sequence. Finally, we can describe the second contingent sequence in the task. Consider State 5. A transition is made from State 2 to State 5. In State 2, the first word is being displayed and the driver is attempting to understand the meaning of that word. When the transition is made from State 2 to State 5, the second word in the message is displayed. We need to determine whether the driver can begin encoding the second word at the same time that he or she is trying to understand the first word. We look to studies of reading to determine what drivers might do. When reading, individuals may shift their attention ahead of the word they are currently attempting to comprehend, but there is no indication that they actually can comprehend two words at the same time (Reichle, Pollatsek, Fisher, & Rayner, 1998). We choose to reflect this in the OP diagram by assuming that the encoding of the second word does not begin until comprehension of the first word has ended.

Calculations

This completes the discussion of the use of the OP diagram to represent choices and contingent sequences. We can now use the diagram to predict the probability that an individual will read and understand both words in our hypothetical message when the word in each phase of the message is presented for some known duration. Specifically, the goal of the modeling effort is to identify mean times (expected values), E([W.sub.i]), i = 1, 2, such that the sum E([W.sub.1]) + E([W.sub.2]) is a minimum, subject to the constraint that the probability that a driver understands both words in the message is at or above some given criterion.

We can now easily find the probability that both words in the message are understood, given that we know the distributions of the durations of the various processes. To begin, turn to Table 8 for a summary of the number and identity of the words understood along each path. For Path 7 no words are understood. By comparison, for Path 1 both words are understood, and for Path 3 one word is understood -- in this case, the first word. As is clear from the table, the probability that both words are understood is simply the probability that either Path 1 or Path 2 is taken.

For this situation, estimates of the activity durations are available in the literature. (If they were not, one would consider multidimensional scaling.) A rough estimate of the time to encode a word is the time that the eye remains on a word in ordinary reading, about 200 to 250 ms, and a rough estimate of the time to comprehend a word is the time for lexical access, about 100 to 300 ms (time estimates from Reichle et al., 1998). These estimates are good enough for our purposes here. In an application requiring more accurate estimates, one would take into account factors such as word frequency and word length (see, e.g., Rayner & Duffy, 1986; and Rayner & Fischer, 1996).

To proceed, we need to assume something about the distributions of the word display, encoding, comprehension, and decision times. Using gamma distributions to model activity durations during ordinary reading leads to good fits to data (Reichle et al., 1998). This indicates that it is reasonable to use gamma distributions to model reading of VMSs. The exponential distribution is the simplest form of a gamma distribution, and for the sake of a simple illustration, we will assume that activity durations are independent, exponentially distributed random variables. (We note that a constant or a gamma distribution with low variance would be more realistic for the display time, but this would complicate our calculations here.) The form of the activity duration distributions determines whether we can use straightforward analytic techniques or, instead, use simulations. With exponential distributions it is feasible to use equations, rather than spreadsheet simulations, to predict performance.
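
To spell out the sense in which the exponential is the simplest gamma distribution: with the parameterization used in the Appendix (mean alpha x beta, variance alpha x beta squared), setting the shape parameter to 1 reduces the gamma density to a one-parameter exponential density,

f(t; \alpha, \beta) = \frac{t^{\alpha - 1} e^{-t/\beta}}{\Gamma(\alpha)\, \beta^{\alpha}}, \quad t \ge 0,
\qquad f(t; 1, \beta) = \frac{1}{\beta}\, e^{-t/\beta} = \lambda e^{-\lambda t}, \quad \lambda = 1/\beta.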

Let us begin with the probability that Path 1 is taken. This can be written as the joint probability that States 1, 2, 4, 7, 11, and 12 are entered, that is, P(Path 1) = P(1, 2, 4, 7, 11, 12). Using elementary laws of probability, this can be rewritten as

P(\text{Path 1}) = P(12 \mid 11, 7, 4, 2, 1)\, P(11 \mid 7, 4, 2, 1)\, P(7 \mid 4, 2, 1)\, P(4 \mid 2, 1)\, P(2 \mid 1).

Now, given the memoryless property of exponentially distributed random variables, the time that is spent in states prior to the current one does not matter (see Fisher & Goldstein, 1983). Thus we can rewrite the foregoing equation more simply as follows:

P(\text{Path 1}) = P(12 \mid 11)\, P(11 \mid 7)\, P(7 \mid 4)\, P(4 \mid 2)\, P(2 \mid 1).

We can now write a closed-form expression for the probability that states along the path are in sequence on any particular trial. Let [lambda]([w.sub.i]) represent the rate parameter (the reciprocal of the mean) of the exponential distribution that governs the behavior of the word display time for word [w.sub.i], for instance,

P(W_i < w) = \int_0^w \lambda(w_i)\, e^{-\lambda(w_i)\, t}\, dt.

Define [lambda]([c.sub.i]), [lambda]([e.sub.i]), and [lambda]([d.sub.i]) similarly. Then, the probability that State 2 is entered given that State 1 is current, P(2 | 1), is simply the probability that the first word is encoded before the display of that word is completed (i.e., [e.sub.1] completes before [w.sub.1]). For exponential random variables, it is straightforward to show that this is equal to the rate parameter of the process that completes first over the sum of the rate parameters of the processes in the current set, in this case, [lambda]([e.sub.1])/[[lambda]([e.sub.1]) + [lambda]([w.sub.1])]. Completing the computation of the conditional probabilities yields for Path 1 the following probability:

P(\text{Path 1}) = \frac{\lambda(e_1)}{\lambda(w_1) + \lambda(e_1)} \cdot \frac{\lambda(c_1)}{\lambda(w_1) + \lambda(c_1)} \cdot \frac{\lambda(e_2)}{\lambda(w_2) + \lambda(e_2)} \cdot \frac{\lambda(c_2)}{\lambda(w_2) + \lambda(c_2)}.
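
Each factor above is an instance of a standard result for a race between independent exponential random variables, which is the "straightforward to show" step: for independent E and W with rates \lambda_E and \lambda_W,

P(E < W) = \int_0^\infty \lambda_E e^{-\lambda_E t}\, P(W > t)\, dt
         = \int_0^\infty \lambda_E e^{-(\lambda_E + \lambda_W) t}\, dt
         = \frac{\lambda_E}{\lambda_E + \lambda_W}.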

The probability that Path 2 is taken can be written similarly. Thus we can easily derive the probability that both words in the message are understood.

We now want to find the [lambda]([w.sub.1]) and [lambda]([w.sub.2]) that minimize the expected total display time, subject to the constraint that the probability that two words are understood is above some criterion value. The expected total display time is equal to the sum of the reciprocals of the rate parameters of the distributions of the two word display times,

\frac{1}{\lambda(w_1)} + \frac{1}{\lambda(w_2)}.

The probability that two words are understood is equal to the sum P(Path 1) + P(Path 2). One way to solve this problem is to iterate through the space of possible values for [lambda]([w.sub.1]) and [lambda]([w.sub.2]), choosing suitably small increments. At each value we compute both the expected total display time and the probability that two words are understood. If the probability that two words are understood is below the criterion value, then we continue with the next set of values of the parameters. If the probability that the two words are understood is above criterion, then we compute the expected total display time and compare it with the previous expected total display time in which the probability that the two words were understood was above criterion. If the new expected total display time is less than the stored expected total display time, then we save the values of the new rate parameters and continue with the iteration. Otherwise, we leave the stored values as is and continue with the iteration.
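
A minimal sketch of this grid search is given below (Python, our illustration). The encoding and comprehension rates are assumed values taken from the rough 200-250 ms and 100-300 ms estimates quoted earlier, the criterion of .5 is an arbitrary choice, and, for brevity, P(Path 1) from the expression above stands in for the full constraint P(Path 1) + P(Path 2).

# Sketch of the iterative search: minimize total expected display time subject
# to a comprehension-probability constraint.  Assumed values: encoding mean
# 225 ms, comprehension mean 200 ms, criterion .5; P(Path 1) stands in for
# P(Path 1) + P(Path 2).
import itertools

LAM_E = 1 / 225.0   # encoding rate (per ms), assumed
LAM_C = 1 / 200.0   # comprehension rate (per ms), assumed

def p_path1(lam_w1, lam_w2):
    # Closed-form probability of Path 1 from the expression above.
    return (LAM_E / (lam_w1 + LAM_E)) * (LAM_C / (lam_w1 + LAM_C)) \
         * (LAM_E / (lam_w2 + LAM_E)) * (LAM_C / (lam_w2 + LAM_C))

criterion = 0.5
candidate_means = range(200, 5001, 25)        # candidate mean display times (ms)

best = None
for m1, m2 in itertools.product(candidate_means, repeat=2):
    if p_path1(1 / m1, 1 / m2) < criterion:
        continue                              # constraint not met; keep iterating
    total = m1 + m2                           # expected total display time (ms)
    if best is None or total < best[0]:
        best = (total, m1, m2)                # store the new best parameter values

print(best)   # (minimum expected total display time, mean display time for each word)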

The advantage of this procedure is its computational simplicity. The procedure does not guarantee that one will find a global minimum, as it could easily settle on a local minimum. However, in practice, when the number of unknown parameters is relatively small, one can easily explore the space in very fine detail and remain reasonably confident that the minimum one finds is close to the global minimum. Alternatively, when the number of parameters is much larger or the range of values is quite broad, one can use a method that remains computationally straightforward: the downhill simplex method developed by Nelder and Mead (1965). Unlike many optimization methods, which require evaluation or estimation of a derivative, the downhill simplex method requires only that the function be evaluated at different points. The method is discussed in more recent texts as well (e.g., Press, Flannery, Teukolsky, & Vetterling, 1988, pp. 305-309).
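
As an illustration of that alternative, an off-the-shelf downhill simplex routine can be applied to the same problem by folding the constraint into a penalty term. The sketch below uses SciPy's Nelder-Mead implementation with the same assumed rates and criterion as the grid-search sketch; the penalty weight and starting point are arbitrary choices of ours, not part of the original method.

# Sketch: downhill simplex (Nelder-Mead) on a penalized version of the display
# time problem.  Assumed rates and criterion as in the previous sketch.
import numpy as np
from scipy.optimize import minimize

LAM_E, LAM_C = 1 / 225.0, 1 / 200.0

def p_path1(lam_w1, lam_w2):
    return (LAM_E / (lam_w1 + LAM_E)) * (LAM_C / (lam_w1 + LAM_C)) \
         * (LAM_E / (lam_w2 + LAM_E)) * (LAM_C / (lam_w2 + LAM_C))

def objective(means, criterion=0.5, penalty=1e6):
    m1, m2 = np.maximum(np.abs(means), 1.0)   # keep display means positive (ms)
    shortfall = max(0.0, criterion - p_path1(1 / m1, 1 / m2))
    return m1 + m2 + penalty * shortfall      # total time plus constraint penalty

result = minimize(objective, x0=[1000.0, 1000.0], method="Nelder-Mead")
print(np.abs(result.x))                       # approximate optimal mean display times (ms)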

FINAL COMMENTS

Before concluding the description of a technique, it is good practice to describe the situations in which it will work well and those in which it will not. However, our simulations and calculations are not informative about this, so we must leave it for future work. We conclude instead with a remark about unobservable activities and a summary.

The Problem of Unobservable Activities

At many points in conducting a cognitive task analysis, the analyst is confronted with the problem of inferring what mental activities the operator is carrying out and how the activities are organized. The analyst can observe and interview the operator, study the literature, and use common sense. Often, however, the problem is difficult because the mental activities are covert. For the analyst confronting this problem, two models mentioned earlier, EPIC and GOMS, may be useful. We now mention another, general latent network theory (Schweickert, 1978; Schweickert, Fisher, & Goldstein, 1992).

A detailed description of general latent network theory is beyond the scope of this paper. Briefly, to use it, the investigator manipulates experimental factors such as the number of items displayed in a visual search task. By observing the resulting changes in response time, an activity network representing the task can be drawn (or can be shown to be impossible). From the activity network, an OP diagram can be drawn, and it, or simulations, can be used to find the best-fitting estimates of activity durations. If the task is important and a detailed analysis of the cognitive activities is needed, the method can be used to produce a model in a form compatible with standard forms of task analysis, such as hierarchical task analysis, and with models such as EPIC and CPM-GOMS.

Summary

Mathematical and computer models have long been used throughout engineering to simulate the behavior of complex systems and, thereby, to design systems that perform optimally. However, models have been used only infrequently and incompletely when a human operator is in the loop. The aim of this paper is to facilitate model construction in two ways. The first is a broad attempt to make more use of the information given in task analyses. In the future, one may be able to construct a good working model of a human's role in a task from a general description of the task together with general principles. At this time, the complexity of human behavior defies a priori predictions, and the systematic information provided by a trained task analyst is priceless. We propose putting this information to use in models for making approximations.

The second thrust of this paper is to suggest a specific technique for surmounting a hurdle in making models -- namely, estimating the durations of activities. The advantage of the multidimensional scaling approach we propose is that the judgment the expert is called upon to make is less demanding than the usual request to estimate the absolute duration of the activities. Instead, the expert judges the relative differences in durations between pairs of activities. To give a concrete example, the expert might be asked to compare the difference in duration between (a) removing an ink cartridge from a printer and (b) scrolling to the bottom of a page with the difference in duration between (c) pressing the enter key and (d) clicking on an icon. The same effort in observing and interviewing operators, searching the literature, and so on would be applied, but the questions to be answered are easier. Given good judgments about the relative differences in duration, we have provided a proof in principle that the multidimensional scaling algorithm can provide satisfactory estimates of the absolute durations. The primary benefit would be an increase in the efficient use of expertise. There may be a secondary benefit in that experts participating in the project may have more confidence in their rankings than in their absolute estimates and, hence, may have more confidence in the final product incorporating their judgments.

APPENDIX

Simulations in Excel

A tutorial on our simulations is available on our Web site, http://www.psych.Purdue.edu/Research/CognitiveTaskAnalysisNetworks/call.htm.

In the sample call there are 14 activities. Using the random number generation procedure from the data analysis menu in Excel, 10 000 trials were generated with 14 uniformly distributed random numbers on each trial. To avoid rounding errors introduced by values of exactly 0 and 1, the uniform random variables were chosen from the interval (.0001, .9999).

To obtain samples from a gamma distribution, we used the fact that if X is uniformly distributed between 0 and 1, then Y = GAMMAINV(X, alpha, beta) has a gamma distribution with mean alpha x beta and variance alpha x [beta.sup.2]. For each of the 14 activities, a sample of 10 000 observations was obtained from a gamma distribution with the appropriate parameters from Table 1.
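
The same step can be written, for instance, in Python (our sketch, not part of the original spreadsheet): uniform draws are pushed through the inverse gamma cumulative distribution function, which is what GAMMAINV computes.

# Sketch of the sampling step: uniform draws through the inverse gamma CDF,
# mirroring Excel's GAMMAINV(X, alpha, beta).  Parameters here are those of
# listen-to-beep in Table 1 (mean 100 ms, CV .1, so alpha = 100, beta = 1).
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
alpha, beta = 100.0, 1.0

u = rng.uniform(0.0001, 0.9999, size=10_000)   # avoid exactly 0 and 1
samples = gamma.ppf(u, a=alpha, scale=beta)    # mean alpha*beta, variance alpha*beta**2

print(samples.mean(), samples.var())           # both should be near 100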

For each trial, the critical path was determined, and the total duration of all the activities on that path was calculated. For each activity, an expression with the logical IF operator was used to set an indicator to 1 if the activity was on the critical path for that trial and to 0 otherwise.

For example, the task begins with system-RT1 (SRT1), followed by read-screen a (RSa). These are in parallel with listen-to-beep (LTB). In this activity network, activity SRT1 will be on the critical path on a given trial if the duration of SRT1 plus the duration of RSa is greater than or equal to the duration of LTB on that particular trial. The durations were chosen at random for each trial, as described previously. In our Excel program, on the first trial the randomly selected duration of SRT1 was in cell O21, that of RSa was in cell Q21, and that of LTB was in cell P21. An indicator of whether SRT1 was on the critical path on Trial 1 was calculated in cell AC21, with the formula =IF($O21 + $Q21 >= $P21, 1, 0). This formula results in a 1 if SRT1 is on the critical path on Trial 1 and in a 0 if not. This calculation was done for all of the 10 000 trials. The average value of this indicator over all the 10 000 trials is the criticality of SRT1. Analogous calculations were done for the other activities. The mean completion time for the task was estimated as the average of the completion times for each trial.
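
A Python version of the same indicator calculation (again our sketch; the alpha and beta parameters are those for system-RT1, read-screen, and listen-to-beep in Table 1) looks like this:

# Sketch of the criticality indicator for system-RT1: on each trial, SRT1 is on
# the critical path when duration(SRT1) + duration(RSa) >= duration(LTB).
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
n_trials = 10_000

def draw(alpha, beta):
    u = rng.uniform(0.0001, 0.9999, size=n_trials)
    return gamma.ppf(u, a=alpha, scale=beta)

srt1 = draw(6.25, 116.8)      # system-RT1: mean 730 ms (Table 1)
rsa = draw(1.23, 275.4)       # read-screen a: mean 340 ms
ltb = draw(100.0, 1.0)        # listen-to-beep: mean 100 ms

on_critical_path = srt1 + rsa >= ltb     # the IF(...) indicator, one value per trial
print(on_critical_path.mean())           # criticality of SRT1, close to 1 (Table 2)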

For larger networks, the formulas described here would be tedious. More efficient algorithms for calculating criticality and the duration of the longest path can be found in Elmaghraby (1977), Moder and Phillips (1970), and Wiest and Levy (1977).

[FIGURE 4 OMITTED]
TABLE 1

Mean, Variance (V), and Coefficient of Variation (CV) for Activities in
Test Case

Activity Name                               V = (CV x    Beta =    Alpha =
& Abbreviation                Mean    CV    Mean)^2      V/Mean    Mean/Beta

Listen-to-beep (LTB)           100    .1        100         1.0      100.00
Enter-command (EC)             320    .5      25600        80.0        4.00
Read-screen (RS)               340    .9      93636       275.4        1.23
Thank-customer (TC)            360    .2       5184        14.4       25.00
System-RT1 (SRT1)              730    .4      85264       116.8        6.25
Listen-to-customer1 (LTC1)    1000    .8     640000       640.0        1.56
Greet-customer (GC)           1570   1.0    2464900      1570.0        1.00
System-RT2 (SRT2)             2000    .7    1960000       980.0        2.04
Enter-credit-card-no. (ECCN)  4470    .6    7193124      1609.2        2.78
Listen-to-customer2 (LTC2)    5280    .3    2509056       475.2       11.11

Note: Means (ms) are from Gray et al. (1993): listen-to-customer1 from
their Figure 6 and other means from their Figure 5. For this
demonstration, coefficients of variation were randomly assigned to
activities.

TABLE 2

Activities Ordered by Criticality in Simulations of Sample Phone Call

                                   Criticality

                         Test Case     Simulation Using
Name                    Simulation    MDS Approximations

Listen-to-beep             .000              .000
Thank-customer             .001              .003
Enter-command a            .424              .462
Enter-command b            .424              .462
Enter-credit-card-no.      .424              .462
Listen-to-customer2        .576              .538
Enter-command d            .999              .997
Read-screen b              .999              .997
System-RT2                 .999              .997
System-RT1                1.000             1.000
Read-screen a             1.000             1.000

Note: Activities on every path through the network are not included.
Although system-RT1 and read-screen a have criticality 1, they are not
on every path through the network.

TABLE 3

Activities Ordered by Criticality x Duration in Sample Phone Call

                               Criticality x Duration
                          Test Case       Simulation Using
Name                   Simulation (ms)  MDS Estimations (ms)

Listen-to-beep                 0                   0
Thank-customer                 0                   0
Enter-command a              136                  61
Enter-command b              136                  61
Enter-command c              320                 131
Enter-command d              320                 131
Read-screen a                340                 131
Read-screen b                340                 131
System-RT1                   730                 146
Listen-to-customer1         1000                1000
Greet-customer              1570                1917
Enter-credit-card-no.       1893                1944
System-RT2                  1998                2110
Listen-to-customer2         3043                2719

TABLE 4

Rank Ordering of Differences in Durations between Pairs of Activities

NULL  LTB  EC  RS    TC    SRT1  LTC1   GC  SRT2   ECCN  LTC2

NULL   4   9   10    11     21    25.5  32   37     49    55
LTB        5    6     7     17    24    31   36     48    54
EC              1.5   3     14    20    29   35     46    53
RS                    1.5   13    19    28   34     45    52
TC                          12    18    27   33     44    51
SRT1                               8    23   30     43    50
LTC1                                    16   25.5   41    47
GC                                           15     39    42
SRT2                                                38    40
ECCN                                                      22
LTC2

Note: With 11 activities, 55 pairs are possible. The largest difference
in duration is ranked 55. To see whether ranks of short activities are
needed, Ranks 1 through 27 were treated as missing in the
multidimensional scaling analyses.

TABLE 5

Activity Durations Approximated with Multidimensional Scaling

                           Raw           Transformed
Activity Name          Scale Value  MDS Approximation (ms)

Null                     -0.7849             0.0000
Listen-to-beep           -0.7849             0.0000
Enter-command            -0.7086           131.0997
Read-screen              -0.7086           131.0997
Thank-customer           -0.7081           131.9588
System-RT1               -0.7001           145.7045
Listen-to-customer1      -0.2029          1000.0000
Greet-customer            0.3306          1916.6667
System-RT2                0.4464          2115.6357
Enter-credit-card-no.     1.6626          4205.3265
Listen-to-customer2       2.1583          5057.0447

TABLE 6

Activities Ordered by Criticality in Randomly Constructed Test PERT
Network

                                 Criticality

                       Test Case    Simulation Using
Name                   Simulation  MDS Approximations

Thank-customer a          .000            .000
Enter-command             .000            .000
Listen-to-customer-1a     .023            .020
Thank-customer b          .047            .090
System-RT1 a              .055            .002
Read-screen               .081            .071
Enter-credit-card-no.     .266            .236
System-RT2                .731            .759
Listen-to-customer2       .734            .764
Listen-to-beep            .872            .839
System-RT1 b              .872            .839
Listen-to-customer-1b     .922            .978
Greet-customer            .977            .980

TABLE 7

Activities Ordered by Criticality x Duration in Randomly Constructed
Test PERT Network

                               Criticality x Duration

                          Test Case       Simulation Using
Name                   Simulation (ms)  MDS Estimations (ms)

Thank-customer a               0                   0
Enter-command                  0                   0
Thank-customer b              17                  12
Listen-to-customer-1a         23                  20
Read-screen                   28                   9
System-RT1 a                  40                   0
Listen-to-beep                87                   0
System-RT1 b                 637                 122
Listen-to-customer-1b        922                 978
Enter-credit-card-no.       1189                 990
System-RT2                  1463                1605
Greet-customer              1533                1878
Listen-to-customer2         3873                3866

TABLE 8

Number and Identity of Words Recalled along Each Path in OP Diagram for
Variable Message Sign

          Number of         Identity of
Path    Words Recalled     Words Recalled

1a, 1b        2         [w.sub.1], [w.sub.2]
2a, 2b        2         [w.sub.1], [w.sub.2]
3             1              [w.sub.1]
4             1              [w.sub.1]
5             1              [w.sub.2]
6             1              [w.sub.2]
7             0
8             0


ACKNOWLEDGMENTS

We thank Wayne D. Gray and two anonymous reviewers for very helpful comments.

Date received: October 5, 2001

Date accepted: September 25, 2002

REFERENCES

Annett, J., & Duncan, K. D. (1967). Task analysis and training design. Occupational Psychology, 41, 211-221.

Ashby, F. G., & Townsend, J. T. (1980). Decomposing the reaction time distribution: Pure insertion and selective influence revisited. Journal of Mathematical Psychology, 21, 93-123.

Atwood, M. E., Gray, W. D., & John, B. E. (1996). Project Ernestine: Analytic and empirical methods applied to a real world CHI problem. In M. Rudisill, C. Lewis, P. B. Polson, & T. D. McKay (Eds.), Human-computer interface design: Success stories, emerging methods and real-world context (pp. 122-134). San Mateo, CA: Morgan Kaufmann.

Baber, C., & Mellor, B. (2001). Using critical path analysis to model multimodal human-computer interaction. International Journal of Human-Computer Studies, 54, 613-636.

Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum.

Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Introduction to cognitive task analysis. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 3-23). Mahwah, NJ: Erlbaum.

Cox, T. F., & Cox, M. A. A. (1994). Multidimensional scaling. London: Chapman and Hall.

Diaper, D. (1989). Task analysis for human computer interaction. Chichester: Wiley.

Elmaghraby, S. E. (1964). An algebra for the analysis of generalized activity networks. Management Science, 10, 494-514.

Elmaghraby, S. E. (1966). The design of production systems. New York: Reinhold.

Elmaghraby, S. E. (1977). Activity networks: Project planning and control by network models. New York: Wiley.

Fisher, D. L., & Goldstein, W. M. (1983). Stochastic PERT networks as models of cognition: Derivation of the mean, variance, and distribution of reaction time using order-of-processing (OP) diagrams. Journal of Mathematical Psychology, 27, 121-151.

Gilbreth, F. B., & Gilbreth, L. M. (1921). Process charts. Transactions of the American Society of Mechanical Engineers, 43, 1029-1050.

Gray, W. D., & Boehm-Davis, D. A. (2000). Milliseconds matter: An introduction to microstrategies and to their use in describing and predicting interactive behavior. Journal of Experimental Psychology: Applied, 6, 322-335.

Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. Human-Computer Interaction, 8, 237-309.

John, B. E. (1990). Extensions of GOMS analyses to expert performance requiring perception of dynamic visual and auditory information. In Proceedings of the CHI '90 Conference on Human Factors in Computing Systems (pp. 107-115). New York: Association for Computing Machinery.

John, B. E. (1996). TYPIST: A theory of performance in skilled typing. Human-Computer Interaction, 11, 321-355.

John, B. E., & Gray, W. D. (1992, May). GOMS analysis for parallel activities. Tutorial presented at the CHI '92 Conference on Human Factors in Computing Systems, Monterey, CA.

Kieras, D. E. (1988). Towards a practical GOMS model methodology for human interface design. In M. Helander (Ed.), The handbook of human-computer interaction (pp. 135-158). Amsterdam: North-Holland.

Kieras, D. E. (1996). GOMS modeling of user interfaces using NGOMSL. In M. Helander & T. Landauer (Eds.), The handbook of human-computer interaction (2nd ed., pp. 142-161). Amsterdam: North-Holland.

Kieras, D. E., & Meyer, D. E. (2000). The role of cognitive task analysis in the application of predictive models of human performance. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 237-260). Mahwah, NJ: Erlbaum.

Kieras, D. E., Wood, S. D., & Meyer, D. E. (1997). Predictive engineering models based on the EPIC architecture for a multimodal high-performance human-computer interaction task. ACM Transactions on Computer-Human Interaction, 4, 230-275.

Kirwan, B., & Ainsworth, L. K. (Eds.). (1992). A guide to task analysis. London: Taylor & Francis.

Kohfeld, D. L., Santee, J. L., & Wallace, N. D. (1981). Loudness and reaction time: II. Identification of detection components at different intensities and frequencies. Perception and Psychophysics, 29, 550-562.

Kruskal, J. B. (1964a). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29, 1-27.

Kruskal, J. B. (1964b). Nonmetric multidimensional scaling: A numerical method. Psychometrika, 29, 115-129.

Luce, R. D. (1986). Response times: Their role in inferring elementary mental organization. New York: Oxford University Press.

Moder, J. J., & Phillips, C. R. (1970). Project management with CPM and PERT (2nd ed.). New York: Van Nostrand Reinhold.

Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7, 308-313.

Olson, J. R., & Olson, G. M. (1990). The growth of cognitive modeling in human-computer interaction since GOMS. Human-Computer Interaction, 5, 221-265.

Press, W. H., Flannery, B. P., Teukolsky, S. A., & Vetterling, W. T. (1988). Numerical recipes in C: The art of scientific computing. Cambridge, UK: Cambridge University Press.

Pritsker, A. A. B. (1979). Modeling and analysis using Q-GERT networks (2nd ed.). New York: Wiley.

Rayner, K., & Duffy, S. A. (1986). Lexical complexity and fixation times in reading: Effects of word frequency, verb complexity, and lexical ambiguity. Memory and Cognition, 14, 191-201.

Rayner, K., & Fischer, M. H. (1996). Mindless reading revisited: Eye movements during reading and scanning are different. Perception and Psychophysics, 58, 734-747.

Reichle, E. D., Pollatsek, A., Fisher, D. L., & Rayner, K. (1998). Toward a model of eye movement control in reading. Psychological Review, 105, 125-157.

Schweickert, R. (1978). A critical path generalization of the additive factor method: Analysis of a Stroop task. Journal of Mathematical Psychology, 18, 105-139.

Schweickert, R., Fisher, D., & Goldstein, W. (1992). General latent network theory: Structural and quantitative analysis of networks of cognitive processes. (Tech. Report 92-1). West Lafayette, IN: Purdue University Mathematical Psychology Program.

Shepard, R. N. (1962a). The analysis of proximities: Multidimensional scaling with an unknown distance function I. Psychometrika, 27, 125-140.

Shepard, R. N. (1962b). The analysis of proximities: Multidimensional scaling with an unknown distance function II. Psychometrika, 27, 219-246.

Shepard, R. N. (1966). Metric structures in ordinal data. Journal of Mathematical Psychology, 3, 287-315.

Shepherd, A. (2001). Hierarchical task analysis. London: Taylor & Francis.

Wiest, J. D., & Levy, F. K. (1977). A management guide to PERT/CPM (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.

Williams, K. E. (2000). An automated aid for modeling human-computer interaction. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 165-180). Mahwah, NJ: Erlbaum.

Richard Schweickert is a professor in the Department of Psychology at Purdue University. He received his Ph.D. in psychology in 1979 at the University of Michigan.

Donald L. Fisher is a professor in the Department of Industrial Engineering and Operations Research at the University of Massachusetts. He received his Ph.D. in psychology at the University of Michigan in 1983.

Robert W. Proctor is a professor in the Department of Psychological Sciences at Purdue University. He received his Ph.D. in psychology in 1975 at the University of Texas at Arlington.

Address correspondence to Richard Schweickert, Department of Psychological Sciences, Purdue University, W. Lafayette, IN 47907-1364; swike@psych.purdue.edu. HUMAN FACTORS, Vol. 45, No. 1, Spring 2003, pp. 77-103. Copyright © 2003, Human Factors and Ergonomics Society. All rights reserved.