The impact of mental fatigue on exploration in a complex computer task: rigidity and loss of systematic strategies.


In modern workplaces, people commonly have to deal with increasingly complex problems, such as managing a big company, finding the fault in a power plant, and learning a new computer program without specific instructions. Although in practice many of these tasks can be executed with routines or through the use of standardized procedures, there are also many complex tasks for which routines or procedures are not directly available or that need ad hoc actions that go beyond formal procedures. Dealing with such nonroutine tasks often requires exploration to gain insight into the task and to find out which actions can accomplish task goals (Funke, 1991). Exploration can be done in a systematic or unsystematic way (Dorner, 1980). Exploring systematically means that people behave in a goal-directed way and reflect on action feedback and on their own behavior (Trudel & Payne, 1995; van der Linden, Sonnentag, Frese, & van Dyck, 2001). In contrast, when exploring unsystematically, people often behave in an unstructured way and do not seem to follow a coherent path toward goal attainment (Dorner, 1980). Instead, actions are executed impulsively or are guided by external stimuli that tend to capture attention (see Hollnagel, 1993).

An important factor in whether people explore in a systematic or unsystematic way is their level of task engagement. Task engagement refers to the level of cognitive resources (e.g., attention) allocated to task-relevant processes; these processes involve problem-solving steps such as goal setting, hypothesis formation, planning, and feedback evaluation (Dorner, 1980; Trudel & Payne, 1995). When many resources are allocated to problem solving, exploration is likely to be goal directed and systematic. In contrast, when task engagement is low, this tends to manifest itself in the use of unsystematic exploration strategies, which may generally be ineffective or inefficient (Green & Gilhooly, 1990). Although many factors can influence the level of task engagement, in the current study we focus on one of these factors--namely, mental fatigue. Specifically, we measure exploration behavior and assess how systematic versus unsystematic behavior changes under fatigue. As far as we know, no other study has explicitly investigated exploration under fatigue. Nevertheless, knowing how exploratory behavior may change under fatigue is important because exploration is a substantial part of problem solving in complex tasks (Dorner, 1980; Hollnagel, 1993; Shrager & Klahr, 1986).

Mental Fatigue and Exploration Behavior

Mental fatigue can be defined as a psychophysiological state resulting from sustained performance on cognitively demanding tasks and coinciding with changes in motivation, information processing, and mood (e.g., Meijman, 2000). One of the main characteristics of mental fatigue is an increased resistance against further effort and a tendency to reduce task engagement (Holding, 1983; Meijman, 2000; Sanders, 1998). Thus, if possible, fatigued people will stop working on effortful tasks and postpone the work until they are no longer fatigued. However, even in situations where they cannot stop working, fatigued people still tend to reduce task engagement (often unintentionally; Meijman, 2000; Sanders, 1998). Such reduced task engagement will not manifest itself as a complete withdrawal from the task or as a complete breakdown of performance. More likely, periods of adequate performance will more frequently be alternated with lapses of task engagement under fatigue (Sanders, 1998). During such lapses, behavior may be directed not by clear task goals but by more automatic cognitive processes (see Monsell & Driver, 2000). With regard to exploration behavior, it can be expected that during those lapses, people will not show thoughtful, systematic exploration.

Before being able to study exploration under fatigue, it is necessary to establish the behavioral manifestations of systematic and unsystematic exploration. Therefore, in the following sections we describe three major types of exploration behavior we assess in the current study. With these three types of exploration, we do not intend to exhaustively cover all possible forms of exploration behavior; rather, we want to capture broad patterns of behavior that people show when working on complex, nonroutine tasks (Hollnagel, 1993; Trudel & Payne, 1995). We labeled these types of exploration behavior systematic exploration, unsystematic trial and error, and rigid behavior.

Systematic exploration implies that a person explores a system in a goal-directed, coherent way (Green & Gilhooly, 1990; Trudel & Payne, 1995). This means that hypotheses about where to search and about possible outcomes of actions are generated and that behavior is guided by these hypotheses (Shrager & Klahr, 1986). Moreover, exploring systematically implies reflection on action feedback and on working methods. Several studies support the importance of goal-directed, reflective behavior for successful exploration (Chi, Bassok, Lewis, Reimann, & Glaser, 1989; Shrager & Klahr, 1986; Trudel & Payne, 1995). For example, Trudel and Payne analyzed verbalizations of people who explored a digital stopwatch; they found that "good" explorers (the ones who learned much about the watch) tended to verbalize discoveries they made earlier, frequently assessed what had been learned so far, and often confirmed or disconfirmed feedback. In general, using systematic exploration involves a thoughtful, reflective approach to the task (Trudel & Payne, 1995, p. 325). As such, it can be argued that systematic exploration involves a relatively high level of engagement. However, as mental fatigue coincides with a reduction in task engagement (e.g., increased lapses of task engagement), it can be expected that the use of systematic exploration will decrease under fatigue (Hypothesis 1).

Unsystematic trial and error refers to exploration that is unstructured and does not seem to be guided by clear hypotheses; nor is it accompanied by signs of reflection (Hollnagel, 1993; Trudel & Payne, 1995). During unsystematic trial and error, people often shift from one subgoal to another, and none or only a few of these subgoals are well considered. In the literature on problem solving and human-computer interaction (HCI) there are many reports of such behavior, even though different labels have been assigned to it--for example, vagabonding (Dorner, 1980), scrambled mode (Hollnagel, 1993), and unsystematic trial and error (Trudel & Payne, 1995). In general, unsystematic trial and error may coincide with a withdrawal of cognitive resources from hypothesis formation, planning, and reflection. Moreover, as the tendency to reduce task engagement increases under fatigue, it can be expected that the use of unsystematic trial and error will also increase under fatigue (Hypothesis 2).

Rigid behavior is characterized by decreased cognitive flexibility and an increased tendency to perseverate. During periods of rigid behavior, actions or ideas are often initially guided by habits or by salient cues that capture attention. Based on such habits or cues, people relatively quickly adopt certain action patterns in which they persist, even though feedback clearly indicates that this is no longer useful (Dorner, 1980; van der Linden et al., 2001). Rigid behavior is another specific type of unsystematic behavior often reported in the problem-solving and HCI literature. For example, in a study on learning a statistical program through exploration, Green and Gilhooly (1990) found that poor learners show a tendency to repeat methods, to pay less attention to feedback, and to fail to act appropriately on evaluation feedback. This finding has been replicated in several other studies (e.g., Somsen, van der Molen, Jennings, & van Beek, 2000; Trudel & Payne, 1995; van der Linden et al., 2001). We expect that a reduction in reflection and in the allocation of attention to action will lead to an increase in rigid behavior. Thus we expect an increase of rigid behavior under fatigue (Hypothesis 3).

Performance. To study exploration, we used a complex computer task (with Microsoft Excel 4.0) in which participants could freely explore. In accordance with the literature on exploration, we expected the use of systematic, reflective exploration to lead to better learning of options and procedures (Funke, 1991; Green & Gilhooly, 1990; Hollnagel, 1993; Trudel & Payne, 1995; van der Linden et al., 2001). Moreover, if more procedures of the program are learned, then more subtasks can be achieved. Thus we expected a positive relationship between systematic exploration and performance in terms of the number of subgoals solved (Hypothesis 4a). Moreover, because systematic exploration involves thoughtful actions, we hypothesized that this type of exploration would be negatively related to the number of errors (Hypothesis 4b).

In contrast, during periods of unsystematic exploration, task engagement is low and so are planning and reflection. This implies that many errors will be made and that relatively little will be learned about new procedures or options. Consequently we expected a negative relationship between unsystematic trial and error and rigid behavior on the one hand and number of subtasks solved on the other hand (Hypothesis 5a). As unsystematic exploration strategies tend to lead to an increase of errors (e.g., rigid behavior leading to ineffective actions), we postulate Hypothesis 5b as a construct validity hypothesis--that the use of unsystematic exploration strategies will be positively related to the number of errors.

With regard to mental fatigue, we expected a negative relationship with solved subtasks (Hypothesis 6a) and a positive relationship with errors (Hypothesis 6b). Although such hypotheses make sense, we have to note that a surprising number of findings in the literature do not show clear-cut relationships between fatigue and performance (Hockey, 1997; Holding, 1983; Sanders, 1998). The main reason for this is that people can reallocate resources, thereby forcing themselves to stay engaged in the task despite their fatigue (Hockey, 1997). Nevertheless, we hypothesized a relationship of fatigue with a low number of subtasks solved and with a high number of errors, although we are fully aware that these hypotheses are not easily supported in empirical studies on fatigue (Hockey, 1997; Sanders, 1998).

Mental Fatigue and General Experience

If the allocation of cognitive resources to task-relevant processing plays an important role in exploration, other factors that influence the availability of resources and the ability to work systematically on a task can be expected to moderate relationships between fatigue and exploration. We argue that experience is such a moderator. In complex tasks, demands on cognitive resources during the initial learning phase are high because novices have to guide every step in the problem-solving process consciously (Anderson, 1982). However, with growing experience, people develop action procedures that can be executed in an automatic way that does not require a high level of cognitive resources, such as attention (Anderson, 1982; Norman & Shallice, 1986). As a result, experienced people work more efficiently and are better able to plan their behavior and to interpret feedback. Thus, compared with low-experienced people, experienced people will use more systematic exploration (Hypothesis 7a) and less unsystematic exploration, such as trial and error and rigid behavior (Hypothesis 7b). Moreover, compared with low-experienced people, experienced people will achieve more subgoals (Hypothesis 8a) and make fewer errors (Hypothesis 8b). Such main effects of experience on exploration can be expected to be even greater than the effects of mental fatigue on exploration, as the effects of fatigue on behavior are often quite subtle (Broadbent, 1979; Hockey, 1997).

Because experienced people can work on a task more efficiently and without excessive demands on cognitive resources, their behavior and performance may be less susceptible to the influence of suboptimal states such as mental fatigue (Bainbridge, 1978). Stated differently, lapses in task engagement under fatigue may be less disruptive for experienced people, as they can efficiently execute and plan their behavior even when task engagement is relatively low. Hence we expect interactions between the effects of fatigue and the effects of experience on type of exploration behavior. Specifically, the changes in exploration behavior under fatigue will be less strong for experienced people than for novices (Hypothesis 9).

Method

Participants

Sixty-eight psychology students participated in this study for additional study credits. The participants were randomly assigned to a fatigue group or a control group. None of the participants had experience with the computer program Excel, which we used in the experimental task.

Measures

Fatigue. Subjective fatigue was measured with the general activation subscale of the Activation-Deactivation Checklist (AD-ACL; Thayer, 1989; Cronbach's alpha = .81) and with the Rating Scale Mental Effort (RSME; Zijlstra, 1993; Cronbach's alpha = .86). Although the RSME is often used as a single measure of fatigue, it consists of seven 150-point answer scales concerning several fatigue aspects. Two items relate to mental fatigue, two items relate to physical fatigue, and the remaining items measure resistance against further effort, boredom, and visual fatigue.

General computer experience. We assessed general computer experience with five questions in a 5-point Likert scale format (Cronbach's alpha = .83). The questions concerned the frequency of computer use during the last year and experience with a range of computer applications (e.g., Word, Windows, MacOS). In the analyses, participants were assigned to a high or low general experience group, based on a median split procedure.

Fatigue manipulation. We used a so-called scheduling task (Taatgen, 1999) on the computer as fatigue manipulation. In this task, participants assigned work time to fictional employees. Adequate planning in this task required strong task engagement, as previous planning steps had to be kept in mind--taking notes was not allowed--and participants simultaneously had to think about further planning steps.

Computer tasks. Participants worked on two different tasks on a Macintosh computer: a task with the spreadsheet program Excel (to test our hypotheses) and a task with the graphical program ClarisDraw (for practice purposes).

The ClarisDraw task was introduced to familiarize participants with the exploration method, allow them to practice thinking aloud (see Procedure section), and instruct them on how to adequately think aloud. In the ClarisDraw task, participants had to reproduce (draw) an example figure that was presented on the screen.

The Excel task used the spreadsheet program Excel (version 4.0 for Mac). The Excel task was given directly after the manipulation and was used to study exploration. The overall goal in the task consisted of changing the format of a table on the screen according to an example, which was presented in printed form. The task consisted of eight subtasks: moving text, adding text, changing text alignment, adding table rows, coloring table rows, changing font type, adding borders, and changing the location of the table. These subtasks were not explicitly mentioned in the instructions, nor were the participants instructed how to approach the task. The task instructions mentioned only that the table on the screen should look like the example table. For each experimental session, the appearance and settings of Excel were standardized so that each participant would start out in the same environment, in which only the worksheet, the standard toolbar, and the formatting toolbar were visible. Participants could freely explore the program. An exception was the help function, which we had disabled to reduce behavioral freedom, hence simplifying the coding of behavior. Although disabling the help function makes the experiment somewhat artificial as compared with real-life settings, it had no substantial consequences for our study (which looks at exploration of a system without step-by-step instructions).

Thinking Aloud

We used a thinking-aloud procedure (Ericsson & Simon, 1993). Participants had to verbalize their thoughts while working on the tasks. Although verbalizations do not cover all cognitive processes during the tasks (e.g., sometimes people omit information, and some processes cannot be verbalized), they have been shown to provide useful indications about goals and intentions underlying behavior (Ericsson & Simon, 1993).

Procedure

Participants were tested individually in sessions that lasted approximately 3 hr. At the beginning of the session, participants filled in questionnaires on general computer experience and level of fatigue (see Measures section). Thereafter they worked on the ClarisDraw task for 15 min. Participants had to think aloud, and when necessary the experimenter clarified and corrected the thinking-aloud procedure during this task. When participants stayed quiet for several seconds the experimenter asked them to "keep on thinking aloud, please" (Ericsson & Simon, 1993). After the ClarisDraw task, participants in the fatigue condition continuously worked on the scheduling task for 2 hr. Participants in the control condition were told to wait for 2 hr, during which they could read magazines or watch videos. After the manipulation participants again filled in fatigue questionnaires. All participants then worked on the Excel task for 15 min, during which they had to think aloud again. The participants' computer screen was directly connected to a video recorder that recorded all their actions and verbalizations.

Coding of the Data

Exploration behavior in the Excel task was coded from the videotapes that contained the participants' behavior on the computer screen and verbalizations. Coders were blind to experimental condition and to the participants' scores on the general experience questionnaires. We used three behavioral categories that represented systematic or unsystematic exploration (see Introduction): systematic exploration, unsystematic trial and error, and rigid behavior (category descriptions are given in the next section). In addition, we had one category for coding nonexploratory behavior. In complex tasks, such as our exploratory computer task, there are often no easily observable beginning or end points; therefore, we decided to use fixed time intervals of 20 s as coding units. First, coders decided whether a 20-s interval contained exploration behavior. If that was the case, the coders assessed whether the exploration behavior fell into one of the three exploration categories.

Of the 68 videotapes analyzed, a random sample of 20 videos was coded by more than one coder. We used intraclass correlations (ICCs), as reported by Shrout and Fleiss (1979), to assess interrater reliability. ICCs for the categories are reported in the next section: .60 < ICC < .75 = good interrater agreement, ICC > .75 = excellent interrater agreement (Cicchetti & Sparrow, 1981).
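Shrout and Fleiss (1979) distinguish several ICC variants, and the text does not state which one was used here. As an illustration only, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater) from a targets-by-raters table of scores, using the mean-squares formulation from that paper:

```python
def icc_2_1(ratings):
    """ICC(2,1) of Shrout & Fleiss (1979): two-way random effects,
    absolute agreement, single rater. `ratings` is a list of rows, one
    per target, each holding one score per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Worked example from Shrout & Fleiss (1979): 6 targets, 4 judges
sf_data = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
           [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
icc = icc_2_1(sf_data)  # published value: .29
```

Note that the study's codings were categorical counts per 20-s interval, so the actual input to the reliability analysis is an assumption here.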

Coding Categories

Systematic exploration. Exploration was coded as systematic when the participants tried out functions or ideas in a structured and coherent way. This was the case when participants' actions either followed from explicit plans (e.g., "I will now try to use this same function for changing this part here ...") or naturally followed from the previous actions (e.g., changing the settings of an option and then trying that option again). Moreover, if coders found that participants evaluated what happened, such behavior was also coded as systematic exploration (e.g., "okay ... this function serves to ..."). ICC for systematic exploration was .75.

Unsystematic trial and error. Behavior was coded as unsystematic trial and error if the participants did not show signs (either in terms of actions or of verbalization) of reflection or of feedback evaluation--for example, when the participants quickly "jumped" from one option to another, showing no signs that options were well considered before going to the next one. ICC for this category was .83.

Rigid behavior. Behavior was coded as rigid if the participants repeated (more than twice; Trudel & Payne, 1995) the same action sequence that had already turned out to be unsuccessful in earlier attempts. Behavior was also coded as rigid when participants continued to come back to the same options despite accumulating evidence that those options did not work. These criteria for coding rigid behavior in the computer task largely resembled perseverative behavior, as assessed in traditional psychological tests that are used to diagnose deficits in the regulation of attention (e.g., Heaton, 1981). ICC for this category was .76.

Although coding of exploration behavior was done with fixed time intervals, coders took information from long-term goals (that extended over 20-s periods) and task context into account in order to obtain a fine-grained analysis of behavior.

Nonexploratory behavior. In the Excel task, participants also displayed nonexploratory behavior. For example, from their experience or as a result of exploration, participants discovered procedures in Excel that they then applied to fulfill the task (e.g., changing the border around several parts of the table). Such application behavior often covered several 20-s coding periods. Application behavior was coded under a separate category. (ICC = .58, which implies moderate to good interrater agreement.) In addition, there was a proportion of behavior that was neither exploration nor application. This behavior was placed under a "residual" category.


In a task analysis of the Excel assignment, we determined eight subtasks that had to be accomplished (see the list of subtasks in the Computer tasks section). An important performance variable was the number of subtasks solved within the time given. We also counted the number of errors. Errors were defined as actions with negative consequences or actions that had no effect at all.

Results

Because relatively strong statistical power is required to detect multivariate effects, we adopted an α of .10 for multivariate tests. For all univariate and post hoc tests an overall α of .05 was used. Eta squared (η²) is reported as the effect size measure.
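As a side note, the η² values reported below can be cross-checked from the F statistics alone. This sketch assumes the reported values are partial eta squared (the text does not specify the variant); for a univariate F test, partial η² = (F · df_effect) / (F · df_effect + df_error):

```python
def eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from a reported univariate F test:
    eta^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Cross-checks against two effects reported later in the text:
eta_rigid = eta_squared(10.92, 1, 64)  # condition on rigid behavior
eta_sys = eta_squared(3.97, 1, 64)     # condition on systematic exploration
```

Both values round to the η² figures reported in the Results (.15 and .06, respectively), which suggests the assumption is consistent with the reported statistics.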

Manipulation Check

Analyses of the RSME scores confirmed that our manipulation was successful, as there was a significant interaction effect for Time of Measurement (pre- vs. postmanipulation) × Condition (fatigue vs. not; see Table 1). Moreover, post hoc tests showed that the fatigue group reported significantly higher levels of fatigue after, but not before, the manipulation (Table 1). A similar pattern of results was found for the general activation measure. To investigate in more detail which aspects of fatigue were affected by our manipulation, we analyzed the different fatigue aspects of the RSME separately. This analysis showed that our manipulation successfully induced mental fatigue and an increased resistance against further effort in the fatigue group. Compared with the control group, the fatigue group also reported significantly higher levels of boredom. Groups did not significantly differ in physical fatigue. The interaction for Time of Measurement × Condition for visual fatigue was significant, although none of the post hoc comparisons reached significance (Table 1).

Exploration Behavior

To test whether fatigue coincided with changes in exploration behavior and whether general computer experience moderated such effects, we submitted the three categories of exploration behavior to a multivariate analysis of variance with condition (fatigue vs. not) and experience (high vs. low) as independent variables. This revealed a significant multivariate main effect for condition, F(3, 62) = 4.22, p = .009, and a significant main effect for experience, F(3, 62) = 4.50, p = .006. The multivariate interaction effect for Condition × Experience also reached significance at an α = .10 level, F(3, 62) = 2.38, p = .08. Univariate tests showed systematic exploration to decrease under fatigue (Hypothesis 1). We found a significant main effect for condition on systematic exploration, F(1, 64) = 3.97, MSE = 18.81, η² = .06, p = .050, with fatigued participants using significantly less systematic exploration (M = 7.23) as compared with the control group (M = 9.06). There was also a main effect for experience (Hypothesis 7a), in which highly experienced participants used more systematic exploration than did the low-experience group, F(1, 64) = 10.39, MSE = 18.81, η² = .14, p < .01, M = 9.54 and M = 6.21, respectively. There was no interaction effect for condition and experience on systematic exploration, F(1, 64) = .50, MSE = 18.81, η² = .01, p = .48 (see Table 2 for means), indicating that fatigue had an effect on systematic, reflective behavior for both high and low general computer experience participants.

Hypothesis 2, stating that unsystematic trial and error would increase under fatigue, was not confirmed; there was no significant main effect for condition, F(1, 64) = 1.52, MSE = 55.35, η² = .02, p = .22, and no significant interaction effect (Hypothesis 9), F(1, 64) = .11, MSE = 55.35, η² = .002, p = .75. The main effect for experience was significant: Compared with low-experience participants, highly experienced participants used a lower level of unsystematic trial and error, F(1, 64) = 6.00, MSE = 55.35, η² = .09, p < .05 (Hypothesis 7b), M = 11.31 and M = 6.95, respectively.

As expected, fatigue coincided with increased rigid behavior (Hypothesis 3). There was a significant main effect for condition on rigid behavior, F(1, 64) = 10.92, MSE = 7.33, η² = .15, p < .01, M = 4.91 and M = 3.06 for the fatigued and nonfatigued groups, respectively. There was also a significant main effect for experience, F(1, 64) = 4.71, MSE = 7.33, η² = .07, p < .05 (Hypothesis 7b), with highly experienced participants using less rigid behavior (M = 3.44) than low-experience participants (M = 4.79). As the interaction effect for condition and experience also was significant, this confirmed the hypothesis that experience moderates the effects of fatigue on exploration, F(1, 64) = 7.33, MSE = 7.33, η² = .10, p < .01 (Hypothesis 9). Post hoc t tests showed the mean of low-experience, fatigued participants to be significantly different from all other groups (see Table 2). This implies that fatigue leads to increased rigidity only when people are highly unfamiliar with the task at hand.

Analyses of nonexploratory behavior (the application and residual categories) showed that there were no significant main effects for condition or experience on either of these two categories, nor were there any significant interaction effects of condition and experience (p values ranged from .19 to .79 for these analyses).

Check on number of verbalizations. To check for possible effects of fatigue on verbalizations, which might have affected the coding of exploration behavior, we analyzed the number of verbal statements of fatigued and nonfatigued participants. From a random sample of the participants (approximately 70%: 24 fatigued and 23 nonfatigued participants), we counted the number of statements within a 5-min segment of the videotape. Analyses showed that fatigued and nonfatigued participants did not differ on the number of verbalizations during the Excel task, F(1, 47) = .01, η² = .00, p = .91 (mean numbers of statements for nonfatigued vs. fatigued participants were M = 45.96 and 45.53, respectively). Thus differences in exploration behavior could not be explained by differences in the number of verbalizations.

Performance

There was no significant main effect for condition (fatigue vs. not) on the number of subtasks solved (Hypothesis 6a), F(1, 64) = 1.24, MSE = 6.00, η² = .02, p = .27. However, there was a significant effect for condition on number of errors (Hypothesis 6b): Fatigued participants made significantly more errors, F(1, 64) = 4.77, MSE = 61.15, η² = .07, p < .05 (M = 23.11 and 19.57 for fatigued vs. nonfatigued, respectively). Compared with low-experience participants, experienced participants solved significantly more subtasks, F(1, 64) = 7.79, MSE = 46.75, η² = .11, p = .007 (Hypothesis 8a; M = 2.38 and 4.03, respectively), and made marginally significantly fewer errors, F(1, 64) = 3.80, MSE = 232.45, η² = .06, p = .056 (Hypothesis 8b; M = 23.45 and 19.87, respectively). There were no significant interaction effects between condition and experience on number of subtasks solved, F(1, 64) = 0.11, MSE = 6.00, η² = .002, p = .74, or on errors, F(1, 64) = 2.59, MSE = 61.15, η² = .04, p = .11.

Fatigue, Exploration, and Errors

To test relationships between exploration on the one hand, and number of subtasks solved and errors on the other hand (Hypotheses 4a, 4b, 5a, and 5b), we first analyzed whether the correlations between these variables significantly differed between the different subgroups (fatigued vs. nonfatigued and high vs. low general experience). To test this we used the Fisher r-to-Z procedure, which showed that none of the correlations significantly differed between the subgroups. Therefore we report overall correlations collapsed across all subgroups. These correlations confirmed Hypotheses 4a and 4b, as systematic exploration correlated positively with number of subtasks solved, r(68) = .58, p < .001, and negatively with number of errors, r(68) = -.41, p < .001. Our hypotheses on the negative relationship between the use of unsystematic exploration and number of subtasks solved (Hypothesis 5a) and the positive relationship of unsystematic exploration with errors (Hypothesis 5b) were also confirmed: trial and error/subtasks r(68) = -.43, p < .001; trial and error/errors r(68) = .59, p < .001; rigidity/subtasks r(68) = -.28, p < .05; rigidity/errors r(68) = .40, p = .001. We also looked at the overall correlation between the number of errors and number of solved subtasks, which revealed that a high number of errors was related to a low number of subtasks solved, r(68) = -.50, p < .001.
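The Fisher r-to-Z comparison used here has a simple closed form: each correlation is transformed with atanh, and the difference is divided by its standard error. A minimal sketch (the subgroup sizes and correlations in the usage line are illustrative, not the study's actual cell values):

```python
import math

def fisher_r_to_z(r):
    """Fisher's variance-stabilizing transform of a correlation."""
    return math.atanh(r)

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed z test for the difference between two independent
    correlations (the Fisher r-to-Z procedure)."""
    diff = fisher_r_to_z(r1) - fisher_r_to_z(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = diff / se
    # two-tailed p value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Illustrative only: comparing r = .58 against r = .40 in two
# hypothetical subgroups of 34 participants each (34 + 34 = 68)
z, p = compare_correlations(0.58, 34, 0.40, 34)
```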

In this study we also looked at nonexploratory application behavior, which refers to the use of Excel procedures to fulfill the task. Such application often was the outcome of successful exploration, in which the participants found out how to attain a subgoal (e.g., "oh ... so, this is the way to align text"). Thus application behavior was related to performance. More specifically, application was correlated with subtasks solved, r(68) = .34, p < .05, and errors, r(68) = -.66, p < .05. To determine whether exploration stayed associated with performance measures beyond application, we also looked at partial exploration-performance correlations (controlling for application behavior). These analyses showed that the exploration-performance relationships stayed significant and did not notably change (the exception was the rigid behavior/subtasks correlation, which changed from r = -.28 to -.18, p = .14). Thus for five of the six relationships tested, the relationship between exploration and performance was not accounted for by nonexploratory application behavior. Therefore, the overall results are consistent with the literature on systematic versus unsystematic problem-solving behavior (e.g., Hollnagel, 1993; Trudel & Payne, 1995; van der Linden et al., 2001) and showed the different types of exploration behavior to be strongly related to performance on the Excel task. The amount of behavior in the residual category did not show any significant correlations with any other variable in this study.
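Partial correlations controlling for a single third variable can be computed with the standard first-order formula. In the sketch below, the rigidity/subtasks correlation (-.28) and the application/subtasks correlation (.34) are taken from the text, but the rigidity/application correlation (-.20 here) is a hypothetical placeholder, since it is not reported:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation between x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    return (r_xy - r_xz * r_yz) / math.sqrt(
        (1.0 - r_xz ** 2) * (1.0 - r_yz ** 2))

# x = rigid behavior, y = subtasks solved, z = application behavior.
# r_xz = -.20 is a hypothetical placeholder.
r_rigid_subtasks_given_app = partial_corr(-0.28, -0.20, 0.34)
```

With plausible placeholder values the partial correlation is somewhat attenuated relative to the zero-order r, consistent with the reported shift from -.28 to -.18.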


The main research question in our study was whether fatigue changes the way people explore a complex system. The present study showed that this was the case: Compared with nonfatigued participants, fatigued participants showed significantly fewer periods of systematic exploration. This finding suggests that fatigued participants were less thoughtful and reflective in their exploration behavior (Trudel & Payne, 1995). The results on unsystematic behavior were mixed; whereas fatigue did not coincide with significant changes in unsystematic trial and error, there was a significant effect on rigid behavior. Subgroup analyses showed that only participants with a low level of general computer experience displayed more rigid behavior under fatigue. Specifically, these low-experience fatigued participants showed more inefficient perseveration and had the highest tendency to come back to the same, unsuccessful options. This finding was in accordance with our hypothesis that the effects of fatigue on exploration are strongest for people who have a high level of unfamiliarity with the task.

Despite the fact that fatigue was related to exploration behavior, which in turn was related to number of subtasks solved, we did not find significant differences in the number of subtasks solved between fatigued and nonfatigued participants. Although admittedly somewhat speculative, this nonfinding may be related to the relatively short duration of the Excel task (15 min). Within such a short time span, small differences in exploration behavior may not directly lead to a major (significant) breakdown of overall performance. This is a common finding in many fatigue studies, which have shown that fatigue can lead to observable changes in behavior long before primary task performance starts to deteriorate (Hockey, 1997; Holding, 1983; Sanders, 1998). However, we found that fatigued participants made more errors than did nonfatigued participants. These findings suggest that the frequency of making errors is more easily affected under fatigue than is the primary task output (solving subtasks).

One important question of interpretation is what might have caused the changes in exploration. We assumed that under fatigue there is an increase in periods during which task engagement is temporarily lowered and in which people do not invest a high level of resources in thoughtful reflection or planning. Hence, exploration during those periods will be less systematic. It is important to note that a reduction in task engagement under fatigue does not necessarily involve a conscious decision to reduce it (Meijman, 2000). Specifically, fatigued people might experience lapses in task engagement even when, at an intentional level, they want to perform well on the task.

Theoretically, we argue that under fatigue, periods of reduced task engagement coincide with compromised executive control over behavior (see Monsell & Driver, 2000). Executive control refers to attentional processes that regulate perception and motor processes in order to ensure goal-directed behavior (Miller & Cohen, 2001; Norman & Shallice, 1986). Such executive control plays a major role in systematic behavior, which relies on planning and cognitive flexibility (e.g., reacting to feedback). Moreover, one hallmark of compromised executive control is reduced flexibility and a subsequently increased tendency to perseverate, particularly when a task is unfamiliar (see Miller & Cohen, 2001). In the current study, the overall reduction in goal-directed, systematic exploration and the increased tendency to perseverate among low-experience participants indeed indicate such compromised executive control in fatigued participants. Moreover, these results are in accordance with other studies, which found fatigued participants to perform poorly on tasks that specifically tap executive control processes (van der Linden, Frese, & Meijman, 2003). In future studies, researchers may want to directly combine measures of executive control with behavior in more applied tasks, as in the current study.

Limitations of the Study

Our study used a detailed analysis of exploration behavior based on video recordings that also contained participants' verbalizations. This method provided us with a rich source of data from which we could infer whether participants explored systematically or unsystematically. Nevertheless, in spite of their richness, our methods also had some limitations. One of these limitations involved our focus on exploration behavior. Exploration plays a major role in dealing with complex tasks, yet there are also many other types of behavior people might display. In the current study we captured many of these other types of behavior with a category labeled "application." A more detailed analysis of such nonexploratory behavior and its interactions with exploration and performance may provide additional insight into the way people deal with complex tasks. Nevertheless, because the fatigue and control groups did not differ in overall amount of exploratory versus nonexploratory behavior and because exploration remained significantly related to performance after controlling for nonexploratory behavior, this limitation does not compromise the study's results or their interpretation.

Another limitation is the possible influence of fatigue on thinking aloud. However, two lines of argument suggest that our results did not arise from mere verbalization effects. First, fatigued and nonfatigued participants did not differ in number of statements. This result is in accordance with findings of a pilot study, in which we found very low correlations between pretask measures of fatigue (RSME and general activation from the AD-ACL) and number of statements during a similar Excel task (r = -.02 and r = .05 for RSME and AD-ACL, respectively). Second, the assessment of exploration behavior relied not only on verbal reports but also on participants' actions (e.g., repetition of the same unsuccessful responses). Exploration in terms of systematic/unsystematic behavior was inferred from the "broader context of behavior" (Shrager & Klahr, 1986).

Practical Implications

As fatigue coincided with changes in exploration behavior, the results of this study have consequences for practice. Specifically, the results suggest that under fatigue, hypothesis-generating and -testing behavior (as in the Excel task) is less likely to be systematic and coherent. Moreover, in an unfamiliar setting or task environment, behavior under fatigue tends to be guided by salient cues in a rigid way. System designers should be aware of such tendencies because they suggest that poorly considered salient aspects of the human-computer interface may capture the attention of users under fatigue and subsequently may yield erroneous ideas or solutions. For example, in the current study many participants were misled by the label "color" in the option "color palette." This label corresponded with a task goal ("change color of the table"), and therefore many participants initially thought it was the relevant option when in fact it was not. Under fatigue, low-experience participants seemed to have difficulty detaching from such erroneous ideas and subsequently showed increased perseveration and rigidity. These findings suggest that system designers should construct the interface in such a way that option labels are congruent with task goals, thereby reducing the need for users to engage in extensive reasoning. Moreover, clear relationships between task goals and interface features can be expected to lead to less rigidity and fewer errors.

As we argued that the effects of fatigue on exploration are related to general effects on information processing (e.g., executive control), the practical implications of the current study may also extend to settings different from the specific learning context we used. Specifically, similar manifestations of mental fatigue can be expected in other situations in which people must deal with partial information, systematically generate and test hypotheses, and remain flexible. A typical example of such a situation is finding the fault underlying a system failure. In such cases, being unsystematic or using a rigid approach may have strong negative consequences. For example, in the Three Mile Island nuclear plant incident, engineers working at a time of day when people generally feel fatigued repeatedly failed to use optimal exploration strategies to find the fault and repeatedly discarded useful information in a rigid way (Johnson, 1999).

Although the direct link between the behavior of those engineers and the results of this study is somewhat speculative, it illustrates the severity of consequences that rigidity and a lack of systematic behavior may have in real-life settings. This also suggests that it might be inefficient to rely on fatigued people to optimally explore a system. When exploration is required, it might be better to rely on the exploratory abilities of well-rested people. In practice, this may be implemented by using back-up teams who are deployed when a situation requires systematic search of alternatives. Such teams may replace or support teams who are at the end of their shifts or have already worked for several hours during the night.

The findings of the current study also suggest another countermeasure against adverse manifestations of fatigue. Specifically, whereas low-experience participants showed more rigid behavior under fatigue, this was not the case for experienced participants. Hence training and extended experience may, to some extent, reduce the detrimental effects of fatigue on exploration. Obviously, it is not possible to train people for every unforeseen situation that requires exploration. However, training in general heuristics and procedures of good exploration might reduce the need for people to figure out good exploration strategies at times when it is relatively difficult to do so (e.g., when they are fatigued). Specifically, such training should teach the steps of systematic exploration (generating specific goals and action plans to attain them, deliberate evaluation of action outcomes; Frese, 1995) and make people aware of the possible consequences of adverse conditions (e.g., fatigue) on exploration. Well-trained general principles of exploration and knowledge about the "dangers" of suboptimal energetic conditions may be tools that can help to prevent or overcome fatigue effects.
TABLE 1: Means (and Standard Deviations) of Pre- and
Postmanipulation Measures of Fatigue

Premanipulation

                       Time x Condition   Control    Fatigue
                           F(1, 64)       (n = 32)   (n = 36)     t

RSME (total)               16.39 **        24.07      23.40      0.40
                                          (11.15)    (13.22)

General activation          7.16 *         12.64      12.26      0.54
                                           (2.16)     (2.59)

Mental fatigue             25.64 **        53.12      48.74      0.62
                                          (25.12)    (32.37)

Physical fatigue            0.96           59.21      60.97     -1.20
                                          (39.10)    (42.11)

Resistance to effort       22.94 **        25.18      25.03      0.10
                                          (14.50)    (21.27)

Boredom                     7.54 *         18.46      12.77     -0.68
                                          (14.34)    (19.84)

Visual fatigue              5.88 *         14.88      15.65      1.34
                                          (15.62)    (11.19)

Postmanipulation

                       Control    Fatigue
                       (n = 32)   (n = 36)      t

RSME (total)            26.57      45.01      3.58 **
                       (16.88)    (24.94)

General activation      11.21       9.47      3.24 **
                        (2.32)     (2.28)

Mental fatigue          48.12      97.88      4.26 **
                       (37.26)    (56.96)

Physical fatigue        73.46      88.47     -0.09
                       (52.05)    (51.47)

Resistance to effort    14.33      46.82      4.56 **
                       (16.66)    (38.04)

Boredom                 14.76      21.50     -3.18 *
                       (27.54)    (39.16)

Visual fatigue          35.30      60.41     -1.64
                       (15.87)    (26.95)

* p < .05; ** p < .01.

TABLE 2: Means (and Standard Deviations) of Strategy Use
and Performance for the Different Groups

                                  Nonfatigued            Fatigued

                               Low        High       Low        High
                             (n = 14)   (n = 18)   (n = 15)   (n = 21)

                              Exploration Behavior Measures

Systematic exploration         7.60      10.28       4.71       8.09
                              (5.80)     (4.78)     (3.38)     (3.13)

Unsystematic trial             9.93       6.06      12.79       7.71
  and error                   (9.08)     (4.68)     (10.0)     (5.11)

Rigid behavior                 2.87       3.22       6.86       3.62
                              (2.36)     (3.02)     (2.74)     (2.62)

                              Performance Measures

No. of subtasks solved (a)     2.80       4.28       1.93       3.81
                              (1.97)     (3.27)     (1.54)     (2.42)

No. of errors                 19.93      19.28      27.21      20.38
                              (7.01)     (6.51)     (7.78)     (9.28)

                              Nonexploratory Behaviors

Application behavior          21.60      22.83      18.42      20.33
                              (8.89)     (3.89)     (7.79)     (5.35)

Residual                       3.00       2.61       2.21       4.42
                              (4.27)     (3.91)     (3.14)     (4.87)

Note. Low = low experience; high = high experience. Means denote
the frequency with which a strategy occurred during the task.

(a) Number of subtasks solved: range from 0 to 8.


This study was supported by Grant 580.02.103 from the Netherlands concerted research action "Fatigue at Work" of the Netherlands Organization for Scientific Research (NWO).


Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369-406.

Bainbridge, L. (1978). Forgotten alternatives in skill and workload. Ergonomics, 21, 169-185.

Broadbent, D. E. (1979). Is a fatigue test now possible? Ergonomics, 22, 1277-1290.

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Cicchetti, D. V., & Sparrow, S. S. (1981). Developing criteria for establishing the interrater reliability of specific items in a given inventory: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86, 127-137.

Dorner, D. (1980). On the difficulties people have in dealing with complexity. Simulation and Games, 11, 87-106.

Ericsson, K. A., & Simon, H. A. (1995). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: MIT Press.

Frese, M. (1995). Error management in training: Conceptual and empirical results. In C. Zucchermaglio, S. Bagnara, & S. U. Stuckey (Eds.), Organizational learning and technological change (pp. 112-124). New York: Springer.

Funke, J. (1991). Solving complex problems: Exploration and control of complex systems. In R. J. Sternberg & P. A. Frensch (Eds.), Complex problem solving: Principles and mechanisms (pp. 185-222). Hillsdale, NJ: Erlbaum.

Green, A. J. K., & Gilhooly, K. J. (1990). Individual differences and effective learning procedures: The case of statistical computing. International Journal of Man-Machine Studies, 33, 97-117.

Heaton, R. K. (1981). Wisconsin Card Sorting Test manual. Odessa, FL: Psychological Assessment Resources.

Hockey, G. R. J. (1997). Compensatory control in the regulation of human performance under stress and high workload: A cognitive-energetical framework. Biological Psychology, 45, 73-93.

Holding, D. (1983). Fatigue. In R. Hockey (Ed.), Stress and fatigue in human performance (pp. 145-164). Chichester, UK: Wiley.

Hollnagel, E. (1993). Human reliability analysis: Context and control. London: Academic.

Johnson, S. (1999). Inside Three Mile Island: Minute by minute. Retrieved September 24, 2003, from

Meijman, T. F. (2000). The theory of the stop emotion: On the functionality of fatigue. In D. Pogorski & W. Karwowski (Eds.), Ergonomics and safety for global business quality and productivity: Proceedings of the 2nd International Conference, Ergon-Axia 2000 (pp. 45-50). Warsaw, Poland: Central Institute for Labour Protection.

Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167-202.

Monsell, S., & Driver, J. E. (Eds.). (2000). Control of cognitive processes: Attention and performance XVIII. Cambridge, MA: MIT Press.

Norman, D. A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behavior. In R. J. Davidson, G. E. Swartz, & D. Shapiro (Eds.), Consciousness and self-regulation: Advances in theory and research (Vol. 4, pp. 1-18). New York: Plenum.

Sanders, A. F. (1998). Elements of human performance. Mahwah, NJ: Erlbaum.

Shrager, J., & Klahr, D. (1986). Instructionless learning about a complex device: The paradigm and observations. International Journal of Man-Machine Studies, 25, 153-189.

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.

Somsen, R. J. M., van der Molen, M. W., Jennings, J. R., & van Beek, B. (2000). Wisconsin card sorting in adolescents: Analysis of performance, response time, and heart rate. Acta Psychologica, 104, 227-257.

Taatgen, N. A. (1999). Learning without limits: From problem solving towards a unified theory of learning. Unpublished doctoral dissertation, University of Groningen, Netherlands.

Thayer, R. E. (1989). The biopsychology of mood and arousal. New York: Oxford University Press.

Trudel, C. I., & Payne, S. J. (1995). Reflection and goal management in exploratory learning. International Journal of Human Computer Studies, 42, 307-339.

van der Linden, D., Frese, M., & Meijman, T. F. (2003). Mental fatigue and the control of cognitive processes: Effects on perseveration and task engagement. Acta Psychologica, 115, 45-65.

van der Linden, D., Sonnentag, S., Frese, M., & van Dyck, C. (2001). Exploration strategies, error consequences, and performance when learning a complex computer task. Behaviour and Information Technology, 20, 189-198.

Zijlstra, F. R. H. (1993). Efficiency in work behavior: A design approach for modern tools. Delft, Netherlands: Delft University Press.

Dimitri van der Linden is an assistant professor in the Department of Work and Organizational Psychology at the University of Nijmegen. He received his Ph.D. in psychology in 2002 at the University of Amsterdam.

Michael Frese is a professor in the Department of Psychology at the University of Giessen, Germany. He received his Ph.D. in psychology in 1978 at the Technical University of Berlin.

Sabine Sonnentag is a professor in the Department of Work and Organizational Psychology at the University of Braunschweig, Germany, where she received her Ph.D. in psychology in 1986.

Address correspondence to Dimitri van der Linden, University of Nijmegen, Department of Work and Organizational Psychology, P.O. Box 9104, 6500 HE Nijmegen, Netherlands.

Date received: June 14, 2001

Date accepted: March 24, 2003
COPYRIGHT 2003 Human Factors and Ergonomics Society

Article Details
Author: van der Linden, Dimitri; Frese, Michael; Sonnentag, Sabine
Publication: Human Factors
Date: Sep 22, 2003
