A note about the effect of auditor cognitive style on task performance.
Keywords: cognitive style; cognitive misfit; task performance.
The recent surge in corporate financial scandals and restatements, together with associated claims of audit failure, suggests an urgent need to better understand auditor task performance. Substantial research has been devoted to understanding the determinants of task performance in audit settings in order to improve auditor education and training (Libby and Luft 1993). Performance is typically viewed as the product of the person and the task environment. Libby and Luft (1993) model performance as a function of ability, knowledge, environment, and motivation. However, of the cognitive characteristics examined, ability, operationalized as "problem-solving ability" (Bonner and Lewis 1990; Libby and Tan 1994), has received nearly all of the attention. Our goal is to expand the extant research on performance by including cognitive style.
Ho and Rodgers (1993) differentiated three cognitive characteristics: abilities, cognitive style, and strategy. (1) Unlike cognitive ability and strategy, cognitive style is a trait variable: an individual's preferred method of acquiring and processing information during the problem-solving process. In this regard, cognitive style entails "distinctive ways of acquiring, storing, retrieving, and transforming information" (Ho and Rodgers 1993, 103). Ho and Rodgers contend that research should adequately differentiate these cognitive characteristics to properly evaluate and compare results within the literature.
The purpose of this paper is to examine the joint effect of cognitive style and task characteristics on auditor task performance. The study's design addresses problems identified by Ho and Rodgers (1993). First, we selected cognitive style as our cognitive characteristic and measured it using the Myers-Briggs Type Indicator (MBTI). Second, we relied on and built upon the "cognitive misfit" theoretical framework developed by Chan (1996). This theory predicts lower performance when a mismatch occurs between an individual's cognitive style and task attributes. The implications of this theory are particularly important for audit engagements, where the extent and variety of tasks performed are considerable (Bonner and Pennington 1991).
An experimental approach was used to test the association between cognitive style and task performance. Each auditor performed two tasks and served as his/her own control. The first task, characterized as an analytic task, required auditors to review workpapers prepared by a staff auditor. The second task, labeled the intuitive task, required auditors to perform an analytical review of pre-report financial statements. Last, each participating auditor completed the MBTI. Based on cognitive misfit theory (Chan 1996), we hypothesized that auditors perform better on the task that matches their cognitive style than on the task that does not. The results of the study generally support the hypothesis: analytic (intuitive) auditors performed significantly (marginally) better on the analytic (intuitive) task.
The contributions of the present research are fourfold. First, we respond to Ho and Rodgers' (1993) concerns by specifically addressing how cognitive style affects task performance. Second, we extend Chan's (1996) cognitive misfit theory to auditor task performance. While prior accounting research has examined auditor personality types using the MBTI (Schloemer and Schloemer 1997; Jacoby 1981), it has not considered whether cognitive style interacts with task type to affect performance. Third, by using a within-subjects design we can assess the impact of individual differences (e.g., cognitive style) among auditors across multiple tasks. Pincus (1990) advocates the use of multiple tasks for this purpose. Finally, we provide evidence suggesting that prior accounting and auditing performance models are incomplete because they omit cognitive style.
The remainder of the paper is organized as follows. The next section presents an overview of the cognitive style literature and presents the hypothesis. The methodology and results are contained in the following two sections. The last section discusses the results and includes directions for further research.
COGNITIVE STYLE AND TASK PERFORMANCE
The Measurement of Cognitive Style
Prior studies have characterized cognitive style as either (1) simple versus complex; (2) adaptor versus innovator (measured by Kirton's KAI index); (3) field-dependence versus field-independence (measured by Witkin et al.'s Embedded Figures Test); or (4) analytic versus intuitive (measured by the MBTI [Ruble and Cosier 1990]). Prior accounting research has predominantly used the latter two cognitive style measures. We chose the MBTI because: (1) field-independence correlates highly with the perception dimension of the MBTI, and (2) the MBTI carries larger information content (Corman and Platt 1988). Although Chan (1996) used the KAI index to measure cognitive style when testing his cognitive misfit theory, we chose the MBTI over the KAI so that our results can be consistently and meaningfully interpreted alongside prior accounting research within the cognitive misfit paradigm.
According to Jung (1971), much of the random variation in human behavior is actually a consistent and orderly result of basic observable differences in the ways individuals prefer to perceive and to judge. One typology of individual differences is cognitive style. Cognitive style is a person's characteristic mode of perceiving and organizing information about the environment and represents a relatively stable characteristic of an individual (Ho and Rodgers 1993; Myers 1980). Ho and Rodgers (1993) contend that in the long run, traits such as cognitive style may change, but rarely do. Likewise, Myers (1980) contends that individuals will use their preferred method of acquiring and processing data unless explicitly instructed not to. Briggs and Myers (1977) developed the MBTI to operationalize Jung's (1971) theory of psychological types. The cognitive style component of this measure consists of two independent bipolar dimensions: perception and judgment (Carlyn 1977; Keen and Bronsema 1981; Zmud 1979). Perception, anchored by sensation and intuition, represents the manner in which one perceives incoming information. Sensors prefer facts whereas intuitives prefer possibilities. Judgment, anchored by thinking and feeling, represents the method used to arrive at a decision. Thinkers rely on rational processes of association whereas feelers use relational comparisons (Myers and McCaulley 1985).
When combined, these two dimensions form four cognitive styles. Figure 1 presents the dimensions and the associated cognitive styles. When characteristics of the two dimensions are similar, one has either an analytic or intuitive cognitive style. When the characteristics of the dimensions are not aligned, one possesses a hybrid cognitive style. Accordingly, we categorize auditors as possessing an analytic, intuitive, or hybrid cognitive style.
[FIGURE 1 OMITTED]
The analytic individual concentrates on details and breaks that which is observed into component parts (Hunt et al. 1989). Accordingly, an analytic person prefers detailed, structured problems and routine, precise work. Individuals with an intuitive cognitive style comprehend the field as an integrated whole (Hunt et al. 1989). Consequently, an intuitive person prefers new or unstructured problems and often makes insightful decisions (Myers 1977). The remaining two cognitive styles were collapsed together and termed "hybrids" for our research.
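The categorization just described, two MBTI dimensions collapsing into three style groups, can be sketched as a small illustrative function. The string labels and the function itself are our illustration, not part of the MBTI instrument or the study materials; only the mapping (sensor-thinker = analytic, intuitive-feeler = intuitive, the other two combinations = hybrid) comes from the discussion above.

```python
# Illustrative sketch of the cognitive style categorization: the perception
# dimension (sensing vs. intuition) crossed with the judgment dimension
# (thinking vs. feeling) yields four types, collapsed into three styles.
def cognitive_style(perception: str, judgment: str) -> str:
    """perception: 'sensing' or 'intuition'; judgment: 'thinking' or 'feeling'."""
    if perception == "sensing" and judgment == "thinking":
        return "analytic"    # aligned dimensions: facts + rational association
    if perception == "intuition" and judgment == "feeling":
        return "intuitive"   # aligned dimensions: possibilities + relational comparison
    return "hybrid"          # mismatched dimensions

print(cognitive_style("sensing", "thinking"))   # analytic
print(cognitive_style("intuition", "thinking")) # hybrid
```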
Implications of Cognitive Style
Chan (1996) developed a theoretical framework linking cognitive style, performance, turnover, and the style demands of the work context. He defined cognitive misfit as "the degree of mismatch between an individual's cognitive style of problem solving and the predominant style demands of the work context" (Chan 1996, 198). Using engineers and engineering tasks, Chan (1996) examined whether cognitive misfit leads to a higher incidence of turnover and lower job performance reviews. The results show that cognitive misfit is significantly associated with turnover but not performance. However, Chan's (1996) performance measure was dichotomous, and he cautioned that this restricted measure potentially contributed to the insignificant finding for performance.
Many studies have examined the relationship between cognitive style and performance using single tasks (Casey 1980; Davis 1982; Henderson and Nutt 1980; Rodgers and Housel 1987). Accordingly, these studies cannot determine whether cognitive style is contingent upon or interacts with the task. Studies employing multiple tasks have found an interactive effect of accounting instructional method and student cognitive style (Ott et al. 1990); an association between cognitive style and pre-decision processes in a scenario involving controversial payoffs to foreign officials (Hunt et al. 1989); and no association between cognitive style and performance using financial and human relations decision settings (Ruble and Cosier 1990).
In each of the above studies, although not labeled as such, the underlying theory is identical to Chan's (1996) cognitive misfit framework. That is, these studies advance the position that task performance will be better when there is a match between cognitive style and task attributes and will suffer when the task attributes are mismatched with the individual's cognitive style.
The current study used an auditor judgment context to provide further evidence on the interactive role of cognitive style and task attributes in decision performance. Because auditors perform tasks that can vary substantially in terms of cognitive processes and structure (Abdolmohammadi 1999), we used an analytic and an intuitive task in a within-subjects design to test the interaction between an auditor's cognitive style and audit task type. A significant interaction would indicate that an auditor's performance is better on the task that matches his/her cognitive style, i.e., when a "fit" occurs between the individual's cognitive style and the task attributes. Based on Chan's (1996) cognitive misfit theory, we propose that auditors will perform better on the task that matches their cognitive style than on the task that does not. Thus, we present the following hypothesis:
H1: Analytic auditors will perform better on the analytic task than on the intuitive task; and conversely, intuitive auditors will perform better on the intuitive task than on the analytic task.
METHOD

The study involved an experiment in which audit seniors completed two audit tasks, a debriefing questionnaire, and the MBTI. As described below, the analytic task requires a review of a subset of a staff auditor's workpapers, whereas the intuitive task requires an analytical review of pre-report financial statements as part of the final stage of the audit. Audit seniors were chosen for the study because they are responsible for running the audit and performing a large number of tasks. Additionally, because audit seniors review workpapers and perform analytical procedures throughout an audit, both tasks are considered to be within their ability, knowledge, and assignment domains.
The current study made two innovations relative to prior cognitive style research. First, a within-subjects design was used to rule out alternative explanations for differential performance based on individual differences. While differences in factors such as intelligence and domain-specific knowledge are expected to affect performance, they should not account for an interaction between performance and cognitive style across the two tasks. Consequently, our within-subjects design should control for individual variation in intelligence and domain-specific knowledge among participating subjects. (2)
A second innovation involved task design. Our study used theory to guide the selection of each task. As described in more detail below, we relied upon a task continuum proposed by Hammond et al. (1987). The task continuum identifies task characteristics that induce either analytic or intuitive cognitive processing. Accordingly, our tasks were designed to systematically differ in ways that are bound by theory.
The task choice and development was guided by the work of Hammond et al. (1987), who report that greater correspondence between task and cognitive properties improves subject achievement. Hammond et al. (1987) assert that tasks should be constructed and presented in terms of packages that include many task properties rather than two or three orthogonal variables. Consequently, they develop the task continuum theory, which identifies 11 task characteristics that, when appropriately packaged, result in tasks best completed by either analytic or intuitive cognitive processing.
We chose our tasks using two criteria. First, following Hammond et al.'s (1987) theory, we selected two tasks containing different task property packages rather than manipulating certain task properties within a single task. Given this decision, it was not possible to find two tasks that differed on all 11 task properties identified by Hammond et al. (1987). (3) Instead, we found tasks that differed on the attributes most central to those possessing either an intuitive or analytic cognitive style. The two tasks differed on the amount of task decomposition in conjunction with the number, measurement, and display of cues within each task. These "packaged" characteristics represent an analytic task when the task can be easily decomposed into smaller subcomponents that contain a small number of objective cues viewed sequentially. Likewise, the "packaged" characteristics represent an intuitive task when the task is not amenable to decomposition and a large number of subjective cues are best viewed simultaneously. The tasks were sent to a panel of experts (six audit partners from international and local firms and two auditing professors) to determine whether the tasks matched our assessment of task type. The experts were queried as to which task best matched the polar ends of each of the task characteristics chosen above. The experts displayed near-unanimous agreement with our assessment. (4) The debriefing questionnaire included manipulation checks related to these task characteristics.
Our second task choice criterion was task complexity. Prior research shows that both experience and task structure influence task complexity (Bonner 1990, 1994; Bonner and Lewis 1990; Abdolmohammadi and Wright 1987). Accordingly, we chose tasks that, in our estimation, were relatively similar in terms of task structure and appropriate in terms of experience as reported by Abdolmohammadi (1999). (5) By selecting tasks that are relatively similar in terms of appropriate experience and structure, dominant performance on one task by all subjects is less likely to occur.
Analytic Task: Workpaper Review
The analytic task required audit seniors to review a subset of workpapers prepared by a staff auditor on a small manufacturing company. The materials were adapted from Moeckel (1990) and contained information about the nature of the industry, the client's business, management and personnel, and engagement risk assessment, as well as the workpapers to be reviewed. Specific task instructions were to write any points that needed to be cleared before the papers went to the partner for final review and opinion. The workpapers contained eight seeded errors (see Moeckel [1990] for additional information about the errors). The percentage of seeded errors correctly identified serves as the dependent performance measure on this task; it is determined by dividing the number of seeded errors identified by 8. The task is analytic because it is amenable to high decomposition, workpapers are sequentially displayed, cue measurement is objective (correct or incorrect), and only two cues are present to assess each error (the original and contradicting evidence).
Intuitive Task: Analytical Review
The intuitive task required the audit seniors to perform an analytical review of pre-report financial statements as part of the final stage of the audit. The materials contain information about the nature of the industry, the client's business, the major corporate officers, and comparative financial statements. Subjects were instructed to perform the final-phase analytical review on the data and determine whether they would concur with the in-charge as to an unqualified opinion. The materials were based upon a case developed by Ingram et al. (1995), in which a fraud actually occurs and the financial statements are materially misstated. The fraud entailed capitalizing costs of goods sold as property, plant, and equipment (PP&E). The case contains multiple cues, which in aggregate should raise significant doubt about whether the financial statements are free of material misstatements (see Ingram et al. [1995] for additional information about the case). As part of the task, auditors were asked to "indicate which account(s), if any, are misstated." Six accounts were materially misstated due to the fraud: accounts receivable, inventory or cost of goods sold, PP&E, accounts payable, sales, and retained earnings. (6) The dependent measure for this task is the percentage of materially misstated accounts identified, determined by dividing the number of misstated accounts identified by 6. The task is intuitive because there are many subjective cues that are best viewed simultaneously in order to identify the fraud pattern, and the ability to decompose the task is low.
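The two dependent measures are simple proportions. A minimal sketch, with illustrative counts and function names of our own choosing (not study data or study materials):

```python
# Dependent measures as described above: proportion of the 8 seeded
# workpaper errors found, and proportion of the 6 materially misstated
# accounts identified. Counts passed in below are purely illustrative.
def workpaper_score(errors_identified: int) -> float:
    return errors_identified / 8    # 8 seeded errors in the workpapers

def analytical_score(accounts_identified: int) -> float:
    return accounts_identified / 6  # 6 materially misstated accounts

print(workpaper_score(2))                 # 0.25
print(round(analytical_score(2), 3))      # 0.333
```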
Subjects and Data Collection
The subjects were audit seniors employed by two then-Big 6 accounting firms. Instruments were sent to a coordinating contact person within each firm. Because of the experiment's length (approximately one hour per task), auditors worked on it at their convenience. However, they were asked to complete each task without interruption and not to return to it once completed. As an incentive to participate, drawings for $100 and $50 bonuses were offered to auditors who completed the experiment.
Forty-five subjects participated; however, one subject was deleted because the workpaper review task materials were not returned. Twenty-four auditors were male and 20 were female. Their mean level of audit employment was 34 months (standard deviation 14 months) and average age was 26 years (standard deviation three years). Collectively, the mean time spent on both tasks was over two hours, indicating that the auditors were actively engaged in the experiment. The mean minutes (standard deviation) for the analytic and intuitive tasks were 74.2 (23.9) and 64.1 (26.6), respectively. (7)
The research materials consisted of one envelope containing three folders and an overall set of instructions. The first two folders contained the two audit tasks described above. The order of the tasks was randomized across subjects. (8) The third folder contained the MBTI along with debriefing and manipulation check questions.
Cognitive style was measured using the MBTI short form. The instrument is fixed choice, bipolar in nature, and has been subjected to extensive validity testing (Carlyn 1977; Myers and McCaulley 1985). The instrument contains 126 questions. Each question pertains to either the perception or the judgment dimension and lists two responses, each corresponding to the endpoints of that dimension. Subjects were scored according to the procedures detailed in the MBTI manual (Myers and McCaulley 1985). Twenty-six of the 44 participating auditors had an analytic cognitive style, eight were intuitive, and ten were cognitive-style hybrids. (9) Specifically, nine of the hybrid subjects had the hybrid-2 style (intuitive-thinker) and the remaining subject had the hybrid-1 style (sensor-feeler). This cognitive style distribution is similar to prior studies (Schloemer and Schloemer 1997; Kreiser, McKeon, and Post 1990; Otte 1983; Jacoby 1981).
The study included two independent variables: task type (within-subject) and auditor cognitive style (between-subject).
RESULTS

The results are presented in two sections: first, results from the manipulation checks, other task perceptions, and task completion times; and second, results related to the hypothesis tests.
Manipulation Checks and Other Task Perceptions
The debriefing questionnaire contained six questions about perceived task characteristics. The odd (even) numbered questions included characteristics consistent with an analytic (intuitive) task.
As shown in Table 1, for all questions except question 6, the mean subject response differed significantly from the scale midpoint of 6 (p < .05), providing evidence that the tasks were perceived as systematically different in terms of these characteristics.
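The Table 1 tests can be recomputed from the reported summary statistics alone. The sketch below computes a one-sample t-statistic against the scale midpoint of 6 for each question, using the means and standard deviations from Table 1; the cutoff 2.02 (our simplification, not from the paper) is the approximate two-tailed critical t value at p = .05 with 43 degrees of freedom.

```python
import math

# One-sample t-test of each Table 1 mean rating against the scale
# midpoint of 6, computed from the reported summary statistics (n = 44).
def t_vs_midpoint(mean: float, sd: float, n: int = 44, midpoint: float = 6.0) -> float:
    return (mean - midpoint) / (sd / math.sqrt(n))

ratings = {  # question number: (mean, standard deviation) from Table 1
    1: (4.7, 2.8), 2: (7.7, 2.4), 3: (4.3, 2.7),
    4: (7.7, 2.7), 5: (4.8, 3.1), 6: (7.0, 3.3),
}
for q, (m, s) in ratings.items():
    t = t_vs_midpoint(m, s)
    # Only question 6 falls short of the approximate .05 critical value.
    print(q, round(t, 2), abs(t) > 2.02)
```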
Table 2 presents questions included to provide evidence on whether any task feature dominated performance. As shown, both tasks were viewed as moderately difficult and realistic. However, the analytical review (intuitive) task was rated more difficult than the workpaper review (analytic) task. Significant differences were found between the two tasks in terms of knowledge, ability, and reasonableness of the assignment. Audit seniors perceived themselves as possessing more knowledge and ability to perform the workpaper review (analytic) task. They also believed that it is more reasonable for an audit senior to be assigned the workpaper review task over the analytical review (intuitive) task.
While statistically significant, these differences do not necessarily negate the influence of cognitive style on task performance. Indeed, if these perceptual differences in task features negated cognitive style, then all subjects should have performed better on the workpaper review (analytic) task (i.e., a significant main effect for task only), which is inconsistent with our hypothesized task-by-style interaction and with our results. Additional analyses were conducted to determine whether these task perceptions were associated with cognitive style. No significant within-task differences were found between cognitive styles for any question. Likewise, no significant between-task differences were found for task realism. Of the remaining four questions (difficulty, knowledge, ability, and assignment), only analytic auditors showed significant differences between the audit tasks.
Hypothesis Tests

Our hypothesis predicts that analytic (intuitive) auditors will perform better on the workpaper review (analytical review) task than on the analytical review (workpaper review) task. A repeated-measures ANOVA was performed in which task (workpaper review versus analytical review) was the within-subjects variable and cognitive style the between-subjects variable. The analysis included only auditors with an intuitive or analytic cognitive style.
Table 3 presents descriptive statistics on task performance by cognitive style, and Table 4 presents the statistical results. (10,11) As shown in Table 4, a significant interaction occurs between task and cognitive style (F = 5.70, p < .02). The pattern of results shown in Panel B of Table 3 is consistent with the interaction predicted by cognitive misfit theory. The mean performance of analytic (intuitive) auditors on the analytic workpaper review (intuitive analytical review) task is greater than their mean performance on the intuitive analytical review (analytic workpaper review) task (.269 versus .122, p < .01, and .292 versus .188, p < .08, respectively). (12)
Finally, a nonparametric test was conducted to provide additional support for the hypothesis. Each auditor's performance was classified into one of three categories: better performance on the workpaper review (analytic) task, better performance on the analytical review (intuitive) task, or equal performance on the two tasks. The classification was based on the percentage-correct measures described above, and the results are summarized by cognitive style in Table 5. Eighteen of the 26 analytic auditors performed better on the workpaper review (analytic) task than on the analytical review (intuitive) task. Of the eight intuitive auditors, only one performed better on the workpaper review task than on the analytical review task. The Chi-square test on this classification is consistent with the hypothesis (Chi-square = 9.4, p < .01), indicating that auditors' relative performance across task types is not independent of their cognitive style. Overall, the results from the parametric and nonparametric tests support the hypothesis.
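The reported chi-square statistic can be verified directly from the Table 5 counts. The sketch below computes the standard Pearson chi-square statistic of independence for a two-way contingency table (the helper function is ours; the counts are from Table 5):

```python
# Pearson chi-square statistic of independence for a contingency table.
# Rows: analytic and intuitive auditors; columns: best on the workpaper
# review task, best on the analytical review task, equal between tasks.
def chi_square(table: list[list[int]]) -> float:
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

counts = [[18, 5, 3], [1, 6, 1]]  # Table 5 classification counts
print(round(chi_square(counts), 1))  # 9.4, matching the reported value
```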
Cognitive style was positively and significantly associated with months of experience: auditors with greater tenure were more likely to have an analytic cognitive style (p < .05). Thus, it is possible that experience is confounded with cognitive style. Additional ex post analyses were performed to explore this issue. Both styles had relatively similar experience ranges (15 to 67 months for the analytic auditors and 14 to 60 months for the intuitive auditors). When experience (months employed) was added to the model as a covariate, the overall results did not change from our main analysis. Three additional analyses were conducted: (1) excluding the most experienced analytic auditors so that the range for both styles ended at 60 months; (2) excluding the most experienced analytic auditors so that the median for each style was similar; and (3) a matched-pair analysis in which each intuitive auditor was paired with an analytic auditor having the same months of experience. Each of these analyses was consistent with our original analysis in that a significant interaction occurred. Thus, we do not believe that experience represents a plausible alternative explanation for our results.
DISCUSSION

The results from the study generally supported the hypothesis. A significant cognitive style by task type interaction effect was found. Additionally, as predicted, analytic (intuitive) cognitive style auditors performed better on the workpaper review (analytical review) task. These results provide support for Chan's (1996) cognitive misfit theory.
The results of our study contribute to our overall understanding of auditor task performance by demonstrating a systematic relationship between a stable personality trait and auditor task performance. These results suggest that task performance models and tests that exclude cognitive style and/or a within-subjects design are incomplete. Had only one task been used, our conclusion may have differed (e.g., only one cognitive style would have appeared to be associated with superior performance).
Our results raise important questions regarding the training and staffing of auditors. Recall that Myers (1980) contends that individuals use their preferred method of acquiring and processing data unless explicitly instructed not to. Thus, to what extent can auditors be trained to recognize the task characteristics (analytic or intuitive) of the problem at hand and the type of processing that is needed for optimal performance on that task? It may be possible to improve effectiveness and efficiency by matching, to the extent practicable, auditors to the tasks that best suit their cognitive style, and by reviewing the work of an auditor who performs a task that is not aligned with his/her cognitive style.
Note that our results do not imply that auditors are incapable of adaptation. Instead, they indicate lower performance and not performance inability. Presumably, task performance occurs, in part, through the application of certain cognitive activities or processes (Bonner and Pennington 1991). We speculate that cognitive style may constrain or limit an auditor's cognitive activities (processes) such that auditors are less effective when performing tasks that do not fit their cognitive style. Further research could consider the role of other personality traits and their effects on auditor task performance as well as explore the relationships among cognitive style, cognitive activities, and performance.
As in all research, this study has limitations. The relatively small sample size represents an important limitation. However, we believe that the advantages of a within-subjects design outweigh the shortcomings of a small sample size. In addition, the intuitive task involved fraud, which is a rare event. Whether the results would extend to a more common event is a question for future research. Finally, the absence of incentives and environmental features that audit seniors normally face represents an additional limitation.
TABLE 1
Task Characteristic Mean Ratings Manipulation Check

Task Characteristics (a,b) (n = 44)                            Mean (standard deviation)
1. Which case uses an analytic (e.g., a sequential or
   mechanical) approach toward successful task completion?     4.7 * (2.8)
2. Which case uses a global (e.g., holistic) approach
   toward successful task completion?                          7.7 * (2.4)
3. In which case is it more beneficial to view all
   pertinent task information sequentially in order to
   successfully complete the task?                             4.3 * (2.7)
4. In which case is it more beneficial to view all
   pertinent task information simultaneously in order to
   successfully complete the task?                             7.7 * (2.7)
5. Which case is more objective?                               4.8 ** (3.1)
6. Which case is more subjective?                              7.0 (3.3)

*, ** Significantly different from 6, the midpoint, at p < .001 and p < .05, respectively.
(a) An 11-point scale was used to measure the responses, where 1 = the workpaper review (analytic) task and 11 = the analytical review (intuitive) task. The midpoint represented no difference between the two tasks on the attribute in question.
(b) No significant differences were found in the above subject ratings when age, gender, and experience (months) were added as covariates in the analysis.

TABLE 2
Task Feature Mean Ratings Manipulation Check

Task means (standard deviation):

                                               Workpaper Review   Analytical Review
Question                                       (Analytic) Task    (Intuitive) Task    Difference
(scale endpoints shown in parentheses)         (n = 41)           (n = 44)            (n = 41) (a)
1. How difficult did you find the case?
   (1 = not difficult, 11 = extremely
   difficult)                                  5.0 (2.1)          6.6 (2.0)           1.6 * (0.4)
2. How realistic did you find the case?
   (1 = not realistic, 11 = extremely
   realistic)                                  6.5 (2.1)          6.3 (2.5)           0.2 (0.5)
3. Does an audit senior possess the knowledge
   to successfully complete this task?
   (1 = definitely has the knowledge,
   11 = definitely does not)                   2.7 (1.6)          4.6 (2.3)           1.9 * (0.4)
4. Does an audit senior possess the ability
   to successfully complete this task?
   (1 = definitely has the ability,
   11 = definitely does not)                   2.2 (1.0)          4.1 (2.4)           2.1 * (0.4)
5. Do you believe that it is reasonable for
   an audit senior to be assigned this task?
   (1 = highly reasonable, 11 = highly
   unreasonable)                               2.0 (1.2)          4.4 (2.6)           2.4 * (0.4)

* Significant at p < .001.
(a) Sample sizes vary due to missing data. A paired-sample t-test was conducted on each difference score to determine whether it was statistically different from zero.
TABLE 3
Descriptive Statistics and Graph of Task Performance by Cognitive Style

Panel A: Descriptive Statistics of Task Performance

Task performance means (standard deviation):

                      Workpaper          Analytical
                      Review (a)         Review (b)         Difference
Cognitive Style       (Analytic Task)    (Intuitive Task)   Score (c)
Analytic (n = 26)      .269 (.20)         .122 (.20)         .147 (.06)
Intuitive (n = 8)      .188 (.15)         .292 (.21)        -.120 (.06)

Panel B: Graph of Mean Task Performance
[GRAPHIC OMITTED]

(a) The dependent variable is a percentage representing the number of errors found divided by 8.
(b) The dependent variable is a percentage representing the number of misstated accounts identified divided by 6.
(c) The difference score is found by subtracting performance on the analytical review task from performance on the workpaper review task.

TABLE 4
Cognitive Misfit Repeated Measures ANOVA Results

                          Sum of     Mean
Source of Variation   df  Squares    Squares   F      Significance
Between Subjects
  Style                1    .0238      .0238    .59       .45
  Error               32   1.2987      .0406
Within Subjects
  Task                 1    .0057      .0057    .17       .68
  Task x Style         1    .1936      .1936   5.70       .02
  Error               32   1.0872      .0340
Total                 67   2.6090

The dependent variables were measured as percentages, where (# of errors found)/8 was the measure for the workpaper review task and (# of misstated accounts identified)/6 for the analytical review task.

TABLE 5
Cognitive Misfit Classification Chi-Square Results

                            Task Performance
                  Best on the     Best on the     Equal
                  Workpaper       Analytical      between
Cognitive Style   Review Task     Review Task     Tasks     Total
Analytic              18               5             3        26
Intuitive              1               6             1         8
Total                 19              11             4        34

Chi-square = 9.4, p < .01

The dependent variable in the above table is a classification of the number of auditors who performed higher on one task type than on the other. This classification was based on a percentage performance measure, where (# of errors found)/8 was the measure for the workpaper review task and (# of misstated accounts identified)/6 for the analytical review task.
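The Table 5 statistic can be recovered from the reported cell counts alone. The following plain-Python sketch (not from the paper) computes the Pearson chi-square for the 2 x 3 classification, with no continuity correction and df = 2:

```python
# Reproducing the Table 5 chi-square statistic from the reported counts.
observed = [
    [18, 5, 3],  # Analytic style: best on workpaper / best on analytical / equal
    [1, 6, 1],   # Intuitive style
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square: sum of (observed - expected)^2 / expected,
# where expected cell counts are (row total * column total) / n.
chi_square = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(3)
)
print(f"chi-square = {chi_square:.1f}")  # -> chi-square = 9.4
```

With df = 2, a statistic of 9.4 corresponds to p of roughly .009, consistent with the reported p < .01.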
The data upon which this paper is based may be obtained from the first author upon request. Accepted by Susan Haka.
(1) Cognitive abilities relate to knowledge encoding and retrieval, whereas cognitive strategies focus on ongoing, multidirectional interactions between an individual's cognitive style, ability, and the task environment (Ho and Rodgers 1993).
(2) However, differential domain-specific knowledge could be associated with an auditor's cognitive style if the auditor gravitates to, and performs only, those tasks that match his or her cognitive style.
(3) "Packaging" task characteristics refers to a subset of the 11 characteristics, not all 11 (Hammond et al. 1987).
(4) One expert reversed the task type from our assessment on the two questions regarding cues.
(5) Abdolmohammadi's (1999) research provides a database of complexity and knowledge ratings for 332 audit tasks. Complexity ratings are on a nine-point scale, and knowledge (experience level) ratings use a five-point scale where one is an audit assistant and five is a partner. Mean complexity (knowledge) ratings for "workpaper review for aggregation of errors" and "post-financial analytical review" are 5.6 (3.41) and 4.8 (3.0), respectively.
(6) The performance outcome scale for each dependent variable is different; e.g., the analytic task has outcomes of 12.5 percent, 25 percent, 37.5 percent, 50 percent, 62.5 percent, 75 percent, 87.5 percent, or 100 percent, whereas the intuitive task has fewer outcomes: 16.67 percent, 33.33 percent, 50 percent, 66.67 percent, 83.33 percent, or 100 percent. This difference may affect the interpretation of the results. To equalize the performance outcome scales between the tasks, two dependent measures of the analytic task were dropped from the analysis (every combination of pairs was dropped). The results of these analyses were similar to or better than the analysis presented in the paper.
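The attainable outcome values in the note above follow directly from the two scoring denominators; a quick illustrative check:

```python
# Attainable performance outcomes per task, per note 6: the analytic task
# is scored out of 8 errors, the intuitive task out of 6 accounts.
analytic = [round(k / 8 * 100, 2) for k in range(1, 9)]
intuitive = [round(k / 6 * 100, 2) for k in range(1, 7)]
print(analytic)   # -> [12.5, 25.0, 37.5, 50.0, 62.5, 75.0, 87.5, 100.0]
print(intuitive)  # -> [16.67, 33.33, 50.0, 66.67, 83.33, 100.0]
```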
(7) The amount of time spent on the analytic (intuitive) task for an analytic (intuitive) cognitive style auditor was not significantly different from an intuitive (analytic) cognitive style auditor ([T.sub.A] = 69.4; [T.sub.I] = 72.9; p < .58; [T'.sub.A] = 58.0; [T'.sub.I] = 70.0; p < .37, respectively).
(8) Twenty-three (22) subjects completed the tasks with the analytic (intuitive) task performed first and the intuitive (analytic) task second. Task order was insignificant in all tests; therefore, it is excluded from the reported statistics.
(9) Cognitive style was associated with months of experience such that auditors with more tenure were more likely to have an analytic cognitive style (p < .05).
(10) A general linear model procedure controls for unequal cell size. Neither task order nor firm reached significance; therefore, both were excluded from the reported analysis.
(11) An arcsine transformation on dependent variables reported as percentages is used to correct any problems associated with heterogeneity of variance (Libby 1985). When the transformation was applied in our data, the results did not change. Therefore, the results reported use the raw data.
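The transformation referenced in note 11 is the standard variance-stabilizing arcsine square-root transform for proportions. A minimal sketch (the scores shown are the attainable workpaper review proportions, not the study's data):

```python
# Arcsine square-root transformation for proportion data, commonly used
# to stabilize variance before ANOVA on percentage-scored outcomes.
import math

def arcsine_transform(p: float) -> float:
    """Map a proportion p in [0, 1] to 2 * arcsin(sqrt(p))."""
    return 2 * math.asin(math.sqrt(p))

# Example: the possible workpaper review scores, (# errors found)/8.
scores = [k / 8 for k in range(9)]
transformed = [arcsine_transform(p) for p in scores]
```

The transform stretches proportions near 0 and 1, where their sampling variance is smallest, which is why it addresses heterogeneity of variance.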
(12) Using the difference score between the workpaper and analytical review task performance as the dependent measure and cognitive style as the independent variable showed cognitive style is significant (ANOVA result: F = 6.34, p < .02).
Abdolmohammadi, M. J., and A. Wright. 1987. An examination of the effects of experience and task complexity on audit judgments. The Accounting Review (January): 1-13.
--. 1999. A comprehensive taxonomy of audit task structure, professional rank and decision aids for behavioral research. Behavioral Research in Accounting: 51-92.
Bonner, S. E. 1990. Experience effects in auditing: The role of task-specific knowledge. The Accounting Review (January): 72-92.
--, and B. Lewis. 1990. Determinants of auditor expertise. Journal of Accounting Research (Supplement): 1-20.
--, and N. Pennington. 1991. Cognitive processes and knowledge as determinants of auditor expertise. Journal of Accounting Literature: 1-50.
--. 1994. A model of the effects of audit task complexity. Accounting, Organizations and Society 19: 213-234.
Carlyn, M. 1977. An assessment of the Myers-Briggs Type Indicator. Journal of Personality Assessment: 461-473.
Casey, C. J., Jr. 1980. The usefulness of accounting ratios for subjects' prediction of corporate failure: Replication and extensions. Journal of Accounting Research (Autumn): 603-613.
Chan, D. 1996. Cognitive misfit of problem-solving style at work: A facet of person-organization fit. Organizational Behavior and Human Decision Processes 68 (December): 194-207.
Corman, L. S., and R. G. Platt. 1988. Correlations among the group embedded figures test, The Myers-Briggs Type Indicator and demographic characteristics: A business school study. Perceptual and Motor Skills: 507-511.
Davis, D. L. 1982. Are some cognitive types better decision makers than others? An empirical investigation. Human Systems Management 3: 165-172.
Hammond, K. R., F. M. Harem, J. Grassia, and T. Pearson. 1987. Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on Systems, Man, & Cybernetics 17 (September/October): 753-770.
Henderson, J. C., and P. C. Nutt. 1980. The influence of decision style on decision making behavior. Management Science 26 (April): 371-386.
Ho, J. L., and W. Rodgers. 1993. A review of accounting research on cognitive characteristics. Journal of Accounting Literature: 101-130.
Hunt, R. G., F. J. Krzystofiak, J. R. Meindl, and A. M. Yousry. 1989. Cognitive style and decision making. Organizational Behavior and Human Decision Processes 44: 436-453.
Ingram, R. W., W. D. Samson, and G. F. Klersey. 1995. When the numbers seem too good to be true: The case of American Computer Electronics, Inc. Journal of Accounting Case Research.
Jacoby, P. F. 1981. Psychological types and career success in the accounting profession. Research in Psychological Type 4: 24-37.
Jung, C. G.  1971. Psychological Types. Reprint, Princeton, NJ: Princeton University Press.
Keen, P. G., and G. S. Bronsema. 1981. Cognitive style research: A perspective for investigation. Proceedings of the Second International Conference on Information Systems.
Kirton, M. J. 1987. Kirton Adaption-Innovation Inventory (KAI) Manual. 2nd edition. Hatfield, U.K.: Occupational Research Centre.
Kreiser, L., J. M. McKeon, Jr., and A. Post. 1990. A personality profile of CPAs in public practice. The Ohio CPA Journal (Winter): 29-34.
Libby, R. 1985. Availability and the generation of hypotheses in analytical review. Journal of Accounting Research (Autumn): 648-662.
--, and J. Luft. 1993. Determinants of judgment performance in accounting settings: Ability, knowledge, motivation, and environment. Accounting, Organizations and Society: 425-450.
--, and H. Tan. 1994. Modeling the determinants of auditor expertise. Accounting, Organizations and Society: 701-716.
Moeckel, C. 1990. The effect of experience on auditors' memory errors. Journal of Accounting Research (Autumn): 368-387.
Myers, I. B. 1977. Myers-Briggs Type Indicator. Palo Alto, CA: Consulting Psychologists Press, Inc.
--. 1980. Gifts Differing. Palo Alto, CA: Consulting Psychologists Press, Inc.
--, and M. H. McCaulley. 1985. Manual: A Guide to the Development and Use of the Myers-Briggs Type Indicator. Palo Alto, CA: Consulting Psychologists Press.
Ott, R. L., M. H. Mann, and C. T. Moores. 1990. An empirical investigation into the interactive effects of student personality traits and method of instruction (lecture or CAI) on student performance in elementary accounting. Journal of Accounting Education 8: 17-35.
Otte, P. J. 1983. Psychological typology of the local firm Certified Public Accountant. Doctoral dissertation, Western Michigan University.
Pincus, K. V. 1990. Auditor individual differences and fairness of presentation judgments. Auditing: A Journal of Practice & Theory 9 (Fall): 150-166.
Rodgers, W., and T. J. Housel. 1987. The effects of information and cognitive processes on decision making. Accounting and Business Research 18: 67-74.
Ruble, T. L., and R. A. Cosier. 1990. Effects of cognitive styles and decision setting on performance. Organizational Behavior and Human Decision Processes 46: 283-295.
Schloemer, P. G., and M. S. Schloemer. 1997. The personality types and preferences of CPA firm professionals: An analysis of changes in the profession. Accounting Horizons 11 (December): 24-39.
Witkin, H. A., P. K. Oltman, E. Raskin, and S. A. Karp. 1971. A Manual for the Embedded Figures Tests. Palo Alto, CA: Consulting Psychologists Press.
Zmud, R. W. 1979. Individual differences and MIS success: A review of the empirical literature. Management Science 25: 966-979.
Lori R. Fuller
Steven E. Kaplan
Arizona State University
Behavioral Research in Accounting, January 1, 2004.