
A Research Note Concerning Practical Problem-Solving Ability as a Predictor of Performance in Auditing Tasks.

ABSTRACT

The ability to recognize when there is a variety of solutions to a particular situation has been shown to be important to success in the accounting profession (Baril et al. 1998). Recently, a measure of ability has been developed in psychology that focuses on "practical" problem-solving ability (PPSA) (Devolder 1993). From a theoretical standpoint, relatively little is known about the association between ability and performance in accounting tasks. Thus, the purpose of this study is to investigate if PPSA predicts performance on two important auditing tasks: internal-control evaluation and analytical procedures. Participants in this study (66 auditors and 78 accounting students) assessed vignettes of real-world financial problems and provided solutions to these problems. Participants also completed an analytical-procedures task and an internal-control-evaluation task. The results suggest that PPSA was useful in predicting the performance of both accounting students and experienced auditors on both analytical procedures and internal-control evaluation. This is the first accounting study to examine PPSA. Practically, the results suggest it may be important to attract students with high PPSA into the accounting profession.

INTRODUCTION

The ability to solve problems is critical to a variety of disciplines, including accounting (AECC 1990; Baril et al. 1998). However, relatively few accounting studies have examined the effect of ability on task performance, and there is no widely accepted method of measuring ability in the accounting literature (Davidson 1998). The measure of ability used in this study is based on recent research in psychology that focuses on "practical" problem-solving ability (PPSA). People use PPSA to solve problems they encounter in their everyday lives (Devolder 1993). [1] Since there may not be one perfect solution for every problem, higher PPSA scores are given to solutions that reflect an awareness of the many possible causes of a problem and a corresponding set of potential solutions (Camp et al. 1989). For this reason, PPSA may be a relevant and reliable predictor of performance in professional tasks, including certain accounting tasks, where a variety of potential solutions exists. However, previous research in accounting has not examined PPSA as a predictor of performance in accounting tasks.

The purpose of this study is to investigate if PPSA predicts performance on two important auditing tasks: internal-control evaluation and analytical procedures. To investigate this issue, participants in this study (66 auditors and 78 accounting students) assessed vignettes of real-world financial problems and were asked to provide solutions to these problems. Participants also completed an analytical-procedures task and an internal-control-evaluation task. The results suggest that PPSA predicted performance on both auditing tasks.

LITERATURE REVIEW AND HYPOTHESIS DEVELOPMENT

Practical Problem-Solving Ability

The PPSA scoring scheme uses a structured approach to analyzing an individual's ability to solve complex problems. The scoring scheme for measuring PPSA includes whether the individual realizes the problem(s), suggests a solution(s), makes effective use of available resources, avoids future negative consequences, references relevant information, formulates how to carry out the action required, and provides a solution(s) that is specific and complete. [2]

Similarly, interviews with accounting practitioners suggest that the ability to identify problems, recognize when additional information is needed, and plan ahead are important to success in the accounting profession (Baril et al. 1998). In addition, accountants and auditors must be aware of future negative consequences (such as potential litigation), reference relevant information (i.e., sources of GAAP or GAAS), make effective use of available resources (firm materials, decision aids, accounting software, etc.), and so on. Therefore, PPSA may be a relevant and reliable predictor of performance in certain accounting and auditing tasks where a variety of potential solutions exists. [3]

Problem-Solving Ability Research in Auditing

Libby (1995) suggests that there is a wide variety of mental abilities that may contribute to audit problem solving. [4] However, with the exception of Bonner and Lewis (1990), relatively few accounting studies have attempted to examine whether ability predicts performance on auditing tasks. [5] Bonner and Lewis (1990) used questions taken from the 1987 Graduate Record Exam (GRE) to measure ability (see also Bonner et al. 1992; Bonner and Walker 1994; Libby and Tan 1994). They found that ability, as measured by GRE questions, was a significant predictor of performance on their ratio-analysis task, in which auditors were asked to determine a single accounting error that could account for all of the unexpected changes in the ratios.

The Bonner and Lewis (1990) ratio-analysis task is similar to other tasks used in analytical procedures research in which auditors were asked to explain unexpected fluctuations in financial ratios and account balances (Libby 1985; Bedard and Biggs 1991). Analytical procedures are increasingly important for both planning the audit and testing account balances. Auditors use analytical procedures to determine if the client's unaudited account balances are different from the amounts that could be expected based on their past financial performance, their industry, forecasts, etc. These procedures are very important to audit effectiveness and efficiency (Hirst and Koonce 1996).

Previous research in auditing suggests that generating multiple solutions may be critical to auditors' performance with analytical procedures, as well as other auditing tasks (Heiman 1990; Bedard and Biggs 1991; Bierstaker et al. 1999). Auditing research has generally focused on the link between experience and analytical procedures performance (Bonner 1990; Ashton 1991). Libby and Frederick (1990), for example, found that more experienced auditors generated a greater quantity of accurate explanatory hypotheses than less experienced auditors. To be consistent with prior research, a measure of audit experience is included in the hypothesis-testing models. [6] However, with the exception of Bonner and Lewis (1990), relatively few studies have examined the relationship between ability and analytical procedures performance. Since practical problem-solving ability is measured based on the number of effective solutions generated by the problem solver, it may enhance our understanding of auditors' productive use of analytical procedures beyond previous accounting research, which has focused mainly on experience. Thus, the following hypothesis is proposed:

H1: Practical problem-solving ability will be significantly positively correlated with performance in analytical procedures.

Bonner and Lewis (1990) also investigated the relationship between ability and evaluation of internal-control weaknesses. Evaluation of internal-control weaknesses is important for both error detection and fraud detection. In addition, internal-control evaluation may be an important means of improving audit efficiency (Messier et al. 1997; Smith et al. 1998). Bonner and Lewis (1990) did not find that ability was a significant predictor of performance on their internal-control task; however, they did find that audit experience had a significant effect on internal-control-evaluation performance.

Although Bonner and Lewis (1990) did not find a significant relationship between ability and internal-control-evaluation performance, the task used in that research gave participants a specific internal-control weakness and asked them to list two errors that could occur and go undetected, as well as two substantive procedures that would be useful for detecting those errors. Thus, their task was relatively structured (Libby and Tan 1994). The internal-control-evaluation task in the current study is relatively more complex and unstructured, since it involves the analysis of detailed information regarding the accounting system for the revenue cycle of a hypothetical client. This accounting system contains a number of important internal-control weaknesses (see "Methods" and the Appendix). A common finding of research on ability in accounting is that higher levels of problem-solving ability improve performance on relatively complex and unstructured accounting tasks, but do not improve performance on relatively structured accounting tasks (Amernic and Beechy 1984; Davidson and Jones 1998; Phillips 1998). Therefore, while ability was not significantly correlated with performance in a relatively structured internal-control-evaluation task from previous research, it may predict performance in the relatively less structured internal-control-evaluation task used in the current study. The second hypothesis is as follows:

H2: Practical problem-solving ability will be significantly positively correlated with internal-control-evaluation performance.

METHODS

Participants

Participants consisted of 78 students who had substantially completed an auditing course and 66 auditors with approximately two years of experience from a single Big 5 auditing firm. Auditors were drawn from a single firm to avoid differences in training or audit approach that may exist between firms. Students had 1.6 months (standard deviation 0.8 months) of audit experience on average compared with 22.6 months (standard deviation 8.6 months) for auditors. In addition, 80 percent of the auditors who participated in this study had between 18 and 26 months of experience.

Procedure

Participants received a booklet containing an introduction, general instructions, the tasks to be completed, answer forms, and a questionnaire. The introduction informed participants that responses would be kept confidential and stressed the importance of working independently. Also, one of the authors was present during the administration of the experiment to answer any questions and ensure that all participants worked independently.

Tasks

Auditing Tasks

All participants completed an internal-control-evaluation task and an analytical-procedures task. All case materials for both tasks, including background information on hypothetical audit clients, were reviewed by experienced auditors prior to the administration of the experiment to ensure that case information was realistic. The internal-control-evaluation task asked participants to review a narrative description of accounting procedures used in the revenue cycle and list the internal control weaknesses. The case was designed to include a variety of important weaknesses, as discussed below. The analytical-procedures task asked participants to examine selected account balances and financial ratios, and list the accounting errors that were most likely to be the cause of discrepancies between reliable projections and the client's unaudited figures. A debt misclassification error was the cause of the discrepancies. Debt misclassification was chosen because it is a commonly occurring error (Coakley and Loebbecke 1985).

Practical Problem-Solving Ability Tasks

Prior to performing these tasks, participants responded to a financial problem-solving scenario used in Devolder (1993). Following the audit tasks, participants responded to a second financial problem-solving scenario from Devolder (1993) and then completed a questionnaire designed to gather data on their experience with the problem-solving scenarios and their relevant work experience. [7] One of the problem-solving scenarios describes a situation in which it is necessary to find a new mechanic to repair your car, which is no longer under warranty. The other scenario describes a situation in which you are unhappy with the preparation of your tax return by a paid tax preparer because the amount of deductions seems too low. Thus, participants always received one problem-solving scenario, followed by the two audit tasks, and then the second problem-solving scenario.

The presentation of the problem-solving scenarios and audit tasks was counterbalanced to avoid order effects, and statistical tests for order effects were not significant (p > 0.10). Participants were also asked to indicate the amount of time spent on each task to examine if duration of effort affected performance. [8] The amount of time spent was not significantly correlated with task performance (p > 0.10). [9]

Dependent Variables

Two dependent variables are used in this study. The dependent variable for analytical-procedures performance (AP) was assigned a value of 0 (incorrect) or 1 (correct), based on whether participants listed either the seeded error (debt misclassification) or another error that would account for all of the discrepancies in the ratios and account balances. [10] The dependent variable for internal-control-evaluation performance (ICP) is based on the number of control weaknesses a participant correctly evaluated. A Delphi panel of three audit managers was used to evaluate the weaknesses contained in the case. [11] To measure ICP, participants' lists of control weaknesses were compared to the list developed by the Delphi panel (see the Appendix).
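For illustration, the following Python sketch shows one way the scoring rules described above could be expressed. It is illustrative only: the function names and the keyword-matching rule are our assumptions, and in the study itself AP and ICP were coded by the researchers' judgment against the seeded error and the Delphi panel's list, not by automated string matching.

SEEDED_ERROR = "debt misclassification"

def score_ap(listed_errors):
    # AP = 1 if the seeded error (or an equivalent explanation that accounts
    # for all of the discrepancies) is credited, 0 otherwise.
    return int(any(SEEDED_ERROR in error.lower() for error in listed_errors))

def score_icp(listed_weaknesses, delphi_weaknesses):
    # ICP = number of listed weaknesses that match the Delphi panel's list.
    return sum(1 for w in listed_weaknesses if w.lower() in delphi_weaknesses)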

Independent Variables

There are two independent variables used in this study. The first independent variable, experience (EXP), was assigned a value of 0 for students and 1 for auditors. [12] As a supplemental analysis, months of audit experience is also examined in a model with auditors only. The second independent variable, practical problem-solving ability (PPSA), was measured using a seven-point scale. This scale was chosen because it has been validated in the psychology literature (Devolder 1993). The scale provides one point for each of the following:

1. Is the problem or problems realized by the subject?

2. Does the solution remove the immediate problem?

3. Does the solution make effective use of the subject's resources (i.e., the easiest or least wasteful solution)?

4. Does the solution avoid future negative consequences?

5. If the subject does not know what has to be done, does he/she know of an effective resource that should be contacted and is not (e.g., IRS for a complex tax problem)?

6. Does the subject indicate that he/she would carry out the action cited as required?

7. Is the solution that is given specific and complete?

Participants' written responses to the two PPSA scenarios were independently scored by three individuals: the two authors, plus an independent coder. Agreement was greater than 90 percent. All disagreements were reconciled between the coders. To be consistent with Devolder (1993), the sum of the scores on the two scenarios (0 to 14) was used as an independent variable (PPSA) to predict performance in the auditing tasks. [13] The Cronbach's alpha for this variable is 0.5961. In addition, the scores on the two scenarios used in this study were highly correlated (r = 0.81), and the results of a matched-pair t-test indicate that PPSA scores are not significantly different across the two scenarios for each participant (t = 0.01; p = 0.995).
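As an illustration of the scoring computations reported above (the 0-14 total, Cronbach's alpha, the inter-scenario correlation, and the matched-pair t-test), the following Python sketch shows how these statistics could be computed from the two scenario scores. The DataFrame and its column names ("scenario1", "scenario2") are hypothetical; this is not the authors' original analysis code.

import pandas as pd
from scipy import stats

def ppsa_summary(scores: pd.DataFrame) -> dict:
    s1, s2 = scores["scenario1"], scores["scenario2"]  # each scored 0 to 7
    total = s1 + s2  # PPSA independent variable, range 0 to 14

    # Cronbach's alpha for a two-item scale:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total).
    k = 2
    alpha = k / (k - 1) * (1 - (s1.var() + s2.var()) / total.var())

    # Inter-scenario correlation and matched-pair t-test
    # (the paper reports r = 0.81 and t = 0.01, p = 0.995).
    r = s1.corr(s2)
    t_stat, p_value = stats.ttest_rel(s1, s2)

    return {"PPSA": total, "alpha": alpha, "r": r, "t": t_stat, "p": p_value}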

RESULTS

Analytical Procedures

The first hypothesis stated that practical problem-solving ability would be positively correlated with analytical-procedures performance. As shown on Panel A of Table 1, the mean PPSA score was 9.11 and the median PPSA score was 9. PPSA was entered into the regression models as a continuous variable. However, for descriptive purposes participants were partitioned into high (10-13), medium (9), and low (0-8) PPSA groups. Approximately 34.7 percent of the participants in the high PPSA group identified an error that could account for all of the discrepancies in the financial information, compared to 18.6 percent in the medium PPSA group and 5.7 percent of the participants in the low PPSA group. The results of a logit regression model are displayed on Panel A of Table 2. As predicted in H1, PPSA is significant (p < 0.01), suggesting that participants with higher problem-solving ability outperformed participants with lower problem-solving ability on the analytical-procedures task.

As shown on Panel B of Table 1, 33.3 percent of auditors identified the correct error as compared to 10.3 percent of students. As shown on Panel A of Table 2, the main effect of EXP is significant (p < 0.01), indicating that auditors outperformed students on the analytical-procedures task. The interaction between PPSA and EXP is not significant. [14]

In addition, as a supplemental analysis, a separate model was run with only auditors, and PPSA was significant (p < 0.01). Auditors' months of experience, and the interaction between months of experience and PPSA, were not significant. [15] Thus, for auditors with a similar level of experience, PPSA appears to be an important predictor of analytical-procedures performance.
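The logit specification reported in Panel A of Table 2 can be sketched in Python with statsmodels as follows. The DataFrame layout (columns "AP", "PPSA", and "EXP") is a hypothetical reconstruction, and standardizing PPSA before forming the interaction follows the approach described in note 14; this is an illustration rather than the authors' original code.

import statsmodels.formula.api as smf

def fit_ap_logit(df):
    # Standardize PPSA so the interaction term does not induce multicollinearity.
    df = df.assign(zPPSA=(df["PPSA"] - df["PPSA"].mean()) / df["PPSA"].std())
    # AP = f(PPSA, EXP), with the PPSA*EXP interaction, estimated by logit.
    return smf.logit("AP ~ zPPSA + EXP + zPPSA:EXP", data=df).fit()

# Example usage (df holds the participant-level data):
# print(fit_ap_logit(df).summary())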

Internal Control Evaluation

The second hypothesis stated that practical problem-solving ability would be positively correlated with internal-control-evaluation performance. Panel A of Table 1 shows that the mean ICP score for the high PPSA group was 2.796, compared to 2.464 for the medium PPSA group and 1.941 for the low PPSA group. [16] The results of the multiple regression model are shown on Panel B of Table 2. [17] The main effect of PPSA is significant (p < 0.01), indicating that participants with higher ability outperformed participants with lower ability on the internal-control-evaluation task. Therefore, H2 is supported.

Panel B of Table 1 shows that the mean ICP score of auditors was 2.969 compared to 2.000 for students. As shown on Panel B of Table 2, the main effect of EXP is significant (p < 0.01), indicating that auditors outperformed students on internal-control evaluation. The interaction between PPSA and EXP is not significant.

In addition, a separate model was run with only auditors, and PPSA was significant (p < 0.05), suggesting that auditors with higher ability outperformed auditors with lower ability on internal-control evaluation. Auditors' months of experience, and the interaction between months of experience and PPSA, were not significant.
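Analogously, the multiple regression reported in Panel B of Table 2 can be sketched with ordinary least squares, again under a hypothetical DataFrame layout (with "ICP" as the count of correctly evaluated weaknesses). The residual check mirrors the diagnostics mentioned in note 17; the code is illustrative, not the authors' original analysis.

import statsmodels.formula.api as smf

def fit_icp_ols(df):
    # Same standardization of PPSA as in the logit sketch above.
    df = df.assign(zPPSA=(df["PPSA"] - df["PPSA"].mean()) / df["PPSA"].std())
    # ICP = f(PPSA, EXP), with the PPSA*EXP interaction, estimated by OLS.
    return smf.ols("ICP ~ zPPSA + EXP + zPPSA:EXP", data=df).fit()

# Basic diagnostics along the lines of note 17: inspect residuals against
# fitted values for independence and randomness.
# model = fit_icp_ols(df)
# print(model.summary())
# residuals, fitted = model.resid, model.fittedvalues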

SUMMARY AND CONCLUSIONS

This study has several important findings. First, participants' practical problem-solving ability was significantly correlated with performance on the analytical-procedures task, consistent with H1. Second, participants' practical problem-solving ability was significantly correlated with performance on the internal-control-evaluation task, supporting H2. In addition, auditors outperformed students on both tasks. Therefore, based on the results of this research, both experience and ability appear to be significant factors that predict performance on two important auditing tasks. The implications of these results are discussed below.

The results of this study have important implications for accounting research. Relatively little is known about the effect of problem-solving ability on performance in accounting tasks. Since the results of this study suggest PPSA predicts performance in auditing tasks, the PPSA scale may be a valuable addition to the set of tools accounting researchers can use to measure ability.

In addition, the relationship between ability, decision process, and performance in accounting tasks has not been investigated. Future research could employ methods such as think-aloud verbal protocols (Bedard and Biggs 1991) or information-search monitoring (Rosman et al. 1999), along with the PPSA measure, to examine the relationship between ability, decision process, and performance in accounting contexts.

Finally, many auditing tasks are performed by groups of auditors rather than individuals. Therefore, future research could explore the relationship between problem-solving ability, its underlying components, and task performance when auditors either work individually or in groups.

Unlike previous research (Bonner and Lewis 1990), this study found that ability, as measured by PPSA, predicts internal-control-evaluation performance. One explanation for the significant relationship between ability and internal-control-evaluation performance found in this study, in contrast to prior research, is that this study used a measure of ability (PPSA) that has not been used in previous accounting research. However, since the effect of ability on performance may diminish as task complexity decreases, another explanation for the conflicting findings is that differences may exist in the structure of the internal-control-evaluation task used in this study as compared to internal-control-evaluation tasks used in prior research. Therefore, future research is needed to examine the relationship among ability, performance, and task structure.

Another limitation of this study is that the measure of ability may have inadvertently captured participants' knowledge. To alleviate this concern, it may be important for future research employing the PPSA measure to also incorporate measures of domain-specific knowledge. Consistent with previous analytical-procedures research, this study included measures of auditors' experience in the hypothesis-testing models. However, a limitation of comparing students with auditors is that there may be a selection bias, since not all participating students will be hired by Big 5 firms. Finally, differences may have existed among participants' motivation. Although the amount of time spent on the auditing tasks was not correlated with performance, a limitation of the time-spent measure is that it cannot capture participants' intensity of effort (Libby and Lipe 1992).

We thank Don Kent, Ed O'Donnell, Arnie Wright, Jay Thibodeau, Priscilla Burnaby, John Barrick, Roger Gibson, Jean Bedard, and three anonymous reviewers from the 1998 ABO Conference for their helpful comments and suggestions. We would also like to thank the participants of the 1997 AAA Annual Meeting, the 1998 Northeast Regional Meeting, the 1998 ABO Conference, and the 1999 International Symposium on Audit Research, as well as the auditors and students who participated in this study.

(1.) In Devolder (1993), the terms "performance" and "ability" are used interchangeably. However, consistent with previous accounting research, the current study uses the term "performance" only in reference to accounting/auditing tasks and not the tasks used to measure ability.

(2.) Many of the components of practical problem-solving ability are similar to analytical abilities (Sternberg 1996).

(3.) There is a variety of potential solutions to both auditing tasks used in this study (see the "Methods" Section).

(4.) Sternberg and Kaufman (1998) also suggest there is a wide variety of abilities that contribute to problem solving in general.

(5.) In previous auditing research, Libby and Luft (1993, 428) define ability as the "capacity to complete information encoding, retrieval, and analysis tasks."

(6.) Libby and Luft (1993) and Libby (1995) also suggest that experience and knowledge are important factors associated with performance. Although knowledge is not explicitly measured in this study, a dichotomous variable named EXP (0 = student; 1 = auditor) is included in the hypothesis-testing models, and auditors' months of experience is examined.

(7.) The correlation between the experience variable, EXP (0 = student, 1 = auditor), and PPSA is 0.186, which clearly indicates the absence of multicollinearity. Moreover, auditors' months of experience was not significantly correlated with PPSA. In addition, experience with the problem-solving scenarios was not significantly correlated with PPSA, and was not statistically significant when included in any of the regression equations.

(8.) Some participants received $5.00 each, or competed for a cash prize of $100.00. No significant differences were found based on this manipulation for the amount of time expended or task performance, except that the students who competed for $100.00 outperformed students who did not have the opportunity to compete for the cash prize on internal-control evaluation (p < 0.05).

(9.) However, a limitation of the time spent measure is that it cannot capture participants' intensity of effort (Libby and Lipe 1992).

(10.) The error caused discrepancies in the current and quick ratios, but did not affect current assets or net income.

(11.) The Delphi panel of audit managers reviewed a list of all possible weaknesses evaluated by participants. They rated each weakness based on its relative importance. Consensus was reached after three rounds.

(12.) A limitation of the EXP variable is that only a subset of the students who participated in this study are likely to be hired by Big 5 firms. Thus, there may be a selection bias when comparing auditors and students.

(13.) PPSA is also significant for both auditing tasks (p < 0.01) when average scores are used.

(14.) Variables were standardized to avoid multicollinearity with the interaction term (Aiken and West 1990). Variance inflation factors were computed to test for multicollinearity (Neter et al. 1985). No evidence of multicollinearity was found among the standardized independent variables or the interaction term.
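The following is a minimal sketch of the variance-inflation-factor check described in this note, assuming the standardized predictors and the interaction term are held in a pandas DataFrame with hypothetical column names (e.g., "zPPSA", "EXP", "zPPSA_x_EXP"); it relies on statsmodels' variance_inflation_factor and is illustrative only.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    X = sm.add_constant(predictors)  # include an intercept term
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs, name="VIF")  # values near 1 indicate little multicollinearity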

(15.) Experience may not have been significant in the auditors-only model because auditors had a fairly narrow range of experience (i.e., 18 to 26 months).

(16.) While mean scores may appear relatively small, recall that the maximum score achievable is based on the pooled responses of a group of audit managers working as a Delphi panel. Therefore, no one auditor could be reasonably expected to perform as well as a group of experts. In addition, auditors' performance on the tasks used in this study is comparable to the results of previous research that used similar tasks (Bedard and Biggs 1991; Bierstaker 1999).

(17.) Basic regression diagnostics were performed. Results revealed that residuals were independently and randomly distributed, suggesting no threshold effects, no step effects, and no increasing or diminishing returns.

REFERENCES

Accounting Education Change Commission (AECC). 1990. Objectives of education for accountants: Position Statement Number One. Issues in Accounting Education (Fall): 307-312.

Aiken, L. S., and S. G. West. 1990. Multiple Regression: Testing and Interpreting Interactions. Newbury Park, CA: Sage.

Amernic, J., and T. Beechy. 1984. Accounting students' performance and cognitive complexity: Some empirical evidence. The Accounting Review 59 (April): 300-313.

Ashton, A. 1991. Experience and error frequency knowledge as potential determinants of audit expertise. The Accounting Review (April): 218-239.

Baril, C., B. Cunningham, D. Fordham, R. Gardner, and S. Wolcott. 1998. Critical thinking in the public accounting profession: Aptitudes and attitudes. Journal of Accounting Education 16: 381-406.

Bedard, J., and S. Biggs. 1991. Processes of pattern recognition and hypothesis generation in analytical review. The Accounting Review 66 (July): 622-642.

Bierstaker, J., J. Bedard, and S. Biggs. 1999. The effect of problem representation shifts on auditor performance in analytical procedures. Auditing: A Journal of Practice & Theory (Spring): 18-36.

-----. 1999. A test of the split attention effect in a professional context. Journal of Business and Behavioral Sciences 6: 177-189.

Bonner, S. E. 1990. Experience effects in auditing: The role of task-specific knowledge. The Accounting Review (January): 72-92.

-----, and B. L. Lewis. 1990. Determinants of auditor expertise. Journal of Accounting Research 28 (Supplement): 1-20.

-----, J. Davis, and B. Jackson. 1992. Expertise in corporate tax planning: The issue identification stage. Journal of Accounting Research (Supplement): 1-28.

-----, and P. Walker. 1994. The effects of instruction and experience on the acquisition of auditing knowledge. The Accounting Review: 157-178.

Camp, C. J., K. Doherty, S. Moody-Thomas, and N. W. Denney. 1989. Practical problem solving in adults: A comparison of problem types and scoring methods. In Everyday Problem Solving: Theory and Applications, edited by J. D. Sinnott, 211-228. New York, NY: Praeger.

Coakley, J. R., and J. K. Loebbecke. 1985. The expectation of accounting errors in medium-sized manufacturing firms. Advances in Accounting 2: 199-245.

Davidson, R. A. 1998. The ability to solve unstructured problems. Working paper, Arizona State University West.

-----, and S. H. Jones. 1998. A comparison of three linguistic measures of problem-solving ability. Working paper, Arizona State University West.

Devolder, P. A. 1993. A scale of practical problem-solving performance. Experimental Aging Research: 129-146.

Heiman, V. 1990. Auditors' assessments of the likelihood of error explanations in analytical review. The Accounting Review (October): 875-890.

Hirst, D. E., and L. Koonce. 1996. Audit analytical procedures: A field investigation. Contemporary Accounting Research (Fall): 457-486.

Libby, R. 1985. Availability and the generation of hypotheses in analytical review. Journal of Accounting Research (Autumn): 648-667.

-----, and D. Frederick. 1990. Experience and the ability to explain audit findings. Journal of Accounting Research (Autumn): 348-367.

-----, and M. G. Lipe. 1992. Incentive effects and the cognitive processes involved in accounting judgments. Journal of Accounting Research (Autumn): 249-273.

-----, and J. Luft. 1993. Determinants of judgment performance in accounting settings: Ability, knowledge, motivation, and environment. Accounting, Organizations and Society: 425-450.

-----, and H.-T. Tan. 1994. Modeling the determinants of audit expertise. Accounting, Organizations and Society: 701-716.

-----. 1995. The role of knowledge and memory in audit judgment. In Judgment and Decision-Making Research in Accounting and Auditing, edited by R. H. Ashton and A. H. Ashton. New York, NY: Cambridge University Press.

Messier, W. F., S. J. Kachelmeier, and K. Jensen. 1997. An experimental assessment of recent professional developments in nonstatistical audit sampling guidance. Working paper, University of Florida.

Neter, J., W. Wasserman, and M. H. Kutner. 1985. Applied Linear Statistical Models: Regression, Analysis of Variance, and Experimental Designs. Homewood, IL: Richard D. Irwin.

Phillips, F. 1998. Accounting students' beliefs about knowledge: Associating performance with underlying belief dimensions. Issues in Accounting Education 13 (February): 113-125.

Rosman, A. J., I. Seol, and S. F. Biggs. 1999. The effect of stage development and financial health on auditor decision behavior in the going-concern task. Auditing: A Journal of Practice & Theory (Spring): 37-54.

Smith, R., S. L. Tiras, and S. Vichitlekarn. 1998. The interaction between internal control assessment and substantive testing in audits for fraud. Working paper, University of Oregon.

Sternberg, R. J. 1996. Successful Intelligence. New York, NY: Simon & Schuster.

-----, and J. C. Kaufman. 1998. Human abilities. Annual Review of Psychology 49: 479-502.
TABLE 1
Descriptive Data for Practical Problem-Solving Ability and Experience

Panel A: Practical Problem-Solving Ability (n = 144)

Variable   Mean   Standard Dev.   Median   Maximum   Minimum
PPSA       9.11       1.65           9        13        3

                      High PPSA (10-13)  Medium PPSA (9)  Low PPSA (0-8)
Variable                   (n = 49)          (n = 59)        (n = 35)
AP (Correct)             17 (34.7%)        11 (18.6%)        2 (5.7%)
AP (Incorrect)           32 (65.3%)        48 (81.4%)       33 (94.3%)
ICP Mean
(Standard deviation)   2.796 (1.369)     2.464 (1.348)    1.941 (1.229)

Panel B: Experience (n = 144)

                         Auditor           Student
Variable                (n = 66)          (n = 78)
Low PPSA (0-8)          12 (34.3%)        23 (65.7%)
Medium PPSA (9)         29 (49.2%)        30 (50.8%)
High PPSA (10-13)       24 (49.0%)        25 (51.0%)
AP (Correct)            22 (33.3%)         8 (10.3%)
AP (Incorrect)          44 (66.7%)        70 (89.7%)
ICP Mean
(Standard deviation)   2.969 (1.299)     2.000 (1.250)


AP = analytical-procedures performance, measured by correct identification of the seeded error (0 or 1);

PPSA = practical problem-solving ability, measured by performance on two financial problem-solving tasks (range = 0 to 14);

EXP = experience (0 = student, 1 = auditor); and

ICP = internal-control-evaluation performance, measured by the number of weaknesses evaluated (range = 0 to 7).
TABLE 2
Practical Problem-Solving Ability, Experience, and Performance

Panel A: Analytical Procedures Task

Logit Model: AP = f(PPSA, EXP)
Chi-squared: 24.01; p-value: 0.0001

Variable     Coefficient   Chi-Square   p-Value
PPSA            0.8428       7.3997      0.003
EXP             0.6278       6.2818      0.006
PPSA*EXP        0.2991       0.4571      0.499

Panel B: Internal Controls Task

Regression Model: ICP = f(PPSA, EXP)
F-test: 8.12; p-value: 0.0001; R-squared: 0.195; Adjusted R-squared: 0.171

Variable     Coefficient   t-test   p-Value
PPSA            0.368       3.259    0.0007
EXP             0.422       3.933    0.0001
PPSA*EXP        0.053       0.462    0.6447


AP = analytical-procedures performance, measured by correct identification of the seeded error (0 or 1);

ICP = internal-control-evaluation performance, measured by the number of weaknesses evaluated;

PPSA = practical problem-solving ability, measured by performance on two financial problem-solving tasks (range = 0 to 14); and

EXP = experience (0 = student, 1 = auditor).

APPENDIX

Delphi Panel Results

Internal Control Weaknesses

1. Checks and remittance advice are not separated/Remittance advice created by the mail clerk.

2. The credit function should not be in the accounting department/Accounting department manager is also the credit manager.

3. Customer remittances should not come to accounting department.

4. Accounts receivable clerk is overworked and has incompatible duties.

5. There are two separate cash receipts flows.

6. The "cashier" is in the sales department and performs a credit-granting rather than a cashier's function.

7. Inventory is relieved the day after the sale rather than the same day/Improper relief of inventory.

8. No review of bank reconciliation.

9. No indication of aging A/R.

10. Batch totals for daily sales summary should be compared to bank deposit and ledger totals.

11. Checks are not stamped by the mail clerk when the mail is opened.

12. No pre-listing of checks is prepared.

13. Sales clerk receives cash and prepares invoice.

14. No approval process for credit sales over a certain $ amount.

15. No master price list.

16. A shipping report should be generated and compared to the invoice.

17. Credit is approved after the invoice has been prepared.

18. No procedures to ensure that all invoices have been accounted for.