A test of changes in auditors' fraud-related planning judgments since the issuance of SAS No. 82.

INTRODUCTION

Prior to issuing SAS No. 82, the AICPA noted that some auditors failed to take responsibility to detect financial statement fraud (hereafter, fraud). (1) A primary objective of issuing SAS No. 82 (AICPA 1997) was to sensitize auditors to their responsibility to detect fraud (Mancino 1997). This study compares auditors' sensitivity to fraud risk factors as reflected in their audit-planning decisions before and after the issuance of SAS No. 82. We use Zimbelman's (1997) pre-SAS No. 82 data as our benchmark and gather new post-SAS No. 82 data as a basis for comparison. We believe that, while not without inherent limitations, this longitudinal approach provides useful insights for audit research, policy, and practice.

SAS No. 82's goal to clarify auditors' responsibility to detect fraud is no small task, considering academics and standard-setters have pursued this goal for many years (see, e.g., Mautz and Sharaf 1961, 113). For example, SAS No. 82 was released less than a decade after SAS No. 53 (AICPA 1988) was issued to clarify auditors' responsibilities to detect irregularities (Guy and Sullivan 1988); only five years later, SAS No. 99 was released (AICPA 2002). The study described in this paper is one of four research projects sponsored by the Auditing Standards Board to assist in the Board's deliberations in forming SAS No. 99. (2)

In addition to clarifying the level of responsibility for detecting fraud, SAS No. 82 provides specific indicators of elevated fraud risk and guidance to auditors on how to modify their audit plans in response to fraud risk. It also requires auditors to explicitly assess fraud risk. Our results suggest that auditors' planning decisions exhibit greater sensitivity to fraud risk since the passage of SAS No. 82. We find that post-SAS No. 82 auditors are more aware of the need to modify audit plans and are more likely to increase the extent of their audit tests in response to increased fraud risk than pre-SAS No. 82 auditors. These measures suggest that SAS No. 82 has had an impact in practice. However, we do not find evidence (pre- or post-SAS No. 82) that auditors modify the nature of their planned tests in response to fraud risk. Our finding that the nature of planned tests does not change in response to fraud risk is consistent with recent research that similarly reports no association between fraud risk assessments and audit program effectiveness (Asare and Wright 2002). Given the importance of modifying the nature of audit plans in order to effectively detect fraud (e.g., see AICPA 1997; Albrecht et al. 2001; Erickson et al. 2000; Shibano 1990), this result suggests that future audit policy and training should focus on helping auditors effectively modify the nature of audit procedures to increase the likelihood of fraud detection.

BACKGROUND: THE PRE-SAS NO. 82 STUDY

Zimbelman (1997) (hereafter Z97) argued that SAS No. 82's explicit requirement to separately assess fraud risk would raise auditors' sensitivity to fraud cues by raising the auditors' awareness of the risk factors. (3) Z97 used a 2 x 2 between-subjects design with two levels of fraud risk (low, high) and two types of risk assessment (combined, separate). The version of the case with "high" fraud risk included positive signals of fraud contained in the draft version of SAS No. 82. The version with "low" fraud risk contained an innocuous level of the factors. The combined risk assessment condition was consistent with SAS No. 53's requirement to make one assessment of the likelihood of misstatement due to intentional and unintentional causes. The separate risk assessment condition required auditors to make two risk assessments--one due to intentional causes (i.e., fraud) and one due to unintentional causes--and thereby attempted to proxy for SAS No. 82's requirement for a separate fraud risk assessment. In response to elevated fraud risk, Z97 expected auditors in the separate condition to increase their budgeted hours (i.e., extent of tests) and plan audit tests that they perceived would be more likely to detect fraud (i.e., nature of tests) relative to auditors in the combined condition. To test these predictions, Z97 analyzed the effects of fraud risk (low versus high) and risk assessment method (separate versus combined) to determine main and interaction effects.

Z97 reports that, across risk conditions, auditors in the separate fraud risk assessment condition budgeted more hours than auditors in the combined condition. However, the predicted interaction between fraud risk and risk assessment method for budgeted hours (i.e., extent) was not supported using a between-subjects test and was only marginally supported using a within-subjects test. Even though he employed various metrics to measure changes in the nature of audit tests across his low- and high-risk conditions, Z97 did not find significant main or interaction effects, suggesting that neither the risk-assessment condition nor the level of fraud risk influenced the nature of pre-SAS No. 82 auditors' planned audit procedures. In summary, Z97's results suggested that the separate fraud-risk assessment required by SAS No. 82 would likely increase total budgeted hours, but the study provided little evidence that SAS No. 82 would be effective at improving the nature or extent of audit plans for detecting fraud.

We believe that Z97's mixed results could be due to one or more of the following reasons: (1) SAS No. 82's requirement for an explicit fraud risk assessment did impact auditors' planning decisions, but because Z97 was conducted prior to the issuance of SAS No. 82, Z97 was unable to investigate the full impact of the standard on auditor behavior (via training, policy manuals, implementation experience, etc.); (2) SAS No. 82's requirement for an explicit fraud risk assessment does not impact auditors' planning decisions; or (3) the experimental instrument is ineffective at detecting and measuring changes in audit plans due to SAS No. 82.

The primary motivation for this study is to test whether changes in the audit environment related to the release of SAS No. 82 have impacted auditors' planning decisions given varying degrees of fraud risk. Because Z97 was conducted before firms changed their audit policies, training, and working practices in response to SAS No. 82, that study was unable to gauge the effects of the actual issuance of the standard and whether it successfully improved auditors' planning judgments in response to increased fraud risk. By replicating Z97, we are able to compare pre- and post-SAS No. 82 planning judgments to investigate the impact of changes in the audit environment that have occurred subsequent to the issuance of SAS No. 82 (e.g., see Shelton et al. 2001).

The second potential explanation (i.e., that SAS No. 82's requirement for an explicit fraud risk assessment had no effect) is also plausible. While SAS No. 82 attempted to clarify auditors' responsibilities to detect fraud, and provided relevant guidance in that regard, there is no guarantee that the standard actually and appropriately influenced auditors' planning decisions in response to fraud risk. Prior studies offer mixed evidence regarding auditors' tendency to modify the nature and extent of testing in response to increased audit risk (see Bedard et al. [1999] for a review). Furthermore, as noted above, a recent experimental study by Asare and Wright (2002) finds that audit program effectiveness did not improve with increased fraud risk.

The third potential explanation for Z97's mixed results relates to possible limitations in the experimental instrument. Since we use the same instrument as Z97, our study may be subject to this same caveat. However, if we had changed the instrument, we would not have been able to compare pre- and post-SAS No. 82 judgments using Z97 as a baseline. Further, this potential explanation becomes less likely to the extent this study provides evidence that auditors' fraud-related planning judgments have changed since the issuance of SAS No. 82. However, where we find no effects, our results are subject to the limitations of the Z97 instrument.

HYPOTHESES

Detecting fraud is a particularly challenging task for auditors because: (1) fraud is intentional and efforts are made to conceal it from the auditor, (2) fraud base rates are very low (Nieschwietz et al. [2000] estimated the fraud base rate at less than 0.5 percent), and (3) even the best fraud prediction models result in a high proportion of false positives (see Nieschwietz et al. 2000). The lack of accurate prediction models highlights the difficulty of creating a standard that actually influences auditors' planning judgments. The fact that SAS No. 82's intent is to "clarify" and not increase fraud-detection responsibility or require detailed new fraud-related audit procedures further limits the standard's potential measurable impact. In other words, prior to SAS No. 82, firms' audit policies and training already included much of the guidance on fraud detection provided in SAS No. 53. (4) Thus, it is not clear a priori that SAS No. 82 will necessarily have an effect on auditors' judgments.

Despite the difficulty of fraud detection and the potential limitations of SAS No. 82, we believe the release of SAS No. 82, with its related training and policy implications, has heightened auditors' overall sensitivity to fraud risk cues and affected their fraud-related judgments. The low base rate of fraud means that most auditors have no experiential learning in fraud detection (Libby 1995; Bonner and Walker 1994; Herz and Schultz 1999). Consequently, most auditors' fraud-detection knowledge and expertise comes from training and audit policy. Formal training programs, as well as formal review feedback, can be expected to increase auditors' sensitivity to fraud-risk factors. Recent studies show that while audit firms took different approaches to implementing SAS No. 82, they all incorporated SAS No. 82's requirements into their audit approach and training (see Shelton et al. 2001). Training and policy based on SAS No. 82 can be expected to provide improved fraud-detection guidance compared to training based on SAS No. 53 because SAS No. 82 provides more detailed and explicit guidance than does SAS No. 53. Furthermore, the high cost to firms when fraud is detected after the release of a clean audit opinion increases the importance of auditors' understanding and complying with new fraud-related guidance.

Based on the foregoing discussion, we predict that the issuance of SAS No. 82 and related events (e.g., new policy, training, working practices) have resulted in measurable improvements in auditors' post-SAS No. 82 sensitivity to fraud risk and their fraud-related planning decisions. The following hypotheses, stated in the alternative form, formally express our expectations:

H1: Post-SAS No. 82 perceptions of the need to modify audit plans to detect fraud will be more positively related to fraud risk cues than pre-SAS No. 82 judgments.

H2: Post-SAS No. 82 judgments about planned audit hours (i.e., extent) will be more positively related to fraud risk cues than pre-SAS No. 82 judgments.

H3: Post-SAS No. 82 judgments about the effectiveness of planned audit tests at detecting fraud (i.e., nature) will be more positively related to fraud risk cues than pre-SAS No. 82 judgments. (5)

METHOD

Our research design is a 2 x 2 between-subjects experiment where the first independent variable is the data collection period (pre- or post-SAS No. 82) and the second independent variable is the level of fraud risk (low or high). Data collection for our post-SAS No. 82 study involved carefully replicating all essential features of Z97 to the extent possible. The fundamental difference between our study and Z97 is that the Z97 data were obtained two years before implementation of SAS No. 82 (i.e., Fall 1995), while our data were collected two years after implementation (i.e., Fall 1999). Z97 provides a detailed description of data collection procedures. Our procedures duplicate Z97's methods to the extent possible, including:

* The same participating firms

* Identical participant identification and data collection procedures

* Identical experimental instrument (6)

* Comparable participant audit experience (averages, 1995 = 5.5 years, 1999 = 4.8 years)

* Similar numbers of participants (1995 = 108, 1999 = 91)

In Table 1, we show the assignment of auditors to the four experimental conditions. The software randomly assigned participants to experimental condition. In Panel A, we show this assignment for all participants; Panel B (C) shows the assignment of participants from firm A (B). Because this assignment process resulted in differences in cell sizes, we statistically control for all firm effects including two- and three-way interactions. Because one of our independent variables is a time-period variable (pre- and post-SAS No. 82), this is a longitudinal study. Accordingly, we acknowledge that this design is inherently limited in its ability to conclusively attribute causality to the issuance of SAS No. 82. As described in the next section, we statistically control for this limitation to the extent possible.

RESULTS

Our hypotheses predict differences between pre- and post-SAS No. 82 judgments in response to fraud risk. For each hypothesis, we examine descriptive statistics, perform t-tests comparing differences due to fraud risk before and after SAS No. 82, and run an analysis of covariance (ANCOVA) to examine the interaction between manipulated fraud risk and time period (pre- and post-SAS No. 82). We recognize that numerous environmental changes have occurred since SAS No. 82 was issued. We attempt to control for variation that is not of interest to our study by using control variables available in both the pre- and post-SAS No. 82 data and by transforming the dependent variables. We transform the dependent variables to control for main effects in auditors' judgments since the issuance of SAS No. 82 (e.g., changes in audit approach) and to control for effects due to Z97's risk assessment manipulation. We convert each dependent variable into a Z-score based on membership in one of four groups that correspond to the combination of time period (pre- and post-SAS No. 82) and risk assessment method studied in Z97. The transformation is Z(x_i) = (x_i - mu_j) / sigma_j, where x is the dependent variable, x_i is the observation of x reported by auditor i, and mu_j and sigma_j are, respectively, the mean and standard deviation for group j (i.e., one of the four groups described previously). In Tables 2-4, we report descriptive statistics for our dependent measures before and after the transformation.
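The group-wise standardization described above is straightforward to express in code. The sketch below is a minimal illustration, not the authors' actual analysis code; the function and variable names are our own.

```python
import numpy as np

def z_transform(values, groups):
    """Standardize each observation within its own group:
    z = (x_i - mean_j) / std_j, where j is the group containing x_i.
    Here a 'group' would be one of the four cells formed by crossing
    time period (pre/post) with risk assessment method."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    z = np.empty_like(values)
    for g in np.unique(groups):
        mask = groups == g
        z[mask] = (values[mask] - values[mask].mean()) / values[mask].std(ddof=1)
    return z

# Example: two groups; each group is standardized separately.
z = z_transform([1.0, 2.0, 3.0, 10.0, 20.0],
                ["pre", "pre", "pre", "post", "post"])
```

After the transformation, each group has mean 0 and unit variance, which removes period-level main effects (e.g., an overall shift in budgeted hours) before interactions are tested.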

Control variables used in the ANCOVA include FIRM, a two-level categorical variable based on the two firms participating in the study, and EXPERIENCE, a continuous variable that is the self-reported number of audits performed by a participant. (7) EXPERIENCE is entered as a covariate, while FIRM is entered as an independent variable. We control for FIRM because variation exists in how the firms implemented SAS No. 82 (Shelton et al. 2001). We control for experience because there is evidence that fraud judgments are affected by practitioner experience (Knapp and Knapp 2001). We use the same ANCOVA model (with different dependent measures) to test all three hypotheses.
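The interaction test at the heart of this design can be sketched as an extra-sum-of-squares F-test: fit the full model (covariate, main effects, and all interactions) and a reduced model that drops only the SAS 82 x RISK term, then compare residual sums of squares. This is our own numpy-only illustration on simulated data, not the authors' ANCOVA code; all variable names and effect sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
experience = rng.normal(5, 2, n)      # covariate: self-reported audits (simulated)
sas82 = rng.integers(0, 2, n)         # 0 = pre-SAS No. 82, 1 = post
risk = rng.integers(0, 2, n)          # 0 = low fraud risk, 1 = high
firm = rng.integers(0, 2, n)          # firm A / firm B
# Simulated response with a built-in SAS 82 x RISK interaction.
y = 0.5 * risk + 0.4 * sas82 * risk + 0.05 * experience + rng.normal(0, 1, n)

def sse(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

ones = np.ones(n)
full = np.column_stack([ones, experience, sas82, risk, firm,
                        sas82 * risk, sas82 * firm, risk * firm,
                        sas82 * risk * firm])
# Reduced model drops only the SAS 82 x RISK column (index 5).
reduced = np.delete(full, 5, axis=1)
sse_full, sse_reduced = sse(full, y), sse(reduced, y)
df_resid = n - full.shape[1]
F = (sse_reduced - sse_full) / (sse_full / df_resid)  # 1 numerator df
```

A large F for the dropped term corresponds to a significant SAS 82 x RISK interaction, the pattern hypothesized in H1-H3.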

Hypothesis 1

To determine each participant's perception of the need to revise audit plans in response to the presence of fraud cues, we asked the following question:
   In comparison with a typical audit client, to what extent would you
   modify your audit plan for this client to detect intentional
   misstatement? (8)


Responses to this question are labeled "FDR" (fraud detection response) and ranged from -5 (decrease ability to detect fraud) to +5 (increase ability to detect fraud). FDR encompasses changes to the nature, timing, and extent of testing aimed at detecting fraud (i.e., the participant's overall planned audit response). Results appear in Table 2.

Results of t-tests show that differences between fraud-risk conditions are highly significant for both pre- and post-SAS No. 82 groups (pre: t = 3.63, p < .001; post: t = 5.18, p < .001). These results indicate that both groups of participants perceived the need to modify their audit plans to make them more effective when fraud cues were present. However, consistent with H1, the pattern of means shown in Table 2--Panels A and B--indicates that the difference in the perceived need to modify overall audit plans between the low- and high-risk conditions was greater for the post-SAS No. 82 group than for the pre-SAS No. 82 group. (9) ANCOVA results using the Z-transformed data (see Table 2, Panel C) indicate that participants appropriately reacted to RISK (p < .001). More importantly, the SAS 82 x RISK interaction is moderately significant (p = .07, one-tailed), suggesting that the post-SAS No. 82 participants were more sensitive to fraud cues than the pre-SAS No. 82 participants. This result supports H1. (10) The only other significant effect in the analysis is the FIRM x RISK interaction, which is significant at p = .02. Although FDR increased significantly (p = .001) for both firms in response to RISK, this interaction is significant because the F-statistic was roughly three times larger for firm B than for firm A. Exploring these firm differences is beyond the scope of this paper and would violate an agreement made with the participating firms. (11)
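The low-risk versus high-risk comparisons reported above are standard two-sample t-tests. As a hedged illustration (our own code, not the study's; the pooled-variance form is an assumption, since the paper does not state which variant was used):

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic, of the kind used to
    compare low- vs. high-fraud-risk responses within each period."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled variance weights each sample's variance by its degrees of freedom.
    sp2 = (((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
           / (na + nb - 2))
    return (b.mean() - a.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
```

A positive t indicates higher mean response in the high-risk condition, the direction predicted by the hypotheses.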

Hypothesis 2

Hypothesis 2 predicts that post-SAS No. 82 budgeted audit hours (extent of testing) will be more positively related to fraud risk cues than pre-SAS No. 82 budgeted hours. Participants were required to develop a budget of audit hours to audit accounts receivable. Mean results using raw and Z-transformed total budgeted hours are reported in Panels A and B of Table 3, respectively. ANCOVA results using the Z-transformed total hours (see Panel C of Table 3) show that the SAS 82 x RISK interaction is significant (one-tailed p = .05). (12) The pattern of means in Panels A and B suggests that increases in budgeted hours due to increases in fraud risk are greater for post-SAS No. 82 auditors than for pre-SAS No. 82 auditors. In fact, pre-SAS No. 82 auditors did not significantly increase their budgeted hours in response to increased fraud risk (t = 0.12, p > .90), while post-SAS No. 82 auditors did (t = 1.74; one-tailed p = .04). (13) These results provide support for H2, suggesting that post-SAS No. 82 auditors' decisions to vary the extent of testing for accounts receivable are more positively related to fraud risk than decisions of pre-SAS No. 82 auditors. (14)

Hypothesis 3

Hypothesis 3 predicts that post-SAS No. 82 planned audit procedures (nature of testing) will be more positively related to fraud risk cues than pre-SAS No. 82 procedures. The dependent variable for testing H3 is APS (audit program strength), a measure of the perceived effectiveness of a participant's audit plan at detecting fraud in accounts receivable. This measure is identical to APS as reported in Z97 and incorporates auditors' planning decisions regarding their planned mix of ten standard audit procedures for accounts receivable. APS is affected by participants' perceptions of the effectiveness of the ten procedures at detecting fraud and by the emphasis planned for each procedure. Thus, APS for a given participant increases (decreases) as that participant plans to emphasize procedures that they perceive are more (less) effective at detecting fraud. Specifically, the APS metric computed for each auditor j is:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where FER (fraud effectiveness rating) is the auditor's evaluation of each procedure according to its "effectiveness at detecting intentional misstatements," Weight represents participants' rankings of their "most important" audit procedures, i = 1 to 5 indexes auditor j's top five "most important" audit procedures for this client, and FER_avg,j is auditor j's average FER over the ten procedures. (15)

Results reported in Panel A of Table 4 indicate a significant main effect for pre- versus post-SAS No. 82 (pre 1734.15, post 1963.04, p < .001), suggesting that post-SAS No. 82 auditors perceive their planned mix of tests to be more effective at detecting fraud than did pre-SAS No. 82 auditors. We performed additional analyses to determine whether this main effect represents an actual increase in effectiveness or simply a change in perception over time. We surveyed fraud experts to obtain their effectiveness rankings for the ten procedures available to auditors in our study, and we evaluated our pre- and post-SAS No. 82 auditors' planned mix of tests relative to the procedures deemed most effective by these experts. When APS is evaluated against the experts' rankings, the main effect is no longer significant, indicating that the apparent overall improvement in the nature of audit tests selected since SAS No. 82 is merely a change in the participating auditors' perceptions, one not supported by our experts' perceptions. (16) The expert surveys also show that pre- and post-SAS No. 82 auditors selected the same top three procedures and ranked them in the same order, evidence that the procedures used by the auditors have not changed as much as their perceptions have. In sum, the increase in perceived effectiveness is not due to an improved set of planned tests but, rather, to higher fraud effectiveness ratings for the entire set of procedures. This change in perceived effectiveness may be due to changes in auditors' perceptions of their ability to detect fraud in response to their increased sense of responsibility for fraud since the issuance of SAS No. 82.
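The published APS expression is not reproducible in this extraction, but its stated ingredients (the FER ratings, the top-five importance weights, and FER_avg,j) admit a simple weighted-sum illustration. The sketch below is our own hypothetical rendering of how such a score can be computed; the actual Z97 formula may differ, and the weighting scheme shown is an assumption.

```python
def aps_sketch(fer, top5_idx, weights):
    """Hypothetical audit-program-strength score: a weighted sum of the
    fraud-effectiveness ratings (FER) of an auditor's five 'most
    important' procedures. NOT the published APS formula; this only
    illustrates how emphasizing high-FER procedures raises the score.
    fer      : FER ratings for all ten procedures
    top5_idx : indices of this auditor's top five procedures
    weights  : importance weights for those five (rank-based)"""
    fer_avg = sum(fer) / len(fer)  # auditor's average FER over the ten
    score = sum(w * fer[i] for w, i in zip(weights, top5_idx))
    return score, fer_avg

# Ten procedures rated 1..10; the auditor's top five are the highest
# rated, weighted 5 (most important) down to 1.
score, fer_avg = aps_sketch(list(range(1, 11)),
                            [9, 8, 7, 6, 5], [5, 4, 3, 2, 1])
```

Under any such construction, the score rises when the auditor shifts emphasis toward procedures rated more effective at detecting fraud, which is the property the hypothesis test relies on.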

In testing H3, we control for this perceived main effect and focus on a possible SAS 82 x RISK interaction by analyzing the Z-transformed APS data. Our ANCOVA shows that the SAS 82 x RISK interaction is not significant (p = .90); thus, H3 is not supported. We performed numerous sensitivity tests similar to those reported in Z97. (17) These sensitivity tests reveal no qualitative differences in results. Although the means for APS appear to decrease as risk increases, the ANCOVA reveals that this decrease is not significant (two-tailed p > .62). (18)

We recognize that the lack of support for H3 could be due to the restricted number of procedures included in the experimental instrument. However, in pre-testing, Z97 asked experts to suggest useful procedures, and these were incorporated into his instrument and ours. Further, this concern is mitigated by the fact that participating subjects, as well as fraud experts, were able to rank the individual procedures according to their ability to detect fraud. Thus, the lack of results is not due to a lack of perceived variability in the effectiveness of the ten procedures at detecting fraud. As an illustration, Z97's Table 5 shows that there is significant variability in participants' perceptions of the individual tests' effectiveness at detecting fraud in this portfolio of procedures; similar results are found in the post-SAS No. 82 data. Based on the relatively strong consensus on the top fraud-detection procedures, the fact that Z97 had experts validate the set of procedures used, and the variability in the fraud effectiveness ratings of the ten procedures (see Z97, Table 5), it appears participants had the ability to manipulate detection risk for fraud in response to changes in auditee fraud risk. Thus, the APS measure (using either the participants' or the experts' rankings) should detect meaningful changes to the planned nature of tests in response to increased fraud risk.

Our finding that audit-procedure effectiveness does not increase with an increase in fraud risk is consistent with the findings reported in Asare and Wright (2002), who used an instrument that allowed auditors to add as many procedures as they considered necessary. Collectively, these results are discouraging in light of prior research (e.g., Fellingham and Newman 1985; Shibano 1990) and SAS No. 82, which suggest that the nature (and not just the extent) of audit plans generally needs to change in response to fraud risk in order to be effective at detecting fraud.

Limitations

Several factors condition our findings. First, regarding H1 and H2, we have significant firm effects that may suggest that the results for H1 and H2 are due to changes in one firm but not the other. Due to a confidentiality agreement, we cannot report results by firm. However, we note that the differences for both firms are in the predicted direction and the power of our tests is reduced by the smaller sample sizes when analyzing the data separately for each firm. Furthermore, even if our results were indicative of changes in some firms but not others, we believe they would be valuable evidence of changes in the profession since the issuance of SAS No. 82.

A second limitation with respect to H1 and H2 is that this study is unable to attribute the cause of our support for these hypotheses specifically and conclusively to SAS No. 82. We acknowledge that many variables other than the adoption of SAS No. 82 changed during the four years that separate the two periods in our study. We attempt to control for these variables to the extent possible by transforming our dependent measures, controlling for audit firm and auditor experience, and focusing on interactions rather than main effects. Ideally, a longitudinal study would measure changes across very short time periods, allowing for control of factors other than the one being tested (e.g., Dopuch et al. 1986). However, other longitudinal studies have used relatively long time periods, similar to ours (e.g., Chewning et al. 1989). It is not uncommon for long time periods to involve changes other than the change of interest (see Cook and Campbell 1979, Chapter 5). In our case, at least three potentially significant events took place during the data-collection period. First, the Private Securities Litigation Reform Act of 1995 was enacted (for a discussion of how the Act relates to fraud detection, see King and Schwartz [1997]). Second, firms revised their audit approaches in ways other than to reflect the issuance of SAS No. 82 (e.g., Bell et al. 1997; Lemon et al. 2000). Third, while numerous frauds have brought negative publicity to the audit profession for decades, several large frauds occurred between 1995 and 1999, which may have heightened our participants' sensitivity to the potential for fraud. We acknowledge that our statistical controls cannot completely account for the possible effects of these events. While we are unable to attribute the causality of our findings to SAS No. 82, we believe that measuring changes in auditors' sensitivity to fraud risk is still valuable.
For example, audit policy can benefit from understanding current auditor behavior before modifications are made. Research can similarly benefit from descriptions of current practice.

Regarding H3, it is possible that auditors in an actual audit setting may change the nature of their audit plans in ways that our instrument does not capture. Also, it is feasible that the risk situation portrayed and perceived by our participants could be adequately addressed by changes to the extent of testing without changing the nature of tests. Furthermore, some firms use software-based decision aids that link risk factors to audit procedures and our study cannot determine the sensitivity of these aids for responding to fraud risk. Even so, given that our results are consistent with many prior studies that show the link between risk and the nature of tests is suspect, we believe that our findings should warrant some concern.

Finally, other limitations are inherent in this type of research. For example, our instrument is an abstraction containing limited information and one set of risk factors; participants may not invest the same level of effort into case-study decisions that they might put into actual audit decisions; and our participants' planning decisions did not benefit from the input of other audit team members. Together, these limitations suggest that our results may need to be interpreted with caution.

SUMMARY AND CONCLUSIONS

On the basis of evidence from two substantially identical studies, we conclude that changes in auditors' fraud-related planning judgments since the issuance of SAS No. 82 are generally consistent with the intent of the Auditing Standards Board. Two of our three measures suggest that auditors' judgments after SAS No. 82 appear to be more responsive to fraud risk than were auditors' judgments prior to SAS No. 82. First, post-SAS No. 82 participants demonstrated a greater awareness of the need to modify audit plans in response to changes in fraud cues relative to pre-SAS No. 82 participants (H1). Second, post-SAS No. 82 participants increased the extent of their audit plans in response to increased fraud risk, while pre-SAS No. 82 participants did not (H2). However, consistent with other research, our study suggests that auditors do not effectively vary the nature of their audit plans in response to changes in fraud risk (H3). Taken together, these results suggest that while auditors have become more aware of the need to modify audit plans and are more likely to increase the extent of audit plans in response to increased fraud risk since the issuance of SAS No. 82, they still do not modify the nature of their audit plans to make planned procedures more effective at detecting fraud.

That the nature of testing appears not to change is potentially serious. Responses to fraud risk may be ineffective unless the nature of audit tests is appropriately modified. Audit plans must consider the strategic nature of fraud as perpetrators may conceal their acts from traditional audit procedures (Zimbelman and Waller 1999). We encourage audit policy that facilitates strategic reasoning (Wilks and Zimbelman 2002) and believe that future research should help determine how audit policy can facilitate such reasoning and lead auditors to effectively change the nature of their audit plans in response to fraud risk.
TABLE 1
Auditors Assigned to Experimental Conditions (Pre-and Post-SAS No. 82)

Panel A: Both Firms

                  Low Risk   High Risk   Total

Pre-SAS No. 82       55          53       108
Post-SAS No. 82      40          51        91

Total                95         104       199

Panel B: Firm A

                  Low Risk   High Risk   Total

Pre-SAS No. 82       25         36         61
Post-SAS No. 82      25         38         63

Total                50         74        124

Panel C: Firm B

                  Low Risk   High Risk   Total

Pre-SAS No. 82       30         17        47
Post-SAS No. 82      15         13        28

Total                45         30        75

TABLE 2
Perceived Need to Revise Audit Plan

Panel A: Fraud Detection Response (FDR) Descriptive
Statistics: Mean (Standard Deviation)

                      Fraud Risk

                    Low       High      Overall

Pre-SAS No. 82      0.535     1.628      1.071
                   (1.568)   (1.506)    (1.626)
                   n = 55       53        108
Post-SAS No. 82     0.083     1.667      0.970
                   (1.505)   (1.335)    (1.612)
                   n = 40       51         91
Overall             0.344     1.647      1.025
                   (1.550)   (1.418)    (1.616)
                   n = 95      104        199

Panel B: Z-Transformed FDR Descriptive Statistics: Mean
(Standard Deviation)

                      Fraud Risk

                    Low        High     Overall

Pre-SAS No. 82     -0.324     0.336      0.000
                   (0.954)   (0.931)    (0.995)
Post-SAS No. 82    -0.537     0.421      0.000
                   (0.937)   (0.826)    (0.994)
Overall            -0.413     0.377      0.000
                   (0.948)   (0.878)    (0.992)

Panel C: ANCOVA Results Using Z-Transformed FDR
(Fraud Detection Response)

Source of Variance       df       MS      F       p

Covariate
 Experience               1      0.66    0.81    0.37

Main Effects
 SAS 82                   1      0.52    0.64    0.43
 RISK                     1     33.89   41.53   <0.01
 FIRM                     1      1.02    1.25    0.26

Interaction Effects
 SAS 82 x RISK            1      1.76    2.15    0.07 (a)
 SAS 82 x FIRM            1      1.79    2.20    0.14
 RISK x FIRM              1      4.96    6.08    0.02
 SAS 82 x RISK x FIRM     1      0.41    0.50    0.48

Residual                 190     0.82

[R.sup.2] = .205

(a) One-tailed significance level due to directional hypothesis;
all other significance levels are two-tailed.

This table reports results of auditors' perceived need to modify
audit plans in response to increased fraud risk. To determine the
perceived need to modify audit plans, we asked auditors (in both
the pre- and post-SAS No. 82 studies) the following question: In
comparison with a typical audit client, to what extent would you
modify your audit plan for this client to detect intentional
misstatement? We label responses to this question "FDR" (fraud
detection response). The response scale for FDR ranged from -5
(decrease ability to detect fraud) to +5 (increase ability to
detect fraud). The experimental design is a 2 x 2 between-subjects
design where one level is fraud-risk (low and high) and the other
variable is the timing of data collection (pre- or post-SAS No. 82).
The transformation is determined as follows: Z(x) = ([x.sub.i] -
[u.sub.j])/[s.sub.j], where x is the dependent variable, [x.sub.i]
is an observation of x as reported by auditor i, and [u.sub.j] and
[s.sub.j] are the mean and standard deviation for group j.
Responses are categorized into four groups that correspond to the
combination of time period (pre- and post-SAS No. 82) and risk
assessment method studied in Zimbelman (1997).
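The group-wise standardization described in the table notes can be sketched as follows. This is a minimal illustration only; the group labels and response values are invented, not data from the study:

```python
# Group-wise Z-transformation used in Tables 2-4:
# Z(x_i) = (x_i - u_j) / s_j, computed within each group j
# (here, combinations of time period and risk-assessment method).

from statistics import mean, stdev

def z_transform(groups):
    """Standardize each observation relative to its own group's
    mean and (sample) standard deviation."""
    out = {}
    for label, values in groups.items():
        m, s = mean(values), stdev(values)  # group mean and SD
        out[label] = [(v - m) / s for v in values]
    return out

# Hypothetical FDR responses for two illustrative groups
groups = {
    "pre_sas82_holistic": [0.0, 1.0, 2.0, 3.0],
    "post_sas82_holistic": [1.0, 1.0, 2.0, 4.0],
}
z = z_transform(groups)
# After the transformation, each group has mean 0 and SD 1,
# which removes group-level differences (e.g., those induced by
# Z97's risk-assessment-method manipulation) before the ANCOVA.
```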

TABLE 3
Extent of Audit Plans

Panel A: Total Budgeted Hours Descriptive Statistics:
Mean (Standard Deviation)

                      Fraud Risk

                    Low        High     Overall

Pre-SAS No. 82     54.57      53.87      54.22
                  (31.82)    (27.25)    (29.53)
Post-SAS No. 82    36.73      44.73      41.21
                  (16.58)    (26.95)    (23.21)
Overall            47.05      49.39      48.27
                  (27.82)    (27.36)    (27.54)

Panel B: Z-Transformed Total Budgeted Hours Descriptive
Statistics: Mean (Standard Deviation)

                      Fraud Risk

                    Low       High      Overall

Pre-SAS No. 82      0.025    -0.026      0.000
                   (1.099)   (0.885)    (0.995)
Post-SAS No. 82     -0.211    0.166      0.000
                   (0.751)   (1.130)    (0.994)
Overall             -0.074    0.068      0.000
                   (0.970)   (1.012)    (0.992)

Panel C: ANCOVA Results Using Z-Transformed Total Budgeted Hours

Source of Variance        df      MS       F        p

Covariate
Experience                 1     1.77    1.86      0.18
Main Effects
  SAS 82                   1     0.07    0.07      0.79
  RISK                     1     2.71    2.84      0.09
  FIRM                     1     4.36    4.56      0.03
Interaction Effects
  SAS 82 x RISK            1     2.53    2.65      0.05 (a)
  SAS 82 x FIRM            1     1.40    1.46      0.23
  RISK x FIRM              1     0.04    0.04      0.84
  SAS 82 x RISK x FIRM     1     1.56    1.63      0.20
Residual                 190

[R.sup.2] = .070

This table reports results of auditors' budgeted audit hours.
Participants were required to develop a budget to audit
accounts receivable for the hypothetical client described in
the experimental materials. The experimental design is a 2 x 2
between-subjects design where one level is fraud-risk (low and
high) and the other variable is the timing of data collection
(pre- or post-SAS No. 82). The transformation is determined as
follows: Z(x) = ([x.sub.i] - [u.sub.j])/[s.sub.j], where x is the
dependent variable, [x.sub.i] is an observation of x as reported
by auditor i, [u.sub.j] and [s.sub.j] are the mean and standard
deviation for group j. Responses are categorized into four groups
that correspond to the combination of time period (pre- and post-SAS
No. 82) and risk assessment method studied in Zimbelman (1997).

TABLE 4
Nature of Audit Plans

Panel A: Audit Program Strength (APS) Descriptive
Statistics: Mean (Standard Deviation)

                            Fraud Risk

                          Low       High      Overall

Pre-SAS No. 82          1821.38    1643.62    1734.15
                       (2109.35)  (1760.48)  (1938.98)
Post-SAS No. 82         2063.95    1883.90    1963.04
                       (1794.94)  (1772.49)  (1774.71)
Overall                 1923.52    1761.45    1838.82
                       (1976.67)  (1761.92)  (1864.52)

Panel B: Z-Transformed APS Descriptive Statistics:
Mean (Standard Deviation)

                            Fraud Risk

                          Low       High      Overall

Pre-SAS No. 82           0.049     -0.051      0.000
                        (1.056)    (0.935)    (0.995)
Post-SAS No. 82          0.067     -0.053      0.000
                        (1.012)    (0.987)    (0.994)
Overall                  0.057     -0.052      0.000
                        (1.032)    (0.956)    (0.992)

Panel C: ANCOVA Results using Z-Transformed APS

Source of Variance        df         MS          F       p

Covariate
 Experience                1        1.72       1.75     0.19
Main Effects
 SAS 82                    1        0.11       0.11     0.74
 RISK                      1        0.23       0.23     0.63
 FIRM                      1        3.10       3.16     0.08
Interaction Effects
 SAS 82 x RISK             1        0.16       0.16     0.90
 SAS 82 x FIRM             1        1.37       1.39     0.24
 RISK x FIRM               1        0.29       0.29     0.59
 SAS 82 x RISK x FIRM      1        0.25       0.26     0.61
Residual                  190
[R.sup.2] = .044

This table reports results of auditors' "APS" (audit program
strength), a measure of the perceived effectiveness of each
participant's audit plan at detecting fraud in accounts
receivable. The experimental design is a 2 x 2 between-subjects
design where one level is fraud-risk (low and high) and the
other variable is the timing of data collection (pre- or post-
SAS No. 82). The transformation is determined as follows:
Z(x) = ([x.sub.i] - [u.sub.j])/[s.sub.j], where x is the
dependent variable, [x.sub.i] is an observation of x as
reported by auditor i, [u.sub.j] and [s.sub.j] are the mean
and standard deviation for group j. Responses are categorized
into four groups that correspond to the combination of
time period (pre- and post-SAS No. 82) and risk assessment
method studied in Zimbelman (1997).


We thank the editor, two anonymous reviewers, Mark Beasley, Arnie Wright, and participants at the 2000 Annual Meeting of the American Accounting Association for comments. The Auditing Standards Board of the American Institute of Certified Public Accountants (AICPA) sponsored this research. We also thank the AICPA and participating firms for their support.

(1) For example, in January 1994, The CPA Letter explained: The auditor's responsibility to detect fraud was discussed in the May 1993 issue of The CPA Letter, but it is clear some members are still confused about their responsibility ... If a firm's standard engagement letter says something like "our examination is not primarily or specifically designed, and cannot be relied upon, to detect fraud, defalcations, and other irregularities," then it is wrong (AICPA 1994, 5).

(2) See Nieschwietz et al. (2000) for a more thorough discussion of the approach-avoidance that auditors exhibit related to their responsibility for detecting fraud and for a summary of empirical research devoted to auditors' detection of fraud.

(3) Details of the pre-SAS No. 82 study appear in Zimbelman (1997) and are not repeated here.

(4) Like SAS No. 82, SAS No. 53 indicated that the nature of tests need modification in order for fraud to be effectively detected ("audit procedures that are effective for detecting a misstatement that is unintentional may be ineffective for a misstatement that is intentional and is concealed ..." [see AICPA 1988]).

(5) Our H2 and H3 deal with the extent and nature of testing, as do Z97's H2 and H3. Z97's H1 related to the time auditors spent reading red-flag cues. We do not examine reading times in this study because Z97 examined the impact of combined versus decomposed risk assessments, which we do not. Rather, our H1 examines whether there has been a change in auditors' sensitivity to the need to modify audit plans in response to increased fraud risk. This variable was not reported in Z97.

(6) The instrument is identical except that we updated the dates in the financial statements and eliminated an order manipulation. Z97 found that order did not interact with fraud risk or any of the dependent variables of interest in our study.

(7) We also analyzed the data using total time spent as an auditor as the "experience" variable and the results were not qualitatively different from those reported.

(8) This question was collected as part of Zimbelman's dissertation (i.e., Zimbelman 1996). Z97 examines a subset of the data collected in Zimbelman (1996) and responses to this particular question were not reported or analyzed in Z97.

(9) Examining the means in Panel A suggests that the pre-SAS No. 82 group viewed the Low Fraud Risk case as requiring slightly more resources than the typical audit, while the post-SAS No. 82 group viewed the Low Fraud Risk case as being the typical audit. However, the differences between the pre- and post-SAS No. 82 groups are not significant. An independent samples t-test shows that participants' mean FDR for the low-risk case does not differ significantly between the pre- and post-SAS No. 82 groups (t = 1.41; two-tailed p = 0.16); a similar test for Z-FDR resulted in a two-tailed p = 0.28. Similar tests show that participants' mean responses for the high-risk case also do not differ significantly between the pre- and post-SAS No. 82 groups for either FDR or Z-FDR (two-tailed p-values of 0.62 and 0.89, respectively). Thus, we cannot attribute the significant interaction to differences in simple main effects within risk category.

(10) To ensure our conclusions would not differ if we used the raw data, we also analyzed the data for H1 without transforming the dependent variable. Using the otherwise identical model, we obtain results that are not qualitatively different for any variable, including those relevant to H1 (p = .08, one-tailed, for the SAS 82 x RISK interaction). Thus, F-statistics and p-values in this analysis are qualitatively identical to those reported in Table 2. However, we choose to focus on the Z-transformed data in Table 2 (Panel B) for several reasons. First, we prefer to use consistent statistical models and data throughout all our hypothesis tests. Second, while subjects were asked to respond relative to a typical audit as a form of control, relying on subjects to provide their own control may not be fully effective; thus, a statistical control may have incremental value in reducing noise. Third, the transformation controls for any differences induced by Z97's manipulation of risk assessment methods (i.e., holistic versus decomposition), which are not relevant to this study (see the first paragraph in the "Results" section).

(11) Due to a confidentiality agreement, we cannot report detailed results by firm. However, without disclosing results separately for each firm, we can report that the SAS 82 x RISK interaction is in the direction predicted by H1 for both firms.

(12) The raw data yield results that are in the predicted direction and approach statistical significance. However, because there is significant noise in the data set, an ANCOVA using the raw-hours data shows the one-tailed SAS 82 x RISK interaction at p = .125. We transformed the data to control for noise variation and for changes other than SAS No. 82 (e.g., new budgeting practices).

(13) Panel A reports a significant main effect for pre- and post-SAS No. 82 (pre: 54.2; post: 41.2; p < .001). This overall decrease in budgeted hours is consistent with other research (Messier et al. 2001) and may be due to efficiencies gained through technology and audit methodology (e.g., Lemon et al. 2000). This difference is controlled for in the analysis through the transformation of the dependent variable as explained previously.

(14) Other variables in the ANCOVA that are significant (i.e., two-tailed p < 0.10) include Risk (p = 0.09; auditors increased budgeted hours as fraud risk increased) and Firm (p = 0.03; one firm budgeted more hours overall, independent of fraud risk). Due to a confidentiality agreement, we cannot report results by firm. However, without disclosing results separately for each firm, we can report that the SAS 82 x RISK interaction is in the direction predicted by H2 for both firms.

(15) See Z97 for further discussion.

(16) The two participating firms provided a total of nine top fraud-audit experts with an average of over 25 years of experience. The experts were asked to evaluate each procedure according to its effectiveness at detecting fraud and to list the three most effective procedures in rank order. We used the experts' fraud-effectiveness ratings to re-compute participants' APS scores. Specifically, we summed participating auditors' weights assigned to the top five (and in a separate analysis using the top three) audit procedures identified by the experts as the most important procedures for detecting fraud on the hypothetical client. By computing each auditor's APS using the procedures deemed most effective by the experts, we derive a measure of APS that is not dependent on participating auditors' perceptions but is instead based on experts' perceptions.
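The expert-based APS recomputation described in note 16 amounts to summing the weights a participant assigned to the procedures the experts ranked most effective. A minimal sketch of that calculation follows; the procedure names, weights, and expert top-five list are hypothetical, not the study's actual instrument:

```python
# Recompute a participant's APS using only the procedures that the
# expert panel ranked most effective at detecting fraud (note 16).
# All names and numbers below are illustrative assumptions.

def expert_aps(participant_weights, expert_top_procedures):
    """Sum the weights this participant assigned to the expert-ranked
    procedures; unweighted procedures contribute zero."""
    return sum(participant_weights.get(p, 0) for p in expert_top_procedures)

# Hypothetical expert top-five list for the accounts receivable audit
expert_top5 = ["confirm_receivables", "cutoff_tests",
               "subsequent_cash_receipts", "analytical_review",
               "inquiry_of_management"]

# Hypothetical weights one participant assigned across procedures
weights = {"confirm_receivables": 30, "cutoff_tests": 20,
           "analytical_review": 10, "vouch_shipping_docs": 15}

aps = expert_aps(weights, expert_top5)  # 30 + 20 + 10 = 60
```

The same routine, applied with the experts' top-three list instead, gives the alternative measure described in the sensitivity tests of note 17.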

(17) The sensitivity tests we performed included (using both participant and expert data): (1) five metrics based on each of the top five most important procedures selected; (2) APS summed for the top three procedures selected; (3) a metric that eliminated the effect of multiplying the FERs by W; (4) a metric that used the top three (and again the top five) procedures directly, without standardizing for each participant's average FER or multiplying by weight; and (5) a metric that included a reduction term as a function of the perceived effectiveness of the procedure at detecting unintentional misstatements. We were unable to find support for H3 using any measure, technique, or combination. When we analyze the nontransformed data, we obtain similar results.

(18) The only variable in this analysis that approaches significance is the FIRM variable (p = .08). Firm B's APS was marginally significantly greater than Firm A's because four of Firm B's FERs are significantly greater than Firm A's (one-tailed p < .05) and none of the remaining FERs are significantly different from one another. This main effect increase in APS is independent of RISK or time period (i.e., SAS 82). The other main input to APS (i.e., Weight) appears to be similar between firms.

REFERENCES

Albrecht, C., S. Albrecht, and G. Dunn. 2001. Can auditors detect fraud? A review of the research evidence. Journal of Forensic Accounting 2: 1-12.

American Institute of Certified Public Accountants (AICPA). 1988. The Auditor's Responsibility to Detect and Report Errors and Irregularities. Statement on Auditing Standards No. 53. New York, NY: AICPA.

--. 1994. The auditor's responsibility to detect fraud. The CPA Letter (January).

--. 1997. Consideration of Fraud in a Financial Statement Audit. Statement on Auditing Standards No. 82. New York, NY: AICPA.

--. 2002. Consideration of Fraud in a Financial Statement Audit. Statement on Auditing Standards No. 99. New York, NY: AICPA.

Asare, S., and A. Wright. 2002. The impact of fraud risk assessments and a standard audit program on fraud detection plans. Working paper, University of Florida.

Bedard, J., T. Mock, and A. Wright. 1999. Evidential planning in auditing: A review of the empirical research. Journal of Accounting Literature 18: 96-142.

Bell, T., F. Marrs, I. Solomon, and H. Thomas. 1997. Auditing Organizations Through a Strategic-Systems Lens: The KPMG Business Measurement Process. Montvale, NJ: KPMG Peat Marwick LLP.

Bonner, S., and P. Walker. 1994. The effects of instruction and experience on the acquisition of auditing knowledge. The Accounting Review 69: 157-178.

Chewning, G., K. Pany, and S. Wheeler. 1989. Auditor reporting decisions involving accounting principle changes: Some evidence on materiality thresholds. Journal of Accounting Research (Spring): 78-96.

Cook, T. D., and D. T. Campbell. 1979. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin.

Dopuch, N., R. W. Holthausen, and R. W. Leftwich. 1986. Abnormal stock returns associated with media disclosures of "subject to" qualified audit opinions. Journal of Accounting and Economics 8: 93-117.

Erickson, M., B. W. Mayhew, and W. L. Felix, Jr. 2000. Why do audits fail? Evidence from Lincoln Savings and Loan. Journal of Accounting Research (Spring): 165-194.

Fellingham, J. C., and D. P. Newman. 1985. Strategic considerations in auditing. The Accounting Review 60: 634-650.

Guy, D., and J. Sullivan. 1988. The expectation gap auditing standards. Journal of Accountancy (April): 36-46.

Herz, P. J., and J. J. Schultz, Jr. 1999. The role of procedural knowledge in accounting judgment. Behavioral Research in Accounting 11: 1-26.

King, R. R., and R. Schwartz. 1997. The Private Securities Litigation Reform Act of 1995: A discussion of three provisions. Accounting Horizons (March): 92-106.

Knapp, C. A., and M. C. Knapp. 2001. The effects of experience and explicit fraud risk assessment in detecting fraud with analytical procedures. Accounting, Organizations and Society 26: 25-37.

Lemon, W. M., K. W. Tatum, and W. S. Turley. 2000. Developments in the audit methodologies of large accounting firms. Caxton Hill, Hertford, U.K.: Stephen Austin & Sons Ltd.

Libby, R. 1995. The role of knowledge and memory in audit judgment. In Judgment and Decision-Making Research in Accounting and Auditing, edited by R. H. Ashton, and A. H. Ashton. New York, NY: Cambridge University Press.

Mancino, J. 1997. The auditor and fraud. Journal of Accountancy (April): 32-36.

Mautz, R., and H. Sharaf. 1961. The Philosophy of Auditing. Monograph No. 6. Sarasota, FL: American Accounting Association.

Messier, W. F., Jr., S. J. Kachelmeier, and K. Jensen. 2001. An experimental assessment of recent professional developments in nonstatistical audit sampling guidance. Auditing: A Journal of Practice & Theory (March): 81-96.

Nieschwietz, R. N., J. J. Schultz, Jr., and M. F. Zimbelman. 2000. Empirical research on external auditors' detection of financial statement fraud. Journal of Accounting Literature 19: 190-246.

Shelton, S. W., O. R. Whittington, and D. Landsittel. 2001. Auditing firms' fraud risk assessment practices. Accounting Horizons (March): 19-33.

Shibano, T. 1990. Assessing audit risk for errors and irregularities. Journal of Accounting Research (Supplement): 110-140.

Wilks, T. J., and M. F. Zimbelman. 2002. Academic research and auditors' detection of fraudulent financial reporting: Using audit policy to encourage strategic-reasoning in practice. Working paper, Brigham Young University.

Zimbelman, M. F. 1996. Assessing the risk of fraud in audit planning. Doctoral dissertation, The University of Arizona.

--. 1997. The effects of SAS No. 82 on auditors' attention to fraud risk factors and audit planning decisions. Journal of Accounting Research (Supplement): 75-97.

--, and W. S. Waller. 1999. An experimental investigation of auditor-auditee interaction under ambiguity. Journal of Accounting Research (Supplement): 135-155.

Submitted: January 2001

Accepted: December 2002

Steven M. Glover and Douglas F. Prawitt are Associate Professors, both at Brigham Young University; Joseph J. Schultz, Jr. is a Professor at Arizona State University; and Mark F. Zimbelman is an Assistant Professor at Brigham Young University.
COPYRIGHT 2003 American Accounting Association

Article Details
Author: Glover, Steven M.; Prawitt, Douglas F.; Schultz, Joseph J., Jr.; Zimbelman, Mark F.
Publication: Auditing: A Journal of Practice & Theory
Date: Sep 1, 2003