# The error detection of structural analytical procedures: a simulation study.

SUMMARY

Given the requirement of SAS No. 56 and the increasing pressures to minimize audit costs, there is a need to develop more sophisticated analytical procedures that can increase the effectiveness and efficiency of an audit. Prior research suggests that structural models, including the futuristic concept of an "information dual," may be well suited to this purpose. This study extends the work of Wheeler and Pany (1990) and Wild (1987) and investigates the prediction and error detection performance of structural analytical procedures using the monthly financial statements of a large number of simulated companies. These companies represent various sales behavior patterns and degrees of economic stability.

We develop a generic structural model that explicitly incorporates interdependencies among the accounting numbers and key exogenous variables that drive the economic environment of the company. When compared to the ARIMA, X-11, and Martingale models, our structural model performs better from an overall perspective. However, it does not perform better than the stepwise model, which indirectly incorporates information on the structure of an organization's economic activities. The results indicate that each model's performance with respect to alpha and beta risks tends to be a function of the testing approach used; we use both the positive and negative testing approaches. The sales behavior pattern has a significant impact on the performance of each model. All models tend to perform better for companies that have a greater degree of stability in their business and economic activities. In general, our results suggest that auditors can improve the prediction and error detection capability of analytical procedures by using the information inherent in the natural structure of accounting systems, which reflects business and economic activities.

Key Words: Analytical procedures, Structural model, Error detection, Simulation, Risk.

Data Availability: Contact the authors.

INTRODUCTION

Auditors use analytical procedures to test the "reasonableness" of recorded accounting numbers based on the company's history and concurrent economic conditions. According to Statement on Auditing Standards (SAS) No. 56 (AICPA 1988), analytical procedures are required in the planning and overall review stages of the audit and are recommended during substantive testing. Many prior studies have emphasized the significance of analytical procedures in detecting financial statement errors and improving audit effectiveness (Hylas and Ashton 1982; Biggs and Wild 1984; Tabor and Willis 1985). Effective analytical procedures can help auditors identify errors and risk areas that require more thorough tests. As Hylas and Ashton (1982, 764) state:

A large proportion of financial statement errors are initially signaled by less rigorous audit procedures such as analytical review.... It appears that increased utilization of less rigorous audit procedures might improve the auditor's effectiveness and/or efficiency in detecting errors, and also allow a "fine tuning" of substantive tests of detail.

Given the requirement of SAS No. 56 and the increasing pressures to minimize audit costs, the development of more sophisticated and effective analytical procedures is desirable.

The objective of this paper is to extend audit research on the effectiveness of analytical procedures by examining the ability of structural expectation models to predict account balances and to detect seeded errors. We test the predictive ability and the error detection performance of a structural model for a large number of simulated companies that represent various sales behavior patterns and degrees of stability in their economic activities. Specifically, we posit that structural models will generate predicted values significantly closer to simulated values and, as a result, have smaller mean absolute prediction errors than several previously examined analytical procedures: stepwise, ARIMA, X-11, and Martingale expectation models. Structural models directly incorporate structural relationships among an organization's accounting entities and between those entities and its economic environment; these relationships capture the organization's economic activities. Stepwise models consider similar exogenous and endogenous variables, but incorporate them indirectly through regression analysis. ARIMA is an autoregressive integrated moving average model fitted to the balances of a particular account. The X-11 model is a time-series model that explicitly incorporates trends and seasonality in account balances. The Martingale model simply uses the balance from the previous period.

When the simulated companies' financial statements are seeded with material errors, we test each model's ability to detect these errors and assess the alpha and beta decision risks for each model. We further show how these alpha and beta risks are related to the confidence intervals of the expectation models and to the testing approaches. In addition, we examine the effect of the degree of structure in business and economic activities, and of the sales behavior pattern, on error detection. While there are several ways to define decision risks, in this paper we define them from the auditor's traditional perspective (Duke et al. 1985, 12, table 2.1; Smieliauskas 1990). Specifically, alpha risk is the likelihood that an auditor will conclude that an account is materially misstated (in error) when it is not, and beta risk is the likelihood that an auditor will conclude an account is not materially misstated when it is indeed materially misstated (in error). From an auditor's perspective, alpha risk is related to audit efficiency and can have cost consequences, whereas beta risk is related to audit effectiveness and the assurance that an auditor will detect a material error when one is present. We assess these risks for each model using both the positive testing approach, where we assume the account is fairly presented (not in error) and test to see whether it is materially misstated (in error), and the negative approach, where we assume the account is materially misstated and test to show that it is not materially misstated (not in error) (Beck and Solomon 1985a, 1985b). Most research to date on the performance of analytical procedures has assumed the positive approach and has called alpha and beta decision risks Type I and Type II errors, respectively. While "errors" refers to unintentional misstatements and "irregularities" to intentional misstatements (SAS No. 53), for convenience we refer to all misstatements henceforth as errors.

This research extends the studies of Wheeler and Pany (1990), Wilson and Colbert (1989), and Icerman et al. (1993), which tested several error detection models without structural relationships, and extends Wild's (1987) research, which used a structural model for one company. Following the suggestions of prior studies to increase the effectiveness of analytical procedures, we incorporate exogenous variables and use disaggregated monthly data in testing all our models.

REVIEW OF PRIOR RESEARCH

Analytical procedures traditionally were performed with nonstatistical models such as account changes and simple trend and ratio analysis. A common characteristic of nonstatistical analytical procedures is that they utilize limited and readily available financial statement balances. Prior research, however, provides no evidence that nonstatistical analytical procedures are effective in reducing the extent of substantive tests (Kinney 1979, 1987; Holder 1983; Daroca and Holder 1985; Loebbecke and Steinbart 1987). The relatively poor predictive performance of nonstatistical models portends the need for procedures that use more information and employ more sophisticated processing of that information. In attempting to improve the effectiveness of analytical procedures, two research directions have evolved, addressing two questions: which expectation model should be used, and what information should be included in analytical procedures.

Researchers have examined the performance of several statistical models such as univariate regression, multivariate regression, univariate time-series, multivariate time-series, and X-11 models. Generally, research has shown multivariate time-series models with the greatest information requirements and computational sophistication to be superior in predictive power to any univariate models (Kinney 1978; Ang et al. 1983). In contrast, Lorek et al. (1992) found limitations to these results. Knechel (1988), Wilson and Colbert (1989) and Wheeler and Pany (1990) have gone beyond the examination of prediction accuracy and investigated the error detection performance of statistical analytical procedures. They found that models requiring more relevant information and greater sophistication produced significantly more accurate expectation amounts and performed better in error detection. Again in contrast, Icerman et al. (1993), in an examination of predictive ability and error detection, found that the use of sophisticated time-series models on quarterly data could be questioned from a cost-benefit perspective. In particular, they found that the Martingale model exhibited relatively strong performance compared to the more sophisticated models. However, the expectation models these researchers examined did not take advantage of the month-to-month structural relationships among accounting data.

Structural models seem particularly well suited for auditors' analytical procedures. Structural relationships in accounting data are important because they reflect the fundamental economic activities of the organization. Structural models directly incorporate both structural relationships among accounting numbers (endogenous variables) and relevant exogenous economic variables into econometric time-series or prediction models. Nonstructural models, in contrast, do not explicitly incorporate these relationships. If accounting numbers, along with key exogenous variables, reflect the underlying interdependencies of the economic process, the structural model should provide more accurate predictions of accounting numbers and perform better in terms of error detection than nonstructural models.

As an example of a structural relationship, Lorek et al. (1992), in a cross-correlation analysis of quarterly account balances, found some evidence of a leading relationship between inventory and sales for a large sample of firms. This relationship was strong for a subset of their sample and they suggested additional exploration of similar relationships among other accounts. Using signals from various ratios of monthly data from one organization, Kinney (1987) did not find encouraging patterns among the ratios. Kaplan (1978), in an early investigation of the use of structural models, concluded that they predicted accurately for income statement accounts but not for certain balance sheet accounts. Wild (1987) investigated the prediction performance of improved structural models by using monthly data of one company and concluded that the structural model's prediction performance was superior to univariate regression models but not significantly better than multivariate stepwise models. Because he used a sample of only one company, this conclusion may be considered tentative. Dzeng (1994) indirectly incorporated structure using a vector auto-regressive technique with multiple data series for a mid-size university and found it to be superior to ARIMA and regression models.

In general, very limited research has been done to evaluate the structural model's prediction and error detection performance and the results are inconclusive. In a related discussion of future audit practice, Elliott (1994, 1995) suggests that organizations will develop virtual mathematical representations of their economic activities, which he calls "information duals." Structural relationships, like those tested in this paper, will comprise a major component of these information duals.

Other research has been devoted to determining what information should be included to improve analytical procedures. It has been suggested that the use of external industrial and economic data can improve the predictive ability of analytical procedures (Neter 1980; Lev 1980; Loebbecke and Steinbart 1987; Wild 1987; Allen 1993). These exogenous variables are important because (1) they reflect industrial and economic events which impact organizations' activities, (2) they may improve prediction because they covary with target accounts, and (3) they may improve error detection because they are exogenous and thus are not affected by an accounting error.

Prior studies also compared the use of monthly and quarterly data and demonstrated the merit of using disaggregated (monthly) data in analytical procedures. For example, Cogger (1981), Knechel (1988) and Dzeng (1994) found that the use of monthly data greatly increased the effectiveness of analytical procedures. Kinney (1987, 72) suggested that analytical procedures be founded on base data that are disaggregated. Allen (1993) also suggested that monthly data are preferable to quarterly data in analytical procedures, and he used monthly data for nine electric utilities in his study. Disaggregated data may be superior to aggregated data in analytical procedures for three reasons. First, disaggregated data yield a larger sample size, thus increasing statistical power. Second, disaggregated data are generally influenced less by structural changes in the organization because analysis involving disaggregated data often spans a shorter time period. Third, accounting numbers measure the characteristics of economic activities on a monthly basis more efficiently than on a quarterly basis (e.g., collections of accounts receivable in month t may be clearly related to sales in month t-1, but collections of accounts receivable in quarter t may be only remotely associated with sales in quarter t-1).

While several studies such as Kinney (1987), Knechel (1988), Wheeler and Pany (1990) and Dzeng (1994) investigate Type I and Type II errors, we use both the positive and negative testing approaches to detect the existence of these errors. These approaches were first identified in the audit framework by Roberts (1974). The positive approach represents the traditional application of analytical procedures in practice, where auditors investigate large deviations from past performance. A great deal of prior literature, including that mentioned above, has adopted this view by assuming the account is not in error; if the account balance is beyond a specific level of variation from the expected amount, then further investigation is triggered. This positive approach uses |E| = 0 as a control point for statistical analysis, where E is an error amount. The negative approach, on the other hand, controls the risk of accepting the account balance when |E| = M. It focuses more on the assurance that an account is not materially misstated, and it can be argued that the negative approach is more consistent with audit objectives. The negative approach is implicitly used in practice by those who use dollar-unit sampling (DUS) and attribute sampling with compliance testing (Smieliauskas 1990, 149). (1)

There are many ways to express the existence of an error and the decision risks associated with an analytical procedure or a statistical testing approach. A summary is provided in table 1. For the positive approach, the risk of rejecting the population as being in material error when in fact |E| = 0 is controlled at α, as shown in table 1. E is the amount of error, and |E| is calculated as the absolute difference between the predicted account balance and the actual account balance. Using the positive approach, alpha risk (Type I error) is concluding that the account is in error (|E| > 0) when it is not in error (|E| = 0), as shown in table 1, panel A. Beta risk (Type II error) is concluding that the account is not in error (specifically, concluding |E| = 0) when it is in error (|E| > 0). Beck and Solomon (1985b) call this the "audit value projection approach," which implicitly tests the null hypothesis that the account book value is fairly stated. For the negative approach, the risk of accepting the population as not being materially in error when in fact |E| ≥ M is controlled at α, where M is the amount of materiality. Using the negative approach, statisticians define alpha and beta differently because the null hypothesis is defined differently, as shown in table 1, panel B. Using the negative approach, alpha risk (Type I error) is concluding that the account is not in material error (|E| < M) when it is in material error (|E| ≥ M). Beta risk (Type II error) is concluding that the account is in material error (|E| ≥ M) when it is not in material error (|E| < M). Beck and Solomon (1985b) call this the "error value projection approach," which implicitly tests the null hypothesis that the account book value is misstated by an amount greater than or equal to materiality. These two definitions have caused a great deal of confusion because alpha and beta risks and Type I and Type II errors are defined differently. As a result, an effort to reconcile this difference was made in SAS No. 39 (AICPA 1981) and in AGAS (AICPA 1982). This reconciliation is shown in table 1, panel C. Duke et al. (1985) simply call the incorrect rejection of the book amount the sampling risk at 0, SR(0), and the incorrect acceptance of the book value the sampling risk at M, SR(M). In this manuscript, we follow Smieliauskas (1990) and define the incorrect rejection of the book amount as the alpha risk and the incorrect acceptance of the book amount as the beta risk, as has been traditionally done in auditing (see table 1, panel D). This is similar to the positive approach definition of risks.

TABLE 1
Testing Approach Outcome Matrices (a)

Panel A: Positive Approach

H0: |E| = 0 (control point); Ha: |E| > 0.
Decision rule: conclude H0 if |E| ≤ Z(1-α)·σ and conclude Ha if |E| > Z(1-α)·σ, where σ is the standard error of prediction for the past 24 months.

Conclusion     Actual State: |E| = 0        Actual State: |E| > 0
|E| = 0        Correct Decision             Beta Risk (Type II Error)
|E| > 0        Alpha Risk (Type I Error)    Correct Decision

Panel B: Negative Approach

H0: |E| ≥ M (control point); Ha: |E| < M.
Decision rule: conclude H0 if |E| ≥ M - Z(1-α)·σ and conclude Ha if |E| < M - Z(1-α)·σ, where σ is the standard error of prediction for the past 24 months.

Conclusion     Actual State: |E| < M        Actual State: |E| ≥ M
|E| < M        Correct Decision             Alpha Risk (Type I Error)
|E| ≥ M        Beta Risk (Type II Error)    Correct Decision

Panel C: SAS No. 39 / AGAS Outcome Matrix

Conclusion     Actual State: |E| ≤ M        Actual State: |E| > M
|E| ≤ M        Correct Decision             Incorrect Acceptance of the
                                            Account Book Value, or Overreliance
|E| > M        Incorrect Rejection of the   Correct Decision
               Account Book Value, or
               Underreliance

Panel D: Traditional Audit Perspective (b)

Conclusion     Actual State: |E| < M        Actual State: |E| ≥ M
|E| < M        Correct Decision             Beta Risk
|E| ≥ M        Alpha Risk                   Correct Decision

E is the amount of error and M is the amount of materiality.
(a) Beck and Solomon (1985b).
(b) Duke et al. (1985) and Smieliauskas (1990).
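A minimal sketch of the two decision rules in table 1. The account figures below are hypothetical, Z(1-α) is computed for a one-tailed α of .05, and σ stands in for the standard error of prediction; this illustrates the mechanics, not the paper's actual test values.

```python
from statistics import NormalDist

def positive_test(predicted, recorded, sigma, alpha=0.05):
    """Positive approach: H0 is |E| = 0; investigate when the deviation
    exceeds Z(1-alpha) * sigma."""
    z = NormalDist().inv_cdf(1 - alpha)
    E = abs(recorded - predicted)
    return "investigate" if E > z * sigma else "accept"

def negative_test(predicted, recorded, sigma, M, alpha=0.05):
    """Negative approach: H0 is |E| >= M; accept the balance only when the
    deviation falls below M - Z(1-alpha) * sigma."""
    z = NormalDist().inv_cdf(1 - alpha)
    E = abs(recorded - predicted)
    return "accept" if E < M - z * sigma else "investigate"

# Invented numbers: prediction 1000, recorded 1080, sigma 50, materiality 200.
print(positive_test(1000, 1080, 50))       # |E|=80 <= 1.645*50 ≈ 82.2 -> accept
print(negative_test(1000, 1080, 50, 200))  # |E|=80 < 200-82.2 ≈ 117.8 -> accept
```

Note how the same deviation can be judged differently under the two rules: the positive rule compares |E| to a tolerance around zero, while the negative rule demands that |E| fall well below materiality before the balance is accepted.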

In summary, the literature suggests there is a need to develop more sophisticated analytical procedures that can provide reliable predictions of accounting numbers and detect material errors in an audit. Prior research suggests that structural models including the futuristic concept of an information dual may be good for this purpose. However, very limited research has been done to evaluate structural model performance, and the results have been inconclusive. The following section describes the research method employed. Next, several propositions are tested and reported. The paper concludes with a discussion, limitations and final comments. The appendixes provide data acquisition details and sensitivity analysis results.

RESEARCH METHODOLOGY

There are four phases to our study. Our objective for the first phase is to obtain monthly data for five representative companies. For phase two, we establish integrated simulation models based on the relationships among the accounts of the five companies and key exogenous economic variables. Using these models, we then generate 60 months of complete financial statements for 150 companies. These companies have different sales behavior patterns and degrees of structure in their business and economic activities. We use simulated monthly data because: (1) disaggregated data enable analytical procedures to perform better, as noted earlier; (2) an auditor is able to assess departures from expected economic activities by using the natural structure inherent in these data; and (3) disaggregated monthly data are not available for a large number of organizations (Kinney 1987, 60). Moreover, simulation of these data enables us to control our analysis. (2)

In phase three, we use these 150 simulated companies to test five expectation models' predictive ability for a 12-month prediction period based on a 48-month estimation period. In phase four, we seed errors in the 12-month prediction period and use statistical investigation rules to determine the auditor's alpha and beta risks. We then replicate the above phases for another period of time, resulting in the testing of a grand total of 300 simulated companies. Detailed explanations of each of these four phases are provided in the following sections on research methodology and in appendix A. We further assess the sensitivity of these results in appendix B by comparing them to other results not fully presented here for a 36-month estimation period.

Phase 1: Data Acquisition and Generation of Approximate Monthly Data

As detailed in appendix A, we use endogenous variables from five actual single-industry companies that represent a variety of sales behavior patterns. (3) Through curve-fitting techniques, we transform published quarterly data for these companies into approximate monthly data. Our objective here is not to test these five actual companies. It is simply to use a reasonable procedure to obtain representative financial statement data for a variety of companies so we can develop relationships to simulate a large number of companies.

Phase 2: Simulation Process

The objective of phase 2 is to simulate complete sets of monthly financial statements for a large number of different companies from the five sets of monthly data derived in the first phase, as described in appendix A. The following procedures are used to simulate financial statements. First, we model the accounting and economic relationships among variables for each of the five sets of monthly data. Second, regressions based on these relationships are run on the five companies to obtain the coefficients for each independent variable. Some of these regressions are constrained to make economic sense and comply with accounting practice (e.g., in some situations the regression line goes through the origin, i.e., there is no intercept term). Third, using simulation software, we build integrated simulation models based on the relationships and parameters generated from the five companies above. (4) The simulation equations based on the coefficients derived above are further constrained as appropriate to reflect economic and accounting activities (e.g., the collection of accounts receivable in month t should be less than or equal to the sum of the balance of accounts receivable in month t-1 and the net sales made in month t). Finally, for each of the five companies, we simulate monthly financial statements for 30 different companies for a 60-month period, yielding a total of 150 sample companies. This is then replicated to double the sample to 300 companies.
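As an illustration, the accounts receivable piece of this process (the collections relationship with its accounting constraint) can be sketched as follows. The coefficients, noise level, and toy sales series are invented, not taken from the five companies; the point is how a stochastic equation plus a feasibility constraint drives the simulated balance.

```python
import random

random.seed(0)

# Invented collection coefficients that sum to one, and an invented noise
# level standing in for one equation's regression sigma.
b1, b2, b3, sigma = 0.5, 0.3, 0.2, 4.0

sales = [100 + 10 * (t % 12 == 11) for t in range(60)]  # toy monthly sales
ar = [80.0]                                             # opening AR balance

for t in range(2, 60):
    # Stochastic collections: a weighted sum of current and past sales.
    col = b1 * sales[t] + b2 * sales[t - 1] + b3 * sales[t - 2] \
          + random.gauss(0, sigma)
    # Constrain to make accounting sense: collections cannot exceed the
    # beginning AR balance plus current-month sales, nor be negative.
    col = max(0.0, min(col, ar[-1] + sales[t]))
    # Accounts receivable identity: AR_t = AR_{t-1} + sales_t - collections_t.
    ar.append(ar[-1] + sales[t] - col)
```

The `min`/`max` clamp plays the role of the constraints described above: it guarantees that the simulated receivable balance never goes negative.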

The general form of each model used in the simulation of these companies is listed in table 2 panel A. These models are founded on logical accounting and economic relationships such as those derived in the prior research (e.g., Elliott and Uphoff 1972; Kaplan 1978; and Wild 1987). For example, collections are a function of past sales. The general models shown below are varied slightly or are simplified based on regression results that reflect the characteristics of each of the five organizations and its economic and accounting relationships. These models are only used to generate representative monthly financial statements for the 150 sample companies for each time period. They are not the same as the integrated structural analytical procedures based on entities and their relationships used for subsequent estimation and testing.

TABLE 2
Simulation Model's Equations and Variables for Five Representative Companies

Panel A: Simulation Model's Equations (a)

1. FS_t = b_0 + b_1(PFS_t) + b_2(TPI_t) + b_3(IND_t) + b_4(SEL_t) + b_5(ITS_t) + e_t1
2. COG_t = b_1(FS_t) + b_2(RM_t) + b_3(EMP_t) + e_t2
3. ADM_t = b_0 + b_1(FS_t) + e_t3
4. DEP_t = b_1((PPE_t + PPE_{t-1})/2) + e_t4
5. INT_t = b_1(CL_t) + b_2(LTD_t) + e_t5
6. AR_t = AR_{t-1} + FS_t - COL_t, where COL_t = b_1(FS_t) + b_2(FS_{t-1}) + b_3(FS_{t-2}) + e_t6 (with b_1 + b_2 + b_3 = 1)
7. INV_t = INV_{t-1} + PRD_t - COG_t, where PRD_t = b_0 + b_1(FS_{t+1}) + e_t7
8. AP_t = AP_{t-1} + AP-INC_t - AP-DEC_t, where AP-INC_t = PRD_t and AP-DEC_t = b_0 + b_1(AP_{t-1}) + b_2(PRD_t) - e_t8 (b)
9. CASH_t = CASH_{t-1} + CASH-IN_t - CASH-OUT_t, where CASH-IN_t = COL_t + increase in liabilities other than AP + decrease in assets other than AR and INV, and CASH-OUT_t = AP-DEC_t + SEL_t + ADM_t + INT_t + TAX_t + increase in assets other than AR and INV + decrease in liabilities other than AP
10. GM_t = FS_t - COG_t
11. OP_t = GM_t - ADM_t - SEL_t
12. IBT_t = OP_t - DEP_t - INT_t - NOP_t
13. NI_t = IBT_t - TAX_t - ETR_t
14. TCA_t = CASH_t + AR_t + INV_t + OCA_t
15. TA_t = TCA_t + PPE_t + OA_t
16. TCL_t = AP_t + CL_t + ITP_t + OCL_t
17. TL_t = TCL_t + LTD_t + DT_t + OL_t

Panel B: Simulation and Testing Model's Information Set

Endogenous corporate variables:
ADM: Administrative Expenditures
AP: Accounts Payable
AR: Accounts Receivable
AS: Actual Sales
CASH: Cash
CASH-IN: Cash Inflow
CASH-OUT: Cash Outflow
CL: Debt in Current Liabilities
COG: Cost of Goods Sold
DEP: Depreciation Expense
DIV: Cash Dividends
DT: Deferred Taxes
ETR: Extraordinary Items
FS: Forecast of Sales
GM: Gross Margin
IBT: Income Before Tax
INT: Interest Expense
INV: Product Inventory
ITP: Income Tax Payable
LTD: Total Long-Term Debt
NI: Net Income
NOP: Nonoperating Income
OA: Other Assets
OCA: Other Current Assets
OCL: Other Current Liabilities
OL: Other Liabilities
OP: Operating Profit
PPE: Property, Plant and Equipment
SEL: Selling Expenditures
TA: Total Assets
TAX: Income Taxes
TCA: Total Current Assets
TCL: Total Current Liabilities
TL: Total Liabilities

Calculated endogenous variables (equations used in simulation only are in the appendix):
AP-DEC: Accounts Payable Decrement
AP-INC: Accounts Payable Increment
AP-NET: Accounts Payable Net Change
COL: Collections of Accounts Receivable
PFS: Preliminary Forecast of Sales
PRD: Production Costs

Exogenous variables:
RM: Producer Price Index of Raw Materials, 1982=100; source: U.S. Department of Commerce (Bureau of Economic Analysis, Survey of Current Business).
EMP: Average Earnings per Hour of a General Employee in Product Area, in real dollars; source: U.S. Department of Commerce (Bureau of Economic Analysis, Survey of Current Business).
PRM: Prime Rate of Interest (bankers' 90-day acceptance), in real percentage; source: U.S. Department of Commerce (Bureau of Economic Analysis, Survey of Current Business).
IND: Industrial Production Index, 1977=100; source: U.S. Department of Commerce (Bureau of Economic Analysis, Survey of Current Business).
TPI: Total U.S. Personal Income, in real dollars; source: U.S. Department of Commerce (Bureau of Economic Analysis, Survey of Current Business).
ITS: Industry Total Sales, in real dollars; source: Compustat tape.

(a) Equations 1-8 are stochastic, whereas 9-17 are deterministic. The subscript t refers to month t.
(b) The coefficients of the AP-DEC model are obtained indirectly from the regression model of AP. Since AP-DEC_t = AP_{t-1} - AP_t + PRD_t, where AP_t = d_0 + d_1(AP_{t-1}) + d_2(PRD_t) + e_t8, it follows that AP-DEC_t = AP_{t-1} - [d_0 + d_1(AP_{t-1}) + d_2(PRD_t) + e_t8] + PRD_t = -d_0 + (1 - d_1)(AP_{t-1}) + (1 - d_2)(PRD_t) - e_t8.
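Footnote (b)'s substitution can be spot-checked numerically; the values below are arbitrary placeholders, chosen only to confirm the algebra.

```python
# Numeric spot-check of footnote (b): the AP-DEC closed form follows from
# substituting the AP regression into the accounts payable identity.
# All values below are arbitrary.
AP_prev, PRD = 120.0, 45.0
d0, d1, d2, e = 3.0, 0.8, 0.6, 0.5

AP_t = d0 + d1 * AP_prev + d2 * PRD + e                  # regression model of AP
identity = AP_prev - AP_t + PRD                          # AP-DEC_t by definition
closed = -d0 + (1 - d1) * AP_prev + (1 - d2) * PRD - e   # footnote's closed form

assert abs(identity - closed) < 1e-9
```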

We use the residuals e and their respective standard deviations σ from each of the regression equations for each of the five representative companies to model the degree of structure for the 150 simulated companies for each time period. Each equation for each company has its own unique σ. Multiples of σ are used to introduce various levels of structural uncertainty (unexplained variation) in the regression relationships used in the simulation equations. By manipulating σ, the degree of structure in an organization's accounting system is manipulated, which in turn models the degree of stability and predictability in the business and economic activities of the organization. (5) For each of the five actual companies, there are ten simulations with e ~ NID(0, 0.5σ) to reflect a higher degree of structure, ten simulations with e ~ NID(0, 1.0σ) to reflect a normal degree of structure, and ten simulations with e ~ NID(0, 1.5σ) to reflect a lower degree of structure. (6) The residuals are independent and normally distributed. (7)
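A minimal sketch of this manipulation, with an invented σ for a single regression equation: scaling the residual standard deviation by 0.5, 1.0, and 1.5 produces the three degrees of structure.

```python
import random
from statistics import stdev

random.seed(42)

# Hypothetical standard error of one regression equation.
sigma = 8.0

# Residuals e ~ NID(0, k*sigma) for the three degrees of structure.
levels = {"high structure": 0.5, "normal structure": 1.0, "low structure": 1.5}
draws = {name: [random.gauss(0, k * sigma) for _ in range(10_000)]
         for name, k in levels.items()}

# The sample standard deviation of each residual series tracks its target.
for name, k in levels.items():
    assert abs(stdev(draws[name]) - k * sigma) < 0.5
```

More structure means tighter residuals, so the simulated account balances deviate less from their regression relationships; less structure widens the unexplained variation.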

In summary, 150 (10 simulations x 3 degrees of structure x 5 companies with different sales behavior patterns) different companies are simulated. For each company a complete set of financial statements is generated. As in previous studies, we assume these statements are correct before errors are seeded. This is then replicated for another time period resulting in 300 companies.

Phase 3: Expectation Models

In phase 3, we use five expectation models to predict account balances and compare these balances with recorded (simulated) amounts. These models represent analytical procedures used in current practice (i.e., the Martingale) and those suggested by prior research for improving the predictive ability. Expectation models are based on the first 48 months of simulated data and predictions are made for the next 12 months.

Nonstructural Models

* Multivariate Stepwise Regression Model: E ([Y.sub.t]) = [b.sub.0] + [b.sub.1] [X.sub.1t] + [b.sub.2] [X.sub.2t] + ... + [e.sub.t]

where:

E([Y.sub.t]) is the expected value of account Y in month t, and Xs are variables that significantly contribute to the increase in the explanatory power of the model. (8)
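The selection mechanics can be sketched as follows; this is a simplified forward-selection stand-in for full stepwise regression, and the candidate variables and the 0.01 R-squared entry threshold are illustrative rather than the study's actual settings (footnote 8 lists the real candidate set):

```python
import random

def simple_ols(x, y):
    """Slope and intercept of a simple regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx

def forward_select(candidates, y, min_gain=0.01):
    """Greedy forward selection on residuals: at each step regress the
    current residual on each unused candidate, keep the one with the
    largest gain in explained variance, and stop when the best gain
    falls below min_gain."""
    my = sum(y) / len(y)
    resid = [v - my for v in y]
    tss = sum(r * r for r in resid)
    chosen, r2 = [], 0.0
    while True:
        best = None
        for name, x in candidates.items():
            if name in chosen:
                continue
            b, a = simple_ols(x, resid)
            new = [r - (a + b * u) for u, r in zip(x, resid)]
            gain = (sum(r * r for r in resid) - sum(r * r for r in new)) / tss
            if best is None or gain > best[0]:
                best = (gain, name, new)
        if best is None or best[0] < min_gain:
            break
        gain, name, resid = best
        chosen.append(name)
        r2 += gain
    return chosen, r2

# Invented data: the account is driven by sales; the prime rate is noise.
rng = random.Random(0)
sales = [100 + rng.gauss(0, 10) for _ in range(60)]
prm = [8 + rng.gauss(0, 1) for _ in range(60)]
y = [2 * s + rng.gauss(0, 1) for s in sales]
chosen, r2 = forward_select({"sales": sales, "prm": prm}, y)
```

Only the sales variable survives selection here, mirroring how the stepwise model retains just the regressors that significantly raise explanatory power.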

* Griffin-Watts ARIMA (011) x [(011).sub.12] Model:

E([Y.sub.t]) = [alpha] + [summation][PHI] [sub.n](B)[X.sub.nt] + [theta] (B)/[OMEGA] (B)[e.sub.t]

where:

E([Y.sub.t]) is the expected value of account Y in month t,

[alpha] is the constant term,

[PHI] [sub.n](B) is the transfer function weight for the nth input series,

B is the backshift operator, B[X.sub.t] = [X.sub.t-1],

[X.sub.nt] is the nth input time series (or a differenced series) at time t,

[theta] (B) is the moving average operator, [theta](B) = 1 - [theta.sub.1]B - ... - [theta.sub.q][B.sup.q],

[OMEGA] (B) is the autoregressive operator, [OMEGA](B) = 1 - [OMEGA.sub.1]B - ... - [OMEGA.sub.p][B.sup.p], and

[e.sub.t] is the error term.

(Allen 1993; Lorek et al. 1992)
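For readers who want the mechanics, one-step forecasts from a model of this form can be generated recursively. The sketch below takes the two moving-average parameters as given (in the study they would be estimated from the 48-month base period), ignores the transfer-function inputs, and initialises early residuals to zero:

```python
def airline_forecasts(y, theta, Theta, s=12):
    """One-step-ahead forecasts for ARIMA (011) x (011)_s with *given* MA
    parameters.  After regular and seasonal differencing, the model is
    w_t = e_t - theta*e_{t-1} - Theta*e_{t-s} + theta*Theta*e_{t-s-1},
    so the forecast undoes the differencing and adds the MA correction."""
    e = [0.0] * len(y)           # residuals, built up recursively
    fcst = [None] * len(y)       # no forecast until lags are available
    for t in range(s + 1, len(y)):
        ma_part = -theta * e[t-1] - Theta * e[t-s] + theta * Theta * e[t-s-1]
        fcst[t] = y[t-1] + y[t-s] - y[t-s-1] + ma_part
        e[t] = y[t] - fcst[t]
    return fcst

# A series that is exactly linear trend + seasonal spike is forecast
# without error, since double differencing removes both components.
series = [10.0 + 0.5 * t + (3.0 if t % 12 == 0 else 0.0) for t in range(60)]
preds = airline_forecasts(series, theta=0.4, Theta=0.3)
```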

* Modified X-11 Model:

E([Y.sub.t]) = [T.sub.t] x [S.sub.t] x [I.sub.t]

where:

E([Y.sub.t]) is the expected value of account Y in month t,

T is the trend-cycle component,

S is the seasonal component, and

I is the irregular component.

(Wheeler and Pany 1990)
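A classical multiplicative decomposition in the spirit of this model can be sketched as follows. This is the textbook ratio-to-moving-average procedure, not the full X-11 program, and the example series is invented:

```python
def centered_ma(y, s=12):
    """Centered 2 x 12 moving average, the trend-cycle (T) estimate.
    Ends of the series, where no full window exists, are left as None."""
    half = s // 2
    out = [None] * len(y)
    for t in range(half, len(y) - half):
        window = y[t - half:t + half + 1]
        out[t] = (0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]) / s
    return out

def seasonal_indices(y, trend, s=12):
    """Seasonal (S) component: average the detrended ratios y/T by month,
    then normalise the indices to a mean of one."""
    buckets = [[] for _ in range(s)]
    for t, tr in enumerate(trend):
        if tr is not None:
            buckets[t % s].append(y[t] / tr)
    idx = [sum(b) / len(b) for b in buckets]
    norm = sum(idx) / s
    return [i / norm for i in idx]

# Invented series: slow trend with a 20 percent spike every January.
y = [(100.0 + t) * (1.2 if t % 12 == 0 else 1.0) for t in range(48)]
trend = centered_ma(y)
seas = seasonal_indices(y, trend)
# E(Y_t) would then be formed as T_t x S_t (the irregular I averages to one).
```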

* Martingale Model:

E([Y.sub.t]) = [Y.sub.t-1]

where:

E([Y.sub.t]) is the expected value of account Y in month t, and [Y.sub.t-1] is the simulated value of account Y in month t-1.

Structural Model

While there are many ways to develop a structural analytical procedure, such as the regression based procedures suggested by Kaplan (1978), Neter (1980) and Wild (1987), we have elected to use the entity-relation (E-R) format because it can closely model a firm's economic activities. E-R diagrams are used to design many contemporary database accounting systems. As a result, these diagrams, along with their associated relational tables, can play a significant role in the auditors' review process (Amer 1993). Moreover, using CASE (computer-aided software engineering), auditors can derive E-R diagrams from relational databases (Bachman Information Systems 1988; Knowledge Ware 1990). The generic structural model is patterned after the entity-relation diagram in figure 1 and expressed by the equations that are set forth in the following discussion. All variable definitions are found in table 2 panel B.

[FIGURE 1 OMITTED]

The structural model incorporates the interdependencies among the accounting numbers that are characteristic of the nature of a company and key exogenous variables that drive the economic events of the company. The generic structural model we use defines the relationships affecting economic resources, such as changes in inventory, explicitly with accounting identities. Events, such as cash collection and production, are functions of the endogenous and exogenous variables that are likely to affect the event. Accruals like accounts receivable and accounts payable are defined as imbalances between the timing of the flow of economic resources. Each equation is noted below with its rationale. The structural model is recursive and follows the general sequence of the equations below. (9) These generic equations differ from the simulation models (table 2) in a number of ways. The differences entail the use of different sets of endogenous and exogenous variables, the presence or absence of intercept terms, different random error (source of variability) terms, and different constraints. The equations we use are generic and should be embellished and modified given a particular organization's environment, entities and activities. In the future, they may be modeled via an information dual as suggested by Elliott (1994, 1995).

1. Sales and production events. All equations are founded on actual sales, denoted in the expectation equation as [AS.sub.t]. Cash collections are a function of current and past sales where the coefficients are constrained for economic reasons. Collections are modeled by:

E([COL.sub.t]) = [b.sub.1]([AS.sub.t]) + [b.sub.2]([AS.sub.t-1]) + [b.sub.3]([AS.sub.t-2]) + [e.sub.t] (1)

where

[b.sub.1] + [b.sub.2] + [b.sub.3] = 1.
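The constraint can be imposed by substitution, which reduces the fit to an unconstrained regression through the origin. The sketch below uses invented sales data with known coefficients to show the idea:

```python
def fit_collections(col, sales):
    """Fit E(COL_t) = b1*AS_t + b2*AS_{t-1} + b3*AS_{t-2} subject to
    b1 + b2 + b3 = 1 by substituting b3 = 1 - b1 - b2, i.e. regressing
    (COL_t - AS_{t-2}) on (AS_t - AS_{t-2}) and (AS_{t-1} - AS_{t-2})
    with no intercept, solving the 2x2 normal equations directly."""
    ys, x1, x2 = [], [], []
    for t in range(2, len(col)):
        ys.append(col[t] - sales[t - 2])
        x1.append(sales[t] - sales[t - 2])
        x2.append(sales[t - 1] - sales[t - 2])
    a11 = sum(v * v for v in x1)
    a12 = sum(u * v for u, v in zip(x1, x2))
    a22 = sum(v * v for v in x2)
    c1 = sum(u * v for u, v in zip(x1, ys))
    c2 = sum(u * v for u, v in zip(x2, ys))
    det = a11 * a22 - a12 * a12
    b1 = (c1 * a22 - c2 * a12) / det
    b2 = (a11 * c2 - a12 * c1) / det
    return b1, b2, 1.0 - b1 - b2

# Invented sales with enough variation to identify the lags; collections
# are built with known weights (0.5, 0.3, 0.2), which the fit recovers.
sales = [100.0 + (t * 37) % 19 for t in range(30)]
col = [0.0, 0.0] + [0.5 * sales[t] + 0.3 * sales[t - 1] + 0.2 * sales[t - 2]
                    for t in range(2, 30)]
b1, b2, b3 = fit_collections(col, sales)
```

The substitution guarantees the recovered coefficients sum to one, so the economic constraint holds exactly by construction.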

Production is a function of current sales, the materials price index, a labor price index, and an index of industrial production. The exogenous indices adjust for price changes, and the industrial production index reflects decisions to produce for the next period (i.e., adjust production for the seasonal and economic outlook). Production is modeled by:

E([PRD.sub.t]) = [b.sub.1]([AS.sub.t]) + [b.sub.2]([RM.sub.t]) + [b.sub.3]([EMP.sub.t]) + [b.sub.4]([IND.sub.t]) + [e.sub.t]. (2)

The cost of goods sold parallels actual sales with adjustments for material and labor price changes, as well as for a general measure of pressure on material and labor prices (i.e., the industrial production index). Cost of goods sold is modeled by:

E([COG.sub.t]) = [b.sub.1]([AS.sub.t]) + [b.sub.2]([RM.sub.t]) + [b.sub.3]([EMP.sub.t]) + [b.sub.4]([IND.sub.t]) + [e.sub.t]. (3)

2. Administrative activities (events). Administrative activities are modeled as a function of many factors like those illustrated in figure 1. Because some administrative expenses are committed and continue for a long period of time, and others are discretionary and likely to be a function of assets and profitability, we model administrative expense as:

E([ADM.sub.t]) = [b.sub.0] + [b.sub.1] ([AS.sub.t]) + [b.sub.2]([TA.sub.t]) + [b.sub.3]([GM.sub.t]) + [b.sub.4]([OP.sub.t]) + [b.sub.5] ([EMP.sub.t]) + [b.sub.6]([ADM.sub.t-1]) + [e.sub.t] (4)

where the intercept term and ADM of last period capture the long-term components of ADM. (10)

Capital equipment acquisition events are captured in our model as changes in PPE and we calculate depreciation as a function of the average PPE. Specifically, we calculate expected depreciation as:

E([DEP.sub.t]) = [b.sub.1](([PPE.sub.t-1]+[PPE.sub.t])/2) + [e.sub.t]. (5)

We take the acquisition of financial capital as a given and use it to calculate interest expense. We model interest expense as a function of long-term debt, current liabilities, and the prime rate as:

E([INT.sub.t]) = [b.sub.1]([LTD.sub.t]) + [b.sub.2]([CL.sub.t]) + [b.sub.3]([PRM.sub.t]) + [e.sub.t]. (6)

3. Economic resources. We model ending inventory simply as beginning inventory plus production less cost of goods sold, which follow from the event equations above. This is an accounting identity with no exogenous variables or unexplained variation. Thus,

E([INV.sub.t]) = [INV.sub.t-1] + E([PRD.sub.t])-E([COG.sub.t]). (7)

As seen in figure 1, other resources such as CASH could be modeled in a similar fashion, but they are not used here for prediction or error detection.

4. Accruals. Accounts receivable is modeled as a delay in the flow of resources (cash) between the sales and collection events as noted in figure 1. Thus, accounts receivable is modeled as:

E([AR.sub.t]) = [AR.sub.t-1] + [AS.sub.t] - E([COL.sub.t]). (8)

Accounts payable is constructed from an accounts payable increase (AP-INC) and an accounts payable decrease (AP-DEC) and is modeled as:

E([AP.sub.t]) = [AP.sub.t-1] + E([AP-INC.sub.t]) - E([AP-DEC.sub.t]) = [AP.sub.t-1] + E([AP-NET.sub.t]). (9)

We assume that accounts payable increase is composed of production and administrative expenses and is expressed as:

E([AP-INC.sub.t]) = [b.sub.1i]([ADM.sub.t]) + [b.sub.2i]([PRD.sub.t]) + [e.sub.t] (10)

where the additional subscript indicates a coefficient for increase (i).

Further, we assume that a portion of production and all remaining payables from last period are paid in the current period and we define accounts payable decrease as:

E([AP-DEC.sub.t]) = [b.sub.1d]([AP.sub.t-1]) + [b.sub.2d]([PRD.sub.t]) + [e.sub.t] (11)

where the additional subscript indicates a coefficient for decrease (d).

To add a degree of mitigating exogenous economic influence, we also factor in interest rates and industrial production indices, resulting in a net change in accounts payable defined as:

E([AP-NET.sub.t]) = [b.sub.1]([ADM.sub.t]) + [b.sub.2]([PRD.sub.t]) + [b.sub.3]([AP.sub.t-1]) + [b.sub.4]([PRM.sub.t]) + [b.sub.5]([IND.sub.t]) + [e.sub.t] (12)

where [b.sub.1] > 0, 0 < [b.sub.2] < 1, and -1 < [b.sub.3] < 0.
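The recursive flow of the system, event regressions feeding accounting identities, can be sketched for one month as follows; the coefficient values and the reduced regressor set are illustrative only, not estimates from the study:

```python
def structural_predict(prior, coef, exog):
    """One month of the recursive structural system: event expectations
    (equations 1-3, with a reduced regressor set for brevity) feed the
    accounting identities for inventory and receivables (equations 7-8).
    Error terms are set to zero to give point predictions."""
    AS, AS1, AS2 = exog["AS"], exog["AS_1"], exog["AS_2"]
    col = coef["col"][0] * AS + coef["col"][1] * AS1 + coef["col"][2] * AS2  # eq (1)
    prd = coef["prd"][0] * AS + coef["prd"][1] * exog["RM"]                  # eq (2), reduced
    cog = coef["cog"][0] * AS + coef["cog"][1] * exog["RM"]                  # eq (3), reduced
    inv = prior["INV"] + prd - cog                                           # eq (7), identity
    ar = prior["AR"] + AS - col                                              # eq (8), identity
    return {"COL": col, "PRD": prd, "COG": cog, "INV": inv, "AR": ar}

# Illustrative priors, coefficients, and exogenous values.
pred = structural_predict(
    prior={"INV": 50.0, "AR": 40.0},
    coef={"col": (0.5, 0.3, 0.2), "prd": (1.0, 0.1), "cog": (0.9, 0.1)},
    exog={"AS": 100.0, "AS_1": 90.0, "AS_2": 80.0, "RM": 10.0},
)
```

Note how the identities carry no error terms of their own: all unexplained variation in INV and AR flows through from the event equations, which is the sense in which the system is recursive.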

Because structural models incorporate interdependencies among accounting numbers and key exogenous variables that drive the economic environment of the company, they should perform better than ARIMA, X-11, and Martingale models which do not incorporate these relationships. Moreover, the structural model should perform better than multivariate models that do not explicitly specify the inherent nature of the resource, event and agent determinants that are characteristic of a particular organization. In particular, the comparison between the structural model and the stepwise regression model focuses on the degree to which the explicit incorporation of the characteristic structure of the organization's transactions into an analytical procedure enhances its predictability and error detection capability. In summary, Amer (1993) states "from the auditor's perspective, the use of the ER conceptual modeling ... will result in more effective audits of accounting database processing environments."

Phase 4: Error Seeding and Investigation Rules

Once the simulated data have been constructed, errors are systematically seeded into the data in phase 4. The same types of errors used by Wheeler and Pany (1990) are used in this study because these are errors commonly encountered by manufacturing and retail companies. The errors and the accounts affected are summarized in appendix A in table A.1. (11)

Errors are seeded in the 12-month prediction period. (12) Only the results for no error ([absolute value of E]=0) and material errors ([absolute value of E]=M) are reported in this manuscript. We use Warren and Elliott's (1986) empirically based materiality level for the sales-driven accounts (accounts receivable, inventory, accounts payable, and cost of goods sold) because each of these tends to be related to sales. This materiality level is 0.5 percent of annual sales. However, for other accounts that are not driven by sales, M is defined in a more relative way as 1 percent of the account's annual balance. This definition affords some scaling for these accounts, which may be overwhelmed by a more global definition based on sales revenue. In all cases examined, the definition of materiality for nonsales-driven accounts is less than Warren and Elliott's (1986) definition, making our test of each model's ability to detect a material error much more stringent than the use of 0.5 percent of annual sales. (13) As a result, our measure of beta risk is more conservative than if we had used 0.5 percent of the annual sales. We analyze our results separately so that any effect of using two definitions may be determined. However, because the same proportion of sales-driven and nonsales-driven accounts are present in each case, relative global comparisons are valid. In total, 2,100 (150 simulated companies x 2 materiality levels ([absolute value of E]=0 and [absolute value of E]=M)x 7 accounts tested) time-series are generated for each time period.
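The two materiality definitions and the seeding step can be sketched directly (account names and amounts are illustrative):

```python
def materiality(annual_sales, annual_balance, sales_driven):
    """Materiality as defined in the study: 0.5 percent of annual sales
    for the sales-driven accounts (AR, INV, AP, COG), and 1 percent of
    the account's annual balance for the others (ADM, DEP, INT)."""
    return 0.005 * annual_sales if sales_driven else 0.01 * annual_balance

def seed_error(balances, month, amount):
    """Seed an error of the given size into one month of the series."""
    out = list(balances)
    out[month] += amount
    return out

# Illustrative amounts: a sales-driven account vs. a nonsales-driven one.
M_ar = materiality(annual_sales=120_000.0, annual_balance=30_000.0, sales_driven=True)
M_dep = materiality(annual_sales=120_000.0, annual_balance=2_400.0, sales_driven=False)
seeded = seed_error([10.0, 20.0, 30.0], month=1, amount=M_dep)
```

With these numbers the nonsales-driven threshold (24) is far below 0.5 percent of sales (600), which is why the balance-based definition makes the detection test more stringent.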

To determine to what extent the achieved alpha and beta risks are close to the nominal or expected level, we use the central limit theorem and assume a nominal level of alpha and beta risk of [alpha] =.33 for the positive and negative approach, respectively. (14) This seems reasonable given that auditors use additional collateral evidence, such as the test of details, in their assessment of detection risk. A combined detection risk of .05 can be achieved by combining an analytical review risk of .33 with a test of details risk of .15. We also make our comparisons using the unmodified positive approach like Duke et al. (1982) and most prior analytical review studies. However, in practice, some modify the positive approach to achieve the same ex ante and ex post results as the negative approach (Smieliauskas 1990, 163-165). (15)
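One plausible operationalisation of the two testing approaches, following our reading of figure 2 (the interval form and function names are ours, not the study's), is:

```python
from statistics import NormalDist

# Two-tailed normal cutoff for a .33 nominal risk level.
Z = NormalDist().inv_cdf(1 - 0.33 / 2)

def positive_test(actual, predicted, s):
    """Positive approach: investigate when the recorded amount falls
    outside the prediction interval; alpha is controlled at |E| = 0."""
    return abs(actual - predicted) > Z * s

def negative_test(actual, predicted, s, M):
    """Negative approach: investigate when the recorded amount falls
    inside an interval centred on P - M or P + M; beta is controlled
    at |E| = M."""
    return (abs(actual - (predicted - M)) <= Z * s or
            abs(actual - (predicted + M)) <= Z * s)
```

Under this sketch, widening the interval (larger s) lowers the positive approach's flag rate on clean accounts but raises the negative approach's, matching the directions summarised in table 4.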

PROPOSITIONS AND RESULTS

We test four propositions related to several factors that are believed to affect the performance of analytical procedures (expectation models). The propositions consider the predictive ability and error detection performance of the models, as well as the effect of the degree of structure and the sales behavior pattern on the performance of the different models. We do an ex post analysis of the error detection performance of the analytical procedures and compare our results to a nominal large-sample ex ante criterion. (16) We assume that the central limit theorem underlies all of the analytical procedures tested. This means that, ex ante, our tests should have alpha and beta risks equal to .33 at the control point for both the positive and negative approaches.

When a proposition is that the expectation models' prediction error or decision risks are equal, the proposition is tested directly and rejected if a significant difference is observed. When a proposition is that the models' prediction error or decision risks are not equal, we test the null and accept the proposition if the difference is significant. We use a significance level of .05 and two-tailed t-tests for our equality tests. The propositions are tested at a global and account group level. The global level includes all seven accounts (i.e., AR, INV, AP, COG, ADM, DEP and INT) together. The group level consists of two groups. The sales-driven group is composed of AR, INV, AP and COG, where the definition of materiality is a function of sales (0.5 percent of net annual sales). The nonsales-driven group is composed of ADM, DEP and INT, where the definition of materiality is a function of account balance (1 percent of annual account balance). (17)

To enhance the generalizability, we replicate this study for two periods of time (1986-1990 and 1989-1993) and we compare the results with those generated for a shorter (36-month) estimation period in appendix B. The results of both time periods are presented in the tables. The results are presented for both the positive and negative testing approaches and for more than one degree of structure in many cases.

Predictive Ability

Structural models incorporate not only the key exogenous variables that drive the economic events of the company, but also the interdependencies among the accounting numbers that are characteristic of the nature of a company. They are expected to have better predictive ability than the other models we test. Therefore, our first proposition is that structural models will generate predicted values significantly closer to the simulated values and they will have smaller mean absolute percentage prediction errors (MAPE) than stepwise, ARIMA, X-11, and Martingale models.

In order to analyze the behavior of each expectation model completely, we first need to consider both its confidence interval and its prediction accuracy. The mean absolute percentage prediction error (MAPE) and the mean prediction error (MPE) summarize, respectively, the confidence interval width and the prediction accuracy of the models tested, and hence their potential influence on the alpha and beta decision risks resulting from the models' utilization. A larger MAPE implies a larger standard deviation of the prediction errors and a wider confidence interval for the predicted value. The MPE, which is calculated as the average of the differences between the predicted values and the actual values, measures the distance between the mean of the predicted values and the mean of the actual values. The smaller the MPE, the closer the mean of the predicted values is to the mean of the actual values. Table 3 compares the MAPE and MPE for each model. (18)

TABLE 3
Mean Absolute Percentage Prediction Errors (MAPE) and Mean Prediction Errors (MPE)

Higher Structure        Global   Sales (a)   NonSales (b)
MAPE (1986-1990)
  Structural            0.034    0.032       0.036
  Stepwise              0.033    0.030       0.036
  ARIMA                 0.060    0.051       0.073
  X-11                  0.081    0.072       0.093
  Martingale            0.058    0.048       0.071
MAPE (1989-1993)
  Structural            0.104    0.037       0.193
  Stepwise              0.103    0.030       0.199
  ARIMA                 0.118    0.047       0.212
  X-11                  0.136    0.075       0.217
  Martingale            0.115    0.041       0.212
MPE (1986-1990)
  Structural            0.054    0.092       0.029
  Stepwise              0.319    0.553       0.008
  ARIMA                 0.108    0.152       0.050
  X-11                  1.285    2.202       0.067
  Martingale            0.155    0.251       0.028
MPE (1989-1993)
  Structural            0.066    0.113       0.009
  Stepwise              0.123    0.215       0.007
  ARIMA                 0.135    0.215       0.035
  X-11                  0.838    1.472       0.053
  Martingale            0.115    0.189       0.017

Normal Structure        Global   Sales       NonSales
MAPE (1986-1990)
  Structural            0.108    0.132       0.075
  Stepwise              0.094    0.104       0.082
  ARIMA                 0.160    0.185       0.125
  X-11                  0.199    0.229       0.159
  Martingale            0.171    0.202       0.129
MAPE (1989-1993)
  Structural            0.216    0.075       0.405
  Stepwise              0.218    0.072       0.411
  ARIMA                 0.256    0.099       0.465
  X-11                  0.430    0.243       0.679
  Martingale            0.268    0.090       0.506
MPE (1986-1990)
  Structural            0.170    0.302       0.022
  Stepwise              0.541    0.918       0.038
  ARIMA                 0.524    0.899       0.054
  X-11                  1.400    2.385       0.074
  Martingale            0.355    0.600       0.027
MPE (1989-1993)
  Structural            0.086    0.138       0.018
  Stepwise              0.128    0.229       0.012
  ARIMA                 0.316    0.527       0.039
  X-11                  0.898    1.537       0.058
  Martingale            0.205    0.346       0.020

Lower Structure         Global   Sales       Nonsales
MAPE (1986-1990)
  Structural            0.127    0.124       0.131
  Stepwise              0.134    0.121       0.149
  ARIMA                 0.182    0.181       0.184
  X-11                  0.526    0.726       0.259
  Martingale            0.193    0.382       0.206
MAPE (1989-1993)
  Structural            0.423    0.118       0.831
  Stepwise              0.412    0.110       0.813
  ARIMA                 0.474    0.143       0.915
  X-11                  0.708    0.307       1.244
  Martingale            0.452    0.134       0.876
MPE (1986-1990)
  Structural            0.443    0.798       0.054
  Stepwise              0.724    1.218       0.065
  ARIMA                 0.161    0.247       0.051
  X-11                  1.597    2.741       0.069
  Martingale            0.276    0.455       0.037
MPE (1989-1993)
  Structural            0.244    0.407       0.028
  Stepwise              0.291    0.503       0.020
  ARIMA                 0.370    0.644       0.080
  X-11                  1.409    2.534       0.091
  Martingale            0.158    0.276       0.014
(a) The definition of materiality for the sales-driven group, which includes AR, INV, AP and COG, is 0.5 percent of annual sales. (b) The definition of materiality for the nonsales-driven group, which includes ADM, DEP and INT, is 1 percent of the account's annual balance.
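The two summary measures are straightforward to compute; the sketch below also shows how offsetting signed errors can drive MPE to zero while MAPE stays positive:

```python
def mape(actual, predicted):
    """Mean absolute percentage prediction error: larger values imply a
    larger spread of prediction errors and a wider confidence interval."""
    return sum(abs((p - a) / a) for a, p in zip(actual, predicted)) / len(actual)

def mpe(actual, predicted):
    """Mean prediction error: the average signed difference between
    predicted and actual values."""
    return sum(p - a for a, p in zip(actual, predicted)) / len(actual)

# Invented two-month example with offsetting over- and under-prediction.
a = [100.0, 200.0]
p = [110.0, 190.0]
```

Here MPE is exactly zero while MAPE is 0.075, which is why the two measures are reported together: one captures bias, the other dispersion.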

Based on the global MAPE and MPE numbers in the "Normal Structure" column of table 3, the distributions of predicted values and actual values for each expectation model are depicted in figure 2. The X-11 model has the largest MAPE and MPE when compared to the ARIMA, Martingale, stepwise, and structural models. Therefore, it has the greatest standard deviation and confidence intervals of the predicted values and the greatest distance between the means of predicted values and actual values. Icerman et al. (1993) likewise found that X-11 had the highest MAPE. On the other hand, the ARIMA and Martingale models have relatively large MAPE (uniformly larger than structural and stepwise) which can cause large confidence intervals. The distributions of actual and predicted values (figure 2) for each of the expectation models are also useful for interpreting the results for the positive (panel A) and negative (panel B) approaches. The distribution of the predicted values P is represented by the dashed line and the distribution of the actual values A without the seeded errors is represented by the solid line. Variants of these which include errors are also shown where appropriate. Confidence intervals are represented by vertical dashed lines.

[FIGURE 2 OMITTED]

The alpha risk at [absolute value of E]=0 is the likelihood that an auditor will conclude that an account is in error when it is not. This is the area of distribution A outside the confidence interval for the distribution P for the positive approach (figure 2 panel A) and the area of distribution A inside the confidence interval for the distribution P-M or P+M for the negative approach (figure 2 panel B). Generally, for the positive approach, a wider confidence interval can lead to a smaller likelihood of an actual amount A being outside the confidence interval resulting in a lower level of alpha risk at [absolute value of E]=0. However, alpha risk also depends on the accuracy of the expectation model. For example, the MPE error can be so large that the alpha risk can be very high due to poor prediction as it apparently is for the X-11 model as shown in figure 2 panel A. For the negative approach, a wider confidence interval can lead to a greater level of alpha risk because there is greater likelihood of an actual amount falling within the confidence interval of P-M or P+M. However, a very large prediction error (high MPE) can lead to a very small or very large alpha risk. For example, there may be very little chance that A will fall within the confidence interval of P-M or a large portion of A may fall within the confidence interval of P+M as can be seen in figure 2 panel B. This would be reversed for P>A.

The beta risk at [absolute value of E]=M, for both approaches, is the likelihood that an auditor will conclude that an account is not in material error when it is in material error. This is the area of distribution A-E or A+E inside the confidence interval for the distribution P for the positive approach and the area of distribution A-E or A+E outside the confidence interval for the distribution P-M or P+M, respectively, for the negative approach. For the positive approach, a wider confidence interval can lead to a greater likelihood of an actual amount A-E (or A+E) being inside the confidence interval, resulting in a greater beta risk at [absolute value of E]=M of not detecting a material error. For the negative approach, a wider confidence interval can actually lead to a lower level of beta risk as can be observed in figure 2 panel B, because there is less of a chance the area under the curve A-E or A+E will fall outside the confidence interval of P-M or P+M, respectively. Similar to the alpha risk, a large prediction error (MPE), such as that for the X-11 model, can lead to anomalies resulting in very low or high beta risks regardless of the width of the confidence interval. Table 4 provides a summary of the relationships among confidence intervals, risk and testing approach assuming that MPE is relatively small, i.e., P is close to A.

TABLE 4
Relationships Among Confidence Interval Width, Risk, and Testing Approach (a)

                                        Confidence Interval Width   Risk Level
Alpha Risk ([absolute value of E]=0):
  Positive Approach                     Wider                       Lower (Control Point)
  Negative Approach                     Wider                       Higher
Beta Risk ([absolute value of E]=M):
  Positive Approach                     Wider                       Higher
  Negative Approach                     Wider                       Lower (Control Point)

(a) Assuming the predicted values (P) and actual values (A) are reasonably close.

We use two-tailed t-tests to test the proposition that there is a significant difference in the mean absolute percentage prediction errors (MAPE) between the structural and other models. The results are summarized in table 5. (19) As can be seen, the structural models do not generate significantly smaller MAPE than the stepwise models. In fact, for the higher degree of structure cases in 1986-1990, the stepwise models even generate significantly smaller MAPE than the structural models at the global level. In contrast, the structural models generate significantly smaller MAPE than the ARIMA models for five out of six account groups for 1986-1990. There is no significant difference between the structural and ARIMA models for the 1989-1993 period.

TABLE 5
Equality Tests (a) on Models' Mean Absolute Percentage Prediction Errors (MAPE) for the Structural Model (S) vs. Stepwise (T), ARIMA (A), X-11 (X), and Martingale (M) Models

Higher Structure     Global      Sales       Nonsales
Panel A: 1986-1990
  S-T                X (.02)     ns (.11)    ns (.13)
  S-A                * (.00)     * (.00)     * (.00)
  S-X                * (.00)     * (.00)     * (.00)
  S-M                * (.00)     * (.01)     * (.00)
Panel B: 1989-1993
  S-T                ns (.95)    ns (.15)    ns (.92)
  S-A                ns (.49)    ns (.13)    ns (.65)
  S-X                ns (.20)    * (.00)     ns (.69)
  S-M                ns (.68)    ns (.48)    ns (.76)

Normal Structure     Global      Sales       Nonsales
Panel A: 1986-1990
  S-T                ns (.48)    ns (.46)    ns (.81)
  S-A                ns (.28)    ns (.60)    * (.00)
  S-X                * (.05)     ns (.26)    * (.00)
  S-M                ns (.22)    ns (.45)    * (.00)
Panel B: 1989-1993
  S-T                ns (.92)    ns (.46)    ns (.94)
  S-A                ns (.29)    ns (.37)    ns (.40)
  S-X                * (.01)     * (.01)     ns (.09)
  S-M                ns (.29)    ns (.65)    ns (.35)

Lower Structure      Global      Sales       Nonsales
Panel A: 1986-1990
  S-T                ns (.81)    ns (.59)    ns (.62)
  S-A                * (.00)     * (.01)     * (.00)
  S-X                * (.00)     * (.00)     * (.00)
  S-M                * (.01)     ns (.08)    * (.00)
Panel B: 1989-1993
  S-T                ns (.94)    ns (.38)    ns (.98)
  S-A                ns (.54)    ns (.21)    ns (.65)
  S-X                * (.04)     * (.00)     ns (.17)
  S-M                ns (.70)    ns (.43)    ns (.79)

(a) Results of t-tests (i.e., p-values) are reported in this table. The sample size n=4,200 is made up of 5 companies x 10 simulations x 7 accounts x 12 months. * indicates that the MAPE of the former model (e.g., S) is significantly smaller than the MAPE of the latter model (e.g., T) at the .05 level. X indicates that the MAPE of the former model is significantly larger than the MAPE of the latter model at the .05 level. ns indicates not significantly different at the .05 level.

Structural models consistently generate smaller MAPE than X-11 models globally and for five out of six account groups across all degrees of structure in panel A of table 5 for 1986-1990. The results for 1989-1993 are in the same direction, particularly for sales-driven accounts, but not as strong. (20)

The MAPE for the structural models are significantly smaller than those of Martingale models for two of the three degrees of structure globally for 1986-1990. The proposition that structural models will generate smaller MAPE than Martingale models is more often supported for the nonsales-driven group than for the sales-driven group. For 1989-1993 there is no significant difference even though the MAPE for the structural models are uniformly smaller (table 3).

From the evidence above (tables 3 and 5), we conclude that, among the analytical procedures (expectation models) examined in this research, structural models perform significantly better than X-11 models, moderately better than ARIMA and Martingale models, but no better than stepwise models with respect to their predictive ability for the earlier time period. While the results in the later time period are not as significant, they all favor the structural model when compared to the ARIMA, X-11, and Martingale models, as shown in table 3. The results also can be observed graphically in figure 2, where the actual distributions are shown for the 1989-1993 time period. In general, the prediction performance is similar to that of Wild (1987, 158), who used one case and stated that the structural model "performed well at predicting the behavior of accounting numbers and was superior to univariate models on several dimensions (but) the prediction performance of the structural models was not significantly better than that of multivariate stepwise models."

A possible reason for the nonsignificant difference between structural and stepwise models is that a stepwise model indirectly incorporates the structure of the firm's business and economic activities in a manner similar to the structural models. Specifically, stepwise models incorporate plausible independent variables that contribute significantly to the explanatory power of the model. The independent variables in this research are chosen from a group of endogenous and exogenous variables that are plausible as shown in footnote 8, thus the stepwise equations are not completely contemporaneous. Given Wild's (1987) results and this potential indirect incorporation of the structural relationships among accounting and economic variables, the nonsignificant difference between the stepwise and structural models is not a surprise.

Error Detection Performance

In general, we posit that structural models will perform better than the other models in terms of error detection. This proposition is founded on the premise that structural models should generate predicted values closer to the simulated values, which should in turn lead to smaller standard deviations of the prediction errors and tighter confidence intervals. The precise nature of specific detection propositions depends on the testing approach used as shown in table 1. Testing for the positive approach is controlled at [absolute value of E]=0 to be [alpha] =.33. As a result, the alpha risk for the positive approach is expected to be the same for all the models because a common rejection area ([alpha] =.33) is applied to each model. While beta risk is not controlled at [absolute value of E]=M, tighter confidence intervals should result in less beta risk than for those models with wider confidence intervals for the positive approach. In contrast, the negative approach is controlled at [absolute value of E]=M to be [alpha] =.33. As a result, the beta risk for the negative approach is expected to be the same for all the models because the common rejection area ([alpha] =.33) is applied to each model. While the alpha risk is not controlled at [absolute value of E]=0, tighter confidence intervals should result in less alpha risk than for those models with wider confidence intervals for the negative approach. The logic for the direction of these risks at the noncontrol points was presented earlier. In general, auditors can predict decision risks at the control points, but they cannot easily make predictions at noncontrol points. The decision risks at the noncontrol points depend on a host of factors including the expectation model, testing approach, degree of economic stability, and many firm-specific characteristics such as sales behavior patterns that we test in this manuscript.
In summary, structural models are expected to generate significantly less beta risk for the positive approach and significantly less alpha risk for the negative approach than the stepwise, ARIMA, X-11, and Martingale models. All other risks are posited to be the same. The alpha and beta risks are obtained by dividing the number of incorrect decisions by the number of all possible simulation iterations and error occurrences.

Wheeler and Pany (1990) suggested that combining the alpha and beta risks implicitly assumes that the auditors are equally averse to either type of error. Since auditors' risk preference for the two types of errors may vary between auditors, situations and testing approaches, we report and analyze alpha and beta decision risks both jointly and separately and by testing approach. Moreover, alpha and beta risks are reported by degree of structure in an organization's economic activity. The relative incidence of each type of risk reflects the characteristics of each model as portrayed in figure 2. An auditor can then use these results in selecting an analytical procedure (model) and testing approach based on his or her audit objectives.

The alpha and beta risks generated globally are shown in table 6. (21) As can be seen, the majority of the average risks are less than 0.5, a random process benchmark, for the structural and stepwise procedures which may make them beneficial for an auditor to use. This result holds for both the positive and negative approaches. The comparative combined positive approach risks are lower than those reported by Kinney (1987) for all the models and lower than Wheeler and Pany (1990) for the structural and stepwise approaches for [alpha] =.33. (22) The positive approach results are comparable to Wheeler and Pany (1990) for the X-11 model. (23) The following discussion of the alpha and beta risks presented in table 6 is supplemented with results of two-tailed t-tests of the propositions of the differences in alpha, beta, and combined alpha and beta risks between the structural and other models, where significance is defined at the .05 level. Details can be obtained from the authors.

TABLE 6
Comparison of Average Global Alpha, Beta and Combined Alpha and Beta Decision Risks for Each Degree of Structure (48-Month Estimation Period)
(Alpha is measured at [absolute value of E]=0; beta is measured at [absolute value of E]=M.)

Higher Structure
                                        Alpha    Beta     Com. (a)
Panel A: 1986-1990 Positive Approach
  Structural                            0.368    0.170    0.269
  Stepwise                              0.407    0.158    0.283
  ARIMA                                 0.330    0.282    0.305
  X-11                                  0.579    0.256    0.417
  Martingale                            0.323    0.318    0.321
Panel B: 1989-1993 Positive Approach
  Structural                            0.367    0.225    0.296
  Stepwise                              0.391    0.185    0.288
  ARIMA                                 0.315    0.303    0.309
  X-11                                  0.605    0.198    0.402
  Martingale                            0.325    0.279    0.302
Panel C: 1986-1990 Negative Approach
  Structural                            0.169    0.307    0.238
  Stepwise                              0.056    0.314    0.185
  ARIMA                                 0.321    0.265    0.293
  X-11                                  0.344    0.336    0.340
  Martingale                            0.305    0.291    0.298
Panel D: 1989-1993 Negative Approach
  Structural                            0.238    0.292    0.265
  Stepwise                              0.199    0.290    0.245
  ARIMA                                 0.321    0.261    0.291
  X-11                                  0.351    0.335    0.343
  Martingale                            0.291    0.275    0.283

Normal Structure
                                        Alpha    Beta     Com. (a)
Panel A: 1986-1990 Positive Approach
  Structural                            0.388    0.282    0.335
  Stepwise                              0.458    0.230    0.345
  ARIMA                                 0.353    0.405    0.379
  X-11                                  0.628    0.267    0.448
  Martingale                            0.313    0.445    0.379
Panel B: 1989-1993 Positive Approach
  Structural                            0.345    0.338    0.342
  Stepwise                              0.385    0.279    0.332
  ARIMA                                 0.307    0.435    0.371
  X-11                                  0.589    0.275    0.432
  Martingale                            0.324    0.426    0.375
Panel C: 1986-1990 Negative Approach
  Structural                            0.346    0.290    0.318
  Stepwise                              0.184    0.350    0.267
  ARIMA                                 0.466    0.234    0.350
  X-11                                  0.467    0.281    0.374
  Martingale                            0.460    0.265    0.363
Panel D: 1989-1993 Negative Approach
  Structural                            0.430    0.230    0.330
  Stepwise                              0.372    0.226    0.299
  ARIMA                                 0.519    0.201    0.360
  X-11                                  0.523    0.266    0.395
  Martingale                            0.487    0.196    0.342

Lower Structure
                                        Alpha    Beta     Com. (a)
Panel A: 1986-1990 Positive Approach
  Structural                            0.504    0.322    0.413
  Stepwise                              0.492    0.264    0.378
  ARIMA                                 0.371    0.472    0.422
  X-11                                  0.545    0.339    0.442
  Martingale                            0.444    0.457    0.451
Panel B: 1989-1993 Positive Approach
  Structural                            0.358    0.416    0.387
  Stepwise                              0.391    0.343    0.367
  ARIMA                                 0.334    0.513    0.423
  X-11                                  0.590    0.332    0.461
  Martingale                            0.344    0.498    0.421
Panel C: 1986-1990 Negative Approach
  Structural                            0.487    0.239    0.363
  Stepwise                              0.304    0.281    0.293
  ARIMA                                 0.631    0.172    0.402
  X-11                                  0.631    0.215    0.423
  Martingale                            0.635    0.205    0.420
Panel D: 1989-1993 Negative Approach
  Structural                            0.543    0.186    0.365
  Stepwise                              0.483    0.191    0.337
  ARIMA                                 0.630    0.156    0.393
  X-11                                  0.612    0.199    0.406
  Martingale                            0.613    0.161    0.387

(a) "Com." represents the average combined alpha and beta decision risk.

There are several common patterns in the results. For the positive approach, it is apparent from table 6 that the ARIMA and Martingale models yield the lowest alpha risk. The alpha risk for both of these models is closer to the specified control point ([absolute value of E]=0) risk of [alpha] =.33. When compared with the structural model, the ARIMA and Martingale models both have a significantly lower alpha risk than the structural model for all three degrees of economic stability for the early time period and for the highest level of economic stability for the later time period. This, however, is coupled with a greater, often significantly greater, MAPE (table 5) than the structural and stepwise models. Thus, as discussed earlier relative to figure 2, the apparently better performance of ARIMA and Martingale could be due simply to a larger confidence interval (MAPE). In contrast, the stepwise, structural and X-11 models seem to perform better in terms of the beta risk for the positive approach. When we compare the structural model with the other models, we find that the structural model has a significantly lower beta risk than either the ARIMA or the Martingale model for all degrees of structure and for both time periods. But, except for the earlier time period for the higher degree of structure, the stepwise model has a significantly lower beta risk for the positive approach than the structural model. These results are consistent with the earlier discussion of alpha and beta risks as seen in figure 2 and tables 3 through 5.

It is also apparent from table 6 that the beta risks are close to or below the specified control point ([absolute value of E] [greater than or equal to] M) risk of [alpha] =.33 for all the models for all degrees of structure for the negative approach. When we compare the structural model with the others, the differences are not significant in most cases, except that the ARIMA model has a significantly lower beta risk than the structural model in some cases and the stepwise model has a significantly higher beta risk than the structural model in a few lower degree of economic stability cases. In terms of alpha risk, the structural and stepwise models yield the lowest alpha risk for both time periods for the negative approach. This is in part due to a smaller MAPE (table 3) and smaller confidence interval (figure 2) for the structural and stepwise models, as discussed earlier. When we compare the structural model with the other models, we find that except for the Martingale model in the later time period, the alpha risk for the structural model is significantly lower than that for the ARIMA, X-11, and Martingale models for all degrees of economic stability and for both time periods. Again, these results are consistent with our earlier discussion of alpha and beta risks.

From the combined perspective, the structural and stepwise models consistently perform the best for both approaches and for both time periods. The structural model consistently has significantly lower combined risks than all the models except the stepwise model, for which the pattern is reversed, with the stepwise model having a significantly lower combined risk, as can be observed in table 6. The performance of the X-11 model in table 6 appears to be an anomaly. The small beta risk for the positive approach is not caused by good performance, but by very poor performance. This poor performance, as measured by relatively high MAPE and MPE, may follow from predicted values that are far away from the simulated values as illustrated in table 3 and figure 2. As a result, the X-11 model frequently concludes that there is an error whether errors are seeded or not. This causes a large alpha risk and a small beta risk. Icerman et al. (1993, 13) likewise concluded that the X-11 model's poor performance may have resulted in a small beta risk.

It is also apparent that in terms of audit assurance, with respect to the likelihood of not detecting a material error when there is one, the structural and stepwise models clearly outperform the ARIMA and Martingale models with respect to beta risks for the positive approach. However, from a statistical perspective, the negative approach may be more appropriate for audit assurance because the risk is controlled at M, although some in practice adjust positive approach results to approximate negative approach results. Our negative approach results indicate that all five models achieve satisfactory assurance and control the beta risk close to or below the nominal level of .33. Given this fact, the structural and stepwise models outperform the ARIMA, X-11, and Martingale models in terms of audit efficiency by obtaining lower alpha risks for the negative approach. Using the structural and stepwise models, auditors can potentially save a significant amount of excess audit effort when no material error is present.
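One common normal-theory formalization of the two testing approaches (in the spirit of the interval tests the study bases on the central limit theorem) is sketched below. The exact interval construction used in the study may differ, and the function names are ours; the point is only to show where the nominal risk of .33 enters each approach:

```python
from statistics import NormalDist

def positive_approach_threshold(s, alpha=0.33):
    """Investigation threshold that controls alpha risk at |E|=0: with no
    error, the chance |actual - predicted| exceeds the threshold is alpha.
    s is the prediction-error standard deviation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    return z * s

def negative_approach_threshold(s, materiality, beta=0.33):
    """Investigation threshold that controls beta risk at |E|=M: with a
    seeded error of size M, the chance the observed difference stays below
    the threshold (a missed error) is beta."""
    z = NormalDist().inv_cdf(1 - beta)
    return max(materiality - z * s, 0.0)

def investigate(actual, predicted, threshold):
    """Flag the account for further audit work when the difference between
    the recorded and predicted balance exceeds the threshold."""
    return abs(actual - predicted) > threshold
```

Note how the negative-approach threshold shrinks as the confidence interval (via `s`) widens: a model with a large MAPE forces more investigations, which is consistent with the high negative-approach alpha risks observed for ARIMA, X-11, and Martingale.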

Although the testing details are not shown here, we test the general proposition that a greater degree of economic structure will result in lower alpha and beta risks at the noncontrol points and equal risks at the control points. We find that the detection of error can be significantly affected by the degree of structure in an organization's economic activities. Alpha risks are significantly lower in the more structured cases for all the models when the negative approach is used, as shown in table 6. For the positive approach, more structure leads to some alpha risk reduction even though such an improvement is unexpected because the positive approach controls for alpha risk at [absolute value of E]=0. Further, as the degree of structure in economic activity declines, the structural and stepwise models, as well as the X-11 model, have trouble controlling alpha risk for the positive approach. Apparently, more economic structure helps all the models perform better in terms of minimizing alpha risks for the auditors. However, for the beta risks the picture is mixed. Generally, more structure leads to a significant beta risk reduction for the positive approach, as posited. Beta risk for the negative approach, however, is higher for the more structured cases. If we assume that more structure leads to a smaller confidence interval, as it seems to do given the reduction in MAPE shown in table 3, we should be able to use the logical relationships among confidence intervals, risk and testing approach to explain these results. This relationship does in part explain the paradox where less structure can lead to a lower beta risk in table 6 for the negative approach. However, it does not seem to follow for alpha risk at the control point for the positive approach. Therefore, we must conclude that this logical mapping cannot explain all that affects risk behavior, especially when predictions are not very accurate, as indicated by a high MPE.

Although the details are not presented here, there is also some evidence that the nature of the sales behavior pattern and other company characteristics have an effect on the alpha and beta risks. Using ANOVA we test the proposition that the alpha and beta risks differ across the five different sales behavior patterns. In general, we find that the sales behavior pattern has a significant effect on alpha risk for the negative approach and on the beta risk for both approaches for each expectation model, and this influence is consistent across all expectation models. We also find that the rankings of companies, by alpha and beta risks for sales behavior patterns across companies, are consistent across the five models tested. Analytical procedure models tend to perform better (lower beta risk for the positive approach and lower alpha risk for the negative approach) when the sales pattern contains less variation (e.g., trend effect only) and perform worse when the sales pattern has more variation or a strong seasonal effect. If we assume that a simpler sales behavior pattern would result in a smaller confidence interval for both approaches, these results would follow from the logical relationships among confidence intervals, risk and testing approach discussed earlier. These results also suggest that auditors may choose different criteria when using analytical procedures on audits for companies with different sales patterns. It may also be that a procedure's absolute and relative performance, in terms of predictability, can be affected by overall business activity, as shown in tables 3 and 5, and by the particular time period. In summary, we can conclude that the performance of the analytical procedures depends very much on the sales behavior pattern, which is an indicator of the character of an organization's economic activity.

Materiality and Prediction Error

As a final analysis, we assess the signaling ability of the five analytical procedures. Like Loebbecke and Steinbart (1987), Wheeler and Pany (1990) and Lorek et al. (1992), we compare the "swamping" effect of the prediction errors to materiality. Table 7 shows the frequency with which the prediction error exceeds materiality for all degrees of structure. As can be observed, the structural model using monthly data performs better than the other models with the exception of the stepwise model. Although not detailed here, all models have fewer prediction errors exceeding materiality for the higher degree of structure, as would be expected. Indeed, all the models using simulated monthly data perform better than those reported in the above studies, which used quarterly data. This may follow in part from the use of monthly data, as suggested in prior research. These results are encouraging, particularly for the structural and stepwise models.

TABLE 7
Frequency of Prediction Errors Exceeding Materiality (Average of All Degrees of Structure)

1986-1990            Global    Sales Driven (a)    Nonsales (b)
  Structural         24.8%     31.6%               15.7%
  Stepwise           24.0      30.5                15.3
  ARIMA              39.5      45.6                31.4
  X-11               46.7      54.8                35.9
  Martingale         35.0      40.2                28.0

1989-1993            Global    Sales Driven (a)    Nonsales (b)
  Structural         26.0%     23.8%               29.1%
  Stepwise           26.2      22.6                31.1
  ARIMA              33.3      31.3                36.0
  X-11               40.6      42.6                38.0
  Martingale         32.0      26.4                39.4

(a) The definition of materiality for the sales-driven group is 0.5 percent of annual sales.
(b) The definition of materiality for the nonsales-driven group is 1 percent of the annual account balance.
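The "swamping" comparison of table 7 reduces to a frequency count like the following sketch (`swamping_frequency` is our naming; per the table notes, materiality would be set to 0.5 percent of annual sales for sales-driven accounts or 1 percent of the annual account balance otherwise):

```python
def swamping_frequency(actuals, predictions, materiality):
    """Fraction of months in which the absolute prediction error exceeds
    materiality -- i.e., how often the model's own noise would 'swamp'
    a material error signal."""
    exceed = sum(abs(a - p) > materiality
                 for a, p in zip(actuals, predictions))
    return exceed / len(actuals)
```

A model with a low swamping frequency leaves more room for a genuine material error to stand out against ordinary prediction noise.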

DISCUSSION, LIMITATIONS AND CONCLUDING COMMENTS

Discussion

Our overall research hypothesis is that structural analytical procedures are better predictors of account balances and they are better at detecting errors than other commonly suggested procedures. The results suggest that the structural model's predictive performance is superior to nonstructural models (X-11, ARIMA and Martingale) for a wide range of companies with various characteristics (tables 3 and 5). However, consistent with Wild's (1987) finding for a single company, the structural model is no better than the stepwise model because the stepwise model indirectly incorporates the structure of a firm's financial and economic activities as part of the variable selection process. In addition, our results clearly suggest that sophisticated analytical procedures, such as the structural and stepwise models, may be beneficial when monthly data are used. Perhaps, in contrast to Icerman et al.'s (1993) findings using quarterly data, the use of monthly data, which can capture economic activities on a more timely basis, aids these more sophisticated models' predictive ability.

Structural models generally perform better than the ARIMA, X-11 and Martingale models (table 6) in terms of overall error detection performance (combined risks) for both time periods and for both testing approaches. This overall comparative advantage does not hold against the stepwise model, which is often significantly better than the structural model (table 6). The stepwise model achieves the same overall objective as the structural model, which is founded on expected relationships among economic events, resources and agents as illustrated in figure 1.

If an auditor's objective is not overall performance, our results suggest that the error detection performance of the structural model relative to the other models is very much dependent on the nature of testing approach and type of decision risk. As a result, auditors must select their expectation model in light of the testing approach used and their relative risk preferences. Auditors must also be careful in interpreting one model's alpha and beta risks relative to another model's.

It is apparent from the discussion earlier (table 6) that in most of the positive testing approach cases the ARIMA and Martingale models yield a significantly lower alpha risk of error detection than the structural model (table 6) and the other models (table B.1 in appendix B). In contrast, the stepwise and structural models tend to result in lower beta risks compared to the ARIMA and Martingale models (table 6) for the positive approach. It also seems that the X-11 model is preferable to the structural model in terms of beta risk; however, this is actually due to the poor prediction performance of the X-11 model, whose prediction errors are so large that a material error is signaled so often that there is little chance a material error will be missed. Considering the above observations, one could conclude that the ARIMA and Martingale models are preferable in terms of minimizing alpha risk, which is the risk of concluding that an account is in material error when it is not. However, if an auditor is interested in minimizing beta risk, which is the risk of concluding that an account is not in material error when it is, our results suggest using the stepwise and structural models for protection against beta risk (table 6) for the positive approach. In general, for the positive approach, the ARIMA and Martingale models may expose auditors to unnecessary beta risks, and their lower alpha risks may simply be due to larger confidence intervals.

If an auditor uses the negative testing approach in controlling beta risk, our results suggest that all five models tested achieve beta risks close to or lower than the designated level of .33. Among them, ARIMA and Martingale have slightly lower beta risks than other models. This, again, may be due to larger confidence intervals. While all provide satisfactory protection against beta risk, it is apparent that both the structural and stepwise models lead to a lower level of alpha risk for the negative testing approach. These are the two models that use collateral evidence from other accounts and exogenous economic indicators. The structural model incorporates the evidence directly and the stepwise incorporates it indirectly. It is still an open question as to whether in a specific environment a customized structural model will perform better than the stepwise model. Auditors often have access to the information needed to develop customized models for their clients based on an E-R model such as the generic one shown in figure 1.

It is noteworthy that the relative alpha and beta risks for the expectation models tested can be related to the logical relationships among each model's confidence intervals, risk and testing approach. These relationships can be predicted by the MAPE and MPE for each expectation model because MAPE and MPE can be mapped into an estimation model's respective confidence interval and prediction accuracy. In particular, it has been shown that ARIMA and Martingale minimize the risk (alpha and beta for the positive and negative testing approaches, respectively) at the control point. We argue here that this may be illusory. It may be due to wide confidence intervals, represented by large MAPEs, and not to these models being better estimators than the stepwise or the structural models. They may be sufficient in many cases, however, because they may be more economical and they do exhibit a satisfactory beta risk at [absolute value of E]=M for the negative approach. An auditor must balance the costs associated with the ease of use of these estimators against the potential costs of more audit effort due to the high alpha risk when using the negative approach.
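MAPE and MPE, the two summary statistics this mapping relies on, can be computed as below. The sign convention for the percentage error (actual minus predicted, scaled by actual) is our assumption, since the text does not spell it out:

```python
def mape_mpe(actuals, predictions):
    """Mean absolute percentage error (a proxy for confidence-interval
    width) and mean percentage error (a proxy for prediction bias).
    Assumes actual balances are nonzero; sign convention is assumed."""
    pct_errors = [(a - p) / a for a, p in zip(actuals, predictions)]
    mape = sum(abs(e) for e in pct_errors) / len(pct_errors) * 100
    mpe = sum(pct_errors) / len(pct_errors) * 100
    return mape, mpe
```

A model can have offsetting over- and under-predictions (MPE near zero) while still producing wide intervals (large MAPE), which is exactly the pattern that makes control-point risk comparisons misleading.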

In general, auditors can select the appropriate testing approach and model which will result in achieved risks being close to planned levels. From table 6, the negative approach should be used to achieve a low beta risk ([absolute value of E]=M) and the positive approach should be used to achieve a low alpha risk ([absolute value of E]=0). While we do not explicitly test the relative differences between the positive and negative approaches, it nevertheless is apparent from table 6 that all the models either achieve or come close to achieving the desired level of assurance (1-[beta] risk) at [absolute value of E]=M for the negative approach. However, very few do so for the positive approach. This, coupled with the observation that the positive approach displays some apparent difficulty in achieving expected levels of alpha risk at the control point, tends to favor using the negative approach and selecting the procedure which yields the lowest alpha risk. We also find that the beta risk and alpha risk at the noncontrol points ([absolute value of E]=M for the positive approach and [absolute value of E]=0 for the negative approach) can be significantly reduced with increased economic structure.

From another perspective, our results support the potential audit applicability of a virtual mathematical model of an organization's economic activities which Elliott (1994, 1995) calls the "information dual." Such a model would be constructed very much like the generic structural model used in this study, except it would be much more comprehensive. Given our findings, one could conclude that such a sophisticated model would enable auditors to perform the real-time monitoring of an organization's economic activities suggested by Elliott (1995). Such monitoring could sense material errors and unusual economic events, and alert auditors and management as to their occurrence on a timely basis.

Limitations and Future Research Directions

This research has several limitations. First, this research uses five base companies which are all single-industry companies. This, however, is considered appropriate for this study as it is likely the level of inquiry of field auditors, e.g., a specific profit center, division, or product line. Second, even though the simulation data are founded on real company data, more research will ultimately be needed using actual monthly data for a large number of companies to confirm the simulated results of this study. Third, auditors may have some reluctance to use unaudited accounting data. This may be less of a concern because of the increased accuracy of accounting systems, as noted by Elliott (1995, 118).

Fourth, the performance of the structural model over other models may be understated in this research. The following reasons may cause this potential understatement of the advantages of structural models.

1) The Compustat quarterly database does not provide some detailed information (e.g., research and development expenditures and marketing expenditures) which may improve the explanatory power of structural models.

2) If real companies instead of simulated companies are used, there should be more exogenous variables available that are not subject to accounting errors. With more error-free exogenous variables built in via the event equations, the structural models might perform even better. Allen (1993), for example, found significant support for the inclusion of nonfinancial variables in regression and time-series models.

3) A generic structural model was tested in this study. Practitioners and auditors have access to information about the characteristics and the background of their clients. In particular, for many clients the auditor may have access to the E-R structure that underlies the database accounting system. With this valuable information, they should be able to establish unique structural models for each company that are better than the generic set of structural models used in this research.

Fifth, we have probably assessed best case scenarios in this paper where material errors are seeded in one time period. Future research could be done to assess worst case scenarios for these analytical procedures where small errors are dispersed throughout the year making them difficult to detect. In these situations it will be difficult for any procedure to distinguish between small errors, which can accumulate into a material error, and normal fluctuations in account balances. The challenge lies in determining an appropriate procedure that an auditor can use to accumulate these small errors. Some progress in resolving these issues has been accomplished by Knechel (1988) and Dzeng (1994) using STAR and other detection procedures.

Sixth, the ex ante risks used in this paper were based on the central limit theorem. An assessment of ex ante risks is important in audit planning for assessing the assurance associated with analytical procedures (Smieliauskas 1990). There is ample evidence that the central limit theorem assumption is probably violated, leading to different ex ante risk expectations. Most of this evidence is in the statistical sampling literature, e.g., Neter and Loebbecke (1975) and Johnson et al. (1982). Smieliauskas (1990) further shows that the testing approach has a substantial effect on sampling risk when other audit evidence is considered. Future research is needed to assess the ex ante risks associated with the analytical procedures tested in this manuscript. This research can follow the lead of statistical sampling, but it will be more complex due to the interrelationships among the accounts and their relationships with exogenous variables. The ARIMA, X-11 time series and Martingale models may be easier to track with respect to ex ante risks, but we have shown them to be less desirable from an ex post assessment of their alpha and beta risks. Thus, an important area of future research is the assessment of the ex ante risks of structural and stepwise models, which are far more complex because they incorporate collateral evidence.

The structural model, because all the accounts are interrelated, may be an effective vehicle for an auditor to begin a preliminary explanation analysis as suggested by Kinney (1987, 69). This would be an interesting area for future research. Leitch and McKeown (1991) provide some ideas on how this may be accomplished in the accounting environment. Given the findings of Turner (1997) with respect to beta risk and materiality, future research could incorporate the use of financial statement information, namely ratios, in the assessment of analytical procedures. Finally, we have only considered risk at [absolute value of E]=0 and [absolute value of E]=M. Thus, future research can be directed to evaluate each expectation model based on their entire power curve as suggested by Duke et al. (1982).

Conclusions and Contributions

Past research suggests the need for developing sophisticated expectation models like structural models that can improve the efficiency and effectiveness of analytical procedures. However, very limited research has been done to evaluate the performance of structural models, and the results have been inconclusive. In this research we build and test a generic structural model based on a general set of entities (events, resources and agents) and their endogenous and exogenous relationships. It incorporates the logical structure of a generic organization and its economic activities as measured by accounting numbers and driven by exogenous economic variables. The structural expectation model tested in this study includes more relevant information and requires greater computational skill than ARIMA, X-11 and Martingale models. We test all the models on complete sets of simulated monthly financial statements. Moreover, we test the models using both the positive and negative approaches for assessing alpha and beta risks. We define alpha and beta risks from the auditor's perspective.

This study provides evidence that the predictive ability and overall error detection performance (in terms of the combined alpha and beta risks) of a generic structural model is better than nonstructural models (i.e., ARIMA, X-11 and Martingale). It, however, is no better than the stepwise model which indirectly incorporates an organization's business and economic activity structure. There is evidence that, together, the structural and stepwise models outperform the ARIMA, X-11 and Martingale models. They clearly lead to lower combined alpha and beta risks than the other models for both testing approaches. The results for the positive approach suggest that if an auditor wants to minimize the risk of concluding that an account is not in material error when it is in material error, he or she should use a model that incorporates the structure (directly or indirectly) of an organization's economic events and relevant endogenous variables, such as the structural or the stepwise model. The results for the negative approach also favor the structural or the stepwise models. With the lower alpha risk, these two procedures can enhance audit efficiency by reducing excessive audit effort when no material error is present, while still providing the needed assurance of detecting a material error. In general, we find that the error detection performance of all the models with respect to beta risk is a function of sales behavior patterns and the stability of an organization's business and economic activities. Also, we find that the performance of each model with respect to alpha and beta risks also tends to be a function of the testing approach used. 
In summary, if we assume that an auditor would first select the analytical review procedures that achieves a specified level of assurance (1-[beta] risk) that a material error will be detected, then selects a procedure within that set which minimizes his or her alpha risk and extra audit effort, our results clearly favor the use of either the stepwise or the structural analytical procedure coupled with the negative testing approach.

From the practicing auditor's perspective, the results here show that a generic structural model is no better than a stepwise model. This suggests that a cost-effective way to perform analytical review would be to select all the variables that might affect a particular account using generalized audit software, combine these with appropriate and available exogenous variables which affect a client's general business environment, and run a stepwise regression to obtain an estimate for an account's balance in a particular month. With this information they could then use the negative testing approach to determine when an account appears not to be in material error. Then they could reduce the audit effort in those cases where there is reasonable assurance that the account is not in material error. From another perspective, our results clearly show that practicing auditors who rely on the balance from the previous period, as expressed by the Martingale model as a basis for analytical procedures, can greatly improve their prediction and error detection capabilities by using the natural structure of the accounting system which reflects the organization's business and economic activities.
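The cost-effective procedure recommended above could be prototyped as a simple forward-selection loop. This is a rough sketch with a residual-sum-of-squares entry criterion rather than the F-test of classical stepwise regression, and all names and data are illustrative:

```python
import numpy as np

def forward_stepwise(y, candidates, max_vars=3):
    """Greedy forward selection: repeatedly add the candidate series
    (another account or an exogenous economic indicator) that most reduces
    the residual sum of squares of an OLS fit of the target account balance.

    y: target account balance series (np.ndarray).
    candidates: dict mapping variable name -> aligned series (np.ndarray).
    Returns the names of the selected predictors, in order of entry.
    """
    selected, remaining = [], dict(candidates)
    n = len(y)
    while remaining and len(selected) < max_vars:
        best_name, best_rss = None, None
        for name, x in remaining.items():
            cols = [np.ones(n)] + [candidates[s] for s in selected] + [x]
            X = np.column_stack(cols)
            _, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss_val = rss[0] if len(rss) else 0.0  # empty when fit is exact/deficient
            if best_rss is None or rss_val < best_rss:
                best_name, best_rss = name, rss_val
        selected.append(best_name)
        remaining.pop(best_name)
    return selected
```

A production version would add a stopping rule (e.g., an F-test or information criterion) so that uninformative variables are not forced in; the monthly balance estimate then comes from refitting OLS on the selected variables.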

Finally, the results presented here are based on a generic set of structural equations. In practice, auditors have access to the specific structural relationships of a client. The natural structure can be readily obtained from an increasing number of companies that use relational database accounting systems. These companies often document their accounting systems using E-R diagrams or, if they do not, auditors can derive E-R diagrams from clients' database management systems using tools like computer-aided software engineering (CASE). In general, structural models, including stepwise models, have the potential of increasing the effectiveness and the efficiency of analytical procedures. These attributes should help auditors increase the effectiveness of their analytical procedures as well as reduce the costs associated with additional audit effort.

APPENDIX A

Data Acquisition and Generation of Approximate Monthly Data (Phase 1)

Endogenous Corporate Variables

We use the following steps to collect endogenous variables. First, from a subset of single-industry companies we select five that have different sales behavior patterns. Following Wheeler and Pany (1990), the five companies represent different time-series sales behavior patterns commonly encountered by auditors. These patterns differ in the combination of time-series properties (i.e., trend, cycle and seasonality). Figure A.1 graphically depicts the time-series behavior of each company's sales for the 20 quarters ending December 31, 1990. Second, for each of the five companies we obtain quarterly data from Compustat for the years 1986 through 1990 (24) (20 quarters) for the endogenous variables listed in table 2 panel B. We repeat this for 1989-1993. The endogenous variables selected are those used by prior research and available in Compustat. (25) Third, we estimate monthly data (for simulation purposes) from quarterly data using a cubic curve-fitting technique. (26) This is done because monthly accounting data are not available from Compustat or other public sources, but they are required for the purposes of this research. Fourth, we remove seasonal and trend effects from the data to obtain a standard deviation. Fifth, to approximate actual monthly data we randomize the original curve-fitted monthly data based on their deseasonalized and detrended standard deviations. Figure A.2 shows the procedures of curve fitting, removal of seasonal and trend effects, and generation of sample randomized account data. (27) In summary, endogenous variables noted in table 2 panel B for five different representative (not actual) companies are generated using this approach. This is then replicated for 1989-1993.
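The quarterly-to-monthly estimation (third step) might look like the following sketch. The footnoted cubic curve-fitting technique is not fully specified in the text, so fitting a single cubic polynomial to the quarterly observations and rescaling each quarter's three months to preserve the reported quarterly total is our assumption, suitable only for positive flow variables:

```python
import numpy as np

def quarterly_to_monthly(quarterly):
    """Estimate monthly values from quarterly flow data (assumed approach):
    fit a cubic polynomial through the quarterly observations (indexed at
    quarter midpoints), read it off on a monthly grid, then rescale each
    quarter's three months so they sum to the reported quarterly amount."""
    q = np.arange(len(quarterly), dtype=float)           # quarter index
    coef = np.polyfit(q, quarterly, deg=3)               # needs >= 4 quarters
    months = np.arange(len(quarterly) * 3) / 3.0 - 1.0 / 3.0
    raw = np.polyval(coef, months) / 3.0                 # spread flow over 3 months
    monthly = raw.copy()
    for i in range(len(quarterly)):                      # preserve quarterly totals
        block = slice(3 * i, 3 * i + 3)
        monthly[block] *= quarterly[i] / raw[block].sum()
    return monthly
```

The study then adds randomized variation to such curve-fitted values, scaled by the deseasonalized and detrended standard deviation, to approximate real monthly fluctuations.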

[FIGURES A.1-A.2 OMITTED]

Calculated Endogenous Variables

To eventually simulate a number of companies and their complete monthly financial statements, one must calculate (infer from the approximate monthly financial data) other endogenous variables. The calculated endogenous variables are:

PFS: Preliminary Forecast of Sales (28)

PRD: Production Costs (PRD_t = INV_t - INV_{t-1} + COG_t)

COL: Collections of AR (COL_t = AR_{t-1} - AR_t + NS_t)

AP-INC: Accounts Payable Increment (AP-INC_t = PRD_t)

AP-DEC: Accounts Payable Decrement (AP-DEC_t = AP_{t-1} - AP_t + PRD_t)

The preliminary forecast of sales (PFS) is used later, together with other exogenous variables, to generate the forecast of sales (FS). The forecast of sales (FS) is then used as the major driver to simulate companies' monthly data. The forecast of sales (FS), rather than actual net sales (NS), is used because it incorporates the exogenous variables that help drive the sales of these representative companies and because it includes the random term used to control the degree of structure for each representative company. Also, net sales numbers may not be audited and error-free. Again, this is replicated for the second time period.
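The identities above can be computed directly from the approximated monthly series; the numbers below are invented solely to illustrate the arithmetic (index 0 holds the prior month's balance):

```python
import numpy as np

NS  = np.array([100., 110., 120.])    # net sales, months t = 1..3
COG = np.array([60., 66., 72.])       # cost of goods sold, months t = 1..3
INV = np.array([40., 42., 41., 43.])  # inventory balances, months 0..3
AR  = np.array([30., 33., 31., 35.])  # accounts receivable balances, months 0..3
AP  = np.array([20., 22., 21., 24.])  # accounts payable balances, months 0..3

# PRD_t = INV_t - INV_{t-1} + COG_t   (production costs)
PRD = INV[1:] - INV[:-1] + COG
# COL_t = AR_{t-1} - AR_t + NS_t      (collections of AR)
COL = AR[:-1] - AR[1:] + NS
# AP-INC_t = PRD_t                    (accounts payable increment)
AP_INC = PRD
# AP-DEC_t = AP_{t-1} - AP_t + PRD_t  (accounts payable decrement)
AP_DEC = AP[:-1] - AP[1:] + PRD
```

Each identity simply inverts the articulation of the accounts: collections, for instance, equal sales plus the decrease in receivables.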

Exogenous Economic Variables

Monthly exogenous variables for the same periods are collected. These variables are selected because they may have a relationship to an organization's economic activities and because they were commonly used in previous studies (Elliott and Uphoff 1972; Kaplan 1978; Wild 1987). A list of these exogenous variables, their sources, (29) and measurement units is provided in table 2, panel B. Industry total sales (ITS) is obtained by summing the net sales of all companies with the same industry code on the quarterly Compustat tapes. Monthly ITS data are then obtained using the same curve-fitting procedures illustrated earlier, and variation is likewise added to the curve-fitted monthly data using the standard deviation and random numbers in the manner used to generate the monthly endogenous variables.

TABLE A.1
Impact of Accounting Errors on Accounts

Error type                                      Effect on accounts
Fictitious credit sales                         AR ↑
Unrecorded purchase                             AP ↓, COG ↓
Inventory cutoff error (over)                   INV ↑, COG ↓
Inventory cutoff error (under)                  INV ↓, COG ↑
Interest expense cutoff error (over/under)      INT ↓
Interest expense miscalculation (under/over)    INT ↑
Unrecorded G & A expense                        AP ↓, ADM ↓
Depreciation miscalculation (under)             DEP ↓
Depreciation miscalculation (over)              DEP ↑

Accounts not listed for an error type (among AR, INV, AP, COG, ADM, DEP and INT) are unaffected. Modified from Wheeler and Pany (1990, 575).
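As footnote 12 explains, the seeding procedure behind Table A.1 amounts to adding one material misstatement to a single month's account, with the sign given by the arrow; a minimal sketch, with the function name and amounts hypothetical:

```python
import numpy as np

def seed_error(series, month, materiality, direction=+1):
    """Return a copy of the account series with a material error of the
    given sign seeded in one month (cf. Table A.1 and footnote 12)."""
    seeded = series.copy()
    seeded[month] += direction * materiality
    return seeded

# Fictitious credit sales overstate AR (an up arrow in Table A.1)
AR = np.full(12, 100.0)
AR_seeded = seed_error(AR, month=5, materiality=8.0, direction=+1)
```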

APPENDIX B

Sensitivity Study

We conducted a sensitivity analysis using a 36-month rather than a 48-month estimation period. With the shorter estimation period, slightly higher combined alpha and beta risks were observed for almost all models, the exception being the Martingale model, which is indifferent to the length of the estimation period. The relative order among the models remains about the same. In most cases the stepwise and structural models still have the smallest combined risks, followed by the Martingale, ARIMA and X-11 models; the order of the Martingale and ARIMA models is reversed in a few cases. A 36-month vs. 48-month comparison of the ranking of the alpha and beta risks across the five expectation models is shown in table B.1. Detailed alpha, beta and combined risks based on a 36-month estimation period are provided in table B.2, which can be compared with table 6.

TABLE B.1
Ranking From the Smallest (Left) to the Largest (Right) on Alpha, Beta and Combined Alpha and Beta Decision Risks Across Structural (S), Stepwise (T), ARIMA (A), X-11 (X) and Martingale (M) Models

                      Higher Structure    Normal Structure    Lower Structure
                      36-month 48-month   36-month 48-month   36-month 48-month

Panel A: 1986-1990 Positive Approach
Ranking of Alpha      SMATX    MASTX      MASTX    MASTX      AMSTX    AMTSX
Ranking of Beta       TXSMA    TSXAM      TSXAM    TXSMM      TXSMA    TSXMA
Ranking of Combined   TSMAX    STAMX      TSMAX    STMAX      TSAMX    TSAMX

Panel B: 1989-1993 Positive Approach
Ranking of Alpha      AMSTX    AMSTX      AMSTX    AMSTX      AMSTX    AMSTX
Ranking of Beta       XTSMA    TXSMA      XTSMA    XTSMA      XTSMA    XTSMA
Ranking of Combined   SMTAX    TSMAX      TSMAX    TSAMX      TSMAX    TSMAX

Panel C: 1986-1990 Negative Approach
Ranking of Alpha      TSMAX    TSMAX      TSMXA    STMAX      TSMXA    TSXAM
Ranking of Beta       MASXT    AMSTX      ASMXT    AMXST      AXMST    AMXST
Ranking of Combined   TSMAX    TSAMX      TSAMX    TSAMX      TSAMX    TSAMX

Panel D: 1989-1993 Negative Approach
Ranking of Alpha      TSMXA    TSMAX      TSMXA    TSMAX      TSMXA    TSXMA
Ranking of Beta       AMTSX    AMTSX      AMTSX    MATSX      AMTSX    AMTSX
Ranking of Combined   TSMAX    TSMAX      TSMAX    TSMXA      TSAMX    TSMAX

TABLE B.2
Comparison of Average Global Alpha, Beta and Combined Alpha and Beta Decision Risks for Each Degree of Structure

Higher Structure
                      Alpha (|E| = 0)   Beta (|E| = M)   Combined
Panel A: 1986-1990 Positive Approach
Structural            0.288             0.232            0.260
Stepwise              0.357             0.124            0.241
ARIMA                 0.331             0.359            0.345
X-11                  0.676             0.192            0.434
Martingale            0.323             0.318            0.321
Panel B: 1989-1993 Positive Approach
Structural            0.336             0.236            0.286
Stepwise              0.427             0.187            0.307
ARIMA                 0.300             0.358            0.329
X-11                  0.698             0.173            0.436
Martingale            0.325             0.279            0.302
Panel C: 1986-1990 Negative Approach
Structural            0.202             0.298            0.250
Stepwise              0.093             0.335            0.214
ARIMA                 0.381             0.250            0.316
X-11                  0.383             0.306            0.345
Martingale            0.305             0.211            0.298
Panel D: 1989-1993 Negative Approach
Structural            0.237             0.290            0.264
Stepwise              0.208             0.287            0.248
ARIMA                 0.337             0.254            0.296
X-11                  0.318             0.355            0.337
Martingale            0.291             0.275            0.283

Normal Structure
                      Alpha (|E| = 0)   Beta (|E| = M)   Combined
Panel A: 1986-1990 Positive Approach
Structural            0.395             0.245            0.320
Stepwise              0.493             0.127            0.310
ARIMA                 0.367             0.429            0.398
X-11                  0.650             0.255            0.452
Martingale            0.313             0.445            0.379
Panel B: 1989-1993 Positive Approach
Structural            0.395             0.355            0.375
Stepwise              0.415             0.266            0.341
ARIMA                 0.293             0.495            0.394
X-11                  0.669             0.248            0.459
Martingale            0.324             0.426            0.375
Panel C: 1986-1990 Negative Approach
Structural            0.333             0.264            0.299
Stepwise              0.171             0.329            0.250
ARIMA                 0.521             0.192            0.357
X-11                  0.495             0.302            0.399
Martingale            0.460             0.265            0.363
Panel D: 1989-1993 Negative Approach
Structural            0.423             0.237            0.330
Stepwise              0.368             0.227            0.298
ARIMA                 0.519             0.170            0.345
X-11                  0.504             0.257            0.381
Martingale            0.487             0.196            0.342

Lower Structure
                      Alpha (|E| = 0)   Beta (|E| = M)   Combined
Panel A: 1986-1990 Positive Approach
Structural            0.531             0.327            0.429
Stepwise              0.557             0.201            0.359
ARIMA                 0.381             0.502            0.441
X-11                  0.707             0.265            0.486
Martingale            0.444             0.457            0.451
Panel B: 1989-1993 Positive Approach
Structural            0.412             0.430            0.421
Stepwise              0.636             0.330            0.383
ARIMA                 0.295             0.560            0.428
X-11                  0.650             0.302            0.476
Martingale            0.344             0.498            0.421
Panel C: 1986-1990 Negative Approach
Structural            0.502             0.282            0.392
Stepwise              0.329             0.389            0.359
ARIMA                 0.679             0.150            0.415
X-11                  0.657             0.198            0.428
Martingale            0.635             0.205            0.420
Panel D: 1989-1993 Negative Approach
Structural            0.565             0.206            0.386
Stepwise              0.488             0.206            0.347
ARIMA                 0.644             0.136            0.390
X-11                  0.633             0.239            0.436
Martingale            0.613             0.161            0.387

REFERENCES

Allen, R. D. 1993. Analytical procedures using financial and nonfinancial information: A comparison of alternative methods. Unpublished monograph, University of Utah.

Amer, T. S. 1993. Entity-relationship and relational database modeling representations for the audit review of accounting applications: An experimental examination of effectiveness. Journal of Information Systems 7 (Spring): 1-15.

American Institute of Certified Public Accountants (AICPA). 1981. Audit Sampling. Statement on Auditing Standards No. 39. New York, NY: AICPA.

--. 1982. AGAS. Audit and Accounting Guide: Audit Sampling. New York, NY: AICPA.

--. 1988. Analytical Procedures. Statement on Auditing Standards No. 56. New York, NY: AICPA.

Ang, J. S., J. H. Chua, and A. F. Fatemi. 1983. A comparison of econometric, time series, and composite forecasting methods in predicting accounting variables. Journal of Economics and Business (August): 301-311.

Bachman Information Systems. 1988. Bachman/re-engineering product set. Business Software Review (March): 67.

Bails, D. G., and L. C. Peppers. 1982. Business Fluctuations: Forecasting Techniques and Applications. Englewood Cliffs, NJ: Prentice Hall.

Beck, P. L., and I. Solomon. 1985a. Ex post sampling risks and decision rule choice in substantive testing. Auditing: A Journal of Practice & Theory (Spring): 1-10.

-- and --. 1985b. Sampling risks and audit consequences under alternative testing approaches. The Accounting Review (October): 714-723.

Biggs, S. F., and J. J. Wild. 1984. A note on the practice of analytical review. Auditing: A Journal of Practice & Theory 3 (Spring): 68-79.

Cheney, W., and D. Kincaid. 1985. Numerical Mathematics and Computing. New York, NY: Dekker.

Cogger, K. O. 1981. A time-series analytic approach to aggregation issues in accounting data. Journal of Accounting Research 19 (Autumn): 285-298.

Daroca, F., and W. Holder. 1985. The use of analytical procedures in review and audit engagements. Auditing: A Journal of Practice & Theory 4 (Spring): 80-92.

Duke, G. L., J. Neter, and R. A. Leitch. 1982. Power characteristics of test statistics in the auditing environment: An empirical study. Journal of Accounting Research 20 (Spring): 42-67.

--, R. A. Leitch, and J. Neter. 1985. A Study of Statistical Estimations. Studies in Accounting Research No. 33. Sarasota, FL: American Accounting Association.

Dzeng, S. C. 1994. A comparison of analytical procedure expectation models using both aggregate and disaggregate data. Auditing: A Journal of Practice & Theory 13 (Fall): 1-24.

Elliott, J. W., and H. L. Uphoff. 1972. Predicting the near term profit and loss statement with an econometric model: A feasibility study. Journal of Accounting Research 10 (Autumn): 259-274.

Elliott, R. K. 1994. Confronting the future: Choices for the attest function. Accounting Horizons 8 (September): 106-124.

--. 1995. The future of assurance services: Implications for academia. Accounting Horizons 9 (December): 118-127.

Holder, W. W. 1983. Analytical review procedures in planning the audit: An application study. Auditing: A Journal of Practice & Theory 2 (Spring): 100-107.

Hylas, R. E., and R. H. Ashton. 1982. Audit detection of financial statement errors. The Accounting Review 57 (October): 751-765.

Icerman, R. C., K. S. Lorek, S. W. Wheeler, and D. Fordham. 1993. An investigation of the feasibility of using statistical-based models as analytical procedures. Unpublished monograph, Florida State University.

Johnson, J. R., R. A. Leitch, and J. Neter. 1981. Characteristics of errors in accounts receivable and inventory audits. The Accounting Review (April): 270-293.

Kaplan, R. S. 1978. Developing a financial planning model for an analytical review: A feasibility study. In Proceedings of the Symposium on Auditing Research III. Champaign, IL: University of Illinois.

Kinney, W. R. 1978. ARIMA and regression in analytical review: An empirical test. The Accounting Review 16 (January): 48-60.

--. 1979. The predictive power of limited information in preliminary analytical review: An empirical study. Journal of Accounting Research 17 (Supplement): 148-165.

--. 1987. Attention-directing analytical review using accounting ratios: A case study. Auditing: A Journal of Practice & Theory 7 (Spring): 59-73.

Knechel, W. R. 1988. The effectiveness of statistical analytical review as a substantive auditing procedure: A simulation analysis. The Accounting Review 63 (January): 74-95.

Knowledge Ware. 1990. Application Development Workbench. Atlanta, GA: Knowledge Ware, Inc.

Leitch, R. A., and P. G. McKeown. 1991. Data editing: Transaction processing systems. Applications in Management Science 6: 115-136.

Lev, B. 1980. On the use of index models in analytical review by auditors. Journal of Accounting Research 18 (Autumn): 524-549.

Loebbecke, J. K., and P. J. Steinbart. 1987. An investigation of the use of preliminary analytical review to provide substantive audit evidence. Auditing: A Journal of Practice & Theory 6 (Spring): 74-89.

Lorek, K. S., B. C. Branson, and R. C. Icerman. 1992. On the use of time-series models as analytical procedures. Auditing: A Journal of Practice & Theory 11 (Fall): 66-87.

Neter, J., and J. K. Loebbecke. 1975. Behavior of major statistical estimators in sampling accounting populations--An empirical study. New York, NY: AICPA.

--. 1980. Two case studies on use of regression for analytical review. In Proceedings of the Symposium on Auditing Research III. Champaign, IL: University of Illinois.

Roberts, D. 1974. A statistical interpretation of SAP No. 54. Journal of Accountancy (March): 47-53.

Silhan, P. 1982. Simulated mergers of existent autonomous firms: A new approach to segmentation research. Journal of Accounting Research 20 (Spring): 255-262.

Smieliauskas, W. 1990. A reevaluation of the positive testing approach in auditing. Auditing: A Journal of Practice & Theory 9 (Supplement): 149-166.

Tabor, R., and J. Willis. 1985. Empirical evidence on the changing role of analytical review procedures. Auditing: A Journal of Practice & Theory 4 (Spring): 93-109.

Turner, J. L. 1997. The impact of materiality decisions on financial ratios: A computer simulation. Journal of Accounting, Auditing & Finance 12: 123-147.

Warren, C., and R. Elliott. 1986. Materiality and audit risk: A descriptive study. Unpublished monograph, University of Georgia.

Wheeler, S., and K. Pany. 1990. Assessing the performance of analytical procedures: A best case scenario. The Accounting Review 65 (July): 557-577.

Wild, J. J. 1987. The prediction performance of a structural model of accounting numbers. Journal of Accounting Research 25 (Spring): 139-160.

Wilson, A. C., and J. Colbert. 1989. An analysis of simple and rigorous decision methods as analytical procedures. Accounting Horizons (December): 79-83.

(1) Specifically, Deloitte & Touche uses the negative approach in its application of STAR.

(2) In contrast, Dzeng (1994) used a monthly data set and its variance-covariance matrix to simulate 100 sets of data for four time series.

(3) The five companies are selected from the list of companies identified by Silhan (1982) as public companies that essentially operate in one industry. Three of them are the same as those used by Wheeler and Pany (1990). They are Betz Laboratories Inc., Cooper Tire & Rubber, and A. M. Castle & Co.

(4) A software package called STELLA II is used to simulate data. It is available for the Mac and Windows.

(5) The major reason for using multiples of σ is to add uniform control to the simulation process so we can assess the results. Each stochastic equation used in the simulation model contains not only an error term e for that equation but also other variables that are, in turn, functions of the error terms in other equations. To start, equation (1) (table 2) is a function of several exogenous random variables such as industry sales. Second, dependent stochastic relationships can be deduced from equations like those set forth in table 2. For example, the sales forecast FS_t is based on PFS, industry and economic indices, and a randomly generated error term, say e_{t1}, based on the unexplained variation in the underlying company. In turn, COL (equation 6 in table 2) is a function not only of FS_t, which is stochastic by virtue of the e_{t1}s for several periods, but also of e_{t6}, which is based on the unexplained variation in the underlying company's collection patterns. Thus, the varying degree of structure induced by the random variables in each equation is much more complex than what would result from a simple normally distributed random variable appended to each equation. Each stochastically simulated number is therefore a function of several random variables, including those exogenous to the organization.
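The dependent stochastic structure described in this footnote can be made concrete with a stylized two-equation sketch; the coefficients, series, and standard deviations are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 24
sigma_fs, sigma_col = 5.0, 3.0          # residual std devs (illustrative)

PFS = 100 + 2 * np.arange(T)            # preliminary sales forecast
ITS = 1000 + 10 * np.arange(T)          # an exogenous industry index

# FS_t carries its own disturbance e_t1 ...
FS = 0.8 * PFS + 0.02 * ITS + rng.normal(0, sigma_fs, T)
# ... and COL_t inherits that randomness through FS while adding e_t6
COL = 0.9 * FS + rng.normal(0, sigma_col, T)
```

COL's total variation is thus a composite of several random sources: its own e_{t6} plus the e_{t1} randomness transmitted through FS.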

(6) The use of 0.5 and 1.5, a 50 percent decrease and increase respectively, tends to yield significant but not excessively large differences (both positive and negative) in the characteristics of the simulated data. In other words, they enable us to control the degrees of structure yet produce reasonable financial statements with no apparent asymptotic behavior in any of the accounts. These increases (decreases) are determined after some experimentation with the simulation models, but before any prediction or error detection testing.

(7) STELLA II provides a procedure to generate NID(0, σ) random numbers, where NID denotes normally and independently distributed.

(8) Potential variables (Xs) are manually chosen for the stepwise procedures. In the order of their entry into the selection procedure, they are:

AR: FS, FS_{t-1}, FS_{t-2}, COL, AR_{t-1}, AP, COG

INV: FS, FS_{t-1}, PRD, COL, INV_{t-1}, AP, COG, IND

AP: FS, FS_{t-1}, PRD, AR, INV, AP_{t-1}, COG, PRM

COG: FS, PRD, INV, AP, COG_{t-1}, EMP, RM, GM

ADM: FS, FS_{t-1}, AR, INV, ADM_{t-1}, OP

DEP: AR, INV, AP, COG, ADM, DEP_{t-1}, PPE

INT: FS, INV, AP, COG, INT_{t-1}, PRM, LTD, CL
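A forward stepwise selection of this kind can be sketched with ordinary least squares and an R-squared improvement rule; the data, the stopping threshold, and the variable names below are illustrative, not the study's procedure:

```python
import numpy as np

def forward_stepwise(y, candidates, min_improve=0.01):
    """Greedy forward selection: repeatedly add the candidate regressor
    that most improves R-squared, stopping when the gain is small."""
    def r2(cols):
        X = np.column_stack([np.ones_like(y)] + cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        tss = (y - y.mean()) @ (y - y.mean())
        return 1.0 - (resid @ resid) / tss

    selected, best = [], 0.0
    remaining = dict(candidates)
    while remaining:
        gains = {name: r2([candidates[s] for s in selected] + [x])
                 for name, x in remaining.items()}
        name = max(gains, key=gains.get)
        if gains[name] - best < min_improve:
            break
        selected.append(name)
        best = gains[name]
        del remaining[name]
    return selected

# Toy data: AR is driven by FS and COL; "noise" is irrelevant.
rng = np.random.default_rng(2)
n = 48
FS = rng.normal(100, 10, n)
COL = 0.5 * FS + rng.normal(0, 5, n)
noise = rng.normal(0, 10, n)
AR = 2.0 * FS - 0.8 * COL + rng.normal(0, 1, n)

chosen = forward_stepwise(AR, {"FS": FS, "COL": COL, "noise": noise})
```

On this toy data the procedure picks FS first and leaves the irrelevant regressor out, mimicking how the stepwise models indirectly pick up an account's structural drivers.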

(9) The simultaneous determination of sets of equations could also be used and will be the subject of future research. We use the recursive model here, however, because it follows the general transaction processing cycle and it is easily tractable in a simulation environment.

(10) Because selling expenses are not modeled in our data, ADM would also capture this component of the E-R diagram.

(11) Some of the six error types have both understated and overstated situations, yielding nine different error types in total.

(12) This amounts to the seeding of a material error in one period for each iteration. While it is the auditor's responsibility to determine the aggregate effect of small errors in each period which may accumulate to a material amount, it is unreasonable for attention directing analytical procedures to identify these small errors using disaggregated data. An error is seeded in each month so that the results are less sensitive to random occurrences for each month and the percentages represent a larger sample of possibilities.

(13) The definition of materiality used in this study for ADM, DEP and INT is, on average, 44.9 percent, 6.8 percent and 1.4 percent, respectively, of Warren and Elliott's (1986) definition of 0.5 percent of annual sales. It is also less than the 2 percent of an account balance suggested by Knechel (1988).

(14) The investigation rule, α = 0.33, was suggested by Kinney (1987) and used by Wheeler and Pany (1990). Kinney (1987, 65) suggests the use of α = 0.33 to reduce beta risk because sample size cannot always be increased in analytical procedures.

(15) Smieliauskas (1990, 163-165) shows in an appendix how the ex post results of the positive approach are modified to achieve the ex post results of the negative approach. We do not present the modified positive approach here; we simply recognize that those results would be the same as the negative results ex ante and ex post.

(16) Differences between ex ante and ex post analysis are a function of the robustness of the assumptions underlying the analytical procedure, the audit objectives of each approach (positive or negative), and the point at which the test is controlled under each approach.

(17) Account level details are available from the authors.

(18) The formulae for MAPE and MPE are listed below:

MAPE = [1/(3 × 5 × 10)] Σ_{k=1}^{3} Σ_{j=1}^{5} Σ_{i=1}^{10} |(P_{kji} - A_{kji}) / A_{kji}|

MPE = [1/(3 × 5 × 10)] Σ_{k=1}^{3} Σ_{j=1}^{5} Σ_{i=1}^{10} (P_{kji} - A_{kji}) / A_{kji}

where A is the actual value, P is the predicted value, i indexes the ten simulations of each sales behavior pattern and degree of structure, j indexes the five sales behavior patterns, and k indexes the three degrees of structure. The mean square percentage prediction error (MSPE) results, which are similar to the MAPE results, are also available from the authors.
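Because the triple sum simply averages the percentage errors over all 150 company-simulations, both measures reduce to means over the flattened prediction and actual arrays; a minimal sketch with made-up numbers:

```python
import numpy as np

def mape(P, A):
    """Mean absolute percentage error over all simulations/patterns/structures."""
    return np.mean(np.abs((P - A) / A))

def mpe(P, A):
    """Mean percentage error (signed, so offsetting errors cancel)."""
    return np.mean((P - A) / A)

A = np.array([100.0, 200.0, 400.0])
P = np.array([110.0, 180.0, 400.0])
# MAPE = (0.1 + 0.1 + 0)/3 ≈ 0.0667; MPE = (0.1 - 0.1 + 0)/3 = 0.0
```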

(19) We ran the Durbin-Watson (DW) test to ascertain the appropriateness of the model specification. The average DW test statistics are 1.9656 and 1.9356 for the structural and stepwise models, respectively. As a result, we conclude that there is no apparent autocorrelation of the residuals and that both models are appropriately specified.
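The DW statistic can be computed directly from the regression residuals; the simulated residuals below are illustrative:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences over the sum of squares;
    values near 2 indicate no first-order autocorrelation."""
    d = np.diff(resid)
    return (d @ d) / (resid @ resid)

rng = np.random.default_rng(3)
e = rng.normal(0, 1, 500)   # independent residuals => DW close to 2
```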

(20) The large MAPE (poor prediction) of the nonsales-driven accounts for 1989-1993 (table 3) is probably due to the unpredictable fluctuation of "Interest Expense" in the original data for Betz Laboratories, Inc. and Cooper Tire & Rubber. This resulted in large standard deviations of the residuals e for the simulation models based on these two companies, and in turn in small denominators in the MAPE calculation. As a result, large prediction errors occurred for this time period for nonsales-driven accounts. The multiples of e for high and low degrees of structure could have been adjusted to accommodate this for the second time period, but for consistency we used the same multiples for both time periods.

(21) Wheeler and Pany (1990) also adjust this sum by making an allowance for double-counting a Type I (alpha risk) and Type II (beta risk) error for the positive approach in the same situation.

(22) A complete description of all alpha and beta risks is available from the authors. They may also be compared with Icerman et al. (1993, table 5), where they added the error rates and compared them to a 1.0 random process benchmark for the positive approach.

(23) This comparison is made prior to any adjustment for possible double counting; see table 2 in Wheeler and Pany (1990, 568). Even though combined rates must be used with caution, they provide the only basis of comparison due to different definitions of materiality.

(24) Because of the dramatic increase of net sales in 1986, data of Standard Register are collected from Compustat for the years 1987 through 1991.

(25) Letters were sent to the subset of single-industry companies to acquire variables not available in Compustat (e.g., marketing and promotion expenses, administrative expenses, and research and development expenses). However, adequate responses were not obtained.

(26) The cubic spline is used to fit data points for several reasons. First, it is the most frequent choice for a curve-fitting function. Second, discontinuities cannot be visually detected in the third-degree curve. Third, research shows that using splines of degrees greater than three seldom yields any advantage. Finally, from a smoothness point of view, the cubic interpolation function is the best function to employ for curve fitting (Cheney and Kincaid 1985).

(27) It is possible that our effort to retain some natural variation via the randomization of deseasonalized and detrended data may have reduced the level of association among accounts of our five base companies. We believe that this problem is minimal because it was performed on deseasonalized and detrended data as can be observed in the final panel showing the randomized curve-fitted monthly data in figure A.2. As a result, the natural structure of the simulated companies may be slightly less than observed in practice; thus somewhat limiting the advantage of the structural approach.

(28) Winters' seasonal exponential smoothing model, which incorporates both trend and seasonal patterns in time-series forecasts, is used to generate a preliminary forecast of sales (PFS). The preliminary forecast of sales at time t+1 is derived as

PFS_{t+1} = (L_t + T_t) × S_{t-11},

where

L_t = α(D_t / S_{t-12}) + (1 - α)(L_{t-1} + T_{t-1}),
T_t = β(L_t - L_{t-1}) + (1 - β)T_{t-1}, and
S_t = γ(D_t / L_t) + (1 - γ)S_{t-12}.

L_t is the estimate of the intercept of the trend line at time t, T_t is the estimate of its slope at time t, S_t is the seasonal factor, and D_t is the actual value of sales. α, β and γ are exponential smoothing constants, with 0 < α, β, γ < 1, whose exact values are chosen to minimize the difference between actual and forecast values. For more details see Bails and Peppers (1982).
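The recursion can be sketched as follows; the initialization, the smoothing constants, and the test series are illustrative assumptions (the study instead chooses α, β, γ to minimize forecast error):

```python
import numpy as np

def winters_forecast(D, alpha=0.5, beta=0.3, gamma=0.4, period=12):
    """One-step-ahead Winters' (multiplicative seasonal) forecasts.
    Level/trend/seasonals are initialized crudely from the first two years."""
    L = np.mean(D[:period])                                   # initial level
    T = (np.mean(D[period:2 * period]) - np.mean(D[:period])) / period
    S = list(D[:period] / np.mean(D[:period]))                # initial seasonals
    forecasts = []
    for t in range(period, len(D)):
        s_old = S[t - period]
        # forecast for month t uses the seasonal factor from 12 months
        # earlier, i.e., PFS_{t+1} = (L_t + T_t) x S_{t-11} in the paper's indexing
        forecasts.append((L + T) * s_old)
        L_new = alpha * D[t] / s_old + (1 - alpha) * (L + T)  # level update
        T = beta * (L_new - L) + (1 - beta) * T               # trend update
        S.append(gamma * D[t] / L_new + (1 - gamma) * s_old)  # seasonal update
        L = L_new
    return np.array(forecasts)

# Deterministic trending, seasonal series: forecasts should track it closely
t = np.arange(48)
D = (100 + 2 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / 12))
F = winters_forecast(D)
```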

(29) The Bureau of Economic Analysis of the U.S. Department of Commerce releases monthly personal income and trade in goods and services statistics at the beginning of the month following the event's occurrence. The news releases are available at http://www.bea.doc.gov/bea/newsinf.htm.

Yining Chen is an Assistant Professor at Ohio University and Robert A. Leitch is a Professor at the University of South Carolina.

Author: Chen, Yining; Leitch, Robert A.

Publication: Auditing: A Journal of Practice & Theory

Date: Sep 22, 1998