Computational models as a knowledge management tool: a process model of the critical judgments made during audit planning.

ABSTRACT: Effective management of knowledge is essential for a CPA firm to remain competitive. Use of computational models of judgment processes and outcomes makes knowledge available for use and analysis. We present a comprehensive and integrated computational model of the difficult and knowledge-intensive judgments needed for successful audit planning. The model concludes on a client's going-concern status, applicable levels of inherent, control, and planned detection risk, and appropriate levels of statement- and account-level materiality. Most importantly, the model validly identifies the cause of significant fluctuations given causal hypotheses. The context is the sales and collection cycle of a manufacturing client. The model consistently replicates causal hypothesis judgments generated by the modeled auditor, who exhibits considerable judgment expertise, i.e., his (1) judgments typically coincide with actual causes. Concerning judgment expertise, the model reveals numerous linkages among judgments, subtle interdependencies in cue importance across judgments, and new findings concerning cue diagnosticity.

Keywords: analytical procedures; audit planning; audit risk; computational model; going concern; knowledge management.

Data Availability: Contact the first author.


Effective management of knowledge within a CPA firm is essential for superior performance (Havens and Knapp 2001). Traditional methods of making knowledge available include large investments in extensive training, mentoring, and review practices. Contemporary knowledge management technologies are more proactive, using virtual group collaboration tools and access to previous work products, as well as directories that facilitate access to professionals within a firm. However, making the contextual details of expert reasoning processes available interactively to audit personnel for specific client engagements may result in more effective auditing.

Computational modeling is designed to capture and communicate interactively an expert's detailed knowledge, including the context-dependencies of an expert's reasoning and judgment conclusions (Biggs et al. 1993; Meservy et al. 1986). Fundamentally, computational modeling makes available a process-oriented theory of how to perform a semi-structured judgment task that requires considerable task-specific knowledge (Bailey et al. 1988; Biggs 1991; Peters 1990, 1993; Wright and Willingham 1997). (2) Audit planning is such a task, so an opportunity exists for a computational model to contribute to effective knowledge management.

A series of crucial risk-oriented judgments is required when an auditor plans an audit (e.g., Koonce 1993; Arens and Loebbecke 1997, 218-228, 330-331). Researchers have addressed how, and how well, auditors perform several of these judgments. One productive approach is to conduct an experiment, concentrate on a single judgment, or an aspect of a single judgment, and investigate chosen cognitive hypotheses concerning judgment outcomes (e.g., Bell and Wright 1995; Libby 1995). Alternatively, more comprehensive computational models that include more than one judgment have been researched, e.g., concentration on the going-concern judgment (Biggs et al. 1993), materiality judgments (Steinbart 1987) and risk assessments (Peters 1990). Since auditors reach several crucial risk-oriented judgments simultaneously during planning of an audit, we report a computational model of audit planning that includes them.

We report new findings concerning judgment expertise, i.e., interdependencies among several of these risk-oriented judgments and new ideas concerning the importance of information cues (see Section V). For example, identification of account balances that may be misstated, and the causes thereof, may be based on going-concern, risk, and materiality judgments. Therefore, we model simultaneously use of knowledge for the judgments and the linkages among the judgments (see Figures 1 and 2). To our knowledge, this is the first study to research simultaneously the entire sequence of crucial audit-planning judgments.


The remainder of the paper is organized as follows. In Section II, we elaborate on computational modeling research and explain its contribution to knowledge management and CPA firm effectiveness. The specifics of the research method are presented in Section III. Section IV presents the details of the model's reasoning and use of information. Several new findings concerning cognitive interdependencies are reported in Section V. Section VI provides results of testing of the model's conclusions. The final section summarizes our findings, particularly concerning cognitive linkages among judgments and the impact of risk factors, as well as limitations of the study and ideas for future research.


Capturing the Reasoning of Experts Using Computational Modeling

Computational modeling results in a detailed judgment process representation that is specific and internally consistent, including likely use of both qualitative and quantitative information. The representation can reflect the actual complexity of audit judgments (Gibbins and Jamal 1993) compared with other methodologies such as a series of experiments or statistical modeling of past judgments. Testing of the model will determine whether the model emulates the expert's judgment process and produces the judgments expected from the modeled expert.

While the potential of computational modeling to reveal judgment processes is substantial, so are the costs (Biggs 1991, 22-27; Biggs et al. 1993, 83; O'Leary 1993; Peters et al. 1989). The method is effective in producing a process theory for complex, demanding judgment tasks. However, given the complexity of the method, the effort required by the researcher is considerable and the process is time-consuming. Also, less control over the data gathering from experts is achieved relative to other knowledge-capturing methodologies, e.g., carefully controlled experiments.

The design of computational models, especially as knowledge management tools, is different from the design of "expert" systems. Expert systems (and decision support systems) provide optimal conclusions using whatever knowledge, analytical, or statistical tools may be helpful (Vinze et al. 1991). In contrast, a computational model is a detailed, authentic representation of one person's reasoning, including any imperfect heuristic mental procedures (Meservy et al. 1986). Such a model must be "true to" the reasoning of the expert; researchers of expert systems are not so constrained. Messier (1990, 104-106) provides additional insights on this distinction.

Computational Modeling as a Knowledge Management Tool

The objective of researching a computational model is to reveal the detailed, contextual reasoning of an expert; the objective of knowledge management is to make this knowledge available to others. Knowledge management practices include making information contextually available from databases, using knowledge repositories (Markus 2001), managing social networks to facilitate awareness of, and access to, specialists (McDonald and Ackerman 1998), and exchanging ideas and work products via enterprise software applications (e.g., Lotus Notes and intranets). Knowledge-oriented decision support systems also facilitate knowledge management and organizational learning (Bhatt and Zaveri 2002). However, these practices do not permit interactive access to the detailed, contextual reasoning of different experts--expertise that, given the composition of an audit team, may not otherwise be available.

Use of computational judgment models (Biggs 1991; Meservy et al. 1986) as knowledge delivery tools provides several advantages. First, each model interactively presents expert reasoning and judgments for each specific client situation, including the information that would be used. The modeled experts would achieve high levels of recognition and differential compensation, providing incentives for them to continue to share their knowledge. Second, an auditor could learn the reasoning via "what if" changes in the client situation, the assumptions made, and/or the information available. In this context, computational models minimize the "psychological cost of asking," i.e., the loss of status and expected reciprocity in the future, when knowledge requests are made (McDonald and Ackerman 1998, 2). Third, the models constitute part of the organization's learning and memory (Huber 1990); auditors may discuss, compare, and critique the reasoning in each model. Bhatt and Zaveri (2002, 298) indicate that "organizational learning is not a simple aggregate sum of individual learning but is an exchange and sharing of individual assumptions and models throughout the organization." CPA firms could use computational models to centralize the desired uses of evidence and knowledge, and to ensure more consistent and appropriate reasoning, more judgment consensus, and fewer errors (Huber 1990, 56-57). Fourth, opportunities for making knowledge available during staff training are apparent.


To acquire the necessary judgment process data for a computational model, Peters et al. (1989, 362-364) recommend five phases: (1) task analysis, (2) exploratory interviews, (3) structured interviews, (4) detailed problem-solving sessions, and (5) final synthesis--with model building occurring throughout the process (cf., Biggs 1991, 13-15; Gibbins and Jamal 1993, 459-462). We followed these five phases with 42 formal sessions averaging three hours each with the expert (see below).

A thorough task analysis can be helpful because "We can often learn a great deal about how problems are solved by considering how they could be solved. That is, a task analysis of problems can provide information about constraints that the nature of the problem imposes on the nature of the problem solver" (Holyoak 1990, 118). We based the task analysis on the research literature (e.g., Biggs et al. 1988; Bedard and Biggs 1991a; Koonce 1993), Statements on Auditing Standards, and auditing textbooks (e.g., Arens and Loebbecke 1997; Messier 1997; Knechel 1998), and reviewed previous research findings. The task analysis revealed important reasoning concepts.

The second phase consisted of a series of exploratory interviews conducted with a highly experienced "expert" audit partner from a Big 4 firm who decided to start teaching auditing. He continues to be very active professionally. For example, he served as an on-site reviewer for CPA firm peer reviews conducted by the Public Oversight Board. As a result of the exploratory interviews, the expert revealed: (1) the amount of judgment process detail needed and (2) different judgment processes that could be used (cf., Gibbins and Jamal 1993; Peters et al. 1989, 364). It also became apparent that focusing only on specific judgments does not capture the interdependencies of the expert's judgments during planning of an audit.

During the third phase, we used structured interviews, an effective and efficient knowledge acquisition technique (Agarwal and Tanniru 1990). The expert reviewed, and suggested changes to, an initial representation of his reasoning. He also considered information and advice from both an auditing professor and a former audit senior. The expert provided his detailed reasoning for each judgment, what information was required, and how this information should be used. We then transformed the initial representation into a more complete model.

Next, we conducted an extensive series of focused problem-solving sessions with the expert. Use of risky audit situations with considerable contextual detail focused the expert's attention on the applicable reasoning. We refined the model to reflect more accurately the expert's contextual reasoning.

In the final phase, we reviewed all of the judgment process information. This included the type of information required for each judgment, different conclusions given different situations, and the linkages among the judgments and the evidence. We discussed and resolved any inconsistencies. In addition, we performed testing to assure that the model's conclusions were consistent with those of the expert (see Section VI below).


An Overview of the Model

The model first generates a representation of the client's situation (Koonce 1993) by applying declarative and procedural knowledge (Anderson 1983). Declarative knowledge, being experience-based and situational, adds contextual meaning to audit evidence. Application of procedural knowledge generates intermediate and final conclusions. Starting with the goal of completing analytical procedures, the model backward chains to obtain contextually implied subgoal conclusions. Active information therefore includes facts, declarative knowledge, direct contextual judgments, and subgoal conclusions (Holland et al. 1986, 41). Frames represent declarative knowledge and productions represent procedural knowledge. (3) The model handles missing information by applying default values suggested by the expert. Consistent with this representation, the expert frequently expressed his reasoning in if-then language (see the final section).

Concerning applicable client situations, the expert suggested that the following challenging auditing situations are manageable given our research objectives. First, the model is designed for a manufacturing client with one major product line, little revenue from services, i.e., less than 10 percent, and annual revenue of $100-$500 million. Second, the client is publicly held and has been audited by the current audit firm for at least one year prior to the year being audited. We did not include new audits because audit procedures can vary significantly for a first-time audit. Third, the client has not completed a major acquisition or disposition during the past three years. A recent acquisition or disposition could result in significant changes in the operations of the client and, therefore, would significantly affect how the audit is performed. Finally, all detailed testing is performed after the year-end numbers are available.

Assessment of the Preliminary Going-Concern Status of the Client

Research on clients' going-concern status has evolved in two directions: professional judgment (e.g., Ho 1994; Choo and Trotman 1991; Asare 1992; Ricchiute 1992; Selfridge et al. 1992; Rosman et al. 1993; Hopwood et al. 1994; Choo 1996) and statistical models (e.g., Levitan and Knoblett 1985; Bell et al. 1990; Knapp 1991). An auditor's judgment performance depends primarily on his or her knowledge (e.g., Biggs et al. 1993; Ho and Keller 1994; Choo 1996). Statistical models yield more accurate predictions of business failure (e.g., Hopwood et al. 1994; Altman and McGough 1974; McKeown et al. 1991; Choo 1996; George and Dugan 1995).

The expert's going-concern and financial health conclusion (subgoal judgment #1; see Figures 1 and 2) is based on both professional judgment and the result of a statistical model. Three phases are involved: (1) interpretation of the client's financial status using Altman's (1968) bankruptcy model (Blocher and Willingham 1985), (2) analysis of cash flow from operations, and (3) analysis of the client's operating performance. (4) First, the expert assigns the client's Z-score of bankruptcy potential to one of four risk categories. Next, the expert makes two direct contextual (versus subgoal) judgments: (1) the extent of significant recurring negative net cash flow from operations, and (2) the extent of significant recurring operating losses (cf., Casey and Bartczak 1985). If the direct judgment is that the client has experienced significant recurring negative cash flow from operations or significant recurring operating losses over the past two years, the model adjusts the assigned financial difficulty risk category one level higher. A numerical value of risk is assigned to each situation conclusion. Also, a link between judgments exists: the expert uses this numerical level of risk later when calculating the level of inherent risk; see Figure 2 and Section V. The expert weights recurring negative cash flow more heavily than recurring operating losses (Casey and Bartczak 1985).
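The category-adjustment step described above can be sketched as follows. This is an illustrative reading, not the expert's implementation: the numeric ordering of the four risk categories (1 = lowest financial difficulty, 4 = highest) and the cap at the highest category are assumptions.

```python
def financial_difficulty_category(z_score_category: int,
                                  recurring_neg_cash_flow: bool,
                                  recurring_op_losses: bool) -> int:
    """Bump the Z-score risk category one level higher when either direct
    judgment flags significant recurring problems over the past two years.
    Category scale (1 lowest risk .. 4 highest) is an assumed ordering."""
    if recurring_neg_cash_flow or recurring_op_losses:
        return min(z_score_category + 1, 4)  # cannot exceed the top category
    return z_score_category
```

The cap reflects that a client already in the highest-risk category cannot be classified as riskier by this adjustment.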

If sufficient evidence exists to suggest substantial doubt about the client's ability to continue as a going concern, the model stops and recommends meeting with client management to inquire about plans for dealing with the adverse conditions and events. If the expert were to conclude that management will be able to mitigate the negative effects, he would restart his analysis and revise the input parameters. However, if substantial doubt remains, the expert would consider appropriate action, e.g., proceed with an applicable audit program or withdraw from the engagement. The conclusion on the client's going-concern status indicates the financial health of the client later in the model.

Assessment of Audit Risks

Concerning statement-level audit risks (AICPA 1983), the expert first evaluates the level of business risk (subgoal judgment #2). Consistent with the auditing literature (Arens and Loebbecke 1997, 259), he uses two direct judgments, i.e., (1) the likelihood that the client will have financial difficulties after the audit report is issued, and (2) the extent to which external users rely on the client's financial statements, with the first factor being twice as important as the second. The level of business risk is used to adjust the input level of acceptable audit risk (AAR) such that a higher level of business risk implies a lower level of AAR (see Brumfield et al. 1983; Steinbart 1987, 108-109):

Adjusted acceptable audit risk = (1 - Business risk) x Input level of acceptable audit risk.
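The adjustment above, together with the 2:1 weighting of the two direct judgments, can be sketched as follows. The paper does not print how the two judgments are combined into a single business-risk number; the weighted average below, and the treatment of each judgment as a probability in [0, 1], are assumptions for illustration.

```python
def adjusted_aar(input_aar: float,
                 financial_difficulty: float,
                 external_reliance: float) -> float:
    """Adjust acceptable audit risk (AAR) for business risk.
    financial_difficulty and external_reliance are the expert's two direct
    judgments on [0, 1]; the first is weighted twice as heavily (assumed
    weighted-average combination)."""
    business_risk = (2 * financial_difficulty + external_reliance) / 3
    return (1 - business_risk) * input_aar  # higher business risk -> lower AAR
```

For example, with an input AAR of 5 percent and both judgments at 0.3, business risk is 0.3 and the adjusted AAR falls to 3.5 percent.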

Concerning a conclusion on the level of planned detection risk (PDR, subgoal judgment #5), although most auditors place more emphasis on control risk than inherent risk (e.g., Houghton and Fogarty 1991), the expert weights these two statement-level risks equally as is also implied in SAS No. 47 (AICPA 1983). The expert assumes that these two risk components of the audit risk model are independent versus inherent risk being contingent on some (perhaps implicit) level of control risk.
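Given the expert's independence assumption, PDR follows from the standard multiplicative audit risk model of SAS No. 47 (AAR = IR x CR x PDR). The paper does not print this formula explicitly, so the rearrangement below is the textbook form, consistent with the equal weighting and independence described above.

```python
def planned_detection_risk(aar: float,
                           inherent_risk: float,
                           control_risk: float) -> float:
    """Solve the audit risk model AAR = IR x CR x PDR for PDR, treating
    inherent and control risk as independent and equally weighted, as the
    modeled expert does."""
    return aar / (inherent_risk * control_risk)
```

For instance, with AAR of .05, inherent risk of 1.0, and control risk of .5, planned detection risk is .10.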

To assess the levels of inherent and control risk (subgoal judgments #3 and #4), the expert evaluates risk factors implied by the client's industry, business and management attributes (cf., Dhar et al. 1988; Peters 1990; Peters et al. 1989; Biggs et al. 1993). The risk factors are listed in Table 1. Research suggests that auditors perceive, and respond to, use of semantic categories better than numerical scales (Peters 1990). However, in order to compute a numerical level of planned detection risk, the expert transforms each of the four risk categories per factor into numerical values.

Application of the Concept of Materiality

SAS No. 39 and SAS No. 47 (AICPA 1981, 1983) require auditors to establish, first, a level of materiality at the statement level and, then, allocate this materiality to individual accounts. The expert performs a three-step process (also see Williams and Ricchiute 1987; Zuber et al. 1983): (1) judgment of preliminary materiality at the statement level, (2) adjustment of statement-level materiality with the conclusions on statement-level inherent and control risk, and (3) allocation of the adjusted statement-level materiality to individual account balances.

Preliminary Statement-Level Materiality

While AAR and PDR are measures of the likelihood of a misstatement, materiality determines the magnitude of a significant misstatement. The expert uses expected revenue as the base number instead of net income because of its relative stability over time (cf., Moriarity and Barron 1979; Steinbart 1987, 99-100). Also, use of net income would be inappropriate when the level approaches the break-even point or the client has experienced a loss. The expert uses expected revenue instead of the client's reported revenue to avoid any potential "anchoring" bias (see Kinney and Uecker 1982).

The percentage applied by the expert to the expected revenue number varies given the magnitude of the expected revenue (Messier 1997, 77). The expert uses the materiality multiplier table included in the AICPA (1996) audit sampling guide; the percentages vary by the size of the materiality base. For clients with expected revenue over $100 million, a materiality multiplier of .003 is suggested.

Preliminary statement-level materiality = .003 x Expected revenues.

The expert explained use of 0.3 percent of expected revenue by reasoning that, given the average net profit margin of manufacturing companies is approximately 6-7 percent, 0.3 percent of expected revenue would be equivalent to approximately 5 percent of net income, another commonly used measure of preliminary statement-level materiality. Finally, the expected revenue judgment is based upon historical client data and industry-specific information (subgoal #6, see below).
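The preliminary materiality formula above, including the expert's 5-percent-of-net-income sanity check, can be expressed directly:

```python
def preliminary_materiality(expected_revenue: float) -> float:
    """Preliminary statement-level materiality: 0.3 percent of expected
    revenue, per the AICPA (1996) multiplier for bases over $100 million
    (the model's client-size range)."""
    return 0.003 * expected_revenue

# Sanity check the expert described: at a ~6.5 percent net profit margin,
# 0.3 percent of revenue is roughly 5 percent of net income
# (0.05 x 0.065 = 0.00325, close to the .003 multiplier).
```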

Adjusted Statement-Level Materiality

Next, the expert adjusts the preliminary statement-level materiality (subgoal #6) given levels of risk factors such as the nature of the client's business, past audit history, and/or its current financial condition (cf., Steinbart's eight factors (1987, 105-106)). For example, statement-level materiality would be decreased if the client's business is very competitive, there is evidence during the previous year's audit of debt covenants being violated, and/or if several misstatements were discovered during the prior year audit.

The expert eventually concluded that, instead of addressing each relevant risk factor (as is done in Steinbart's (1987) model), he would adjust the preliminary statement materiality with his subgoal judgments of general (statement-level) inherent and control risk (yielding subgoal #7); e.g., statement-level materiality would be decreased if the client's inherent risk is relatively high. Moreover, consistent with the conservatism principle, only downward adjustments are made, i.e., no upward adjustment would be made given a low-risk conclusion. Note that adjusting statement-level materiality based on an assessed level of risk is consistent with Steinbart's (1987) model of materiality judgments.

The expert adjusts preliminary statement-level materiality for risk using, essentially, a nonlinear relationship. We modeled his judgment process as follows (see Figure 3). When inherent and control risk levels are very low (i.e., IR x CR < Z1), no adjustment is made. However, when Z1 < IR x CR < Z2, a downward adjustment is made, first at an increasing rate and then at a decreasing rate. In addition, when inherent and control risks are very high (IR x CR > Z2), materiality is set at the lowest possible level, implying maximum testing procedures.
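One way to realize this piecewise, S-shaped downward adjustment is sketched below. The paper gives only the qualitative shape (flat, then increasing-then-decreasing descent, then a floor), so the Z1/Z2 threshold values, the floor fraction, and the smoothstep interpolant are all illustrative assumptions, not the expert's parameters.

```python
def adjusted_materiality(prelim_m: float, ir: float, cr: float,
                         z1: float = 0.2, z2: float = 0.8,
                         floor_fraction: float = 0.25) -> float:
    """Adjust statement-level materiality downward as IR x CR rises.
    z1, z2, and floor_fraction are assumed parameters for illustration."""
    risk = ir * cr
    if risk <= z1:
        return prelim_m                    # very low risk: no adjustment
    if risk >= z2:
        return floor_fraction * prelim_m   # very high risk: lowest level
    t = (risk - z1) / (z2 - z1)
    s = t * t * (3 - 2 * t)                # decline: increasing then decreasing rate
    return prelim_m * (1 - s * (1 - floor_fraction))
```

Only downward movement is possible by construction, matching the conservatism principle noted above.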


Allocation of Statement-Level Materiality to Account Balances

The expert allocates the adjusted statement-level materiality to the applicable account balances to provide a level of tolerable misstatement (subgoal #8). He independently allocates the same magnitude of statement-level materiality to income statement items and to balance sheet items (cf., Williams and Ricchiute 1987). The expert distinguishes among accounts when he allocates account-level materiality:

1. Set account-level materiality to zero for the account balances that will be audited 100 percent at a reasonable cost, e.g., common stock, long-term debt or wages payable, and certain long-term assets for the balance sheet items; and depreciation expense, interest expense, and income tax expense for the income statement items.

2. Using Zuber et al.'s (1983) procedure, allocate the adjusted statement-level materiality to the remaining individual account balances as follows: (5)

Account-level materiality = Adjusted statement-level materiality x sqrt(Individual account balance / Total account balances)
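The two-step allocation above can be sketched as follows; the account names are illustrative, and accounts audited 100 percent are simply excluded from the balances passed in (their materiality is set to zero per step 1).

```python
import math

def allocate_materiality(adjusted_m: float,
                         balances: dict) -> dict:
    """Allocate adjusted statement-level materiality across the accounts
    not audited 100 percent, using the square-root rule printed above
    (Zuber et al. 1983). balances maps account name -> account balance."""
    total = sum(balances.values())
    return {name: adjusted_m * math.sqrt(bal / total)
            for name, bal in balances.items()}
```

Note that the square-root rule deliberately allocates amounts whose sum exceeds statement-level materiality, since misstatements across accounts are unlikely to all run in the same direction.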

Application of Analytical Procedures

The expert applies analytical procedures by first developing expected client balances (subgoal #9), which are then compared with the client's reported balances. Using the account-level materiality amounts (subgoal #8), he identifies any significant unusual fluctuations. Then, the expert considers what might have caused the significant fluctuations given both the general and, obtained at this point as necessary, transaction-cycle-specific inherent and control risk factors.

The expert estimates an account balance by applying his knowledge of the client's industry and of the client. (6) Specifically, he adjusts last year's audited account balance given two weighted direct judgments, one based on industry-specific information and one based on client-specific information. The parameters include an overall understanding of the client's business, changes in demand and the price of a certain product, introduction (or discontinuance) of a certain product, gain (or loss) of major customers, etc. The expert provided a procedure for each account (subgoal #9). For example, consider the sales account:

Expected Sales = (1 + X_sales) x Last Year's Sales

where:

X_sales = alpha x (expert's estimate of the percent change in industry sales) + beta x (expert's estimate of the client's percent change in sales);

alpha_sales = .33;

beta_sales = .67.

The expert varies the relative importance weights alpha and beta account-by-account given economic indicators and the variability of the account balance over time. For example, for estimation of the client's sales number, the client-specific direct judgment is twice as important as the direct industry judgment. When estimating cost of goods sold, the expert weights his client judgment as four times as important (.80 versus .20). Client information is also important when estimating an account with high variability such as research and development expense (his weights are .85 versus .15).
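The expectation procedure and the per-account weights the expert reported can be expressed together; the generalization of the printed sales formula to other accounts follows the paper's description that the expert provided an analogous procedure for each account.

```python
def expected_balance(last_year: float, industry_pct_change: float,
                     client_pct_change: float,
                     alpha: float, beta: float) -> float:
    """Adjust last year's audited balance by a weighted blend of the
    industry-based and client-based percent-change direct judgments."""
    x = alpha * industry_pct_change + beta * client_pct_change
    return (1 + x) * last_year

# Weights the expert reported (alpha = industry judgment, beta = client judgment):
WEIGHTS = {
    "sales": (0.33, 0.67),               # client judgment twice as important
    "cost_of_goods_sold": (0.20, 0.80),  # client judgment four times as important
    "r_and_d_expense": (0.15, 0.85),     # high-variability account
}
```

For example, with last year's sales of $100 million, a +3 percent industry estimate, and a +6 percent client estimate, expected sales are (1 + .33 x .03 + .67 x .06) x $100 million, about $105.01 million.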

Patterns of deviations from expectations are used by auditors to generate causal hypotheses (Biggs et al. 1988; Bedard and Biggs 1991a, 1991b). Using his expected client balances and his account-level materiality judgments, the expert identifies significant unusual fluctuations including their direction, i.e., overstatement or understatement, and magnitude. He also considers the fact that an identified fluctuation in one account may lead to a discovery of a fluctuation in a related account (e.g., sales and gross accounts receivable). However, the expert excludes: (1) financial statement amounts that are considered to be residuals or subtotals (e.g., gross profit, operating income, net income, total assets), and (2) accounts that the expert would audit completely (e.g., common stock, long-term debt, retained earnings).

Inference of Causes of Unusual Fluctuations

According to Peters et al. (1989, 363), identification of critical cues is the first step in the causal hypothesis generation process. Consistent with such identification, the sequence of activities in the expert's (and the model's) causal reasoning process is as follows. When a significant unusual fluctuation, or a pattern of fluctuations, between the expected and reported client balances is detected (see Figures 1 and 2, judgment #10), the model infers the most likely cause of the fluctuation(s) based on a subset of the general inherent and control risk factors (see Table 2 and Table 1), plus now requested transaction-cycle-specific risk factors. The model then suggests the likelihood that the detected significant unusual fluctuations are caused by each of the model's seven causal hypotheses (see below). The identified fluctuations and the salient general and cycle-specific risk factors are the antecedent conditions: the conclusions are the likelihoods of the hypotheses. This pattern-recognition processing is consistent with findings reported by Biggs and his colleagues (Bedard and Biggs 1991a, 1991b; Biggs et al. 1989). For example, Bedard and Biggs (1991b) suggest that a precondition for correct causal hypothesis generation is an auditor's ability to recognize the full pattern of fluctuations and critical cues.

Consistent with previous findings on hypothesis evaluation (Wright 1993; Biggs et al. 1988), the expert evaluates multiple causal hypotheses simultaneously. He indicated a set of seven causal hypotheses for unusual fluctuations. (7) The selection criteria were that all major feasible explanations are included and each hypothesis is distinctive in terms of the underlying motivation and its subsequent audit implications (e.g., further audit procedures required). Compared to considering a subset of the hypotheses, focusing on all possible causes may diminish, if not eliminate, any potential interference effects (Bell and Wright 1995, 134-135). (8) The seven possible causes are:

Cause 1: Earnings management or fraudulent reporting by executive management without defalcation;

Cause 2: Misappropriation of assets by executive management with or without an attempt to conceal the misappropriation;

Cause 3: Earnings management or fraudulent reporting by manager(s) or professional(s) without defalcation;

Cause 4: Misappropriation of assets by manager(s) or professional(s) with or without an attempt to conceal the misappropriation;

Cause 5: Judgment errors by executive(s), manager(s) or professional(s);

Cause 6: Ineffective operation of the client's accounting information system caused by either personnel- or system-related problems;

Cause 7: Unexpected economic conditions (nonerror).

Although Causes 1 and 3 are similar, the expert separated these Causes because of the different motivations of alternative levels of management to manipulate the account balances. Similar reasoning resulted in separation of Cause 2 from Cause 4.

Base Rate Probabilities of Causes

Auditors acquire frequency knowledge of the occurrence of errors and causes thereof (Butt 1988; Ashton 1991). The expert provided base rate probabilities for each of the seven causes that would explain any (1) significant overstatements and (2) understatements of the client's revenue, gross accounts receivable, and allowance account balances. The most likely cause of significant revenue and accounts receivable overstatements is unexpected economic conditions that were not incorporated in the auditor's client balance predictions (subgoal #9); therefore, Cause 7 has a base rate probability of .777. The most likely error cause that would explain overstated revenue is earnings management or fraudulent reporting performed by executive management versus midlevel management, resulting in base rate probabilities of .12 for Cause 1 and .03 for Cause 3. Within Cause 1, the expert further differentiated earnings management, a legal but perhaps unethical act, from fraudulent reporting, an illegal act. The expert uses a 3:1 likelihood ratio for these two behaviors; however, the model combines them given their similar audit implications.

Concerning Cause 2 and Cause 4, causes resulting from misappropriation of assets, the expert pointed out that thefts would be associated with understated (not overstated) revenue and accounts receivable; therefore, their probabilities given overstatements should be essentially zero. Also, manufacturing companies typically use simple point-of-sale revenue recognition: it is unlikely that judgment error would result in an unusual fluctuation (overstatements or understatements); thus, the base rate probability of Cause 5 is also extremely low. In addition, the expert indicated a base rate probability of .07 that Cause 6 (i.e., ineffective operation of the client's information system) would explain overstated revenue and accounts receivable.

The expert indicated similar reasoning for significantly understated revenue and gross accounts receivable. The expert assigned the highest base rate probability of .813 to Cause 7, i.e., unexpected economic conditions. The most likely error cause of understated revenue and accounts receivable is "earnings management or fraudulent reporting by executive management" (Cause 1) with a base rate probability of .10, e.g., when executive management has an incentive to "move" earnings to the following year. The next most likely error cause is Cause 6, i.e., ineffective operation of the client's information system, with a base rate probability of .07. The expert assigned very low probabilities to the other four causes, for a total base rate of less than .02.

Concerning the allowance for doubtful accounts balance, as is the case for understated revenues and accounts receivable, the most likely cause of a significantly understated allowance account is unexpected economic conditions (i.e., Cause 7) with the highest probability of .67. The highest error probability is earnings management or fraudulent reporting by executive management (i.e., Cause 1) with a .20 probability. Next, an understated allowance account could be due to judgment errors by the client's executives, managers, or professionals (i.e., Cause 5) with a base rate probability of .10. The other causes are extremely unlikely.
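The base rates described in this section can be collected into simple probability tables. The following Python sketch records only the probabilities quoted above; entries the expert did not quantify explicitly are filled with the residual mass so that each distribution sums to 1 (an illustrative assumption on our part, not an elicited value):

```python
# Base rate probabilities for the seven candidate causes, as quoted in the
# text. Causes without an explicitly stated probability receive equal shares
# of the residual mass (an assumption made here so each table sums to 1).

OVERSTATED_REV_AR = {
    1: 0.12,    # earnings management / fraudulent reporting by executives
    3: 0.03,    # same behavior by midlevel management
    6: 0.07,    # ineffective operation of the information system
    7: 0.777,   # unexpected economic conditions (nonerror cause)
}
OVERSTATED_REV_AR.update({c: 0.003 / 3 for c in (2, 4, 5)})  # residual

UNDERSTATED_REV_AR = {
    1: 0.10,
    6: 0.07,
    7: 0.813,
}
UNDERSTATED_REV_AR.update({c: 0.017 / 4 for c in (2, 3, 4, 5)})  # residual

UNDERSTATED_ALLOWANCE = {
    1: 0.20,    # earnings management / fraudulent reporting
    5: 0.10,    # judgment errors by executives, managers, professionals
    7: 0.67,    # unexpected economic conditions
}
UNDERSTATED_ALLOWANCE.update({c: 0.03 / 4 for c in (2, 3, 4, 6)})  # residual

# Each table is a proper probability distribution over the seven causes.
for table in (OVERSTATED_REV_AR, UNDERSTATED_REV_AR, UNDERSTATED_ALLOWANCE):
    assert abs(sum(table.values()) - 1.0) < 1e-9
```

Such a table makes the dominance of the nonerror cause (Cause 7) explicit, which matters later when conditional revisions away from these base rates are used as an evaluation criterion.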

Audit-Planning Recommendations

Based on the relative likelihoods of the causes of any detected significant fluctuations and the pattern of risk factors, the model provides recommendations on the nature, extent, and timing of audit procedures. The expert provided recommendations as departures from a "standard" audit program, i.e., the audit plan that would have been implemented if no significant fluctuations had been detected and no critical risk factors had been flagged (Arens and Loebbecke 1997, 191).

The model recommends what should be done to confirm or disconfirm the causal hypotheses, particularly when an irregularity or an error-oriented cause is indicated. For example, assume that the model suggests a significant overstatement of the gross accounts receivable balance and, based on the auditor's assessment of the inherent and control risk factors, the model indicates a relatively high likelihood of earnings management or fraudulent reporting by executive management. Consequently, the model will recommend sending confirmations (i.e., nature) in a larger quantity than usual (i.e., extent) and closer to year-end (i.e., timing).
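The recommendation logic illustrated by this example is rule-based: a likely cause and the direction of the fluctuation map to departures from the standard program on nature, extent, and timing. A minimal, hypothetical sketch of one such rule (the function name, labels, and the default branch are ours, not taken from the model):

```python
# Hypothetical rule-based sketch of the nature/extent/timing recommendation
# described in the text. Only the executive-fraud / overstated-receivables
# rule is taken from the paper's example; everything else is illustrative.

def plan_departure(likely_cause, fluctuation):
    """Map a likely cause and fluctuation direction to audit-plan departures."""
    if likely_cause == "executive_fraud" and fluctuation == "AR_overstated":
        return {
            "nature": "positive confirmations",        # what procedure
            "extent": "larger sample than standard",   # how much evidence
            "timing": "at or near year-end",           # when to perform it
        }
    # Default: no critical risk factors flagged -> standard audit program.
    return {"nature": "standard", "extent": "standard", "timing": "interim"}
```

A realistic version would of course carry many such rules, one per cause-and-direction combination, each expressed as a departure from the standard program.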


Complex Relationships between Judgments, and the Impact of Evidence

The detailed process-oriented research conducted to generate the computational model revealed several new findings concerning use of auditing knowledge. In particular, described in this section are several interdependencies among levels of subgoal judgments, subtle yet pervasive impacts of levels of conclusions, and a highly contextual view of cue diagnosticity.

Interdependent Linkages among Levels of Subgoal Judgments

The expert's hypothesis probability conclusions for detected fluctuations depend on the levels of the interdependent subgoal judgments; lower-level subgoal judgments become antecedents for, and may affect, subsequent higher-level judgments (see Figure 2). For example, the expert uses his conclusion on the client's preliminary going-concern status, a measure of financial health (i.e., subgoal conclusion #1, see Figure 2) when he assesses the level of the client's inherent risk (i.e., subgoal conclusion #3). His business risk conclusion (subgoal #2) also becomes a determinant of overall inherent risk. Another example is that conclusions on the client's inherent and control risks are used to adjust the level of preliminary statement-level materiality, providing a link between subgoal judgments #3 and #4, and judgment #7 (cf., Steinbart 1987). Finally, when concluding on the likelihoods of causes of any detected financial fluctuations (subgoal judgment # 11), the expert typically uses several subgoal conclusions including his going concern judgment (i.e., subgoal conclusion # 1), inherent and control risk assessments (i.e., subgoal conclusions #3 and #4), detected significant financial fluctuations (i.e., subgoal conclusion #10, which is a product of subgoal #8 [account-level materiality] and subgoal #9 [development of expected account balances]), and his general and transaction-cycle-specific risk assessments (see Figures 1 and 2, and Table 2).

This structure of linkages among the subgoal judgments reveals the subtlety of how a meaningful change in a lower-level subgoal judgment can permeate subsequent subgoal conclusions. For example, a change in the auditor's conclusion on the client's going concern/financial health status would affect the auditor's inherent risk conclusion, materiality conclusions, and, most importantly, the probability assessments for possible causes of detected fluctuations (see Figure 2). Therefore, the impact and importance of a subgoal judgment should be considered in terms of its total impact on all subsequent judgments.

Evidence May Impact Multiple Subgoal Judgments

The expert uses certain evidence items and direct judgments for multiple subgoal conclusions. Therefore, in terms of the total impact of such a cue, the idea of cue diagnosticity (e.g., Trotman and Sng 1989) takes on a different and broader cumulative meaning. Instead of the impact of different values of a cue on a specific judgment, now the issue becomes the cumulative impact of different cue values on subgoal judgments and the eventual causal hypothesis conclusions. For example, the expert considers three characteristics of the client's executive management, i.e., the extent of management's expressed concern about managing the company's earnings, the extent to which executive management is dominant, and management's inclination toward use of "liberal" accounting methods (see Tables 1 and 2) when he assesses the client's inherent risk and when he infers the most likely cause of any detected financial fluctuations. Similarly, he considers evidence about the knowledge and competence of accounting personnel when he evaluates the overall level of control risk, as well as when he infers the probability of causes of detected fluctuations. Also, he considers industry-related information (e.g., industry sales growth and the client's overall performance compared to the industry peers) in several subgoal judgments including the preliminary going-concern conclusion, the overall inherent risk assessment, and development of expected client account balances. In sum, in addition to the linkages among subgoal conclusions, these examples demonstrate evidence effect linkages across subgoal conclusions and their cumulative impact.

Highly Contextual Weighting of Cues within Single Judgments

The two preceding subsections address, respectively, the upward impact of linked subgoal judgments and the impact of evidence items on multiple subgoal conclusions. Also of interest are the highly contextual weighting and combining of cue values that the model reveals, as well as nonlinear cue usage.

The expert revealed that: (1) different cue values and (2) different contextual patterns of other cues result in very different effects. A single cue can be highly diagnostic at one value and much less diagnostic at another in terms of a subgoal conclusion. Therefore, a highly diagnostic cue value can have a large impact on several subgoal conclusions. An example is a very negative result ("bad") on the going-concern/financial health subgoal judgment (#1) and its large effect on the inherent risk (#3) and relative causal probability conclusions (#11)--see Tables 1 and 2. The other increasingly positive levels of the going-concern conclusion have a decreasing impact on the other judgments.

A cue can be minimally or highly diagnostic, contingent on the levels of other cues, or patterns of other cue values. Consider the three risk-oriented characteristics of executive management listed in Table 2 (and Table 1). The probability of Cause 1 as the explanation for an unexpectedly high revenue number increases significantly when all three factors are at the highest risk level; alternatively, when "earnings management" and "domination" are at lower levels of risk, the Cause 1 probabilities are much lower.

A third interesting aspect of contextual cue usage is highly nonlinear combining of cue values into a subgoal judgment. Contextual cue interactions are implied by items 1 and 2 above. In addition, recall the expert's use of the product of inherent and control risk levels when he concludes on the level of adjusted preliminary statement-level materiality (see the comments above that relate to subgoal #7). We present his nonlinear use of the risk conclusions as Figure 3.
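The multiplicative, and therefore nonlinear, combining of inherent and control risk can be sketched as follows. The numeric risk scores and the adjustment schedule are illustrative assumptions on our part, not values elicited from the expert:

```python
# Hedged sketch of the nonlinear risk combination described in the text:
# the expert uses the PRODUCT of inherent and control risk (rather than,
# say, their average) when adjusting preliminary statement-level
# materiality. The scores and the 0.5 scaling factor are illustrative.

RISK_SCORE = {"low": 0.25, "moderate": 0.50, "high": 0.75, "very high": 1.00}

def adjusted_materiality(preliminary, inherent_risk, control_risk):
    """Scale preliminary materiality down as the IR x CR product rises."""
    combined = RISK_SCORE[inherent_risk] * RISK_SCORE[control_risk]
    # Multiplicative combining is nonlinear in the two inputs: a one-step
    # change in one factor matters far more when the other factor is high.
    return preliminary * (1.0 - 0.5 * combined)  # illustrative schedule

# The same one-step increase in control risk has a larger dollar effect
# when inherent risk is "very high" than when it is "low".
low_ir = (adjusted_materiality(100_000, "low", "high")
          - adjusted_materiality(100_000, "low", "very high"))
high_ir = (adjusted_materiality(100_000, "very high", "high")
           - adjusted_materiality(100_000, "very high", "very high"))
assert high_ir > low_ir
```

The interaction shown by the final assertion is exactly what a linear (additive) combination of the two risk conclusions could not produce.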


Model Evaluation Objectives and Materials

It is important to test computational models to determine whether appropriate conclusions are being reached (Lewis 1993). (9) First, we test whether the modeled expert's causal judgments are consistent with known causes, i.e., the appropriateness of the expert's judgments (O'Keefe and O'Leary 1993). This test addresses the ecological validity of the expert's knowledge given actual audit situations. Second, we test for the validity of the model's conclusions, i.e., does the model generate the same causal conclusions as would the modeled expert?

We applied two criteria to select the nine contextually different cases--see the Appendix. First, each case must be based on an actual audit situation, with a known cause (or causes) of significant unusual fluctuations, particularly for the revenue and gross accounts receivable balances. Second, each case must contain sufficient descriptive information concerning the audit situation, e.g., concerning evidence for determination of critical inherent and control risk factors, such that the expert can reach conclusions and the model can be applied (see Table 2).

The nine cases (10) reflect the following situations (see Tables 3 and 4): (1) the detected unusual fluctuations are due to unexpected economic conditions (one case--B1); (2) the fluctuations are intentional and due to earnings management or fraudulent reporting by executive management (four cases--C1-C4); (3) the fluctuations are unintentional and are due to ineffective operation of the client's accounting information system caused either by system- or personnel-related problems (two cases--D1 and D2); or (4) both (2) and (3) (two cases--E1 and E2). We obtained the cases from auditing textbooks and casebooks, and publications from auditing-related professionals (e.g., Association of Certified Fraud Examiners). Six different manufacturing industries are represented, i.e., toys (1), disk drives (1), drugs (1), parts for other manufacturers (2), household products (2), and chemical products (2).

For each case, the expert assessed the probability that the detected significant unusual fluctuations for the revenue and gross accounts receivable account balances were due to each of the four possible causes. The seven causes in the model (see Section IV) were simplified by combining Causes 2, 3, 4 and 5 into one cause. (11) Therefore, the expert was presented with four possible causal explanations, i.e., Causes 1, 6, 2-5 and 7 presented as Causes 1-4, respectively. The nine cases were presented in a random order. The expert evaluated the nine cases in 1999 and in May 2003; the later probabilities are used for current testing of the model (see the comments in Section VII).

The Expert's Causal Judgments and the Actual Causes

Is the modeled auditor really an expert? We compare his four causal probability judgments with the known cause(s) for each of the nine cases (see Table 3). Two evaluation criteria are employed: (1) the highest probability assigned by the expert to a cause, and (2) his largest conditional probability revision relative to his base rates for the four causes. In addition, when two causes prevail, we include the expert's two highest causal probabilities.
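The two evaluation criteria can be expressed as simple functions over a case's causal probability distribution; the probabilities in the example below are hypothetical, not values drawn from Table 3:

```python
# The two criteria used to evaluate the expert's causal judgments.
# Cause labels and the example numbers are illustrative.

def most_likely_cause(probs):
    """Criterion 1: the cause receiving the highest assessed probability."""
    return max(probs, key=probs.get)

def largest_revision(probs, base_rates):
    """Criterion 2: the cause with the largest upward revision from its base rate."""
    return max(probs, key=lambda c: probs[c] - base_rates[c])

# Hypothetical case: base rates favor the nonerror cause 4, but the expert's
# conditional judgment shifts probability mass toward error cause 1.
base   = {1: 0.12, 2: 0.05, 3: 0.07, 4: 0.76}
judged = {1: 0.55, 2: 0.05, 3: 0.05, 4: 0.35}

assert most_likely_cause(judged) == 1
assert largest_revision(judged, base) == 1   # +0.43 revision for cause 1
```

Note that the two criteria can disagree: a cause with a modest absolute probability can still show the largest revision when its base rate is very low, which is why both criteria are reported.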

The modeled auditor displays considerable judgment expertise. His most likely causal judgment(s) coincide with the correct cause(s) in eight of the nine cases, yielding an accuracy rate of 88 percent (see columns two and three, Table 3). The exception is case B1, an audit situation for which the nonerror explanation, Cause 4 (i.e., Cause 7), is the correct answer; the expert assigned the highest probability to Cause 1 (and the second highest probability to Cause 4).

Applying the second criterion of the largest conditional probability revision, the modeled auditor again achieves an accuracy rate of 88 percent (see column four, Table 3). Again, the exception is case B1, where the expert's largest error probability revision is for Cause 1; a larger revision did occur for nonerror Cause 4, but the sign of that probability revision is negative even though Cause 4 is the correct cause (see below).

For eight of the nine audit cases, including all of the error-oriented situations, both the expert's causal probabilities and his probability revisions are consistent with the correct cause(s). Consideration of the magnitudes of his judgments further supports the judgment expertise of the modeled auditor. Among the four cases for which Cause 1, i.e., "earnings management or fraudulent reporting by the client's executive management," is the correct answer (cases C1-C4), his probability revision is large and consistent for all four cases (see Panel A, Figure 4). Similar results occur for cases D1-D2, for which Cause 2, i.e., the same as Cause 6 of the original seven, "ineffective operation of the client's information system," is correct (see Panel B, Figure 4). For the two cases for which both Cause 1 and Cause 2 are the correct explanations (E1-E2), the expert reports both relatively large probabilities and probability revisions for both causes (see Panel C, Figure 4).


The Model's versus the Modeled Expert's Causal Conclusions

Given that the modeled auditor demonstrated his credibility with a high level of judgment accuracy--he consistently assigned the highest probability and largest probability revision to the correct cause(s) (see Figure 4)--the question is whether the model will validly produce essentially the same conclusions as would the expert. Results reported in Tables 3 and 4 indicate that the model achieves this goal.

Based on the four possible causes for the significantly overstated revenue and gross accounts receivable amounts, we employ two criteria to evaluate the model's judgment performance: (1) Does the model consistently indicate the highest causal probability for the correct cause(s) as does the expert? (2) Do the model's conditional probability distributions for the potential causes correspond with the expert's conditional probabilities?

First, if the same cause receives the highest probability from both the expert and the model, the two sources are considered to be in agreement. Using the base rates specified by the expert, the model suggests that "unexpected economic conditions" (i.e., Cause 4/7) is the most probable cause of the significant unusual fluctuations for every case because of its high base rate probability. Given our primary concern about potential problems when errors exist (and Cause 4/7 is the nonerror cause), when applicable, we emphasize the error cause with the highest probability (see columns 2-6, Table 3). Since the modeled auditor assigned a higher probability to an error-oriented versus a nonerror cause for all of the nine cases, error-oriented probabilities will be used for both the model and the expert in this comparison.

Results reported in Table 3 reveal that the model does indeed capture the reasoning of the expert. Based on the highest error-oriented causal probability criterion, the model's causal conclusion is the same as the expert's for eight of the nine cases (see columns 3 and 5). The expert, however, typically assigned a higher probability than did the model. Identical results occur for the deviations from the base rates (see columns 4 and 6, Table 3). The exception is again case B1, the nonerror case, for which the model indicates the correct Cause 4 while the expert assigned the highest probability to Cause 1.

Second, in addition to the most probable (error-oriented) cause, we also simultaneously compare the conditional probability distributions of the model and the expert for each of the nine cases. Parametric and nonparametric correlations for both the four cause probabilities and the four deviations from the (same) base rates, as well as the three error-oriented probabilities, for the model and the expert are reported in Table 4. The results at first are perplexing. For the four causal probabilities, the Pearson correlations are relatively low in magnitude while the Spearman correlations are generally high, with a few exceptions (see columns 2 and 3, Table 4). Examination of the scatter plots of the probability values indicates why: using the base rates assigned by the expert (in 1999), the expert (in 2003) revises his Cause 4 nonerror probabilities considerably such that they are much lower than those produced by the model (see Section VII). Equivalently, the expert revises his error-oriented causal probabilities to be much higher than does the model, especially for Cause 1, which is "earnings management or fraudulent reporting by executive management." These effects are clearly indicated visually in Figure 4; they also become apparent when just the three error-oriented causal probability distributions are considered for the model and the expert. Both the Pearson and Spearman correlations increase dramatically and reveal very high associations (columns 4 and 5 in Table 4) for the three error-oriented causes (although the magnitudes of the expert's probabilities are larger). Also, when the conditional probability revision magnitudes are considered for all four causes, a very high degree of agreement is revealed for the expert and the model (columns 6 and 7 in Table 4).
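The pattern just described, rank orderings that agree while magnitudes diverge, can be reproduced with a small self-contained sketch (pure Python; the probability vectors below are illustrative, not values from Table 4):

```python
# Pearson and Spearman correlation, implemented directly so the sketch has
# no third-party dependencies. The rank helper assumes distinct values
# (no tie handling), which suffices for this illustration.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))   # Spearman = Pearson on ranks

# Illustrative four-cause distributions: the expert shifts mass toward the
# error causes (1-3) relative to the model, so rank order largely agrees
# while magnitudes diverge.
model  = [0.20, 0.05, 0.09, 0.66]   # causes 1, 2, 3, 4 (nonerror)
expert = [0.55, 0.08, 0.12, 0.25]

assert spearman(model, expert) > pearson(model, expert)
# Restricting to the three error-oriented causes removes the divergent
# nonerror cause, and the Pearson correlation becomes very high:
assert pearson(model[:3], expert[:3]) > 0.9
```

The second assertion mirrors the paper's finding that the correlations "increase dramatically" once the nonerror cause, the locus of the disagreement, is excluded.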


Summary and Importance of Findings

We present a process model of how several critical audit judgments are made during planning of an audit and why significant deviations from what the auditor expected may have occurred (see Figures 1 and 2). Revealed by the model are the knowledge elements and evidence used, and how they are used, to reach conclusions. New findings include interdependencies being revealed among judgments and forms of evidence, and highly contextual nonlinear cue usage (see Section V). Complex client situations where substantial audit risk prevails are emphasized. Model evaluation results indicate that the model produces judgments similar to those of the modeled expert--and the judgments are consistent with the correct environmental outcomes (see Section VI).

Additional process information provided by the expert during analysis of audit cases confirms the consistency of his reasoning with that of the model. First, he evaluated the overall financial situation of the client. Second, as he read through the case, he pointed out key information and risk factors; he focused more on client-specific information versus relative industry comparisons. Third, while assigning a probability to each of the four possible causes of significant unusual account balance fluctuations, the expert mentioned risk factors associated with each cause and noted the level of risk of those factors. On several occasions the expert reasoned using the phrase, "Because of (such and such conditions), I think (certain conclusions were drawn)." This line of reasoning is consistent with the if-then representation used for the model.

The validity of the model is supported by several considerations. First, as suggested by Peters et al. (1989), we used a five-step process of knowledge elicitation and representation and included over 125 hours with an audit partner of a Big 5 firm (i.e., the "expert"). Second, we employed proven knowledge representation concepts: a frame-based structure to represent declarative knowledge (i.e., factual knowledge), and production rules to represent procedural knowledge (i.e., knowledge of how to reason). Third, the model provides an integrated sequence of subgoal and final conclusions that is consistent with audit practice (Arens and Loebbecke 1997, 205-214, 314; Messier 1997, 149-157). Finally, and most important, evaluation of the model (see Tables 3 and 4) indicates that the model's causal conclusions and its reasoning are consistent with those of the modeled expert.

Computational models of reasoning and use of evidence are advocated here as a means of providing and managing knowledge, an auditing firm's intellectual capital. For the same complex task or a series of interrelated complex tasks (the case here), one would expect differences in judgment processes and judgments. The benefit of computational models as knowledge-sharing tools is apparent: each distributed model would make available the reasoning of a highly experienced auditor--an auditor could study the models, interact with a model given the details from a specific client situation, and conclude what judgment is most applicable in a client situation--while learning the applicable reasoning from the models. This aspect also makes clear the distinction between a computational model and an expert system: a computational model preserves the judgment processes of a single auditor; designers of expert systems usually "combine" insights from different experts (cf., Steinbart 1987, 11).


We assume that the modeled auditor is an "expert." His experience, credentials, and current activities suggest that he is highly accomplished. The model evaluation results confirm the accuracy of his judgments (see Section VI and Tables 3 and 4). But one can always question whether an accomplished professional possesses sufficient knowledge to be treated as being an expert.

Another limitation is our use of (types of) interviews to elicit the judgment process data (see Section III). The expert rapidly became comfortable with the approach and he provided very convincing evidence. For each of the 42 sessions, we structured and typed the content of the previous session, and presented it to the expert. We encouraged the expert to make any changes that he thought were necessary. We would not proceed until he was comfortable that the model reflected what he would do. We obtained considerable process data. Nevertheless, the expert generated his insights retrospectively, with the well-known potential problems of invalid process indications (Ericsson and Simon 1984).

Future Research

The model presents one overall reasoning approach; it is possible that other experienced auditors will reason differently while reaching the same (or different) conclusions. For example, the expert weights client-specific information more heavily than industry information for most subgoal judgments, e.g., prediction of client account balances (subgoal #9). Other experienced auditors may concentrate more on industry (and general macroeconomic) information.

While the model produces likelihoods of unexpected account balances that are highly consistent with those of the modeled expert (see Tables 3 and 4), a limitation is that, in several cases, the expert reports larger causal probabilities than those of the model. Specifically, relative to the transcripts completed as of late 1999, the expert apparently now assigns a higher base rate probability to Cause 1, i.e., "earnings management or fraudulent reporting by executive management," and a lower base rate probability to Cause 4 (which is Cause 7 in Section IV), i.e., the fluctuations are caused by unexpected economic conditions. One explanation may be the impact on causal probabilities of the dramatic, detrimental, and highly publicized audit failures that occurred during 2001-2003. This may be the case especially because many of the scandals involved overstated revenue (and income). An alternative hypothesis is that the expert and model are applying different loss functions for Type I errors (i.e., over-auditing) and Type II errors (i.e., under-auditing). That is, while the expert (and, therefore, the model) assigns a high base rate probability to a nonerror cause, which is justified given the majority of audits, the expert also assesses a much higher cost to incorrectly eliminating an error cause and, therefore, he raised his base rates for error causes when he completed the test cases. We plan to address these issues and change the model as may be necessary.
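The loss-function hypothesis can be illustrated with a two-action expected-loss calculation; the cost figures below are illustrative assumptions, not values elicited from the expert:

```python
# Expected-loss sketch of the Type I / Type II asymmetry hypothesized above.
# "pursue" = perform extra audit work on an error cause (wasted if no error,
# a Type I cost); "dismiss" = rule the error cause out (costly if the error
# is real, a Type II cost). The 5.0 and 100.0 cost figures are illustrative.

def expected_loss(p_error, action, c_type1=5.0, c_type2=100.0):
    """Expected loss of pursuing versus dismissing a candidate error cause."""
    if action == "pursue":
        return (1 - p_error) * c_type1   # over-auditing cost, borne if no error
    return p_error * c_type2             # under-auditing cost, borne if error

# With Type II errors 20x as costly as Type I, pursuing the error cause is
# optimal whenever its probability exceeds c1 / (c1 + c2) -- well below the
# error-cause base rates discussed in Section IV.
threshold = 5.0 / (5.0 + 100.0)
assert threshold < 0.05
assert expected_loss(0.12, "pursue") < expected_loss(0.12, "dismiss")
```

Under such an asymmetric loss function, an auditor behaves "as if" the error-cause base rates were inflated, which is consistent with the expert's revised probabilities in the test cases.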

More generally, we plan to enhance and refine the model--and, fortunately, the expert remains available and interested. First, the contextual richness of the model could be enhanced. Situation analysis (Jamal et al. 1995; Harvey 1992; Wallsten 1996), a variant of sensitivity analysis, may be used to examine detailed client situations (e.g., a client's major competitor releases a new breakthrough technological product). This situation analysis capability could also be the basis for enhanced case-based reasoning insights (Tang and Solomon 1998). Second, while the model provides "proof of concept" analysis of the implied audit planning recommendations (see Section IV), more research must be conducted to refine this aspect of the model.

When computational models are used as knowledge management tools, what are the group dynamics and incentives that would affect use of such models by members of an audit team? Knowledge sharing by audit teams is a complicated group process. The literatures on decision making by face-to-face groups (e.g., Guzzo and Dickson 1996; Levine and Moreland 1990) and how people communicate electronically (Kiesler and Sproull 1992; Finholt and Sproull 1990) suggest many operational limitations and contextual subtleties. For example, since team leaders provide rewards (and penalties) for subordinates to communicate and collaborate, team leadership is an important determinant of the productivity of a team (e.g., Ilgen et al. 1993). Also, another determinant of team collaboration is the extent of team cohesion (Guzzo and Dickson 1996; Levine and Moreland 1990). A high level of team cohesion can result in better communication and knowledge sharing, and superior decision performance, particularly given time pressure (Guzzo and Dickson 1996, 310), a typical situation in auditing. Knowledge sharing and the dynamics of audit team evidence use and decision making remain relatively unexplored areas of auditing research (but see Bamber et al. 1996; Solomon 1987).

In summary, the current research demonstrates the feasibility of modeling the entire set of crucial judgments made during planning of an audit (cf., Vinze et al. 1991). As stated by the modeled expert, "Because these (audit planning) judgments are initially performed by audit seniors, use of this model brings expertise to performance of the task--as if a partner were performing the judgments." This is the essence of knowledge management.



AcquaGlass (AG) manufactured fiberglass sport and fishing boats. It had been operating since 1991 and, after a sluggish start, had shown a modest profit for the fiscal years of 1993 and 1994. AG's executive management consisted of four officers: Buck Bass, President; Shirley Shore, Vice-President of marketing; Taylor Tide, Vice-President of manufacturing; and Rubin Shore, Vice-President of finance and controller. They were college classmates who together founded the company in 1989. They sold the business in 1995, but all four remained as the management team because of their expertise in the pleasure boat industry. The agreement set an annual salary for each officer and provided for a substantial bonus if the company achieved operating profits that exceeded a stipulated amount.

For the audit of the fiscal year ended 1996, AG's auditors, Lake, Waters & Stream, CPAs, noticed significant fluctuations in AG's sales and accounts receivable from their expectations. In planning the audit program, the auditors had acquired the following information:

* During 1996, the company applied for (and was granted) a substantial bank loan to maintain its working capital. A substantial portion of this loan was used to support the continued production and marketing of the new boat product that began in 1995. Although the orders for this new product had not met the company's expectation, AG's management believed that there had not been sufficient time for marketing strategies to pay off.

* The new owners had asked the management officers to form a board of directors to oversee operations and financial reporting. The officers, however, believed that forming a board then would require substantial management attention, which was currently needed to successfully complete the new boat project. The officers proposed that formation of such a board be postponed for two years.

* The chief accountant, Connie Wave, who reported to Rubin Shore, had been with the company for 15 months. She was responsible for the entire accounting function, including preparation of financial statements, and was also responsible for establishing controls over the finance function. Shore hired Wave immediately after her graduation from Waterford University and had delegated virtually all accounting responsibilities to her. This allowed Shore time to manage the company's capital and negotiate bank loans.

* In past audits, the company's management had been receptive to the auditors' suggestions about improving internal controls, although not all the suggestions had been implemented. Generally, the accounting system had functioned fairly adequately although the accounting staff seemed to always be working long hours to get everything done.

* Employees in all areas were hired after completing a job application form, taking an aptitude test, and undergoing a brief interview with the area supervisor. No formal training programs had been established, but supervisors and other employees did conduct on-the-job training. In the accounting area, four of the five accounting positions had been filled three times each within a period of two years. The most common reason for this turnover was employees leaving for jobs with better pay or with better employee benefits. Because some positions were vacant for as long as six months, Rubin Shore purchased an accounting software package eight months ago and three microcomputers to speed up the processing of accounting information and preparation of financial statements. However, no formal evaluation of the use of this new software package (e.g., ease of use, accuracy) had been conducted.

Summary of Critical Risk Factors

* AG's executive management might have strong incentives to aggressively manage the company's earnings. Such incentives included management compensation that was tied to reported earnings and the company's application for a substantial bank loan.

* All business operations and financial reporting were controlled by a small group of management members. No board of directors had been established to oversee management's operations.

* All accounting functions were the responsibility of a relatively inexperienced (and newly graduated) chief accountant.

* AG's internal controls were moderately reliable.

* It appeared that the company might not have acquired sufficiently skilled and competent personnel in all areas, particularly in the accounting department. This was reflected in the high past employee turnover rates, lower pay than the industry average, and a lack of formal training.
Factors for Assessment of Statement-Level Inherent and Control Risk
(Also see Table 2)

Inherent Risk Factors

1. Relative to all CEOs, to what extent are the client's CEO and
other members of executive management concerned about managing
the company's earnings?

(low / moderate / high / very high)

2. Relative to all CEOs, to what extent are the client's CEO and
other members of executive management dominant?

(low / moderate / high / very high)

3. Evaluate the attitude of the client's CEO and other members of
executive management regarding choices of accounting estimates
and/or policies. (Selection of conservative versus liberal
accounting methods.)

(very conservative / somewhat conservative / somewhat
liberal / very liberal)

4. What is the client's going-concern status? (Also the
financial health variable.)

(very good / good / moderate / bad)

5. Relative to all businesses, evaluate the relative risk of
the client's operations.

(little risk / moderately risky / significant risk / very risky)

6. Evaluate the attitude of the client's CEO and other members of
executive management toward making risky decisions regarding the
client's business operations.

(very conservative / conservative / risky / very risky)

Control Risk Factors

1. Evaluate management's commitment to a strong internal
control environment.

(very high / high / moderate / low)

2. How effective are the client's internal control procedures
in terms of adequacy and appropriateness?

(very high / high / moderate / low)

3. How competent are the accounting personnel, as well as their
supervisors, in performing their duties?

(very high / high / moderate / low)

4. How effective is the client's monitoring process over the
established control procedures to ensure that the
procedures are operating as intended?

(very high / high / moderate / low)

General (Statement-Level) Risk Factors Used for Causal Analysis of
Overstated Revenue and Gross Accounts Receivable

Inherent Risk Factors (also see Table 1)

1. (Earnings Management) To what extent do the CEO and other members
of executive management attempt to manage earnings over time for their
own financial benefit and/or for the benefit of the company? Managing
earnings should be interpreted as a desire to achieve a smooth rate
of increase in earnings over time. If executive management has a
heavy financial investment in the company, they may have a strong
incentive to manage the company's earnings for their own current and
future financial returns. Also possible is an attempt to manage
earnings to achieve benefits for the company: (1) an increasing
trend of earnings affects stock prices via P/E multiples,
especially when the multiples are relatively high, and (2) the
level of earnings affects the ability to raise capital.

Relative to all CEOs, to what extent are the client's CEO and
other members of executive management concerned about
managing the company's earnings?

(low / moderate / high / very high)

2. (Domination) How dominant is the CEO? A dominant CEO is one
who exerts unusual pressure on subordinates to achieve goals set by
executive management and/or one who does not delegate many important
activities to other personnel.

Relative to all CEOs, to what extent are the client's CEO and other
members of executive management dominant?

(low / moderate / high / very high)

3. (Accounting Choices) Evaluate the attitude of the CEO and other
members of executive management toward making conservative versus
liberal choices of accounting estimates and/or policies. Executive
management may be inclined to choose more liberal accounting
choices or be too liberal (optimistic) in the subjective areas
such as estimating allowances. Higher levels of liberal choices
imply higher levels of risk for misstatements.

Evaluate the attitude of the client's CEO and other members of
executive management regarding choices of accounting estimates
and/or policies, i.e., conservative versus liberal choices.

(very conservative / somewhat conservative / somewhat
liberal / very liberal)

4. (Financial Health) What is the status of the financial health
of the client? The more potential financial difficulty, the more
likely it is that executive management may be inclined to make
much more risky decisions. (Same as going-concern status assessment.)

What is the status of the client's overall financial health?
(Subgoal #1)

(very good / good / moderate / bad)

Transaction-Cycle-Specific Control Risk Factors

1. (Accounting System Effectiveness) Effectiveness of the accounting
system-related and other personnel (e.g., shipping), as well as the
supervision of personnel. This factor addresses the competency (i.e.,
knowledge and skills) of personnel who are processing revenue and
revenue-related transactions (e.g., maintenance of shipping
documents), as well as the supervisors. Relevant client
characteristics include the employee turnover rate and financial
compensation compared to those of industry peers.

How would you evaluate the effectiveness of the client's
accounting personnel responsible for processing revenue and
revenue-related transactions, as well as the supervisors?
Relevant client characteristics include employee turnover
rates and financial compensation compared to those of industry
peers. Also considered is the evaluation of personnel
effectiveness from the prior year's audit.

(very high / high / moderate / low)

2. (Changes in Transaction Processing) Changes in the way
transactions are processed imply that new control procedures are
required. The more significant the changes in transaction
processing, the more likely it is that improper operations may
occur (e.g., inconsistent or inaccurate results). Examples include
changes in the procedures for processing transactions (e.g., a set
of procedures to perform the task) and changes in the
software/hardware used to process transactions.

How would you categorize the significance of (potentially
disruptive) changes, if any, in the processing of revenue and
revenue-related transactions?

(insignificant changes / moderately significant changes / highly
significant changes)

Evaluation of the Expert's and Model's Causal Conclusions

     (1)
Case Type (see        (2)           (3)            (4)            (5)            (6)
Table 4) and        Actual       Expert's       Expert's       Model's (b)    Model's (b)
Presentation       Cause(s)      Highest        Largest Dev.   Highest        Dev. from Base
Order                 (a)        Probability    from Base      Probability    Rate (error
                                                Rate                          causes;
                                                                              excludes B-1)

B-1 Case 2         Cause 4       Cause 1        * (c)          Cause 4        Cause 4
C-1 Case 1         Cause 1       Cause 1        Cause 1        Cause 1        Cause 1
C-2 Case 3         Cause 1       Cause 1        Cause 1        Cause 1        Cause 1
C-3 Case 5         Cause 1       Cause 1        Cause 1        Cause 1        Cause 1
C-4 Case 8         Cause 1       Cause 1        Cause 1        Cause 1        Cause 1
D-1 Case 4         Cause 2       Cause 2        Cause 2        Cause 2        Cause 2
D-2 Case 7         Cause 2       Cause 2        Cause 2        Cause 2        Cause 2
E-1 Case 6         Causes 1, 2   Causes 1, 2    Causes 1, 2    Causes 1, 2    Causes 1, 2
E-2 Case 9         Causes 1, 2   Causes 1, 2    Causes 1, 2    Causes 1, 2    Causes 1, 2
# cases in
agreement          NA            8 (88%)        8 (88%)        9 (100%)       9 (100%)
(maximum of 9)

(a) The actual causes were determined based on analysis of real-world
audit situations.

(b) The model, in fact, suggested that "unexpected economic
conditions" (i.e., Cause 4) be the most likely cause of the detected
significant unusual fluctuations for every case because of its high
base rate probability (however, see Section VII). Since we are
evaluating the model on the basis of its ability to detect potential
problem area(s), the error cause with the highest probability is used
in the comparison with the expert's judgments (column 4).

(c) The expert's largest probability revision for case B-1 is for
Cause 4, the correct cause of the fluctuations, but it is negative.

Correlations for Expert and Model Causal Probabilities:
Nine Cases of Overstated Revenue and Overstated Gross Accounts
Receivable (Tests of statistical significance are reported only
for the probabilities that are not required to sum to one,
i.e., columns 4 and 5)

     (1)                 (2)          (3)          (4)          (5)          (6)          (7)
Case and Actual       Pearson     Spearman      Pearson     Spearman      Pearson     Spearman
Cause(s)             (4 probs.)  (4 probs.)   (3 probs.)  (3 probs.)    (4 devs.     (4 devs.
                                                                       from base    from base
                                                                         rates)       rates)

Client B-1 (Cause 4)     .352        .800        .996 *     1.000 **      -.869        -.800
Client C-1 (Cause 1)     .103        .400        .979       1.000 **       .977        1.000
Client C-2 (Cause 1)     .132        .738       1.000 **     .866          .985        1.000
Client C-3 (Cause 1)     .246        .105        .972       1.000 **       .938        1.000
Client C-4 (Cause 1)     .151        .738        .996 *      .866          .974        1.000
Client D-1 (Cause 2)    -.077        .738        .814        .866          .993         .800
Client D-2 (Cause 2)    -.099       -.211        .724        .500          .975         .800
Client E-1 (Causes 1, 2) .293        .316        .700        .866          .948         .800
Client E-2 (Causes 1, 2) -.144       .000        .932        .866          .996         .800

Cause 1 = earnings management and reporting; Cause 2 = ineffective
accounting system.

*, ** Significant at the 0.05 level and 0.01 level, respectively.
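The Pearson columns compare the expert's and the model's probability magnitudes over the candidate causes, while the Spearman columns compare only their rank orderings. A minimal pure-Python sketch of both statistics, applied to hypothetical four-cause probability vectors (not the study's data):

```python
import math

def pearson(x, y):
    # Pearson product-moment correlation of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the rank vectors.
    def ranks(v):
        # 1-based ranks, with tied values receiving their average rank.
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical four-cause probability vectors, invented for illustration:
expert = [0.70, 0.15, 0.05, 0.10]
model  = [0.60, 0.20, 0.05, 0.15]
print(pearson(expert, model))
print(spearman(expert, model))
```

Pearson responds to the actual probability magnitudes, whereas Spearman responds only to their ordering, which is why the two statistics can diverge sharply for the same case in the table above.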

Expert's Probability Revisions for Types of Causes

Panel A: Cause 1 (4 Cases: C1-C4)

Cause 1         0.705
Cause 2         0.0175
Cause 3        -0.005
Cause 4        -0.7175

Panel B: Cause 2 (2 Cases: D1-D2)

Cause 1        -0.045
Cause 2         0.63
Cause 3         0.095
Cause 4        -0.68

Panel C: Causes 1 and 2 (2 Cases: E1-E2)

Cause 1         0.255
Cause 2         0.305
Cause 3         0.045
Cause 4        -0.605
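Each panel entry above is a mean probability revision: the post-evidence probability assigned to a cause minus its prior (base-rate) probability. A one-case sketch with hypothetical figures (the priors and posteriors below are invented, not the study's data):

```python
# Hypothetical priors (base rates) and post-evidence probabilities for
# the four causes in a single case; numbers invented for illustration.
priors     = {1: 0.10, 2: 0.05, 3: 0.05, 4: 0.80}
posteriors = {1: 0.75, 2: 0.10, 3: 0.05, 4: 0.10}

# Revision = posterior - prior; positive means the cause was revised upward.
revisions = {c: round(posteriors[c] - priors[c], 4) for c in priors}
print(revisions)
```

Because both distributions sum to one, the revisions sum to zero: a large upward revision for the diagnosed cause is offset by downward revisions elsewhere, the pattern visible in Panels A through C (e.g., the large negative revisions for Cause 4).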

We presented this research at the 1999 Accounting, Behavior and Organizations Research Conference, Costa Mesa, California, October 8-9, 1999. Helpful comments concerning this research were provided by Bill Glezen, Ted Mock, Jim Peters, Karen Pincus, and the editor.

(1) We use male pronouns throughout the manuscript because the expert is a male.

(2) A decision to be made when using computational modeling is the level of abstraction (Peters 1993, 391-393). "There is an inverse relationship between the depth of the understanding achievable and the breadth of the domain studied" (Bailey et al. 1988). In this study, a comprehensive model was desired; therefore, the modeling focuses more on the use of knowledge and reasoning and less on actual memory access processes (cf. Biggs et al. 1993; Meservy et al. 1986).

(3) The model consists of three major components: (1) a knowledge base, which contains both declarative and procedural knowledge in the form of if-then-else rules that represent the domain- and task-specific knowledge used to reach a particular judgment; (2) an inference engine, which guides the application of those facts and production rules; and (3) a working memory, which serves as a scratch-pad to keep track of the goals being pursued and the progress made toward their attainment (Steinbart 1987).
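The three components described in footnote 3 can be sketched as a toy forward-chaining production system; the rules and cue names below are invented for illustration and are not the model's actual knowledge base:

```python
# Knowledge base: if-then rules, each a (conditions, conclusion) pair.
rules = [
    ({"earnings_pressure": "high", "ceo_dominance": "high"},
     ("inherent_risk", "high")),
    ({"inherent_risk": "high", "controls": "weak"},
     ("planned_detection_risk", "low")),
]

def forward_chain(working_memory):
    # Inference engine: fire any rule whose conditions all match working
    # memory, adding its conclusion as a new fact, until nothing changes.
    changed = True
    while changed:
        changed = False
        for conditions, (attr, value) in rules:
            if all(working_memory.get(k) == v for k, v in conditions.items()):
                if working_memory.get(attr) != value:
                    working_memory[attr] = value
                    changed = True
    return working_memory

# Working memory: the facts (cue values) known at the start of reasoning.
wm = {"earnings_pressure": "high", "ceo_dominance": "high",
      "controls": "weak"}
print(forward_chain(wm))
```

The engine repeatedly fires any rule whose conditions match working memory until no new fact is added, so the second rule can fire only after the first has asserted the high inherent-risk conclusion.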

(4) O'Leary (1993) raises the issue of a lack of a prediction model in the computational model of going-concern judgments researched by Biggs et al. (1993).

(5) Note that the sum of the allocated materiality amounts will be greater than the statement-level materiality depending on the number of individual accounts. The expert recommended six income statement and eight balance sheet accounts to which the statement-level materiality should be allocated; therefore, the sum of allocated amounts is approximately 2.35 times the statement-level materiality for the income statement items, and 2.7 times for the balance sheet items.
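A proportional allocation over hypothetical account balances illustrates the arithmetic in footnote 5; only the 2.35 multiple comes from the footnote, while the balances, the six-account split, and the allocation rule itself are invented for illustration:

```python
# Illustrative only: the expert's actual allocation method is not specified.
statement_materiality = 100_000
multiple = 2.35                         # from footnote 5 (income statement)
balances = [500, 300, 120, 50, 20, 10]  # hypothetical relative account sizes

total = sum(balances)
allocated = [statement_materiality * multiple * b / total for b in balances]

# The allocated amounts sum to about 2.35x statement-level materiality.
print([round(a) for a in allocated])
```

Allocating more than 1x the statement-level materiality in total is commonly justified on the grounds that misstatements in different accounts partially offset and are unlikely to all reach their allocated limits.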

(6) The expert indicated that, although he recognizes the benefits of using statistical methods to predict account balances, there are limitations that discourage auditors from applying these techniques, e.g., extensive data requirements and difficulty in interpreting the results. Therefore, the expert uses a judgmental approach.

(7) Although prior empirical studies provide a basis for classification of error causes (e.g., Wright and Ashton 1989; Kreutzfeldt and Wallace 1986), causes of errors (e.g., insufficient accounting knowledge) are intermingled with types of errors (e.g., cutoff error). A single error type may be caused by several error causes. For example, a cutoff error may be due to insufficient employee knowledge or an intentional act by management. Moreover, different studies classified the same error cause into different categories. For example, incompetence of accounting personnel was categorized as a personnel problem by Kreutzfeldt and Wallace (1986), but as insufficient accounting knowledge by Wright and Ashton (1989).

(8) Another source of interference is an auditor's undue reliance on management's representation before independently considering his/her self-generated causal hypotheses (Anderson and Koonce 1995). The expert indicated that relying on management's explanations may inappropriately influence an auditor's causal judgments (although he realizes that it is common in practice that the auditors normally first consult with the client's management). Consequently, the model does not allow the auditor to consult with the client's management when assessing causal hypotheses.

(9) The testing of the model was conducted at the causal conclusion level, not at the intermediate subgoal conclusion levels such as materiality judgments or audit risk assessments. The fact that there are several subgoal conclusions, each of which can be based on a large number of possible combinations of cues and direct judgments, makes an exhaustive test of the model very complex. In addition, difficulties may arise from the fact that, for some judgments such as materiality levels, audit risk assessments, or development of expected account balances, there are no "correct" conclusions against which to evaluate the conclusions from the model. As a result, the testing of the model is focused on the appropriateness of the model's causal conclusions for why significant unusual fluctuations may have occurred.

(10) In addition to the nine cases, two additional cases in the "no unusual fluctuation" category were also developed to test the model. The model provides a correct conclusion in both cases, i.e., the model does not indicate any significant unusual fluctuations.

(11) For overstated revenue and gross accounts receivable, the primary context of the nine cases, the expert assigned a prior probability of .03 to cause 3 and essentially zero probability was assigned to causes 2, 4, and 5.


Agarwal, R., and M. R. Tanniru. 1990. Knowledge acquisition using structured interviewing: An empirical investigation. Journal of Management Information Systems 7 (1): 123-140.

Altman, E. I. 1968. Financial ratios, discriminant analysis, and the prediction of corporate bankruptcy. Journal of Finance 23 (4): 589-609.

--, and T. P. McGough. 1974. Evaluation of a company as a going concern. Journal of Accountancy 138 (6): 50-57.

American Institute of Certified Public Accountants (AICPA). 1981. Audit Sampling. Statement on Auditing Standards SAS No. 39. AU 350. New York, NY: AICPA.

--. 1983. Audit Risk and Materiality in Conducting an Audit. Statement on Auditing Standards SAS No. 47. AU 312. New York, NY: AICPA.

--. 1996. Auditing Study, Auditing Sampling. New York, NY: AICPA.

Anderson, J. R. 1983. The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Anderson, U., and L. Koonce. 1995. Explanation as a method for evaluating client-suggested causes in analytical procedures. Auditing: A Journal of Practice & Theory 14 (2): 124-132.

Arens, A. A., and J. K. Loebbecke. 1997. Auditing: An Integrated Approach. Seventh edition. Englewood Cliffs, NJ: Prentice Hall.

Asare, S. K. 1992. The auditor's going-concern decision: Interaction of task variables and the sequential processing of evidence. The Accounting Review 67 (2): 379-393.

Ashton, A. H. 1991. Experience and error frequency knowledge as potential determinants of audit expertise. The Accounting Review 66 (2): 218-239.

Bailey, A., K. Hackenbrack, P. De, and J. Dillard. 1988. Artificial intelligence, cognitive science, and computational modeling in auditing research: A research approach. In Artificial Intelligence in Accounting and Auditing: The Use of Expert Systems, edited by M. A. Vasarhelyi, 3-32. New York, NY: Markus Wiener.

Bamber, E. M., R. T. Watson, and M. Callahan-Hill. 1996. The effects of group support system technology on audit group decision making. Auditing: A Journal of Practice & Theory 15 (1): 122-134.

Bedard, J., and S. F. Biggs. 1991a. Pattern recognition, hypothesis generation, and auditor performance in an analytical task. The Accounting Review 66 (3): 622-642.

--, and --. 1991b. The effect of domain-specific experience on evaluation of management representations in analytical procedures. Auditing: A Journal of Practice & Theory 10 (Supplement): 77-90.

Bell, T. B., G. S. Ribar, and J. R. Verchio. 1990. Neural Nets vs. Logistic Regression: A Comparison of Each Model's Ability to Predict Commercial Bank Failures. University of Kansas Auditing Symposium. Lawrence, KS: Deloitte & Touche.

--, and A. Wright, eds. 1995. Auditing Practice, Research, and Education: A Productive Collaboration. New York, NY: AICPA.

Bhatt, G. D., and J. Zaveri. 2002. The enabling role of decision support systems in organizational learning. Decision Support Systems 32: 297-309.

Biggs, S. F., T. J. Mock, and P. Watkins. 1988. Auditors' use of analytical review in audit program design. The Accounting Review 63 (1): 148-161.

--, and --. 1989. Analytical Review Procedures and Processes in Auditing. Vancouver, Canada: The Canadian Certified General Accountant's Research Foundation.

--. 1991. Computational modeling: A research strategy for developing a cognitive theory of decision making in accounting. Revised, August 26. Working paper, School of Business Administration, University of Connecticut.

--, M. Selfridge, and G. R. Krupka. 1993. A computational model of auditor knowledge and reasoning processes in the going-concern judgment. Auditing: A Journal of Practice & Theory 12 (Supplement): 82-112.

Blocher, E., and J. J. Willingham. 1985. Analytical Review: A Guide to Evaluating Financial Statements, New York, NY: McGraw-Hill, Inc.

Brumfield, C., R. K. Elliott, and P. D. Jacobson. 1983. Business risk and the audit process. Journal of Accountancy (April): 60-68.

Butt, J. L. 1988. Frequency judgments in an auditing-related task. Journal of Accounting Research 26 (2): 315-330.

Casey, C. J., and N. Bartczak. 1985. Using operating cash flow data to predict financial distress: Some extensions. Journal of Accounting Research 23 (1): 384-401.

Choo, F., and K. T. Trotman. 1991. The relationship between knowledge structure and judgments for experienced and inexperienced auditors. The Accounting Review 66 (3): 464-485.

--. 1996. Auditors' knowledge content and judgment performance: A cognitive script approach. Accounting, Organizations and Society 21 (4): 339-359.

Dhar, V., B. Lewis, and J. Peters. 1988. A knowledge based model of audit risk. AI Magazine (Fall): 56-63.

Ericsson, K. A., and H. A. Simon. 1984. Protocol Analysis: Verbal Reports as Data. Cambridge, MA: MIT Press.

Finholt, T., and L. S. Sproull. 1990. Electronic groups at work. Organization Science 1 (1): 41-64.

George, F. K., and M. T. Dugan. 1995. Substantial doubt: Using artificial neural networks to evaluate going concern. Advances in Accounting Information Systems 3:137-159.

Gibbins, M., and K. Jamal. 1993. Problem-centered research and knowledge-based theory in the professional accounting setting. Accounting, Organizations and Society 18 (5): 451-466.

Guzzo, R. A., and M. W. Dickson. 1996. Teams in organizations: Recent research on performance and effectiveness. Annual Review of Psychology 47: 307-338.

Harvey, L. O., Jr. 1992. The critical operating characteristic and the evaluation of expert judgment. Organizational Behavior & Human Decision Processes 53 (2): 229-251.

Havens, C., and E. Knapp. 1999. Easing into knowledge management. Strategy and Leadership 27 (2): 4-9.

Ho, J. L. 1994. The effect of experience on consensus of going-concern judgments. Behavioral Research in Accounting 6: 160-177.

--, and L. R. Keller. 1994. The effect of inference order and experience-related knowledge on diagnostic conjunction probabilities. Organizational Behavior and Human Decision Processes 59 (1): 51-74.

Holland, J., K. Holyoak, R. Nisbett, and P. Thagard. 1986. Induction: Processes of Inference, Learning and Discovery. Cambridge, MA: MIT Press.

Holyoak, K. J. 1990. Problem solving. In Thinking: An Invitation to Cognitive Science, Vol. 3, edited by D. N. Osherson, and E. E. Smith, 117-146. Cambridge, MA: MIT Press.

Hopwood, W., J. C. McKeown, and J. F. Mutchler. 1994. A reexamination of auditor versus model accuracy within the context of the going-concern opinion decision. Contemporary Accounting Research 10 (2): 409-431.

Houghton, C. W., and J. A. Fogarty. 1991. Inherent risk. Auditing: A Journal of Practice & Theory 10 (1): 1-21.

Huber, G. P. 1990. A theory of the effects of advanced information technologies on organization design, intelligence, and decision making. Academy of Management Review 15: 47-71.

Ilgen, D. R., D. A. Major, J. R. Hollenbeck, and D. J. Sego. 1993. Team research in the 1990s. In Leadership Theory and Research: Perspectives and Directions. San Francisco, CA: Jossey-Bass.

Jamal, K., P. E. Johnson, and G. Berryman. 1995. Detecting framing effects in financial statements. Contemporary Accounting Research 12 (1): 85-105.

Kiesler, S., and L. Sproull. 1992. Group decision making and communication technology. Organizational Behavior and Human Decision Processes 52 (1): 96-123.

Kinney, W. R., Jr., and W. C. Uecker. 1982. Mitigating the consequences of anchoring in auditor judgments. The Accounting Review 57 (1): 55-69.

Knapp, M. C. 1991. Factors that audit committee members use as surrogates for audit quality. Auditing: A Journal of Practice & Theory 10 (1): 35-52.

Knechel, W. R. 1998. Auditing: Text & Cases. First edition. Cincinnati, OH: South-Western College Publishing.

Koonce, L. 1993. A cognitive characterization of audit analytical review. Auditing: A Journal of Practice & Theory 12 (Supplement): 57-81.

Kreutzfeldt, R. W., and W. A. Wallace. 1986. Error characteristics in audit populations: Their profile and relationship to environmental factors. Auditing: A Journal of Practice & Theory 6 (1): 20-43.

Levine, J. M., and R. L. Moreland. 1990. Progress in small group research. Annual Review of Psychology 41: 585-634.

Levitan, A. S., and J. A. Knoblett. 1985. Indicators of exceptions to the going concern assumption. Auditing: A Journal of Practice & Theory 5 (1): 26-39.

Lewis, B. L. 1993. Discussion of: A computational model of auditor knowledge and reasoning processes in the going-concern judgment. Auditing: A Journal of Practice & Theory 12 (Supplement): 100-102.

Libby, R. 1995. The role of knowledge and memory in audit judgment. In Judgment and Decision Making Research in Accounting and Auditing, edited by A. Ashton, and R. Ashton. New York, NY: Cambridge University Press.

Markus, M. L. 2001. Toward a theory of knowledge reuse: Types of knowledge reuse situations and factors in reuse success. Journal of Management Information Systems 18 (1): 57-93.

McDonald, D. W., and M. S. Ackerman. 1998. Just talk to me: A field study of expertise location. Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work (CSCW).

McKeown, J. C., J. F. Mutchler, and W. Hopwood. 1991. Toward an explanation of auditor failure to modify the audit opinions of bankrupt companies. Auditing: A Journal of Practice & Theory 10 (Supplement): 1-13.

Meservy, R. D., A. D. Bailey, Jr., and P. E. Johnson. 1986. Internal control evaluation: A computational model of the review process. Auditing: A Journal of Practice & Theory 6 (1): 44-74.

Messier, W. F. 1990. Discussion of: A Cognitive Computational Model of Risk Hypothesis Generation. Journal of Accounting Research 28 (Supplement): 104-109.

--. 1997. Auditing: A Systematic Approach. New York, NY: McGraw-Hill.

Moriarity, S., and F. H. Barron. 1979. A judgment-based definition of materiality. Journal of Accounting Research 17 (Supplement): 114-135.

O'Keefe, R., and D. O'Leary. 1993. Expert system verification and validation. Artificial Intelligence Review 7: 3-42.

O'Leary, D. E. 1993. Discussion of: A computational model of auditor knowledge and reasoning processes in the going-concern judgment. Auditing: A Journal of Practice & Theory 12 (Supplement): 103-109.

Peters, J., B. L. Lewis, and V. Dhar. 1989. Assessing inherent risk during audit planning: The development of a knowledge based model. Accounting, Organizations and Society 14 (4): 359-378.

--. 1990. A cognitive computational model of risk hypothesis generation. Journal of Accounting Research 28 (Supplement): 83-109.

--. 1993. Decision making, cognitive science and accounting: An overview of the intersection. Accounting, Organizations and Society 18 (5): 383-405.

Ricchiute, D. N. 1992. Working-paper order effects and auditors' going-concern decisions. The Accounting Review 67 (1): 46-58.

Rosman, A. J., I. Seol, and S. F. Biggs. 1993. Understanding the Going-Concern Judgment: Linking Domain Experience, Process, and Performance. Storrs, CT: The University of Connecticut.

Selfridge, M., S. Biggs, and G. Krupka. 1992. A cognitive model of the auditor's going-concern judgment. The International Journal of Intelligent Systems 7 (5): 393-417.

Solomon, I. 1987. Multi-auditor judgment/decision making research. Journal of Accounting Literature 6: 1-25.

Steinbart, P. 1987. Materiality: A case study using expert systems. The Accounting Review 62 (1): 97-116.

Tang, R., and P. Solomon. 1998. Toward an understanding of the dynamics of relevance judgment: An analysis of one person's search behavior. Information Processing and Management 34 (2): 237-256.

Trotman, K., and J. Sng. 1989. The effect of hypothesis framing, prior expectations and cue diagnosticity on auditors' information choice. Accounting, Organizations and Society 14 (5/6): 565-576.

Vinze, A. S., V. Karan, and U. S. Murthy. 1991. A generalizable knowledge-based framework for audit planning expert systems. Journal of Information Systems (Fall): 78-91.

Wallsten, T. 1996. An analysis of judgment research analyses. Organizational Behavior & Human Decision Processes 65 (3): 220-226.

Williams, H. J., and D. Ricchiute. 1987. How to evaluate audit risk and materiality. The Practical Accountant (October): 75-88.

Wright, A., and R. H. Ashton. 1989. Identifying audit adjustments with attention directing procedures. The Accounting Review 64 (4): 710-728.

Wright, W. F. 1993. A cognitive characterization of audit analytical review A discussion. Auditing: A Journal of Practice & Theory 12 (Supplement): 79-81.

--, and J. J. Willingham. 1997. A computational model of loan loss judgments. Auditing: A Journal of Practice & Theory 16 (1): 99-113.

Zuber, G. R., R. K. Elliott, W. R. Kinney, Jr., and J. J. Leisenring. 1983. Using materiality in audit planning. Journal of Accountancy: 42-54.

William F. Wright

University of Arkansas

Niramol Jindanuwat

IBM Corporation in Thailand

John Todd

California State University, Long Beach
COPYRIGHT 2004 American Accounting Association
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Author: Wright, William F.; Jindanuwat, Niramol; Todd, John
Publication: Journal of Information Systems
Date: Mar 22, 2004