
Investigating the impact of auditor-provided systems reliability assurance on potential service recipients.

ABSTRACT: The objective of this study is to assess the extent to which auditor-provided systems reliability assurance affects potential service recipients' (1) likelihood of recommending that their company enter into a contractual agreement with the service provider, and (2) comfort level with the reliability of the service provider's information systems. We conducted a full-factorial between-subjects experiment where the following four auditor assurances were either absent or present: availability, security, integrity, and maintainability. A total of 481 middle- and upper-level managers from a broad spectrum of functional areas participated in the study. Research findings indicate significant main effects with respect to all four assurances, as well as firm size, for the "likelihood" variable. Significant main effects were also obtained for the "comfort" variable with respect to availability, security, and maintainability (marginal significance), but integrity and firm size were nonsignificant. The amount of variance explained by the "availability" and "security" assurances (combined) was remarkably large (56 percent for "likelihood" and 43 percent for "comfort") relative to the combined variance explained by the other two assurances (1 percent for "likelihood" and 1 percent for "comfort"). We also found evidence that respondents overweighed assurance reports on individual principles, as compared to a four-principle reliability report. Additionally, we found no significant difference in dependent variable responses when all four assurances were provided and the auditor's report focused on either the (1) effectiveness of controls or (2) reliability of the system. Research evidence offers key strategic guidance to the AICPA, CICA, and CPA/CA firms engaged in systems reliability assurance services.

Keywords: assurance services; systems reliability; SysTrust.

Data Availability: Data are available from the authors.

I. INTRODUCTION

As indicated by Elliott (1998), financial audits performed by independent accountants reflect one form of assurance service. The Special Committee on Assurance Services (AICPA 1996) developed a list of other possible business-to-consumer and business-to-business assurance services that hold great potential for the accounting profession. One type of business-to-business assurance service suggested by the committee deals with the reliability of a third party's information-processing system with respect to availability, security, integrity, and maintainability.

Since systems reliability assurance is relatively new to the marketplace, researchers have a window of opportunity through which they can conduct ex ante studies examining the potential impact of such assurance on a number of affected parties, such as firm owners, managers, and external stakeholders (e.g., service recipients, creditors, and investors). The current study investigates the extent to which auditor-provided systems reliability assurance influences the likelihood that potential service recipients will engage in business-to-business relationships with information service providers and examines the degree to which the potential recipients are comfortable with the reliability of the service firm's information systems. Research findings can help service providers and independent auditors better understand the nature of salient assertions and assurances they should offer to potential service recipients.

The next section of the paper provides an overview of systems reliability assurance and proposes the study hypotheses. The following sections discuss the research method and present the study results. The final section summarizes and discusses the research findings.

II. BACKGROUND AND HYPOTHESES

Increasingly sophisticated developments in information technology are making greater power available to business and governmental entities at far lower costs than ever before. The information systems supported by this technology are not just performing bookkeeping functions--they are running businesses, producing products and services, and communicating with customers and business partners. As a result, information technology permeates all areas of business and public sector organizations, differentiates entities in the marketplace, and consumes increasing amounts of human and financial capital.

As dependence on information technology increases, tolerance decreases for systems that are unsecured, unavailable when needed, and unable to produce accurate information on a consistent and timely basis. Like the weak link in a chain, an unreliable system creates potentially serious vulnerabilities that can negatively impact a company and its customers, suppliers, and business partners. Unreliable systems typically display some or all of the following symptoms (Boritz et al. 1999; McPhie 2000):

* Frequent system failures and crashes that deny internal and external users access to essential system services;

* Failure to prevent unauthorized access to the system, making it vulnerable to viruses, hackers, and loss of data confidentiality;

* Loss of data integrity, including corrupted, incomplete, and fictitious data; and

* Serious maintenance problems resulting in unintended negative side effects from system changes, such as loss of access to system services, loss of data confidentiality, or loss of data integrity.

Boritz et al. (1999, 75) cite some headlines that have appeared in the press as indicators of the public's interest in systems reliability, such as "Rail company's unreliable system causes freight cars to stack up," "Computer errors decimate managed care company's stock," and "Computer woes halt TSE trading." Additionally, Boritz et al. (1999) and McPhie (2000) illustrate the impact of publicized system failures on stock prices. For example, network failures at E*Trade were followed by sharp drops in its share price, resulting in a $2.5 billion decline in its market capitalization. Similar dramatic share price drops were reported when eBay experienced system outages, and the severity of such declines was amplified as the frequency of reported outages increased.

Such negative reactions to unreliable systems (real or perceived) indicate that the potential growth of business-to-consumer and business-to-business electronic commerce may be stifled by (mis)perceptions regarding systems reliability issues. Nevertheless, in their continual quest to enter new markets, reduce operating costs, serve customers, and cope with ever-changing competitive pressures, companies are looking outside their own boundaries and relying on third parties' information systems through outsourcing, partnering, or other ventures. Thus, in today's economy, which is made up of increasingly interconnected entities, it is not just a company's own systems that need to be reliable--it is also the systems of suppliers, business partners, and customers that are part of the reliability chain.

In response to concerns about unreliable systems, the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA) have developed a new assurance service called SysTrust[SM], whereby a public accountant can report on the effectiveness of controls over the reliability of a system. Although system reliability is discussed conceptually in the system engineering discipline (e.g., Lyu 1996) and in the quality assurance literature (e.g., Kehoe and Jarvis 1996), heretofore there has been no existing standard for measuring business system reliability. Hence, a joint Systems Reliability Task Force of the AICPA and CICA established a process for developing a set of criteria that could be used to evaluate whether a particular business system is reliable. The task force developed the following definition of systems reliability:

A reliable system is a system that operates without material error, fault or failure in system availability, security, integrity, and maintainability during a specified time in a specified environment.

According to this definition, there are four principles underlying reliable business systems:

1. Availability: The system is available for operation and use at times set forth in service level statements or agreements.

2. Security: The system is protected against unauthorized physical and logical access.

3. Integrity: System processing is complete, accurate, timely, and authorized.

4. Maintainability: The system can be updated when required in a manner that continues to provide for system availability, security, and integrity.

These principles are the embodiment of 58 specific criteria, representing control objectives that can be used for evaluating whether a business system is reliable. In other words, controls over the system must operate effectively to prevent or detect and promptly correct (or otherwise remediate or mitigate the effects of) system errors, faults, or failures during a specified time period to satisfy the four principles of availability, security, integrity, and maintainability. These principles and criteria are available on the AICPA's and CICA's web sites for all interested parties to access. Additional information is provided in AICPA/CICA SysTrust[TM] Principles and Criteria for Systems Reliability (AICPA/CICA 2001).

In a SysTrust examination/audit, the practitioner tests and evaluates whether there are effective controls over a system when measured against the criteria related to the four essential principles of system reliability, or a subset of those principles. The assurance provider gathers evidence about the design, implementation, and monitoring of controls over systems reliability in the same way as is commonly done in other audit engagements, by applying procedures such as inspection, observation, inquiry, confirmation, computation, and analysis to verify the effectiveness of controls over systems reliability based on the AICPA/CICA SysTrust[TM] Principles and Criteria for Systems Reliability (AICPA/CICA 2001).

SysTrust is not a new form of attestation, but an application of existing attestation/assurance standards. In the U.S., a SysTrust engagement falls under AICPA Statement on Standards for Attestation Engagements No. 10, Attestation Standards (AICPA 2001) or other relevant standards. In Canada, a SysTrust engagement is conducted under CICA Handbook S. 5025, Standards for Assurance Engagements or other relevant standards (CICA 2001).

Under version 1.0 of the SysTrust Principles and Criteria, an auditor could accept only an engagement that led to the expression of an opinion on all four principles and all 58 criteria identified in the SysTrust publication. Under version 2.0 of the SysTrust Principles and Criteria, the requirement that all SysTrust audit engagements address all principles and criteria was relaxed to permit an opinion on a subset of the principles. An opinion on a subset of the principles is not an opinion on system reliability because not all of the principles underlying the system reliability construct are part of the engagement.

The accounting profession believes that a SysTrust report signed by a CPA or CA is valuable because these professionals are knowledgeable about the subject and assurance matters and are recognized for their independence, integrity, objectivity, and discretion. In addition, CPAs and CAs are required to follow comprehensive ethics rules and professional standards when providing professional services. However, the extent of "value" ascribed to SysTrust reports by potential contracting parties to information systems outsourcing contracts is yet to be determined in the marketplace. Accordingly, the objective of the current research is to study ex ante the reaction of potential contract grantors to auditor-provided assurances over all possible subsets of the four systems reliability principles of availability, security, integrity, and maintainability.

Hypotheses

Audit theory predicts that concerns about a subject matter are particularly salient in the presence of four conditions: remoteness, complexity, consequence, and conflict of interest (AAA 1973, 9-12). These four conditions may contribute individually and in combination to the perceived value external parties place on auditor-provided assurances related to information systems reliability.

The first condition, remoteness, represents a systems reliability concern, as trading partners and service providers are often geographically dispersed across the globe. Remote computer systems have many invisible phases and components that make them difficult to monitor. Network-based systems are particularly difficult to monitor from afar. Thus, there is a need for an assurance provider with the capability to observe and verify remote system performance.

Information technology represents a high level of complexity that can require special expertise. As such, the complexity concern is paramount with respect to systems reliability. Thus, the combination of subject matter complexity and assurance process intricacy suggests an increased demand for systems reliability assurance services.

Concern over consequences is especially salient in the context of systems reliability. Assurance about system reliability would be particularly valued when system unreliability creates the risk of making incorrect decisions for users of a system, or when there are significant consequences related to unreliability, such as excessive costs or deficient revenues. Accordingly, decision risk, costs, and revenues are integral components of the "consequence" concern.

Potential conflicts of interest may arise between system developers, owners, operators, and system users. For example, outsourcing service providers may be motivated to trim costs of providing services to their customers to achieve their own profitability goals by cutting back on controls, but such actions may decrease the reliability of their customers' systems. Therefore, systems reliability assurance can mitigate concerns over conflicts of interest.

A complementary theoretical basis for expecting positive reactions to systems reliability assurance is found in extant auditing research. Prior research evidence indicates that audited information is perceived by investors to be more reliable than nonaudited information (e.g., Wallace 1980; Chow 1982; Watts and Zimmerman 1986; Abdel-khalik 1993; Willenborg 1999). Auditing studies have also suggested that financial market participants ascribe higher stock price values based on the presence of independently audited information (Dopuch et al. 1986; Chow and Rice 1982; Willenborg 1999) and the selection of higher quality auditors (Balvers et al. 1988; Beatty 1989). The information hypothesis (Fama and Laffer 1971; Wallace 1980), which asserts that independently audited information reduces information asymmetry and decreases uncertainty, forms the theoretical basis for expecting such reactions.

Analogously, the information hypothesis can be used to posit how potential service recipients might respond to auditor-provided systems reliability assurance. That is, systems reliability assurance is designed to decrease information asymmetry between service providers and potential service recipients such that the latter party can better understand how the former has dealt with concerns over remoteness, complexity, consequence, and conflict of interest by offering system reliability assertions dealing with availability, security, integrity, and maintainability. In turn, systems reliability assurance is designed to reduce uncertainty regarding the faithful representation of such management assertions.

Application service providers (ASPs) embody many of the risks and concerns identified above. As Internet-based services, they are remote from their customers. Their services may be complex from the viewpoint of a customer and the consequences of entering into a contract with an unreliable ASP may be high once an entity comes to depend on the ASP's services as part of its ongoing business activity. Also, as outsourcers, ASPs involve potential conflicts of interest. Thus, in theory, assurance about system reliability should be of interest to potential business partners who are considering entering into a service agreement with an ASP and should enhance contracting opportunities between such parties. Accordingly, this study investigates the extent to which potential ASP clients are impacted by auditor-provided assurances over systems reliability by examining their likelihood of contracting with the ASP provider and their comfort level with the service provider's systems in the presence of such assurances. Thus, the following hypotheses (alternate form) are offered:

H1: Auditor-provided assurances on information systems availability, security, integrity, and maintainability will exhibit significant main effects with respect to the participants' likelihood of entering into a contractual agreement with the ASP firm.

H2: Auditor-provided assurances on information systems availability, security, integrity, and maintainability will exhibit significant main effects with respect to the participants' comfort level with the reliability of the ASP firm's ERP system.

III. RESEARCH METHOD

The study involved a 2 (Availability Assurance: Absent or Present) x 2 (Security Assurance: Absent or Present) x 2 (Integrity Assurance: Absent or Present) x 2 (Maintainability Assurance: Absent or Present) between-subjects experiment, resulting in a full-factorial design with 16 treatment conditions. An additional (17th) treatment condition was included wherein all four assurances were present, but rather than offering a control-oriented opinion, the assurance provider offered a reliability-oriented opinion instead.
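For concreteness, the 17 treatment conditions can be enumerated programmatically. The following Python sketch is ours, for illustration only; the variable names are not from the study materials. It builds the 16 factorial cells and appends the seventeenth, reliability-oriented condition:

    # Enumerate the 2 x 2 x 2 x 2 factorial cells plus the 17th condition.
    from itertools import product

    FACTORS = ["availability", "security", "integrity", "maintainability"]

    conditions = []
    for levels in product(("absent", "present"), repeat=len(FACTORS)):
        conditions.append({"opinion": "control-oriented",
                           **dict(zip(FACTORS, levels))})

    # 17th condition: all four assurances present, but the auditor issues a
    # reliability-oriented rather than control-oriented opinion.
    conditions.append({"opinion": "reliability-oriented",
                       **{f: "present" for f in FACTORS}})

    assert len(conditions) == 17  # 16 factorial cells + 1 extra condition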

To operationalize the study, we chose an existing assurance service offered by the AICPA/CICA (SysTrust) as a specific instance of the broader systems reliability assurance services phenomenon. Since the participants were likely unfamiliar with SysTrust (based on preliminary oral interviews), we provided the SysTrust principles and criteria in the case materials (specific assurances were matched to the experimental manipulation contained in the instrument and the presentation orders of assurances were randomized). While we could have contrived a fictitious systems reliability assurance service for the study, we reasoned that the extent of familiarization required of the participants would have been equivalent. Hence, the use of SysTrust seemed appropriate for this study, as the extensive amount of research and collaboration that went into developing SysTrust provides a high degree of ecological validity to the assurance service. It is important to note, however, that although SysTrust was used as the assurance manipulation, we are not assessing the potential success or failure of SysTrust per se. Rather, our research interests lie in understanding the impact of auditor-provided systems reliability assurance on potential recipients of information systems services.

Pilot Test

We conducted a pilot test of the experimental instrument using volunteer students in a master of accountancy course taught by one of the authors. The students were enrolled in a professional program leading to an accounting certification and had an average of 1.25 years of work experience in accounting firms. About 100 questionnaires were distributed and 39 usable responses were returned (21 females and 18 males). This was a reasonable response rate given that no incentives were offered to the students for participating. As all participants were accounting students, there was little disparity in the demographic information.

Participants' actual experience with ERP, ASP, and outsourcing was "very low," although their familiarity with ERP and understanding of ASP was somewhat higher. Participants generally had a "moderately high" level of experience with independent auditors, "moderately high" trust in auditors' reports and assurance services, and "slight" distrust of management's assertions that are not assured by independent auditors.

Participants generally were concerned about the availability, security, processing integrity, and maintainability of the ASP system, and indicated more concern about security and integrity than availability and maintainability. This suggested that different principles could have different weights in the overall evaluation of system reliability. However, due to sample size and data limitations, this issue could not be explored further.

The semantic gap between assurance about system reliability and assurance about controls over the four principles did not appear to be significant. However, since the pilot test did not include all combinations of the four principles, the resulting data analysis limitations suggested that the study should be expanded to address all combinations. The pilot test also indicated that some of the debriefing questions needed to be revised. Accordingly, appropriate revisions to the experimental instrument were made in accordance with pilot test results and participant debriefings.

Experimental Procedure

Eleven consultants who were employed by a large international management consulting/training firm administered the experimental instrument during training sessions. The consulting firm specializes in providing a variety of fee-based consulting and training services to business managers at all levels over a wide range of functional areas. At the request of the researchers, the consultants solicited volunteer participants from 62 training sessions (lasting from one to five days) over a three-week period. The topics of the 62 sessions were: leadership skills, customer relations, contract negotiation, advertising innovation, procurement practices, human resource sustainability, and government contracting. The training sessions were held throughout the United States.

Measures were taken to preclude the possibility of order effects biasing study results. First, there were 24 permutations of the presentation order of management's four assertions. Next, the order in which the auditor's assurances were presented to participants (e.g., availability, security, integrity, and maintainability) was fully counterbalanced for each treatment condition, resulting in 89 permutations (including the control condition with no assurance) and yielding 2,136 possible versions before considering debriefing questions. Finally, there were two randomized versions of the debriefing questions provided at the end of the experiment. As a result, there were 4,272 possible instrument versions. Based on preliminary discussions with the consulting firm, the total sample size was expected to be about 615 participants.

Accordingly, the experimenters first created 26 sets of the 24 orders in which the assertions could be presented and randomized them into a single pile of 624 instruments. Next, seven sets of the 89 combinations of experimental manipulations were created and randomized into a second pile of 623 instruments. Then, 310 copies of each of the two versions of debriefing questions were randomized into a third pile of 620 instruments. Working from the top of each pile to the bottom, the experimenters created 620 complete experimental instruments by attaching one order of assertions to one order of assurances to one order of debriefing questions. Finally, based on enrollments for each training session, the researchers placed the appropriate number of instruments into 62 different envelopes and handed the envelopes to the consulting firm contact person who distributed them to the 11 trainers based on the training sessions each trainer would conduct over the data collection period.
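The version counts above can be recovered with some simple combinatorics. The following sketch is our reconstruction of the arithmetic, not the authors' procedure: each treatment condition contributes one assurance ordering for every permutation of the principles present in that condition.

    # Recover the 24 assertion orders, 89 assurance orders, and 4,272 versions.
    from math import comb, factorial

    assertion_orders = factorial(4)  # 24 permutations of management's four assertions

    # Across the 16 factorial cells: for each subset of k principles, k! orders;
    # the k = 0 term contributes the single no-assurance control version.
    assurance_orders = sum(comb(4, k) * factorial(k) for k in range(5))  # 65
    assurance_orders += factorial(4)  # +24 orders for the 17th (reliability-report) condition
    assert assurance_orders == 89

    debriefing_versions = 2
    assert assertion_orders * assurance_orders == 2136
    assert assertion_orders * assurance_orders * debriefing_versions == 4272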

The cover letter in each envelope, endorsed by the firm's chief executive officer, encouraged the trainers to hand out the instruments at the end of the training session, solicit participants to voluntarily complete the survey, require participants not to communicate with each other while the instrument was being completed, collect the instruments, and return the envelopes to the contact person at the consulting firm. The cover letter also asked the trainers not to discuss the nature of the survey with participants before, during, or after the training session. The experiment was designed to take about 30 minutes to complete.

Experimental Instrument

The participants first read the following scenario:

Your company wants to maximize the potential benefits of using an enterprise resource planning (ERP) system to process all company information, while minimizing the immediate cost of implementing and ongoing expense of maintaining the ERP system. After holding many corporate meetings and reviewing multiple cost/benefit analyses, your company has decided that the benefits of outsourcing the processing of all corporate information via the ERP system outweigh the costs. Thus, management would like to outsource the company's information-processing functions to an application service provider (ASP). An ASP offers its customers access to application software via the Internet on a subscription basis. Thus, customers can obtain the benefits of costly systems, such as ERP systems, at a fraction of the time and cost of implementing them in-house.

At this point, your company is considering whether it should outsource its information processing to an ASP firm named "NextWave." The setup cost and monthly processing fees proposed by NextWave meet your company's cost/benefit criteria, meaning that, if your company decides to contract with NextWave, the anticipated benefits significantly outweigh the expected costs.

Participants then read about an ASP firm named "NextWave," including descriptions of its ERP system and related applications, computer and communications infrastructure, people, procedures, and data. Participants next read the following management's assertions regarding availability, security, integrity, and maintainability (which were fully counterbalanced across experimental instruments):

Management's Assertions

During the period October 1, 2000 to December 31, 2000, NextWave maintained the reliability of the ERP system, such that--

1. The ERP system was available for operation and use at times set forth in service-level statements or agreements.

2. The ERP system was protected against unauthorized physical and logical access.

3. The ERP system processing was complete, accurate, timely, and authorized.

4. The ERP system could be updated when required in a manner that continued to provide for system availability, security, and integrity. [Signed and dated by the CEO of NextWave]

Afterward, the participants were presented with an auditor's assurance opinion with regard to management's assertions (depending on the treatment condition, one, some, or all assertions were included in the opinion). An example of the report follows:

Auditor's Report

We have audited the accompanying assertion by the management of NextWave Corporation regarding the reliability of the ERP system during the period October 1, 2000 to December 31, 2000. This assertion is the responsibility of the management of NextWave Corporation. Our responsibility is to express an opinion, based on our audit, on the conformity of management's assertion with the SysTrust[TM] Principles and Criteria established by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA), which are available at http://www.cica.ca (reproduced in Appendix attached).

Our audit was conducted in accordance with standards for assurance engagements established by the CICA. Those standards require that we plan and perform our audit to obtain reasonable assurance as a basis for our opinion. Our audit included: (1) obtaining an understanding of the controls related to the availability, security, integrity, and maintainability of the ERP system, (2) testing and evaluating the operating effectiveness of the controls, and (3) performing such other procedures as we considered necessary in the circumstances. We believe that our audit provides a reasonable basis for our opinion.

In our opinion, management's assertion that it maintained the reliability of the ERP system, such that--

* The system was available for operation and use at times set forth in service level statements or agreements;

* The system was protected against unauthorized physical and logical access;

* The system processing was complete, accurate, timely, and authorized; and

* The system could be updated when required in a manner that continued to provide for system availability, security, and integrity during the period October 1, 2000 to December 31, 2000, is fairly stated in all material respects in accordance with the SysTrust[TM] Principles and Criteria established by the AICPA and CICA.

Management's description of the aspects of the ERP system covered by its assertion is attached. We did not audit this description and accordingly we do not express an opinion on it.

Because of the inherent limitations of controls, errors or fraud may occur and not be detected. Furthermore, the projection of any conclusions based on our findings to future periods is subject to the risk that changes made to the system or controls, changes in processing requirements, or the failure to make changes to the system when required may alter the validity of such conclusions.

[Signed and dated by the independent auditor]

At this point, the participants were asked, "How likely is it that you would recommend your company to enter into a contractual agreement with NextWave to process your company's information via the ERP system?" (0 percent to 100 percent likelihood), followed by, "What is your comfort level with respect to the reliability of NextWave's ERP system?" (1 = Extremely Low, 7 = Extremely High). Finally, participants responded to debriefing, manipulation check, and demographic items.

IV. EXPERIMENTAL RESULTS

Sample Demographics

A total of 594 middle- and upper-level business managers participated in the 62 training sessions, of which 481 (81 percent) managers volunteered to participate in the study. All participants were asked about their personal experience negotiating contracts with trading partners, such as suppliers and customers, throughout their business career (1 = Extremely Low, 7 = Extremely High). The overall mean (standard deviation) response was 5.74 (1.37). ANOVA testing indicated no significant differences across treatment conditions (F = 0.63, p = .86) and the overall mean was significantly higher than the midpoint of the scale (4), as the t-statistic (p-value) was 27.98 (.01). Thus, on average, the study participants indicated a fairly high level of experience dealing with contract negotiations. Such congruence between the participants' work experience and the experimental task is key to the internal and external validity of behavioral experiments of this nature (Schipper 1991).
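The reported t-statistic can be checked from the summary statistics alone. A minimal sketch, assuming a one-sample t-test against the scale midpoint of 4:

    # One-sample t-test of the mean experience rating against the midpoint (4).
    from math import sqrt

    mean, sd, n, midpoint = 5.74, 1.37, 481, 4
    t = (mean - midpoint) / (sd / sqrt(n))
    print(round(t, 2))  # ~27.85, matching the reported 27.98 up to rounding of the inputs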

The sample sizes within each treatment condition ranged from 24 to 33, with a median cell size of 28. The mean (standard deviation) age was 42.01 (5.88) years. There were 364 (76 percent) male and 117 (24 percent) female managers. The mean (standard deviation) years in business, years with the current company, and years at the current position were 18.37 (6.18), 8.92 (6.63), and 4.12 (4.56), respectively. The participants' industry designation, firm size, college major, attained education, position title, and management level are shown on Table 1. Statistical analyses (MANOVA, ANOVA, and Chi-square) indicated no significant differences among treatment conditions on any demographic variables, as all p-values exceeded .10.

Manipulation Check Testing

To test the effectiveness of the manipulations, participants were asked the extent to which they agreed with four statements (1 = Strongly Disagree, 7 = Strongly Agree). Each statement dealt with one of the information systems assurances (availability, security, integrity, and maintainability). For instance, with respect to availability, participants responded to the following statement: "The independent auditors reported on the availability of the ERP system." A comparison of mean responses to manipulation check items is presented on Table 2. As indicated, mean responses were significantly higher in the presence, as compared to absence, of auditor-provided assurances.

We also asked participants to respond to the following statements: "I am familiar with enterprise resource planning (ERP) systems" and "I understand the application service provider concept" (1 = Strongly Disagree, 7 = Strongly Agree). The purpose of asking these questions was to determine if the explanations provided in the case materials, coupled with the participants' personal experience, yielded a sufficiently high baseline level of understanding with respect to the ERP and ASP concepts incorporated in the case. Mean (standard deviation) responses to the ERP and ASP questions, respectively, were 6.47 (0.75) and 6.44 (1.01). Mean responses were not significantly different across treatment conditions for either the ERP (F = 0.09, p = .77) or ASP (F = 0.48, p = .48) question. Based on manipulation check testing, the experimental manipulations were considered successful.

Preliminary Testing

We used MANCOVA to test the main and interactive effects of information system reliability factors (availability, security, integrity, and maintainability) on the (1) likelihood of recommending that the potential service recipient enter into a contractual agreement with the ASP firm and (2) comfort level with the reliability of the ASP firm's ERP system. The following four covariates were included in the model: the participant's (1) experience with ERP systems, (2) experience with ASPs, (3) experience with outsourcing, and (4) experience dealing with external auditors (1 = Extremely Low, 7 = Extremely High). Other factors included in the MANCOVA model were: (1) time of day the instrument was completed (AM or PM), (2) day of week the instrument was completed (1 - 5), (3) data collection week (1 - 3), (4) trainer (1 - 11), (5) subject matter of the training session (1 - 7), (6) firm size (1 - 3), and (7) industry (1 - 7).
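To illustrate this class of analysis, the following is a minimal sketch of such a MANCOVA in Python's statsmodels. The DataFrame df and all column names are our assumptions, not the authors': avail, secur, integ, and maint are 0/1 manipulation codes, the covariates are 7-point ratings, and only a representative subset of the control factors is shown.

    # MANCOVA: two response variables, four manipulated factors (with
    # interactions), continuous covariates, and categorical control factors.
    from statsmodels.multivariate.manova import MANOVA

    mod = MANOVA.from_formula(
        "likelihood + comfort ~ avail * secur * integ * maint"
        " + erp_exp + asp_exp + outsourcing_exp + auditor_exp"
        " + C(firm_size) + C(industry)",
        data=df,
    )
    print(mod.mv_test())  # multivariate tests (e.g., Wilks' lambda) for each term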

MANCOVA results indicated significant main effects for availability (p < .01), security (p < .01), integrity (p < .02), and maintainability (p < .01). All two-way interactions were nonsignificant, as the lowest p-value was .118 for the "security by integrity" interaction term. Marginal significance (p = .07) was obtained for one three-way interaction (availability by security by maintainability). All other three-way interactions and the four-way interaction were nonsignificant, as the lowest p-value was .130 for the four-way interaction.

All four covariates were nonsignificant in the MANCOVA model, as the lowest p-value was .43 for "experience with ERP systems." No significant differences in dependent variable responses were indicated for time of day (F = 0.18, p = .84), day of week (F = 0.77, p = .63), data collection week (F = 0.33, p = .86), trainer (F = 0.96, p = .51), subject matter of the training session (F = 0.68, p = .77), or industry (F = 0.95, p = .50). However, firm size was significant (F = 2.71, p = .03). Since all the potential covariates and other factors tested via MANCOVA were nonsignificant except for firm size, the following tests of hypotheses consider only the firm size variable.

Hypothesis 1

The first hypothesis (H1) anticipates significant main effects for each of the systems reliability assurances (availability, security, integrity, and maintainability) on the likelihood of recommending that the potential service recipient enter into a contractual agreement with the ASP. An ANOVA model was used to test this hypothesis. As shown on Table 3, there were significant main effects for availability, security, integrity, maintainability, and firm size on the participant's likelihood of contracting with the ASP firm. None of the interaction terms were significant.

The "likelihood" means in the presence (absence) of auditor assurances were as follows: 66.68 percent (35.58 percent) for availability, 66.24 percent (36.02 percent) for security, 53.38 percent (48.88 percent) for integrity, and 53.22 percent (49.04 percent) for maintainability. The mean "likelihood" assessments For firm size were as follows: small (56.71 percent), medium (49.92 percent), and large (46.76 percent). Duncan's multiple pairwise comparison test ([alpha] = .05) reveals that the mean "likelihood" assessment for small firms is significantly greater than medium and large firms, and the mean likelihood assessments of medium and large firms are not significantly different from each other. Based on the above analysis, H1 was supported, as participants were significantly more likely to recommend that their firm engage in an ASP relationship when auditor-provided assurances were provided for availability, security, integrity, and maintainability.

Treatment means and Duncan's multiple pairwise comparisons ([alpha] = .05) are also shown on Table 3. While one should not make inferences from experimental results with respect to specific weights placed on independent variables, it is interesting to note the percentages of variance accounted for by the significant assurances, based on the sum of squares, which are as follows: 28.78 percent (availability), 27.20 percent (security), 0.61 percent (integrity), and 0.52 percent (maintainability). Overall, the presence of auditor-provided assurances explained 57.11 percent of the experimental variance.
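The variance percentages above are each effect's sum of squares expressed as a share of the total sum of squares. Continuing the earlier sketch (same hypothetical DataFrame df and column names), the H1 ANOVA and this calculation might look as follows:

    # Four-factor ANOVA on the likelihood response, plus percent of total
    # variance (sum of squares) attributable to each effect.
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    fit = smf.ols("likelihood ~ avail * secur * integ * maint + C(firm_size)",
                  data=df).fit()
    table = anova_lm(fit, typ=2)
    pct_ss = 100 * table["sum_sq"] / table["sum_sq"].sum()
    print(table.assign(pct_ss=pct_ss))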

Hypothesis 2

Table 4 indicates significant main effects for availability (p < .01) and security (p < .01), and a marginally significant main effect for maintainability (p = .096), with regard to the participants' comfort level with the ASP firm's ERP system (H2). Additionally, marginally significant interactions are obtained for "availability by security by maintainability" (p = .085) and "availability by integrity by maintainability" (p = .093). Main effect means in the presence (absence) of auditor assurances are as follows: 5.68 (3.97) for availability, 5.69 (3.96) for security, 4.90 (4.75) for integrity, and 4.93 (4.72) for maintainability.

Also shown on Table 4 are treatment means and Duncan's multiple pairwise comparisons ([alpha] = .05). As mentioned earlier, while one should not rely on experimental results to determine specific weights placed on independent variables, it is interesting to note that the percentages of variance explained by the assurances, based on the sum of squares, are as follows: 21.65 percent (availability), 21.54 percent (security), 0.19 percent (integrity), and 0.33 percent (maintainability). The overall percentage of variance explained by the assurance manipulations was 43.71 percent.

The marginally significant three-way interactions were graphed and examined. There were no rank reversals in the interactions. Instead, the means were all moving in the same relative direction; that is, the presence of assurances increased mean "comfort" assessments in all cases. The interactions arise due to absolute changes in the magnitudes of increases. Since the relative effects of providing the assurances are consistent and the interactions are marginally significant, we deemed that further analysis and interpretation of the interactions were not warranted. Overall, based on the above analyses, H2 is only partially supported, as the availability and security manipulations were significant, the maintainability manipulation was marginally significant, and the integrity manipulation was nonsignificant.

Debriefing Questions

The wording of debriefing questions, along with statistical analyses, is shown on Table 5. None of the 12 debriefing question means are significantly different across treatment conditions. Additionally, all means are significantly different from the midpoint of the scale (4), except for Question #3, which deals with the participants' experience with outsourcing in general.

Debriefing question responses reveal that the participants have relatively low experience dealing with ERP systems and ASP firms. Conversely, their experience with independent auditors is relatively high. The participants tend not to trust management's assertions when auditor assurances are not provided, but do trust such assertions if accompanied by auditor assurances. Participants are not comfortable with outsourcing IS functions to a service provider. Regarding systems reliability factors, the participants are highly concerned with "availability" and "security" and somewhat concerned with "integrity" and "maintainability." Responses to the latter four debriefing questions are consistent with the experimental findings, as auditor assurances over availability and security evoked remarkably positive mean responses from the participants.

Post Hoc Analysis

We included a 17th treatment condition in the experiment, as we wanted to ascertain whether the participants would react differentially to two forms of auditor reports. The first form, which is suggested by the AICPA/CICA (2001), stated the following (emphasis added):

We have audited the accompanying assertion by the management of NextWave Corporation regarding the effectiveness of its controls over the availability, security, integrity, and maintainability of the ERP system....

The second form of the report stated (emphasis added):

We have audited the accompanying assertion by the management of NextWave Corporation regarding the reliability of the ERP system....

We tested for differences between a controls-oriented and reliability-oriented report because there has been considerable debate within the profession regarding whether users perceive one form to be more "comforting" than the other form. In these two treatment conditions, the auditor provided assurances on all four management assertions.

Sample sizes in the control-oriented and reliability-oriented report conditions were 24 and 33, respectively. Mean responses to the "likelihood of contracting with the ASP firm" variable were 84.58 percent (control-oriented) and 83.33 percent (reliability-oriented). The likelihood means were not significantly different from each other (t = 0.47, p = .64). Mean responses to the "comfort with the reliability of the ASP firm's ERP system" variable were 6.63 (control-oriented) and 6.58 (reliability-oriented). Statistical testing revealed no significant mean difference (t = 0.24, p = .25). Hence, the participants provided equivalent mean responses to both types of auditor reports.

Comparison of Assurance Derived from Individual Principles and All Principles

To further investigate the relative assurance that subjects derived from individual principles, we compared the amount of assurance derived from the sum of reports on individual principles with that derived from a report on all four principles. We used the two response variables "likelihood" (RV1) and "comfort" (RV2) as proxies for assurance. In theory, the assurance derived from an individual principle should be less than that derived from that principle when combined with other principles, since important interactions between one principle and the other three principles would be covered in a multi-principle examination, but would not necessarily be covered in a report on an individual principle. For example, as Figure 1 illustrates, controls over system maintainability could affect controls over each of the other three principles. Similarly, controls over availability and security could affect controls over integrity. Controls over security could also affect controls over availability. However, an examination of criteria related to one principle would not ordinarily address criteria related to other principles. This implies that, ceteris paribus, a one-principle examination would leave more control risks unaddressed for that principle than a multi-principle examination. To perform this comparison, we fitted a main effects model and a full effects model to the responses, as described below.

Main effects model: The model is as follows:

RV1 = a_0 + a_1*A + a_2*S + a_3*I + a_4*M + e. (1)

RV2 = b_0 + b_1*A + b_2*S + b_3*I + b_4*M + e. (2)

where RV1 and RV2 are the likelihood and comfort response variables, respectively; A, S, I, and M represent the presence of the individual principles in the 16 conditions considered by the subgroups of respondents; and ASIM represents the presence of all four principles. (1) Since our goal was to compare average(ASIM) with average(A + S + I + M), we deducted the intercepts to obtain the following comparisons:

[RV1(ASIM) - a_0] vs. [RV1(A) - a_0 + RV1(S) - a_0 + RV1(I) - a_0 + RV1(M) - a_0], and

[RV2(ASIM) - b_0] vs. [RV2(A) - b_0 + RV2(S) - b_0 + RV2(I) - b_0 + RV2(M) - b_0].

We used a regression analysis to obtain the following intercept estimates: a_0 = 14.516 and b_0 = 2.982123. Using these parameter estimates, we derived the following results:

For RV1, average(ASIM) = 70.07 percent < average(A + S + I + M) = 81.55 percent;

For RV2, average(ASIM) = 3.64 < average(A + S + I + M) = 5.06.

Both of these results indicate that the sum of the assurance derived from reports on individual principles exceeds the assurance derived from a report on all four principles.
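The comparison can be made concrete as follows; this sketch reflects our reading of the procedure (restricted to the 16 factorial cells) and reuses the hypothetical column names from earlier. The main effects regression supplies the no-assurance baseline a_0, and the baseline-adjusted mean of the four-principle cell is compared with the sum of the baseline-adjusted means of the four single-principle cells.

    # Main effects model: estimate the baseline, then compare baseline-adjusted
    # condition means for ASIM versus the four single-principle conditions.
    import statsmodels.formula.api as smf

    fit = smf.ols("likelihood ~ avail + secur + integ + maint", data=df).fit()
    a0 = fit.params["Intercept"]  # reported above as 14.516 for RV1

    def cell_mean(a, s, i, m):
        mask = ((df.avail == a) & (df.secur == s)
                & (df.integ == i) & (df.maint == m))
        return df.loc[mask, "likelihood"].mean()

    asim = cell_mean(1, 1, 1, 1) - a0  # reported: 70.07 percent
    singles = (cell_mean(1, 0, 0, 0) + cell_mean(0, 1, 0, 0)
               + cell_mean(0, 0, 1, 0) + cell_mean(0, 0, 0, 1)) - 4 * a0  # reported: 81.55
    print(asim < singles)  # True -> reports on individual principles are overweighted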

Full effects model: Using the same approach, we also fitted a full effects model, as follows:

RV1 = a_0 + a_1*A + a_2*S + a_3*I + a_4*M + a_5*AS + a_6*AI + a_7*AM + a_8*SI + a_9*SM + a_10*IM + a_11*ASI + a_12*ASM + a_13*AIM + a_14*SIM + a_15*ASIM + e. (3)

RV2 = b_0 + b_1*A + b_2*S + b_3*I + b_4*M + b_5*AS + b_6*AI + b_7*AM + b_8*SI + b_9*SM + b_10*IM + b_11*ASI + b_12*ASM + b_13*AIM + b_14*SIM + b_15*ASIM + e. (4)

The regression analysis provides intercept estimates a_0 = 7.667 percent and b_0 = 2.0333. In this full effects model, the "overweighting" result is even more striking than in the main effects model.

For RV1, average(ASIM) = 76.92 percent < average(A + S + I + M) = 108.95 percent;

For RV2, average(ASIM) = 4.59 < average(A + S + I + M) = 8.88.

V. DISCUSSION

This paper examines the impact of auditor-provided systems reliability assurance on potential service recipients. The study involved a 2 (availability assurance: present or absent) x 2 (security assurance: present or absent) x 2 (integrity assurance: present or absent) x 2 (maintainability assurance: present or absent) experimental design, which resulted in 16 treatment conditions. A 17th treatment condition was included wherein all assurances were present, but rather than issuing a controls-oriented opinion, the auditors issued a reliability-oriented opinion. The orders of management's assertions and the auditor's assurances were fully counterbalanced within each condition.

After reading background material, management's assertions, and the auditor's assurances, participants responded to two dependent variable items. One item asked participants to assess the likelihood (0 percent to 100 percent) that they would recommend that their company enter into a contractual arrangement with the application service provider (ASP) firm described in the case materials. The other item asked participants about their comfort level with the reliability of the ASP firm's information (ERP) system. Afterward, participants responded to manipulation check, debriefing, and demographic items.

Eleven consultants who worked for a large international consulting/training firm administered the experimental instrument during 62 training sessions that were held across the United States. The 481 participants represented middle- and upper-level managers across seven industries. Small-, medium-, and large-sized firms were included in the sample. The participants' position titles reflected seven functional areas, including general management, information systems, accounting/finance, production, human resources, sales/marketing, and purchasing.

Regarding the likelihood that the participant would recommend the ASP firm to management, significant main effects were obtained for availability, security, integrity, and maintainability, such that the presence of these assurances significantly increased the participants' likelihood assessments. The amount of variance explained by availability, security, integrity, and maintainability was 28.78 percent, 27.20 percent, 0.61 percent, and 0.52 percent, respectively. Firm size was also significant, indicating that small firms were more likely to recommend the ASP firm than either medium or large firms. Firm size did not interact with the assurance factors.

With respect to the assessed comfort level with the reliability of the ASP firm's ERP system, significant main effects were obtained for availability and security, marginal significance was achieved for maintainability, and the integrity assurance was nonsignificant. Once again, the presence of availability, security, and maintainability assurances significantly increased the participants' comfort levels. The amount of variance explained by the assurances was 21.65 percent (availability), 21.54 percent (security), 0.19 percent (integrity), and 0.33 percent (maintainability). There was no significant difference for "likelihood" or "comfort" assessments between the controls-oriented and reliability-oriented opinions.

An analysis of assurance responses for individual principles compared with all four principles indicated that subjects overweighed individual assurances. Such overweighting can lead service purchasers and other assurance report users to over-rely on reports on individual principles, relative to the assurance those reports actually convey and the work effort behind that assurance. This over-reliance effect is potentially risky to both users and practitioners and may need to be addressed by assurance practitioners through actions such as enhanced communications with purchasers of assurance services about the limitations of examinations of fewer than four principles and cautionary language in assurance reports on fewer than four principles.

Based on an analysis of debriefing questions, the participants' experience dealing with ERP systems and ASP firms is relatively low, but their experience with independent auditors is relatively high. The participants recorded a relatively low level of trust with respect to management's assertions when auditor assurances are not provided, but they recorded a relatively high level of trust if the assertions are accompanied by auditor assurances. The participants' comfort level with outsourcing IS functions to a service provider is somewhat low. The participants also indicated that they are highly concerned with the "availability" and "security" of a service provider's systems, and they were moderately concerned with the "integrity" and "maintainability" of such systems. Responses to the "availability" and "security" debriefing questions are consistent with the experimental findings, as auditor assurances over these two principles yielded remarkably positive reactions from the participants.

Assurance has been studied extensively in contexts where the demand for such assurance was based on government regulation, such as audits of financial statements of public companies. This study addresses assurance in a voluntary contracting context, where the demand for assurance is endogenously created by factors such as conflict of interest, remoteness, complexity, and consequence. System reliability is a concern for service providers, business partners, and users, as unreliable systems can yield disastrous consequences for service recipients. While the participants in this study did not actually engage in contracting, they reported significantly increased likelihood of contracting in the presence of auditor-provided system reliability assurance. Thus, this study provides ex ante evidence that concerns about system reliability can stimulate contracting demand for assurance services and that auditor-provided assurance on system reliability can significantly increase managers' comfort with outsourced systems.

A potential limitation of these findings is that the materials presented to the participants did not indicate that there was a potential cost of obtaining the assurance. Such a cost could be encountered if the ASP service provider passed on the costs of obtaining an independent assurance report to the service purchaser in the form of higher prices for the services provided. This would be a useful issue to consider in a future study, first to establish what the magnitude of the cost of system reliability assurance would be to the average purchaser, and second, whether it is material to the service purchaser relative to the cost of contracting with an unreliable ASP.

Future studies in this area should investigate whether all four system reliability assurances are necessary. Experimental results suggest that potential service recipients reacted most positively to assurances dealing with "availability" and "security." However, the "integrity" and "maintainability" assurances did not seem to matter much to the participants. We have no debriefing information that can shed light on this issue. At the design stage, we considered adding more extensive debriefing questions that might have illuminated this issue. However, because the participants were difficult to access, their time was valuable, and the consulting firm that helped distribute the case materials did not want to overburden its clients, we could not extend the questionnaire with open-ended questions that would have required more of the participants' time.

Given our limited information about the participants' apparent lack of concern about integrity and maintainability, and given the nature of the response variables, we recognize that we need to be cautious when comparing relative effect sizes of the experimental treatment conditions. However, the findings of the current study raise questions about whether the AICPA/CICA should expend a great deal of time and effort developing "integrity" and "maintainability" standards and criteria if external parties place very little value on such assurances.

Another area for future research lies in understanding why potential service recipients believe that "availability" and "security" are paramount concerns with ASP firms. Such studies can help the accounting profession to refine future business-to-consumer and business-to-business assurance services offerings.

FIGURE 1

Relationships among SysTrust Criteria

Integrity related to Availability: "Minor processing errors, minor
destruction of records, and major disruptions of system processing that
might impair system availability" may also be "potential impairments" to
the completeness, accuracy, and timeliness of the system processing
integrity.

I3.2   There is a process to identify potential impairments to the
       system's ongoing ability to address the documented system
       processing integrity objectives, policies, and standards and to
       take appropriate action.
A2.3   Continuity provisions address minor processing errors, minor
       destruction of records, and major disruptions of system
       processing that might impair system availability.

Integrity related to Security: The security authorization issues
addressed may relate to the authorization part of system processing
integrity.

I2.2   The information processing integrity procedures related to
       information inputs are consistent with the documented system
       processing integrity requirements.
S2.2   There are procedures to identify and authenticate all users
       authorized to access the system.

I2.3   There are procedures to ensure that system processing is
       complete, accurate, timely, and authorized.
S2.3   There are procedures to grant system access privileges to users
       in accordance with the policies and standards for granting such
       privileges.

I2.4   The information processing integrity procedures related to
       information outputs are consistent with the documented system
       processing integrity requirements.
S2.4   There are procedures to restrict access to computer processing
       output to authorized users.

S2.9   There are procedures to segregate incompatible functions within
       the system through security authorizations.

Availability related to Security: Security risks can cause "potential
risks" and "major disruptions" to system availability.

A2.2   There are procedures to protect the system against potential
       risks that might disrupt system operations and impair system
       availability.
S2.6   There are procedures to protect external access points against
       unauthorized logical access.

A2.3   Continuity provisions address minor processing errors, minor
       destruction of records, and major disruptions of system
       processing that might impair system availability.
S2.7   There are procedures to protect the system against infection by
       computer viruses, malicious codes, and unauthorized software.

S2.8   Threats of sabotage, terrorism, vandalism, and other physical
       attacks have been considered when locating the system.

S2.10  There are procedures to protect the system against unauthorized
       physical access.

Availability/Security/Integrity related to Maintainability:
Maintainability risks affect the other three principles.

A/S/I 2.1   The acquisition, implementation, configuration, and
            management of system components related to system
            availability/security/processing integrity are consistent
            with documented system availability/security/processing
            integrity objectives, policies, and standards.
M2.2        Procedures to manage, schedule, and document all planned
            changes to the system are applied to modifications of
            system components to maintain documented system
            availability, security, and integrity consistent with
            documented objectives, policies, and standards.

A/S/I 3.3   Environmental and technological changes are monitored and
            their impact on system availability/security/processing
            integrity is periodically assessed on a timely basis.
TABLE 1

Sample Demographics

                                               Firm Size (a)
Industry (based on one-digit SIC  Small     Medium    Large     Totals
 codes)

Construction                        3         17        8         28
Manufacturing                       4         25       23         52
Transportation, Communications,     7         47       27         81
 Electric
Gas and Sanitary Service            0         19        6         25
Retail Trade                       10         51       32         93
Finance, Insurance, and            12         78       56        146
 Real Estate
Services                            3         29       24         56

Totals                             39        266      176        481

                                  Level Achieved
                      Under      Some     Graduate
College Degree       Graduate  Graduate    Degree   Totals

Marketing               16        5          18       39
Management              69       20          49      138
Finance                 36       11          35       82
Accounting              54       31          44      129
Information Systems     35       10          28       73
Economics               10        1           4       15

Other                    3        1           1        5

Totals                 223       79         179      481

                            Management Level
Position Title       Middle    Upper     Totals

General Management    61        109       170
Information Systems   33         47        80
Accounting/Finance    37         64       101
Production             0          0         0
Human Resources       14         27        41
Sales/Marketing        9         14        23
Purchasing            19         47        66

Totals               173        308       481

(a) Firm size was self-assessed based on gross sales (including all
subsidiaries, foreign and domestic). First, sales were subdivided into
quartiles by one-digit SIC code using Compustat. Then, sales were placed
into three categories: small (from zero through the first quartile),
medium (second and third quartiles), and large (fourth quartile).
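
To illustrate note (a), the size classification could be reproduced along the following lines. This is a minimal Python (pandas) sketch; the column names, SIC codes, and sales figures are hypothetical, not the study's actual Compustat data.

import pandas as pd

# Hypothetical Compustat-style extract: one-digit SIC code and gross
# sales (all subsidiaries) for each respondent's firm. Values made up.
firms = pd.DataFrame({
    "sic": [4, 4, 4, 4, 6, 6, 6, 6],
    "gross_sales": [120.0, 560.0, 2400.0, 9800.0,
                    75.0, 310.0, 980.0, 5100.0],
})

def size_category(sales):
    """Bucket sales within one SIC group: small (through Q1),
    medium (Q1 to Q3), large (above Q3)."""
    q1, q3 = sales.quantile(0.25), sales.quantile(0.75)
    return pd.cut(sales,
                  bins=[-float("inf"), q1, q3, float("inf")],
                  labels=["small", "medium", "large"])

# Quartiles are computed within each one-digit SIC code, as in note (a).
firms["size"] = (firms.groupby("sic", group_keys=False)["gross_sales"]
                      .apply(size_category))
print(firms)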
TABLE 2

Manipulation Check Response Means (Standard Deviations)

Assurance         Assurance    Assurance
Factor             Present      Absent        t     p

Availability     6.91 (0.47)  1.18 (0.39)  145.83  <.01
Security         6.87 (0.67)  1.11 (0.32)  121.78  <.01
Integrity        6.98 (0.54)  1.52 (0.86)   80.27  <.01
Maintainability  6.88 (0.52)  1.37 (0.48)  120.07  <.01
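
The t statistics in Table 2 compare responses between the assurance-present and assurance-absent groups for each factor. A minimal sketch of this kind of check follows; the seven-point response vectors below are hypothetical, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical manipulation-check responses (1-7 scale) for one factor.
present = np.array([7, 7, 6, 7, 7, 6, 7], dtype=float)
absent = np.array([1, 1, 2, 1, 1, 2, 1], dtype=float)

# Independent-samples t test, as in Table 2.
t, p = stats.ttest_ind(present, absent)
print(f"t = {t:.2f}, p = {p:.4f}")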
TABLE 3

ANOVA Test Results for Likelihood of Contracting with the ASP Firm

Source               df   Sum-Squares  F-Ratio  p-value

Availability (A)       1    107,084    305.66    .001
Security (S)           1    101,194    288.85    .001
Integrity (I)          1      2,255      6.44    .011
Maintainability (M)    1      1,922      5.49    .019
Firm Size              2      3,024      4.32    .014
A*S                    1        136      0.39    .533
A*I                    1        136      0.39    .533
A*M                    1        183      0.52    .469
S*I                    1        574      1.64    .201
S*M                    1        197      0.56    .453
I*M                    1          5      0.01    .908
A*S*I                  1        403      1.15    .283
A*S*M                  1        285      0.81    .368
A*I*M                  1        355      1.01    .314
S*I*M                  1        814      2.32    .127
A*S*I*M                1        413      1.18    .278

Error                430    150,645

Total (Adj.)         447    372,046

Results of Duncan's multiple pairwise comparison test ([alpha] = .05).
Availability  Security  Integrity  Maintainability  Likelihood Means (a)

Absent         Absent     Absent        Absent            7.67 (a)
Absent         Absent     Absent       Present           18.44 (b)
Absent         Absent    Present        Absent           23.08 (b)
Absent         Absent    Present       Present           25.52 (b)
Absent        Present     Absent        Absent           43.33 (c)
Absent        Present    Present        Absent           46.21 (c)
Absent        Present     Absent       Present           46.80 (c)
Absent        Present    Present       Present           47.86 (c)
Present        Absent     Absent        Absent           56.67 (d)
Present        Absent     Absent       Present           57.10 (d)
Present        Absent    Present       Present           58.08 (d)
Present        Absent    Present        Absent           59.62 (d)
Present       Present     Absent       Present           79.03 (e)
Present       Present    Present        Absent           80.38 (e)
Present       Present     Absent        Absent           81.07 (e)
Present       Present    Present       Present           84.58 (e)

(a) Different superscripts indicate significant differences at [alpha] =
.05.
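
For readers who want to see the shape of this analysis, the sketch below fits a 2 x 2 x 2 x 2 between-subjects factorial (plus a three-level firm-size factor) in Python with statsmodels. The data frame is synthetic, with coefficients chosen only to loosely echo the pattern of cell means above; it is an illustration, not the authors' data or code. Sum-to-zero contrasts are used so the Type III decomposition matches a conventional factorial ANOVA table.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 448  # participants across the 16 factorial cells

# Synthetic between-subjects design: 0/1 assurance factors plus firm size.
resp = pd.DataFrame({
    "avail": rng.integers(0, 2, n),
    "sec": rng.integers(0, 2, n),
    "integ": rng.integers(0, 2, n),
    "maint": rng.integers(0, 2, n),
    "firm_size": rng.choice(["small", "medium", "large"], n),
})
# Illustrative 0-100 likelihood response; availability and security dominate.
resp["likelihood"] = (8 + 38 * resp["avail"] + 33 * resp["sec"]
                      + 4 * resp["integ"] + 3 * resp["maint"]
                      + rng.normal(0, 18, n))

model = smf.ols(
    "likelihood ~ C(firm_size, Sum)"
    " + C(avail, Sum) * C(sec, Sum) * C(integ, Sum) * C(maint, Sum)",
    data=resp,
).fit()
print(sm.stats.anova_lm(model, typ=3))  # Type III sums of squares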
TABLE 4

ANOVA Test Results for Confidence in the ASP Firm's ERP System

Source                df  Sum-Squares  F-Ratio  p-value

Availability (A)       1       323.62   179.97   .001
Security (S)           1       322.04   179.10   .001
Integrity (I)          1         2.79     1.55   .213
Maintainability (M)    1         4.97     2.77   .096
Firm Size              2         3.67     1.02   .362
A*S                    1         1.76     0.98   .322
A*I                    1         4.47     2.49   .115
A*M                    1         1.64     0.91   .340
S*I                    1         4.57     2.54   .111
S*M                    1         1.75     0.97   .324
I*M                    1         3.81     2.12   .145
A*S*I                  1         4.55     2.53   .112
A*S*M                  1         5.32     2.96   .085
A*I*M                  1         5.07     2.82   .093
S*I*M                  1         2.60     1.44   .230
A*S*I*M                1         4.23     2.35   .125

Error                430       773.20

Total (Adj.)         447      1495.00

Results of Duncan's multiple pairwise comparison test ([alpha] = .05).
Availability  Security  Integrity  Maintainability  Comfort Means (a)

Absent         Absent     Absent        Absent          1.60 (a)
Absent         Absent    Present        Absent          2.96 (b)
Absent         Absent     Absent       Present          3.06 (b)
Absent         Absent    Present       Present          3.14 (b)
Absent        Present     Absent        Absent          4.40 (c)
Present        Absent     Absent        Absent          4.44 (c)
Present        Absent    Present        Absent          4.92 (c)
Absent        Present    Present       Present          4.93 (c)
Present        Absent    Present       Present          4.96 (c)
Present        Absent     Absent       Present          4.97 (c)
Absent        Present    Present        Absent          5.00 (c)
Absent        Present     Absent       Present          5.04 (c)
Present       Present    Present        Absent          6.42 (d)
Present       Present     Absent        Absent          6.46 (d)
Present       Present    Present       Present          6.63 (d)
Present       Present     Absent       Present          6.68 (d)

(a) Different superscripts indicate significant differences at [alpha] =
.05.
TABLE 5

Debriefing Items

                                                       Statistics
                                                    Standard
                                          Mean (a)  Deviation  F (b)    p

 1. My experience dealing with ERP          3.27      1.49      1.35   .17
    systems is best characterized as:
     (1 = Extremely Low; 7 = Extremely High)

 2. My experience dealing with              2.07      1.16      0.47   .95
    application service providers is
    best characterized as:
     (1 = Extremely Low; 7 = Extremely High)

 3. My experience with outsourcing in       3.93      1.10      1.29   .20
    general is best characterized as:
     (1 = Extremely Low; 7 = Extremely High)

 4. My experience dealing with              5.44      1.23      1.35   .17
    independent auditors is best
    characterized as:
     (1 = Extremely Low; 7 = Extremely High)

 5. I trust the integrity of reports        6.92      0.57      1.07   .38
    issued by independent auditors.
     (1 = Strongly Disagree; 7 = Strongly Agree)

 6. I trust the assertions made by          3.52      0.95      1.40   .14
    management of companies (other than
    my company) that have not been
    assured by independent auditors.
     (1 = Strongly Disagree; 7 = Strongly Agree)

 7. I trust management's assertions         6.84      0.61      0.87   .60
    that have been assured by an
    independent auditor.
     (1 = Strongly Disagree; 7 = Strongly Agree)

 8. I am comfortable with the general       2.82      0.38      1.40   .14
    notion of outsourcing IS functions
    to a service provider.
     (1 = Strongly Disagree; 7 = Strongly Agree)

 9. Extent of concern that the service      6.85      0.54      0.99   .47
    provider's systems will be available
    when needed.
     (1 = Highly Unconcerned; 7 = Highly Concerned)

10. Extent of concern that the service      6.11      1.46      0.23   .99
    provider's systems will be secure.
     (1 = Highly Unconcerned; 7 = Highly Concerned)

11. Extent of concern that the service      4.88      1.62      0.04   .99
    provider's systems will process your
    information with integrity (meaning
    that processing is complete,
    accurate, timely, and authorized).
     (1 = Highly Unconcerned; 7 = Highly Concerned)

12. Extent of concern that the service      4.68      1.77      0.16   .99
    provider's systems will be properly
    updated (meaning that updates will
    not detract from system availability,
    security, and integrity).
     (1 = Highly Unconcerned; 7 = Highly Concerned)

(a) Based on t tests, all means are significantly different from the
midpoint of the scale (4) at p < .01, except for debriefing question #3
(t = 1.341, p = .16).

(b) F-ratios and p-values reflect the results of ANOVA testing of means
to determine if responses differ across treatment conditions.


(1.) There are 16 conditions (the 17th condition, on reliability, is not considered in this analysis): 1 = None; 2 = A; 3 = S; 4 = I; 5 = M; 6 = AS; 7 = AI; 8 = AM; 9 = SI; 10 = SM; 11 = IM; 12 = ASI; 13 = ASM; 14 = AIM; 15 = SIM; 16 = ASIM.

REFERENCES

Abdel-khalik, A. R. 1993. Why do private companies demand auditing? A case for organizational loss of control. Journal of Accounting, Auditing and Finance 8 (Winter): 31-53.

American Accounting Association (AAA). 1973. Committee on Basic Auditing Concepts. A Statement of Basic Auditing Concepts. Sarasota, FL: American Accounting Association.

American Institute of Certified Public Accountants (AICPA), Special Committee on Assurance Services Information Technology Subcommittee. 1996. The Effect of Information Technology on the Assurance Services Marketplace. New York, NY: AICPA.

-----. 2001. Attestation Standards. Statement on Standards for Attestation Engagements No. 10. New York, NY: AICPA.

-----, and the Canadian Institute of Chartered Accountants (CICA). 2001. AICPA/CICA SysTrust[TM] Principles and Criteria for Systems Reliability. Version 2.0. New York, NY: AICPA.

Balvers, R. J., B. McDonald, and R. E. Miller. 1988. Underpricing of new issues and the choice of auditor as a signal of investment banker reputation. The Accounting Review 63 (October): 605-622.

Beatty, R. P. 1989. Auditor reputation and the pricing of initial public offerings. The Accounting Review 64 (October): 693-709.

Boritz, J. E., E. Mackler, and D. McPhie. 1999. Reporting on systems reliability. Journal of Accountancy (November): 75-87.

Canadian Institute of Chartered Accountants (CICA). 2001. CICA Handbook. S. 5025: Standards for Assurance Engagements. Toronto, Ontario: CICA.

Chow, C. W. 1982. The demand for external auditing: Size, debt and ownership influences. The Accounting Review 57 (April): 272-291.

-----, and S. J. Rice. 1982. Qualified audit opinions and share prices: An investigation. Auditing: A Journal of Practice & Theory 1 (Winter): 35-53.

Dopuch, N., R. W. Holthausen, and R. W. Leftwich. 1986. Abnormal stock returns associated with media disclosures of "subject to" qualified audit opinions. Journal of Accounting and Economics 8 (June): 93-117.

Elliot, R. K. 1998. Assurance services and the audit heritage. Auditing: A Journal of Practice & Theory 17 (Supplement): 1-7.

Fama, E. F., and A. B. Laffer. 1971. Information and capital markets. Journal of Business 44 (July): 289-298.

Kehoe, R., and A. Jarvis. 1996. ISO 9000-3: A Tool for Software Product and Process Improvement. New York, NY: Springer-Verlag.

Lyu, M. R. 1996. Handbook of Software Reliability Engineering. New York, NY: McGraw-Hill.

McPhie, D. 2000. AICPA/CICA SysTrust principles and criteria. Journal of Information Systems (Supplement): 1-8.

Schipper, K. 1991. Commentary on analysts' forecasts. Accounting Horizons 5 (December): 105-121.

Wallace, W. A. 1980. The Economic Role of the Audit in Free and Regulated Markets. New York, NY: Touche Ross & Co.

Watts, R. L., and J. L. Zimmerman. 1986. Positive Accounting Theory. Englewood Cliffs, NJ: Prentice Hall.

Willenborg, M. 1999. Empirical analysis of the economic demand for auditing in the initial public offerings market. Journal of Accounting Research 37 (Spring): 225-238.

We acknowledge the research assistance provided by Min Qian and the financial support provided by the University of Waterloo Centre for Information Systems Assurance.

RELATED ARTICLE: Discussion of Investigating the Impact of Auditor-Provided Systems Reliability Assurance on Potential Service Recipients

Emil J. Ragones

I believe that the results of the Boritz and Hunton (2002) study identify some interesting challenges for the AICPA and the CICA in their efforts to gain greater acceptance and understanding of the SysTrust[TM] Principles and Criteria and of how these interact in a reliable system. The authors obtained what I consider to be an excellent response rate of 81 percent (481 responses from a sample population of 594) from a group of middle- and upper-level managers.

These sample participants indicated some experience dealing with ERP systems and ASP firms and a fairly high level of experience dealing with contract negotiations. They also understood the benefits of having independent assurance by a CPA or CA on the various SysTrust principles and criteria. What they apparently did not completely understand was how the four SysTrust principles and criteria interrelate and contribute to a reliable system. This latter point is evidenced by the participants' greater concern for the availability and security principles and by their overweighting of assurance reports on individual principles, as compared to a four-principle reliability report.

The study results may also have been affected by the sample participants' somewhat low comfort level with outsourcing IS functions and by the fact that the cost of independent assurance was not included as a variable.

When version 1.0 of the SysTrust principles and criteria was introduced, all four principles and criteria needed to be addressed and satisfied to obtain an independent attestation examination report. With the issuance of SysTrust version 2.0, the CPA/CA could report on any one or all four of the SysTrust principles and criteria; however, all four principles and criteria needed to be satisfied for a system to be deemed reliable. Perhaps the option of not reporting on all four principles, or the absence of an explanation of how the four principles interact, contributed to the study's results.

Although the presence of independent CPA/CA assurance increased the likelihood of recommending an ASP to management, this study indicates that confusion exists in the marketplace. CPAs/CAs need to understand the needs of the users of a proposed SysTrust report and work with their clients to identify the applicable SysTrust principles and criteria. They also need to better educate the marketplace on the various types of assurance reports available and on who their intended users are. Clearly, a SysTrust report has the potential to address the systems reliability assurance needs of the marketplace as well as to replace or complement existing SAS No. 70 or Section 5900 reports.

REFERENCE

Boritz, J. E., and J. E. Hunton. 2002. Investigating the impact of auditor-provided systems reliability assurance on potential service recipients. Journal of Information Systems (Supplement): 69-87.

Discussion of Investigating the Impact of Auditor-Provided Systems Reliability Assurance on Potential Service Recipients

Glen L. Gray

For at least the last two decades the AICPA and CICA have been reporting that the accounting profession is going through profound changes. These findings were formalized in the report of the Special Committee on Assurance Services (frequently referred to as the Elliott Committee after its chair, Robert Elliott). The report indicated that traditional A&A (accounting and auditing) were becoming commodities and that revenue growth for these services was flat. Instead of recommending that practitioners pursue totally new areas of services, the report recommended building on current reputation and skills. Practitioners have the reputation for being independent, objective, and trustworthy. In addition, they have the skills necessary to conduct attestation, assurance, and audit engagements.

Prior to the publication of the Elliott Report, an increasing number of practitioners were already conducting attestation and assurance engagements outside the traditional domain of financial audits. The report encouraged even more attestation and assurance activities. It listed over 200 types of potential assurance engagements collected from the Committee's interviews with practitioners and corporate representatives, and it discussed six potential assurance engagements in more detail:

1. Electronic commerce

2. Information system reliability

3. Risk assessment

4. Business performance measurement

5. Health care performance measures

6. Care for the elderly

Soon after the report was published separate task forces were formed to develop services for each domain. For example, the Electronic Commerce Assurance Services Task Force developed WebTrust[TM] and the System Reliability Task Force developed SysTrust[TM].

Although WebTrust and SysTrust have similarities (both address information technology) and differences (WebTrust specifically focuses on electronic commerce and SysTrust addresses any IT setting), both have the same basic critical success factors:

1. Practitioners invest in improving their technology skills. Both WebTrust and SysTrust have significant technology components. To offer these services, practitioners have to either learn those skills or hire other professionals with those skills. Besides the time and CPE costs of learning the new skills, practitioners generally face high opportunity costs from lost billable hours while pursuing these skills. As such, practitioners need to do their own cost-benefit analysis to determine whether they should go forward in these areas.

2. The marketplace values those assurance services. Somebody has to pay for these services. The ultimate payer of these services must determine the relative value for these services and whether the value outweighs the associated costs.

3. Accounting practitioners have a competitive advantage in offering the services. A wide variety of companies and professionals in addition to CPAs/CAs are providing third-party assurance services. Internal auditors could also provide similar services. Will the marketplace think of CPAs/CAs when they seek out these services? Or, said another way, how do CPAs/CAs convince the marketplace to call them first?

The Boritz and Hunton (2002) paper primarily addresses item 2. The paper represents an important area of research and provides a valuable starting point for additional research. The paper indicates that careful thought went into designing the questionnaire and the research design in general. The questionnaire was pilot-tested. Multiple questionnaire layouts were created to reduce any bias that might be introduced by the order in which items appear in the questionnaire. A variety of statistical tests were performed on the data to test for possible biases in the subsequent questionnaire responses.

It is always hard to involve "real-world" representatives in academic research--and it is almost impossible to attain the 81 percent response rate that the authors were able to achieve. Using participants at management seminars was an innovative approach to obtaining research participants. Having a cover letter from the organization's CEO was an influential way of demonstrating the importance of the research. The fact that 81 percent of the participants were willing to stay after attending the seminars and complete the questionnaire is testament to the effectiveness of the investigators' approach to data collection.

The results are intriguing and important. However, our ability to generalize from this scenario to other situations and samples is limited. Future research should address this by providing more nuanced scenarios and more variety. The remaining paragraphs raise questions, issues, and suggestions regarding the research presented in the paper. Because the Boritz and Hunton study established an important line of research, my comments are directed specifically to those researchers who want to build on this paper and conduct additional research in this domain.

Open-Ended Questions

The questionnaire presents a scenario about a fictional company considering whether to purchase services from an ASP called NextWave in order to help the firm "maximize the potential benefits of using ERP"; "after many...meetings" the company decided that the benefits would outweigh the costs. The scenario included descriptions of NextWave's ERP system and related applications, computer and communications infrastructure, people and procedures, and data. It also included management's assertions regarding various combinations of availability, security, integrity, and maintainability of NextWave's ERP systems. These assertions differed across scenarios. In addition, some of the scenarios included an auditor's report assuring or attesting to management's assertions. The questionnaire also included the 58 SysTrust criteria.

At this point, the participants were asked, "How likely is it that you would recommend your company to enter into a contractual agreement with NextWave to process your company's information via the ERP system?" (0 percent to 100 percent likelihood), followed by, "What is your comfort level with respect to the reliability of NextWave's ERP system?" (1 = Extremely Low, 7 = Extremely High).

I thought about how I would answer these two questions. Without the auditor's report, my likelihood of recommending NextWave would be 0 percent. Although NextWave has 150 employees and appears to have solid technology, I could not recommend NextWave without seeing its financials, an overview of the company's top management (Who are they? Where did they come from? What is the depth of management? What is management stability or turnover?), and without contacting some of its current customers. Many of the current ASPs are rapidly burning through venture capital or IPO monies and are having troubles, and others have gone out of business. So, bottom line, I want to see financials at a minimum because I do not want to be the one who recommends an ASP--particularly one running mission-critical ERP software--that files for bankruptcy soon after my recommendation.

Regarding the second question, I would also indicate a low comfort score. Although NextWave's hardware and software are probably highly reliable, without seeing the financials, etc., I have concerns about NextWave's financial ability to stay in business.

If my scenario included the auditor's attestation/assurance report, I would give higher scores for both questions, since the report at least indicates that NextWave has enough money to hire the auditors. However, I would still not recommend NextWave (i.e., assign a likelihood > 50 percent) without the information I listed before.

I have given the reasoning underlying my answers. We do not know, however, why the participants in this study selected their answers. Some participants (like me) may have read more into the scenario than was intended or anticipated. This is a concern for all research that uses scenarios.

Using an ASP may have been a particular hot button with some of the participants. For example, regular readers of business publications (Wall Street Journal, Business Week, etc.) may have seen the many articles on ASPs filing for bankruptcy--even those with major corporate backing. For example, Pandesic, a joint venture of Intel and SAP that had 400 employees, abruptly announced in August 2000 that it was closing its business, leaving 104 customers in need of a replacement ASP. For some of the ASPs still in business, stock prices have dropped more than 80 percent from their highs. Perhaps participants with IS backgrounds see ASPs as a threat to their in-house positions. These people would respond to the questionnaire based on their attitudes toward ASPs in general, with the other information presented in the scenario given secondary consideration.

These considerations introduce alternative explanations for the observed findings. In addition, they limit our ability to generalize from this sample to other samples or situations.

The authors stated that they did not include open-ended questions because of the limited time they had access to the participants. Future research should include open-ended questions in order to determine the reasoning underlying responses. Answers to open-ended questions may indicate to the researcher that the participants misunderstood a question or a scenario was incomplete. Alternatively, focus groups with small groups of participants provide another means to obtain such information.

ASPs as the Scenario

Following on with my prior comments, I have two recommendations regarding ASPs. First, if an ASP is used as the basis of the scenario, then I suggest incorporating a broader range of financial and management information. For example, computer trade publications frequently include checklists or questions that should be considered in evaluating ASPs. Including this type of information may make the scenario more realistic.

Future research should also consider using scenarios that feature other situations, such as making a recommendation about adding another company to a B2B supply chain such that this new company's computers will be tightly linked to the computers already in the supply chain. This will increase external validity.

Trade-Offs and Consequences of the Decisions

While this scenario presented only a single option, real-world decision making involves choosing among a variety of less than perfect options where trade-offs will have to be considered. Without alternatives and trade-offs, the participants do not know the consequences of their decisions. For example, if I do not recommend NextWave, does that mean that management will locate another ASP to evaluate? Does it mean they may reconsider bringing ERP in house?

In future research, participants might be asked to select between pairs of options or rank several options, and the researcher could then determine what value the inclusion or absence of the auditor's report seemed to contribute to those selections or rankings.

Data Collection

Participants were attendees at management seminars who volunteered to stay after the seminar ended. Participants may have been hungry, in a hurry, or tired. If so, they may not have carefully considered the 58 criteria that underlie the four SysTrust principles. Consequently, I feel somewhat uncomfortable concluding anything specific about SysTrust based on this study. I worry that it was merely the participants' prior internal definitions of availability, security, integrity, and maintainability that were driving the participants' opinions--not what those terms specifically capture in terms of the specific SysTrust criteria. This is a possible explanation for the different weights for the four principles reported in Boritz and Hunton regarding H1 and H2.

Of course there is an important trade-off here. The authors could have asked for the list of seminar attendees and then mailed the questionnaires to the attendees. The participants would have had more time to complete the questionnaires, but the authors would have been lucky to get a 20 percent response rate instead of the 81 percent they did achieve.

Considerations for future research would include: using participants familiar with the 58 criteria; providing participants with a short introduction to the criteria as opposed to just attaching them to the questionnaire; or triangulating results from mailed surveys, surveys administered in workshops, and qualitative methods such as interviews and focus groups.

What Is the Value?

One important issue with this research, which the authors recognize in their concluding paragraphs, is that the auditor's report was free. Any rational decision maker would prefer more information to less--particularly if the information is free and comes from an independent third party. Most prior surveys indicate that people have a positive opinion of CPAs, so getting a free report from a CPA is even better. As such, it was not totally surprising that the authors' findings regarding the participants' reactions to the auditor's report were consistent with the literature cited by the authors, which indicates that investors react positively to audited information and to more information. The critical success factor question is not whether people feel better getting the auditor's report, but what they are willing to pay for it. The current study explores the "reaction" to SysTrust, not the "value" of SysTrust.

Future research should address the value question. The value could be solicited directly by giving participants actual dollar choices for various scenarios. Alternatively, as suggested before, participants could be asked to select between pairs of alternatives (e.g., ASP vs. in-house) where the auditor's report is included or absent. Participant choices can then be used to determine an implied value of the report.
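
One concrete way to turn such choices into an implied dollar value is a binary-choice (logit) model in which both the price and the presence of the auditor's report enter the utility of choosing the ASP; the implied value is the ratio of the two coefficients. The sketch below uses entirely synthetic data, and all magnitudes are made up for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400

# Each synthetic row: an ASP offer at some price, with or without an
# auditor's report, accepted (1) or rejected (0) vs. an in-house default.
price = rng.uniform(50.0, 150.0, n)           # annual fee, $000s (made up)
report = rng.integers(0, 2, n).astype(float)  # auditor's report included?
utility = 2.0 + 1.5 * report - 0.03 * price + rng.logistic(0.0, 1.0, n)
chose_asp = (utility > 0).astype(int)

X = sm.add_constant(np.column_stack([report, price]))
fit = sm.Logit(chose_asp, X).fit(disp=False)
b_const, b_report, b_price = fit.params

# Implied value of the report = -(report coefficient)/(price coefficient).
print(f"implied value of the report: {-b_report / b_price:.1f} ($000s)")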

Manipulation Check

I am a little confused by the manipulation check reported on in the paper. Participants were asked how strongly they agreed or disagreed (on a seven-point scale) with statements such as "The independent auditors reported on the availability of the ERP system." Since they had the complete questionnaire, they could easily look back at the scenario to see if that statement was true or not true. I do not understand the need for a seven-point scale for a binary question. What is more disconcerting is that some participants said the statement was true when it was not true. Does this mean the participants were so much in a hurry to finish the questionnaire that they did not take the time to look back in the questionnaire to ascertain the correct answer? If this is true, then what does this imply about their other questionnaire responses?

Post Hoc Analysis

Participants were given one of two versions of the auditor's report where a few words were changed in part of one sentence. The modified words indicated whether the auditor's examination was related to "control" or "reliability." Results indicated that the participants did not react differently to the two versions. We cannot determine if the lack of difference reflects the substance of the reports or the possibility that the participants did not actually read the report. Because the report is rather long and has that "legalese" look about it, I would venture that many participants only skimmed it. If so, then they were not reading carefully enough for the subtle differences to have any impact.

CONCLUSIONS

The research presented in this paper provides an important contribution to the assurance services domain. The AICPA, CICA, and many practitioners have made significant investments in an attempt to expand the assurance services that practitioners provide. The reported research focuses primarily on whether the marketplace reacts positively to the inclusion of an auditor's report in an IT decision. The researchers used an innovative approach to obtain access to real-world participants--and they were able to achieve an 81 percent response rate. The experimental design based on SysTrust principles and criteria was well conceived and pilot-tested. The data analysis appeared appropriate.

As said before, the authors' results are intriguing and important. However, our ability to generalize from this scenario to other situations and samples is limited. Future research should address this by providing more nuanced scenarios and more variety. Researchers who want to build on this research should include one or more of the following suggestions:

* Add open-ended questions or qualitative approaches to help determine the "why" behind the questionnaire answers.

* If ASPs are used for future research, then consider adding financial and managerial information in addition to mostly technological information.

* Include a scenario other than ASPs.

* Present alternatives (e.g., ASP vs. in-house) with trade-offs and consequences.

* Try alternative approaches such as direct mailing. The response rate will be lower, but those who do respond will have more time to complete a more comprehensive questionnaire.

* Create scenarios and questions to try to determine the value participants might place on the auditor's report.

* Include questions that would explore what I listed as critical success factor #3, namely, whether accounting practitioners have a competitive advantage in offering IT assurance services.

REFERENCE

Boritz, J. E., and J. E. Hunton. 2002. Investigating the impact of auditor-provided systems reliability assurance on potential service recipients. Journal of Information Systems (Supplement): 69-87.

Reply to Discussions of Investigating the Impact of Auditor-Provided Systems Reliability Assurance on Potential Service Recipients

J. Efrim Boritz

James E. Hunton

We appreciate the comments and feedback received from participants at the 2001 University of Waterloo Research Symposium on Information Systems Assurance and, in particular, the detailed discussant comments by Glen Gray and Emil J. Ragones. Many of the comments represent potentially valuable extensions of the work started in this paper and we thank the discussants for their suggestions. We have some clarifications to offer in connection with several of the points raised by Gray (2002).

Data Collection

Gray's (2002) comments in this section imply that all participants were given the same set of 58 criteria to work with. This is not the case. Each participant's instrument contained only the subset of criteria related to the assurance issue addressed in that participant's material. In other words, if the participant's assurance scenario addressed the availability principle, then only the availability criteria were provided, and so on. Thus, there were 17 different presentations of the criteria, corresponding to the 17 different assurance scenarios described in the research method section of the study.

What Is the Value?

Gray (2002) asserts that our finding of significant positive responses to the presence of assurance about system reliability is evidence of a reaction to, rather than of the value of, the assurance report. We disagree. A mere reaction could be randomly negative or positive. Our findings show strongly positive effects in all the scenarios involving the presence (as opposed to absence) of assurance. Furthermore, although in this study we did not attempt to explicitly identify the costs of the assurance to the participants (we agree that such information could be an interesting extension), we do not agree with Gray's assertion that the value of the assurance cannot be established because the auditor's report was assumed to be free.

Although the cost of the report was not explicitly addressed in this study, few managers would consider an assurance report to be free. Managers routinely rely on information that they do not directly pay for, but they nevertheless recognize that the related costs will be passed on to them through the purchase price of other goods and services. Although they do not bear the cost directly, they still assess the relative value of the information.

In this study, managers implicitly made judgments about the relative value of various assurance reports in a simulated business context in which the cost of such assurance reports would be incorporated in the cost of the service for which they were contracting. It is noteworthy that participants did not place equal value on all of the types and combinations of assurances that were made available to them. This suggests that participants were discriminating in their judgments about the relative value of alternative assurance reports, not merely reacting randomly. We believe that our findings indicate that the value of the assurance to the participants was significantly positive in comparison with the imputed costs of obtaining such assurance that would be passed on to the company by the ERP/ASP provider.

Manipulation Check

In connection with the manipulation check, Gray (2002) expresses concerns about the debriefing materials and the response scales used in some of the questions. The debriefing materials were administered immediately after the participants responded to the experimental case. Hence, while they were separately administered from the study instrument, debriefing responses were obtained in the same session by the same respondents as in the experiment. As for the choice of scales, while we acknowledge that other choices were possible, we used the seven-point scale throughout the debriefing materials for ease of analysis, consistency, and comparability.

Post Hoc Analysis

In connection with the post hoc analysis, Gray (2002) speculates about why we found no difference between the "control" and "reliability" versions of the report. His assumption is that there should have been a difference and that our failure to find one may be due to a design flaw in our study; in particular, he is concerned that our manipulation may not have been strong enough. While this is possible, on the whole we disagree.

The absence of a significant difference between the "control" and "reliability" versions of the report may not be due to flaws, but simply due to the fact that the participants considered the two reports to be equivalent to one another. The power of our test to detect a significant difference was .93; therefore, our analysis should have detected even a small effect size, to the extent that it truly exists. It is interesting to note that in many studies of human judgment, small, apparently inconsequential, wording changes have been shown to have dramatic effects on judgments, whereas in this study no such effect was observed.
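
For readers who want to see how such a power figure is computed, a minimal sketch follows. The standardized effect size (d = 0.9) and the group size (28 per report version) are illustrative guesses, since the inputs behind the reported .93 are not reproduced here.

from statsmodels.stats.power import TTestIndPower

# Post hoc power for a two-sample comparison of the "control" and
# "reliability" report versions; effect size and cell sizes are assumed.
power = TTestIndPower().power(effect_size=0.9, nobs1=28,
                              alpha=0.05, ratio=1.0)
print(f"power = {power:.2f}")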

REFERENCES

Boritz, J. E., and J. E. Hunton. 2002. Investigating the impact of auditor-provided systems reliability assurance on potential service recipients. Journal of Information Systems (Supplement): 69-87.

Gray, G. L. 2002. Discussion of investigating the impact of auditor-provided systems reliability assurance on potential service recipients. Journal of Information Systems (Supplement): 91-95.