Internet-based experiments: prospects and possibilities for behavioral accounting research.

The emerging technology of Internet-based experiments offers behavioral accounting research (BAR) new possibilities for obtaining large sample sizes, providing world-wide access to previously hard-to-reach participants (e.g., CFOs, audit partners, and financial analysts), and exploring new research questions. However, the validity characteristics of Internet-based experiments differ from those of previous BAR technologies. Herein, we review existing BAR Internet-based experiments, describe how to create and run Internet-based experiments, survey the emerging literature on the validity of Internet-based experiments, and highlight several areas where Internet-based experimentation offers behavioral accounting researchers new possibilities for exploring previously uninvestigated research questions.

INTRODUCTION

In January 1991, approximately 376,000 computers across the globe were linked to the Internet. By January 2003, the number of Internet-linked computers had grown to 172 million (Internet Software Consortium 2003), and 59 percent of the U.S. population had Internet access (Nielsen//Net Ratings 2003). A 1999 survey indicated that 90 percent of CPAs conducted Internet research (Nearon 1999). Given the growth and significance of Internet use across global communities, academics have unprecedented access to heretofore inaccessible research populations. This paper explores the implications of the world-wide growth in Internet use among accountants and the public for behavioral accounting research (BAR). (1)

A Chronology of Behavioral Accounting Research (BAR) Technologies

To situate Internet-based experiments within previous and extant BAR methods, we briefly trace the history of BAR technologies for experimentation. Beginning in the late 1920s, BAR relied on paper-and-pencil technology to deliver experimental materials to participants (e.g., Burrell 1929). The advent of increasingly inexpensive duplication methods (e.g., mimeograph and, later, photocopy machines) made it easy for researchers to administer experiments with this technology. Paper-and-pencil technology is a reliable data collection method that requires minimal technological knowledge. However, according to Reips (2000), it has the disadvantages of geographically limited samples, little or no interactive response capability, amplified demand effects and experimenter biases, slow data turnaround times, and increased potential for motivational confounds.

The microcomputer revolution of the 1970s enabled accounting researchers to collect data using personal computers (PCs). (2) Data collection on PCs allows researchers to more efficiently and effectively achieve certain environmental controls (e.g., randomizing treatments, monitoring time and attention) (e.g., Payne et al. 1993, Appendix). In addition, PCs enable "real-time" interactive data collection that can provide insight into cognitive decision processes and track responses to real-time stimuli, such as computer-based decision aids and alternative information displays.

Because of the enhanced possibilities for implementing experimental controls and obtaining decision process data, the internal validity of computer-based experiments can sometimes be higher than that obtained in paper-and-pencil experiments. At the same time, using personal computers for data collection requires advanced levels of technological knowledge, potentially increases the cost of data collection, and often limits sample sizes due to scarce laboratory computing resources.

Beginning in the mid-1980s, some accounting researchers followed the work of economists who conducted experiments in market processes using local area networks (LANs) to collect multiparticipant data (e.g., DeJong, Forsythe, and Uecker 1985; DeJong, Lundholm, Forsythe, and Uecker 1985). Using this method enables accounting researchers to collect simulated market data (e.g., prices, quantities) resulting from the manipulation of one or more accounting-relevant market variables (e.g., financial statement disclosures, auditor presence, and reputation) (Smith et al. 1987; Berg 1994).

Experiments using paper-and-pencil technology remain frequent in BAR, and computer-based data collection is on the rise; however, the use of LAN technology for data collection in BAR remains relatively low. Meanwhile, the rapid growth of the Internet provides behavioral accounting researchers with yet another potentially fruitful technology for conducting experiments. While Internet-based experiments in BAR are rare, their use in psychology has been increasing steadily over the past few years (Birnbaum 2000a).

Psychology researchers began Internet-based experimentation in 1995 and Reips (2000, 89) notes "the method is thriving." The number of online experiments listed on the American Psychological Society's (APS) Web site (3) "Psychological Research on the Net" (http://psych.hanover.edu/Research/exponnet.html) provides one measure of the growing importance of Internet-based experimentation in psychology (Krantz 2003). The APS site listed 35 Internet experiments in June 1998 and 65 in May 1999 (Birnbaum 2000a, xv). As of October 2002, there were 163 Internet experiments in 15 categories at this Web site.

Excellent reviews of the research literature on psychological Internet-based experiments already exist (e.g., Birnbaum 2000a); thus, in this paper, we do not (re-) review this literature. Instead, we examine emerging BAR that uses Internet-based experimentation, and investigate the promises and problems of the largely unexplored technology of Internet-based experiments for BAR.

Our investigation begins with a brief review of the use of Internet-based experiments in accounting research. Following this, we examine the design of online experiments. We then discuss the validity characteristics associated with Internet-based experiments. We conclude with a review of the potential benefits and limitations of Internet-based experimentation for BAR.

INTERNET-BASED EXPERIMENTS RESEARCH LITERATURE

Internet-Based Experimentation--A Definition

We define an experiment as a research study that systematically manipulates one or more stimulus variable(s), controls for one or more extraneous variables, and empirically observes the effects on one or more process and/or response variable(s). We define a BAR Internet-based experiment as an experiment that investigates an accounting issue using the Internet to administer stimuli, collect data, and recruit participants. Our definition encapsulates two Internet technologies (i.e., email and the World Wide Web [WWW]), either or both of which can be used for administering, collecting, and recruiting purposes. Our investigation intentionally omits computerized PC-based or LAN-based experiments in controlled environments that do not utilize the Internet; Internet-based surveys (i.e., studies that do not manipulate a variable); experiments in which researchers merely use the Internet for internal coordination and sharing; and research that uses Internet tools only for data analysis (e.g., downloadable statistical analysis software). (4)

Behavioral Accounting Internet-Based Experiments

To identify relevant Internet accounting studies, we reviewed the abstracts and methods sections of articles from a sample of accounting research journals (see Table 1). Additionally, we searched the "ABI Inform" database for other accounting journal references to "Internet-based experiment" and "Internet experiment." We also queried relevant list servers (e.g., AECM, ISWorld) for information about published Internet-based experiments in accounting. Though we uncovered a number of working papers, our search identified only five published behavioral accounting Internet-based experiments as of 2002. Table 2 summarizes these five studies.

Barrick (2001)

Barrick (2001) investigated whether Internal Revenue Service (IRS) Code section knowledge affects tax research performance. The author measured participants' tax code knowledge and randomly assigned participants to search method conditions in which they accessed data either by IRS Code section or by topic. Participants accessed the study using a URL (uniform resource locator) provided by the author. The author was physically present during data collection, which took place over a four-week period. (5) The author created the software for the experiment using Microsoft FrontPage[R] to create Hypertext Markup Language (HTML) code. Results indicated a joint effect of experience and search condition on research performance.

Beeler, Franz, and Wier (2001)

Beeler et al.'s (2001) computer program randomly emailed one of four experimental scripts to managerial accountants, from which the authors received an 11.3 percent response rate. The authors used a managerial accountants' association proprietary intranet system to email instruments to participants. (6)

Beeler et al. (2001) illustrate the use of Internet-based experimentation to quickly access a large number of potential experimental participants at a relatively low cost. However, the relatively low response rate in Beeler et al. (2001) is consistent with evidence indicating lower response rates to email communications than U.S. mail solicitations for participation (e.g., see Hutchinson et al. 1998, 2001; Odom et al. 1999).

Hodge (2001)

Hodge (2001) examined the source credibility effects of hyperlinking audited and unaudited information in an Internet environment. Forty-seven M.B.A. students participated in the experiment: 31 in an Internet environment (one group with a decision aid, one without) and 16 using pencil and paper. The results indicate that hyperlinking unaudited with audited financial information increases the credibility of unaudited information.

As a part of the research design, Hodge (2001) manipulated the data collection site (inside versus outside the laboratory). Consistent with the psychology literature (e.g., see Dillman 2000), he found no differences between Internet and paper-and-pencil responses.

Beeler and Hunton (2002)

Beeler and Hunton (2002) studied the effects of contingent economic rents on auditor independence with 73 geographically dispersed audit partners. The authors chose Internet delivery to overcome the difficulty of delivering PC-based experimental software to geographically dispersed partners. Participants accessed the Internet site (on a server computer at one of the authors' universities) using individually assigned passwords. After validating the participant's password, the program randomly assigned participants to conditions. To further control the identity of the respondents, the server computer granted access to the experimental software only if the requesting machine's identity (i.e., the computer's IP address) matched the individual's office computer. Partners were randomly assigned to treatment conditions and, where possible, response items were randomly presented. Data collection occurred during a contiguous five-day window (Monday through Friday).

The Beeler and Hunton (2002) study illustrates how Internet-based technology can assure the identity of respondents, randomize treatment conditions and response items, and gather data within a short time period. Additionally, delivering the experiment on the Web allowed the geographically dispersed audit partners considerable flexibility regarding where and when they participated in the study.

Herron and Young (2002)

In Herron and Young (2002), participants accessed the Internet site using a researcher-assigned password. The 70 student participants "randomly self-selected" into conditions by entering self-selected four-digit ID numbers. Odd-numbered participants received one set of experimental materials; even-numbered participants received another. The authors manipulated time pressure by informing participants that their time to complete the task was computer-recorded (see Herron and Young [2000] for a description). In addition, participants saw an on-screen timer that displayed time taken on the task. Herron and Young (2002) demonstrate real-time randomization in Internet-based experiments. In addition, their use of an on-screen display and computer monitoring to create time pressure is possible in an Internet or PC environment, but not with paper and pencil.

These five published Internet-based accounting experiments reveal a few of the possibilities of the Internet to expand BAR data collection to nontraditional times, locations, and participants. However, there are technical and validity issues to consider in Internet-based experiments. We next discuss technological and procedural aspects of creating and disseminating Internet-based experiments.

DESIGNING AND IMPLEMENTING INTERNET-BASED EXPERIMENTS

Researchers who are considering Internet-based experimentation must consider several design and implementation issues, including (1) acquiring technical expertise; (2) deciding where to host the experiment; (3) anticipating and addressing Internet configuration and connection compatibility problems; (4) building in random assignment and other controls; (5) recruiting participants; and (6) obtaining informed consent.

Acquiring Technical Expertise

Email-Distributed Experiments

The creation and distribution of an email-distributed experiment begins with the design of the introductory materials, experimental stimuli, and response items. Imagine that a researcher wanted to create a 2 x 2 between-participants experiment. In both a paper-and-pencil and an Internet-based experiment, the researcher would create four "stacks" of experimental materials, one for each treatment condition. In addition, the researcher might create two orders for the randomized response items to allow for the measurement of order effects. Next, the researcher would obtain permission to mail to a list of potential participants (e.g., from the Internal Audit Association) and randomly split the list into eight sublists (four treatment conditions by two orders). The researcher would likely request responses within a short time period (e.g., one week) and, when respondents are identified (i.e., not anonymous), would resend requests to nonrespondents.
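
To make the sublist construction concrete, the following JavaScript sketch shuffles a mailing list and deals it into the eight condition-by-order sublists; the function and variable names are our own illustration, not taken from any cited study.

   // Sketch: randomly partition a mailing list into eight sublists
   // (4 treatment conditions x 2 response-item orders).

   // Fisher-Yates shuffle produces an unbiased random ordering.
   function shuffle(list) {
     for (var i = list.length - 1; i > 0; i--) {
       var j = Math.floor(Math.random() * (i + 1));
       var temp = list[i];
       list[i] = list[j];
       list[j] = temp;
     }
     return list;
   }

   // Deal the shuffled addresses round-robin into the sublists; each
   // sublist then receives one of the 4 x 2 = 8 sets of materials.
   function makeSublists(addresses, numLists) {
     var sublists = [];
     for (var k = 0; k < numLists; k++) { sublists[k] = []; }
     shuffle(addresses);
     for (var i = 0; i < addresses.length; i++) {
       sublists[i % numLists].push(addresses[i]);
     }
     return sublists;
   }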

Email-distributed requests for participation in an experiment can take four forms. First, the email message can ask respondents to complete and return a file (e.g., Microsoft Word[R], Access[R], or Excel[R]) attached to the email message (e.g., as in Beeler et al. 2001). Email attachments are technologically easier to generate and send, but this technique provides fewer options for formatting, question skipping, and interactive responses (Dillman 2000). In addition, the common use of Microsoft Word email attachments is labor-intensive and error-prone, as the researchers must re-enter responses into a spreadsheet or database. Second, researchers could attach a Word document containing the introductory materials and experimental stimuli, and a spreadsheet (or database file) for responses. Using this procedure, the data can be copied into a "master" spreadsheet or a database containing all participant responses, thus eliminating data entry and transcription errors. This approach has been implemented in some of the authors' previous and ongoing data collection efforts.

Third, researchers can use Internet development tools such as Microsoft FrontPage[R] to incorporate the experiment into an email attachment, including scales for replying to response items, and automatically save the responses to a database. However, this approach requires knowledge of configuring an Internet server. Finally, the email message can ask respondents to click on a URL provided in the email message (e.g., as in Barrick 2001; Hodge 2001; Herron and Young 2002). Internet form responses are technologically more challenging, but also more visually interesting, flexible, and interactive.

Internet-Distributed Experiments

Designing and delivering non-email Internet-based experiments requires greater technical expertise than email-distributed participation requests. The technical expertise needed for non-email Internet-based experiments can be considered on a continuum. On the high-expertise end of the continuum, researchers who are experienced in HTML, Active Server Pages (ASP), Java, JavaScript, PERL, and/or Common Gateway Interface (CGI) programming can write their own program code.
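
As a minimal illustration of this hand-coded approach, the HTML sketch below presents a stimulus and posts a response to a server-side script; the page content, field names, condition label, and URL are hypothetical.

   <!-- Minimal sketch of a hand-coded experiment page; the action URL,
        condition label, and field names are hypothetical. -->
   <html>
   <head><title>Judgment Study</title></head>
   <body>
   <p>[Experimental stimulus text appears here.]</p>
   <form method="post" action="http://www.example.edu/cgi-bin/record">
     <p>How likely is a material misstatement?
        (0 = impossible, 100 = certain)</p>
     <input type="text" name="likelihood" size="4" />
     <input type="hidden" name="condition" value="A1" />
     <input type="submit" value="Submit Response" />
   </form>
   </body>
   </html>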

In the center of the continuum are technology-savvy researchers who lack programming skills. These researchers may choose to use software development tools such as Microsoft FrontPage (http://www.microsoft.com/frontpage/), Authorware (http://www.macromedia.com/software/authorware/), or ColdFusion[R] (http://www.macromedia.com/software/coldfusion/), which are largely menu-driven and thus do not require significant programming expertise. (7) Such tools provide extensive instruction and technical support from their manufacturers--key benefits to their successful use. Nevertheless, effective use of such tools requires an investment in learning.

At the other end of the continuum are less technologically adept researchers who are neither comfortable with using a software development tool nor possess the requisite programming skills. These researchers can hire one or more graduate students, thereby outsourcing the technical (but not conceptual) design of programs. (8) Of course, this solution carries with it a set of unique problems. For example, the researcher must be able to effectively manage a systems development project, including determining whether the program is operating as he or she intends, ensuring that the student/programmer adequately documents his or her work, and ensuring that there is another student who can assume the project if the original student graduates mid-project. (9)

Deciding Where to Host the Experiment

The researcher will also need to decide where to host the experiment. This decision is often dependent on the researcher's comfort level with respect to technology. A researcher who is comfortable with technology may choose to host the experiment on his or her own personal server. This allows for close individual monitoring of the server to ensure that it is operating properly. Alternatively, the researcher may use university resources to host the experiment. The advantage here is that the researcher will likely have technical support available from the university, as noted by Herron and Young (2002).

Another option for creating Internet-based experiments is to utilize an online research environment such as PsychExperiments (http://psychexps.olemiss.edu/), which is a psychology laboratory created by researchers at the University of Mississippi (2003). While the site was created by psychologists, the goal is to develop the site into a "collaboratory" center for interdisciplinary research. The Appendix introduces the mechanics of using the PsychExperiments Web site to host experiments.

Anticipating and Addressing Technology Compatibility Problems

Research participants may choose any of a number of Web browsers (e.g., Netscape, Internet Explorer, Opera, HotJava), browser versions, processor speeds, memory capacities, computer screen sizes, and Internet connections (e.g., broadband or dial-up) to access the same experiment. Any one of these choices has the potential to cause the experiment to fail to execute or to create suboptimal data or information displays. For example, an experiment written in Java or JavaScript requires a browser with Java or JavaScript (respectively) enabled. A participant accessing an experiment through an old browser version may be unable to run the experiment. If the experiment is graphics-intensive, participants with slower connections may tire of waiting for pages to load, which can increase the drop-out rate. Insufficient memory can also cause a browser to freeze or otherwise prevent highly graphical Web pages from loading properly. Further, participants may have screen resolutions set low (e.g., 640 x 480 instead of 1024 x 768), which can cause the participant to misinterpret the displayed information. Some experiments also require an exchange of information between the participant's computer and the server. If the participant's computer is not set to "trust" an external computer/server (i.e., the "security" option on the browser is set to "medium" or "high" or the browser is set not to receive cookies), the experiment may fail to execute.

A two-fold strategy will address many of these problems: (1) extensive pilot testing of Web sites using differing browsers, operating systems, and system configurations, and, (2) informing participants of the technical specifications needed to successfully complete the experiment. To illustrate informing participants of Web site requirements, the PsychExps Web site (http://psychexps.olemiss.edu/) displays a "Getting Started" link that describes the system configuration needed for participation and provides additional links to download software. Participant instructions in an Internet-based experiment should provide contact information to help users answer system-related questions.
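
Such requirements can also be checked programmatically when the participant first arrives. The JavaScript sketch below illustrates a simple configuration check; the thresholds and messages are our own assumptions.

   // Sketch: configuration check run when the experiment's first page
   // loads. Thresholds and wording are illustrative assumptions.
   function checkConfiguration() {
     var problems = [];
     // Stimuli in this hypothetical study assume 1024 x 768 or larger.
     if (screen.width < 1024 || screen.height < 768) {
       problems.push("Please set your display to at least 1024 x 768.");
     }
     // Cookies are needed to resume multi-session experiments.
     if (!navigator.cookieEnabled) {
       problems.push("Please enable cookies in your browser.");
     }
     if (problems.length > 0) {
       alert("Before you begin:\n" + problems.join("\n"));
       return false; // withhold the experiment until requirements are met
     }
     return true;
   }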

Consistent with some interpretations of Moore's Law, computing power has monotonically increased over time. In addition, Internet protocols are moving toward global standards. Consequently, the incompatibility issues just raised may not persist in the face of standardization around high levels of computing power. Nevertheless, they represent an existing impediment to Internet-based BAR. In addition, the second strategy mentioned above (i.e., informing participants of needed technical specifications) may unintentionally discourage some people from participating in the experiment. However, this outcome seems preferable to the alternative of misleading some potential participants to believe that they can complete an experiment, only to have technical requirements result in an inability to do so.

Random Assignment and Other Controls in Internet-Based Research

Randomization distributes uncontrolled influences across treatment conditions (Shadish et al. 2001). In Internet-based experiments, randomization almost always requires programming. Musch and Reips (2000) report that 29 of 35 Internet-based psychology experiments randomly assigned participants to treatment conditions. CGI scripts, Java, and JavaScript programming were used most often to achieve randomization. Baron and Siepmann (2000, 247-248) provide a JavaScript program "to implement a between-subjects manipulation or to counterbalance the order of questions in a within-subjects design." Birnbaum (2001, 210-212) also provides a JavaScript program for random assignment. Random assignment into two groups can also be accomplished by having participants enter a random number and then assigning them to a group based on whether their number was even or odd, as in Herron and Young (2002).
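
As a minimal illustration (our own sketch, not the published programs just cited), the JavaScript below shows both a between-participants random assignment and the even/odd ID approach of Herron and Young (2002); the page names are hypothetical.

   // Sketch: between-participants random assignment to one of four
   // conditions (page names are hypothetical).
   function assignRandomly() {
     var condition = Math.floor(Math.random() * 4); // 0, 1, 2, or 3
     window.location.href = "condition" + condition + ".html";
   }

   // Sketch: even/odd assignment from a participant-entered ID number,
   // as in Herron and Young (2002).
   function assignByID(idNumber) {
     return (idNumber % 2 === 0) ? "evenMaterials.html" : "oddMaterials.html";
   }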

The researcher may also decide to build in other experimental controls such as hiding the browser's "Back" button. This prevents the participant from returning to previous pages and changing answers based on information learned later in the experiment. (10) Additionally, long experiments may be best completed in multiple sessions. Setting browser parameters to accept "cookies" facilitates the completion of an experiment in multiple sessions (Schmidt 2000). However, the use of cookies requires that participants access the experiment from the same computer for each session.
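
As an illustration, the JavaScript sketch below stores a participant's progress in a cookie so that a later session can resume where the previous one ended; the cookie name, expiration period, and page-naming scheme are our own assumptions.

   // Sketch: saving and restoring progress with a cookie so that a
   // multi-session experiment can be resumed (cf. Schmidt 2000).
   function saveProgress(lastCompletedPage) {
     var expires = new Date();
     expires.setDate(expires.getDate() + 30); // keep for 30 days
     document.cookie = "lastPage=" + lastCompletedPage +
                       "; expires=" + expires.toUTCString();
   }

   function resumeSession() {
     var match = document.cookie.match(/(^|; )lastPage=(\d+)/);
     var lastPage = match ? parseInt(match[2], 10) : 0;
     window.location.href = "page" + (lastPage + 1) + ".html"; // next page
   }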

Recruiting Participants

Researchers often attract participants to Internet-based experiments by posting links or sending emails to individuals or list servers (listservs) that advertise the experimental Internet site. The desired population will influence where to post links or send emails. For example, a researcher recruiting international accounting participants might post links on sites that individuals interested in international accounting are likely to visit (e.g., the AAA International Accounting Section's Internet site). Researchers recruiting accounting instructors or participants could post an announcement on an accounting or accounting information systems listserv such as AACCSYS-L, AECM, and ISWORLD. (11) If the researcher wishes to restrict access to a subset of potential participants, then access can be restricted to individuals willing to provide identifying (e.g., email address) or demographic (e.g., current occupation or position) data.

We urge behavioral accounting researchers to refrain from large-scale and unwanted solicitations among accounting academics and professionals for participation in experiments. One of the powerful possibilities of Internet-based experimentation is the solicitation of, literally, millions of potential experimental participants at almost no cost to the researcher, but considerable cost to companies, organizations, and individuals receiving the unwanted email. Currently, the U.S. Congress is considering multiple laws regulating "spam" (i.e., unwanted email) (Coalition Against Unsolicited Commercial Email 2003). We propose that behavioral accounting researchers, perhaps through the Accounting, Behavior and Organizations Section of the American Accounting Association, consider adopting a voluntary code of conduct for behavioral accounting researchers' use of electronic mail for solicitations of participation in experiments. We would expect such a voluntary code of conduct to include guidance regarding the use of online solicitation of list servers and email addresses for experimental participation.

Obtaining Informed Consent in an Internet-Based Experiment

Research involving human participants (including surveys, laboratory experiments, and Internet-based experiments) must complete an Institutional Review Board (IRB) approval process, which protects the rights of research participants. Although specific IRB regulations vary by institution, many IRBs require researchers to inform experimental participants: (1) that participation is voluntary, (2) of any risks from participation, and (3) of any rewards for participation (e.g., cash payoffs for participation, effort, or accuracy). This process can require participants to sign an "Informed Consent" form that confirms their voluntary participation and their understanding of any risks and rewards.

An Internet-based experiment or survey can comply with the informed consent requirements of many IRBs in one of three ways. First, the experimenter can obtain a "Waiver of Informed Consent" that allows participation without the participant's written signature and by guaranteeing the participant's anonymity. Using this approach, the experiment might require participants to click an "I Agree" button at the bottom of an informed consent form. Second, and less desirably (for obvious reasons), the experimental instructions can direct the participant to print and sign the informed consent page and mail or fax it to the researcher. Third, the Internet site may include all required disclosures and allow the participant to provide an "electronic signature" by clicking to indicate approval of the agreement. The first and third options differ in that the former provides anonymity while the latter does not.
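
The fragment below sketches the first option: a consent page on which clicking "I Agree" is the only route into the experiment. The wording and the start-page name are illustrative; actual disclosures must satisfy the researcher's IRB.

   <!-- Sketch of an "I Agree" consent gate; wording, disclosures, and
        the start-page name are illustrative only. -->
   <form method="get" action="experiment.html">
     <p>[Full informed consent disclosures appear here: voluntary
        participation, risks, and rewards.]</p>
     <p>Clicking "I Agree" below indicates that you have read the
        information above and voluntarily agree to participate.</p>
     <input type="submit" value="I Agree" />
   </form>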

After designing and implementing an experiment on a Web server, researchers must consider issues of data quality; specifically, how can the experiment's internal and external validity be increased? The next section discusses these issues.

DATA QUALITY AND VALIDITY ISSUES IN INTERNET-BASED EXPERIMENTS

Psychology research explores two dimensions of validity associated with Internet-based experiments (Krantz and Dalal 2000). Convergent validity (or triangulation) studies investigate the consistency of results obtained in Internet-based versus laboratory experiments. Construct validity investigations examine whether Internet-based research replicates results of theories previously confirmed in laboratory research.

Krantz and Dalal (2000) summarize the results of nine experiments that use a triangulation approach to compare the validity of Internet-based experiments with those of laboratory experiments. They observe that "in all cases ... the Web findings are quite valid or at least are comparable to those of laboratory studies of the same phenomena" (Krantz and Dalal 2000, 42). They also note that the results of Internet-based experiments are robust across individual differences and environmental conditions. Similarly, Birnbaum (2000a, xvii) states that "the trend emerging from the early research on the validity of Internet-based experiments compared with lab experiments is that Internet studies yield the same conclusions as studies done in the lab." Additionally, McGraw et al. (2000b) note that research designed to validate Internet-based experiments yields evidence of consistency with laboratory results for both between- and within-participants manipulations.

However, Internet-based experiments hold differing threats to validity as compared to laboratory experiments. Table 3 summarizes the validity characteristics of Internet-based experiments using four categories of validity identified by Shadish et al. (2001, 38). Column one of Table 3 displays the four validity types, column two identifies the characteristics of Internet-based data collection, and column three presents the expected effect of each characteristic. We next discuss these validity threats and related controls in Internet-based experiments.

Statistical Conclusion Validity

Potential Increases in Statistical Conclusion Validity in Internet-Based Experiments

Statistical conclusion validity is the extent to which a treatment and an outcome variable covary (Shadish et al. 2001). Two characteristics of Internet data collection are likely to increase statistical conclusion validity. Statistical power is the long-term likelihood of correctly rejecting the null hypothesis. Statistical power is a function of sample size, population effect size (i.e., the hypothesized difference between two populations), and [alpha] error (Cohen 1988; Kraemer and Thiemann 1987; see also Lindsay 1995). Internet-based experimentation can increase statistical power through the availability of large sample sizes. For example, in a survey of 35 Internet-based experiments, the mean (standard deviation) [median] number of participants was 427 (650) [158] (Musch and Reips 2000). Similarly, when Birnbaum (2000b) administered an online decision-making experiment, over 150 people participated within two days of the site's inauguration, 318 participated within 12 days, and 1,224 people participated within 4 months. Sample sizes of this size are rare in non-Internet-based experiments.
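
To illustrate the arithmetic linking sample size and power, the following JavaScript sketch approximates the power of a one-tailed, two-sample z-test; the effect size, sample sizes, and alpha level are our own illustrative choices, not values from the studies cited above.

   // Sketch: approximate power of a one-tailed two-sample z-test
   // (alpha = .05, critical z = 1.645). Scenario values are
   // illustrative, not from any cited study.

   // Standard normal CDF, Abramowitz-Stegun approximation 26.2.17.
   function phi(z) {
     if (z < 0) { return 1 - phi(-z); }
     var t = 1 / (1 + 0.2316419 * z);
     var poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
                t * (-1.821255978 + t * 1.330274429))));
     return 1 - (Math.exp(-z * z / 2) / Math.sqrt(2 * Math.PI)) * poly;
   }

   // Power to detect standardized effect size d with n per group.
   function power(d, nPerGroup) {
     return phi(d * Math.sqrt(nPerGroup / 2) - 1.645);
   }

   power(0.5, 30);  // about .61 with a typical laboratory sample
   power(0.5, 200); // about .999 with an Internet-scale sample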

Although not evident in the published accounting studies reviewed previously, Ayers et al. (2003) illustrate the possibility of obtaining large sample sizes in accounting research. They examined the effect of U.S. presidential candidate campaign rhetoric related to taxation on security prices using data from the 1992 "Winner-Take-All" U.S. presidential election "real-money futures markets" of the Iowa Electronic Markets (University of Iowa Tippie College of Business 2001). In this market, traders earned returns by correctly predicting the outcome of the 1992 U.S. presidential election. According to Roberts, (12) 471 traders participated in this market.

The second potential increase in statistical conclusion validity in Internet-based experiments results from the decrease or elimination of data entry errors. Well-designed Internet-based experiments prevent participants from entering invalid responses and generate data that are immediately downloadable into data analysis software. Thus, random human data entry and transcription errors associated with paper-and-pencil technology experiments can be dramatically reduced, if not eliminated, with Internet-based experiments. Consequently, Internet-based experiments can increase statistical conclusion validity by eradicating random data entry errors. (13)
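
For example, a client-side check along the following lines (a sketch; the field name and response range are hypothetical) can reject invalid entries before they ever reach the data file.

   // Sketch: reject out-of-range responses before the form is
   // submitted (the "likelihood" field name is hypothetical).
   function validateResponse(form) {
     var value = parseFloat(form.likelihood.value);
     if (isNaN(value) || value < 0 || value > 100) {
       alert("Please enter a number between 0 and 100.");
       return false; // cancels the submission
     }
     return true;
   }
   // Usage: <form onsubmit="return validateResponse(this);"> ...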

Potential Decreases in Statistical Conclusion Validity in Internet-Based Experiments

Four characteristics of Internet data collection potentially decrease statistical conclusion validity. First, Internet-based experiments allow participants to complete instruments in "natural" (i.e., non-laboratory) settings. Therefore, participants in Internet-based experiments may complete experimental instruments in distracting environments (e.g., while watching television or in the middle of the night). This characteristic of "naturalism" in Internet-based experimental settings can increase construct and external validity (as we will discuss). However, the heterogeneity of experimental settings in Internet-based experiments may decrease statistical conclusion validity by increasing random error. McGraw et al. (2000, 502) argue that "the added noise created by having participants in different settings using different computers is easily compensated for by the sample sizes achievable with Internet delivery." Hence, the central issue here is the extent to which the large sample sizes (e.g., n in the thousands) of Internet experiments reduce beta error sufficiently to compensate for the increased noise from lessened control over data collection. While no existing research empirically explores these trade-offs, simulations and analytical methods offer useful tools for doing so.

System downtime (i.e., unavailability to potential participants) is a second potential threat to statistical conclusion validity in Internet-based experiments. Because of the expense of maintaining 24 hour/7 day availability, few computer servers achieve zero percent downtime. Consequently, system unavailability could potentially reduce statistical conclusion validity by removing some participants from the sample frame. For example, two authors of this article were collecting experimental data from accountants at a U.S. governmental agency. At the exact time when one group of participants chose to complete the experiment, the server failed. Had the authors not been able to reboot the server and arrange another time for completion of the experiment, these participants' data would have been lost. Server reliability and the availability of system help are important considerations in choosing a host server for BAR experiments.

A third potential threat to statistical conclusion validity in Internet-based experiments arises from new software and technologies that potentially increase software coding and data handling errors. For example, Barrick (2001, 24) reported that data from two participants were "inadvertently deleted." Pilot testing Internet-based experiments in multiple operating environments provides one important preventive control over software and data handling errors.

Finally, as discussed earlier, participants will use different technology configurations and browsers to access experiments (see discussion in the "Designing and Implementing Internet-Based Experiments" section). If system differences do not systematically covary with other participant characteristics, then such differences can potentially decrease statistical conclusion validity, primarily by increasing drop-out rates (as frustrated users leave the experiment). Next, we examine internal validity threats arising from Internet-based experiments.

Internal Validity

Potential Increases in Internal Validity in Internet-Based Experiments

Internal validity is the extent to which one can infer that the change in an outcome is attributable to the treatment (Shadish et al. 2001). Diffusion (i.e., participants not in a treatment condition learn information intended only for those in the treatment condition) and imitation (i.e., participants not in a treatment condition imitate those in the treatment condition) are important threats to internal validity that can result in nonsignificant differences between treatment and control conditions. Under certain conditions, Internet-based experiments may be less susceptible to the threat of a diffused or imitated treatment than laboratory-based experiments. For example, Internet-based experiments conducted with geographically dispersed populations may be less susceptible to the diffusion and imitation of treatments. Unless the experiment is targeted to a Web population that typically interacts online (e.g., accounting academics interested in computers), it seems unlikely that geographically dispersed participants will learn information that is intended for participants in other conditions. In contrast, laboratory experiments conducted at one or a small number of sites (e.g., the accounting classes at one university, or, auditors attending a firm training class) are more susceptible to treatment diffusion and imitation.

Potential Decreases in Internal Validity in Internet-Based Experiments

Internet-based experiments typically experience higher participant attrition rates than laboratory experiments (Reips 2000). A higher attrition rate among Internet relative to laboratory participants could create a participant self-selection bias that renders causal inference problematic; hence, internal validity is potentially lower. The placement of requests for information in Internet-based experiments offers one potential control over high drop-out rates. For example, Frick et al. (1999) conducted an Internet-based experiment to determine how the placement of personal information requests and of monetary reward information affects drop-out rates. Results indicated that drop-out rates were lowest when personal information requests and monetary reward information appeared at the beginning of the experiment. Drop-out rates were highest when personal information was requested at the end of the experiment and no monetary reward was offered.

"Cheating," which Reips (2000) defines as a single participant completing an experiment multiple times, is a second potential threat to internal validity. For example, a student might participate twice in an experiment--once using his/her identity and another time using a friend's identity, so that both receive course credit. One potential mechanism to decrease cheating is a request not to participate more than once. More sophisticated controls include logon and password identifications, and manual or programmed checks of IP addresses (Hunton and Beeler 2002) or email addresses to identify previously participation. We next discuss the effect of Internet-based experiments on construct validity.

Construct Validity

Potential Increases in Construct Validity in Internet-Based Experiments

Construct validity reflects the extent to which one can generalize from observations to higher-order constructs (Shadish et al. 2001). Demand effects (e.g., Pany 1987) and other experimenter influences (see Rosenthal 1966, 1976) threaten construct validity in that they potentially confound the hypothesized effects of treatments on outcomes. For example, evaluation apprehension, in which participants behave to gain experimenter approval, represents an undesirable experimenter influence that can threaten construct validity (Rosenberg 1969). In contrast to Barrick (2001), many Internet-based experiments eliminate or reduce face-to-face contact between experimenter and participant. Although untested, it seems reasonable to speculate that the "naturalism" of Internet-based experimental settings, in which participants control their experimental settings, may increase construct validity by decreasing demand effects and other experimenter influences.

External Validity

Potential Increases in External Validity in Internet-Based Experiments

External validity is the extent to which research observations are generalizable to other people, tasks, settings, treatments, and times (Shadish et al. 2001). Most BAR studies do not randomly select participants, tasks, settings, treatments, or time periods from well-defined populations. Consequently, the external validity of existing BAR is low.

Internet-based experiments offer four possibilities for improving the external validity of BAR. First, participants in Internet-based experiments are often heterogeneous compared with participants in laboratory experiments. For example, 27 percent of the participants in Birnbaum's (2000b) Internet study of choices among gambles were from outside of the United States, with participants from over 49 nations in the sample. Similarly, the 145 respondents in Beeler et al. (2001) were geographically dispersed managerial accountants working at North American companies. Thus, because Internet-based experiments can select from a larger, more heterogeneous population, they have the potential to increase external validity.

Second, existing BAR generally captures professional accountants' behavior during weekday normal working hours (i.e., 8:00 AM to 5:00 PM). The external validity of our knowledge of accounting professionals' behavior is suspect since it has been generated from a truncated subsample of the work hours that are typical in professional accounting. Internet-based experiments afford the possibility of studying professional accountants at times other than during normal work hours and days. The ability to examine accountants' responses outside of weekday working hours affords accounting researchers new possibilities to raise the external validity of accounting research by expanding the sample time "frames" with respect to such critical issues as time pressure (e.g., Spilker 1995), stress (e.g., Weick 1983), and work-life balance (Hooks and Higgs 2002) in professional accounting.

Third, the use of interactive controls (e.g., "branch and bound" capabilities) can increase the external validity of Internet-based experiments by allowing for stratified sampling procedures (Dillman 2000) that would be impossible or inconvenient using paper-and-pencil methods. For example, interactive controls can ensure that participants with certain individual traits are (are not) present in treatment conditions or that participation is (is not) equal across days or weeks within and between conditions. Alternatively, researchers might control treatment assignment based on a participant's response(s) to a preliminary question. (14)
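
As an illustration of such a control, the JavaScript sketch below assigns each participant to the least-filled treatment condition within his or her stratum, based on a screening response. The strata and the client-side counters are our own simplifications; in practice, the counts would be maintained on the server.

   // Sketch: quota-balanced assignment based on a screening response.
   // Strata and condition counts are illustrative; real counts would
   // live on the server, not in the page.
   var quotas = {
     novice: [0, 0, 0, 0], // participants so far in conditions 0-3
     expert: [0, 0, 0, 0]
   };

   function assignWithQuota(experienceLevel) {
     var counts = quotas[experienceLevel];
     var condition = 0;
     for (var c = 1; c < counts.length; c++) {
       if (counts[c] < counts[condition]) { condition = c; }
     }
     counts[condition]++; // record the assignment
     return condition;
   }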

Multimedia refers to "the linear representation of information using multiple media, including sounds, animation, and other audio and visual formats" (Bryant and Hunton 2000). Interactive multimedia, in which the user interacts with the experimental material, is often used in Internet experiments. For example, Horswill and Coster (2001) studied drivers' risk-taking behavior using photographs and photographic animation in an Internet-based experiment. Participants viewed videos on the Internet of real driving situations and indicated whether they would drive faster or slower than a given car. The authors noted that while laboratory experiments would be sufficient for their purposes, participants/drivers would still have to be persuaded to come to the laboratory. Instead, they presented the stimuli in an Internet-based experiment to capture nonstudent, real-world participants, thus increasing external validity. The use of multimedia technologies in accounting may be particularly useful in creating realistic, externally valid simulations of accounting environments (cf., Hogarth 1991). For example, imagine Internet-based simulations that explore the auditor's decisions and their consequences in the collapses of WorldCom and Enron. (15)

Last, real-time multiparticipant experiments such as interactive stock markets can increase external validity by drawing from a large, heterogeneous population that might behave quite differently than accounting students in a laboratory setting (e.g., see Ayers et al. 2003). For example, risk-taking behavior, negotiation strategies, and other behavioral constructs can be studied from a broader perspective via Internet-based experiments as compared to laboratory experiments using small groups of homogeneous participants.

Potential Decreases in External Validity in Internet-Based Experiments

Participant self-selection is the most serious external validity threat in Internet-based experiments (Reips 2000). Self-selection occurs when participants systematically differ from non-volunteers. For example, accounting participants in Internet-based experiments are likely to be younger and have higher levels of computing knowledge than a relevant reference population (cf., Nearon 1999). Strategies for managing participant self-selection in Internet-based experiments include defining a population that would likely volunteer for participation (e.g., Hodge 2001), or limiting inferences to a population that can be reasonably inferred from the demographic profile of the obtained sample.

BEHAVIORAL RESEARCH ISSUES IN INTERNET-BASED EXPERIMENTS

We next offer suggestions as to how BAR might advance through Internet-based methods.

Taxation Research

Tax research investigates both taxpayer compliance (e.g., Davis 2003; Feltham 2002; Boylan 2001) and responses to tax policy changes (e.g., Davis et al. 2003). The external validity of theoretical constructs and relationships that have been confirmed in controlled laboratory settings can be further tested in Internet-based experiments with broader samples than previously obtained. For example, Bobek and Hatfield (2003) find consistent gains in taxpayer compliance with increases in social influence and perceived behavioral control. The sample for their investigation includes students (n = 108), parents from a local elementary school (n = 19), and respondents to a mailing to residents of Florida and Georgia (n = 51).

Internet-based research would allow exploration of this (confirmed) theory with non-U.S. taxpayers and with taxpayers who visit Web sites arguing that federal taxation is illegal (e.g., see http://www.wcool.com/mo96/0326.html) or immoral (http://www.warresisters.org/piechart.htm). In addition, public interest in tax policy (Taxpayers for Common Sense: http://www.taxpayer.net/about/welcome.htm) and taxpayer (e.g., TurboTax: http://www.turbotax.com/) issues makes Internet-based taxation research particularly likely to achieve a large and diverse participant sample.

Online Decision Behavior and Databases

Decision and information search behavior in online environments is another important area for Internet-based experimentation. Such research could investigate the "natural" search and information processing behavior of decision makers using the many online database research services (e.g., EDGAR (http://www.sec.gov/edgar.shtml), Lexis-Nexis (http://www.Lexis-Nexis.com), Moody's Investor Service (http://www.moody.com), and Value Line (http://www.valueline.com)), which use hypertext links to direct users along information paths. User characteristics such as meta-cognition (i.e., knowledge of one's search strategy), learner control, and learner style have been shown to correlate with efficient and effective search strategies (Bryant and Hunton 2000). Alexander et al. (2003) illustrate this approach by exploring the search strategies of users of the Commerce Clearing House (CCH) proprietary online tax database, including how long the search process took, the number of searches performed, and the number of authorities reviewed.

Real-Time, Interactive, Multiparticipant Research

The Iowa Electronic Market project provides a template for real-time interactive multiparticipant research. Extending this research to a broader participant population could enhance the generalizability of locally obtained BAR results. For example, Ayers et al. (2003) examined whether the tax rhetoric of opposing candidates during the 1992 presidential campaign affected stock prices in the Iowa Political Stock Market.

Another possibility would be to investigate the effect of more frequent financial statement reporting on stock price volatility among individual investors. For example, an Internet experimental market could examine the buying and selling behavior of a heterogeneous population of individual investors when the frequency of financial reporting changes (e.g., from quarterly to monthly or from quarterly to daily).

Issues in Auditing, Fraud, and Accounting Ethics

It is vitally important for the accounting profession to better understand the precursors to fraud in organizations. Statement on Auditing Standards No. 99 (American Institute of Certified Public Accountants 2002) requires auditors to explicitly consider opportunity, rationalization, and personal characteristics that can motivate employees or managers to commit fraud. The ability to respond anonymously over the Internet might induce individuals to disclose their fraud-related attitudes and behaviors more truthfully than in laboratory settings that include face-to-face interactions with researchers. For example, an Internet-based experiment might recruit differing levels of employees and manipulate ethical scenarios to discover the conditions under which participants consider committing fraud and when fraud moves from conception to action.

Multicultural Issues

A large body of international accounting research focuses on cultural differences. For example, researchers in accounting have investigated how risk and uncertainty are viewed across global populations (e.g., Doupnik and Richter 2003), how culture affects auditor professional judgment in auditor/client conflicts (Patel et al. 2002), and how cross-cultural differences impact performance evaluation and reward systems (Awasthi et al. 2001). The worldwide reach of the Internet makes it an ideal medium for conducting multicultural experiments in accounting.

The issues just identified reflect a small sample of topics that hold a high degree of relevance to researchers and professionals in accounting. We highlight such issues as a way to stimulate creative thinking regarding BAR areas for which Internet-based experimentation holds great promise. We are confident that the innate creativity of behavioral accounting researchers will lead to exciting uses of the Internet for BAR that we have not anticipated.

SUMMARY AND CONCLUSIONS

In this paper, we consider the possibilities created by worldwide availability of the Internet for BAR. While there are currently many Internet-based experiments in psychology, we identified only five published Internet-based experiments in accounting. Recently, a growing number of Internet-based BAR experiments have appeared as working papers. We believe that significant unexplored opportunities still exist for Internet-based BAR experiments.

This paper reviews design issues for Internet-based experiments: acquiring technical expertise, deciding where to host the experiment, addressing hardware/software compatibility issues, building in randomization and other controls, recruiting participants, and obtaining Institutional Review Board approval. With regard to alternative approaches to creating Internet-based experiments, the least technically demanding approach available to researchers is to solicit potential participants through email. The most technically demanding approach is for researchers to create their own Internet-based experiments.

We compare the validity characteristics of Internet and laboratory experiments. Internet-based experiments potentially strengthen some aspects of validity and weaken other aspects (see Table 3 for a summary). Finally, we highlight several areas where behavioral accounting researchers can develop Internet-based experiments to extend lab results to different populations, tasks, settings, and times. These areas include tax research, online database use, real-time interactive multiparticipant studies, auditing/fraud/ethics investigations, and cross-cultural comparisons. These are but some of the possibilities in BAR that can benefit from Internet-based experimentation.

Each generation of BAR technologies creates new research opportunities. The emerging technology of Internet-based experiments creates the possibility of exploring uninvestigated research questions with larger numbers of previously unavailable participant groups. At the same time, Internet-based experiments require alternative approaches to managing validity threats. Our experience with Internet-based experimentation and the creativity of our BAR colleagues leads us to look forward with great anticipation to an emerging generation of Internet-based BAR.

APPENDIX

A Sample Tool: Using the PsychExperiments Internet Site for Internet Experiments at http://psychexps.olemiss.edu/

Many interactive experiments on the Internet are written in Java or HTML with JavaScript, while the experiments at PsychExperiments are developed using Macromedia's Authorware. (16) A limitation of this site is that all experimental materials must be written in Authorware. (17)

Learning Authorware is relatively easy compared with other Internet-based experiment development tools, but would still be a challenge to novice programmers. To aid in learning Authorware, PsychExperiments offers a free interactive training CD (see the PsychExperiments Internet site). In many cases, researchers can create a new experiment by adapting an existing experiment already located at the site, as the source code for all existing programs is freely available for download. Willingness to share computer program code is a pre-condition for posting experiments to the site. Adapting existing source code nevertheless involves considerable technical skill on the part of the experimenter. As with any application development software, the time required to build experiments becomes considerably shorter with knowledge and experience.

An important advantage of the PsychExperiments site is that the experimenter's development responsibility ends with the creation of source code for the experiment, because there are generic scripts available to perform most common tasks. These include presenting title screens, collecting user information, obtaining informed consent, performing error checks, and storing the data in a text file. Separate scripts located at the site allow users to download their data from the text file (McGraw et al. 2000a).

Authorware is an object-oriented development tool that makes extensive use of object libraries. For example, a given input screen can be saved as an object and reused multiple times in the same experiment or different experiments. The objects are placed on a "flow" line, which depicts the order in which each object will be executed. Each object can be decomposed into sub-objects, as depicted in Figure 1. As illustrated, only when the developer reaches the primitive object (lowest level) must he/she write programming code.

[Figure 1 OMITTED]

After the researcher obtains IRB approval for conducting the experiment and submits the Authorware code, the PsychExperiments Webmaster "packages" the experiment into files with the extensions *.aas and *.aam. These extensions allow Internet browsers equipped with the Authorware Web Player (formerly known as Shockwave for Authorware) to read the files. The Webmaster also creates an HTML page for the experiment with an "embedded" experimental tag. As described by McGraw et al. (2000a, 227):
   Once the *.aas segments, the *.aam file, and the HTML file are placed at a URL, any browser with the Authorware Web Player can execute the experiment. The Authorware Web Player plug-in resides on the user's computer and basically provides the capability of a run-time player via the user's browser.

In addition to obtaining a comma-delimited text file of data, PsychExperiments users can also download an Excel[R] workbook that is useful for data analysis. The workbook includes macros that take the comma-delimited file as input to create separate worksheets with data from each research participant, along with summary statistics and a summary page with data from the entire sample of participants. Samples of the Excel workbook can be viewed at the PsychExperiments Internet site.

The PsychExperiments Internet site offers a ready-made platform for creating Internet-based experiments and analyzing data. Based on our personal experience, an experienced programmer can learn the Authorware software and develop an experiment in a few days of concentrated effort. Researchers with less technology knowledge would require longer learning curves in order to create an experiment.
TABLE 1
Journals, Issues, and Years Searched for Internet-Based Experiments

Journals                                     Years and Issues Searched

Accounting Horizons                                  1994-2002
Accounting, Organizations, and Society               1995-2002
Advances in Accounting Behavioral
 Research                                            1998-2002
Auditing: A Journal of Practice & Theory             1994-2002
Behavioral Research in Accounting                    1994-2002
International Journal of Accounting
 Information Systems                               1995-2002 (a)
Issues in Accounting Education                 Fall 1995-August 2002
Journal of Accounting Education                      1994-2002
The Journal of the American Taxation
 Association                                         1995-2002
Journal of Information Systems                      1994-2002,
                                               including Supplements
Journal of Management Accounting Research            1994-2002
National Tax Journal                              1995-March 2003
The Accounting Review                                1995-2002

Search excludes instructional cases for all journals.

(a) Formerly named Advances in Accounting Information Systems Research.

TABLE 2
Summary of Behavioral Accounting Internet-Based Research

Study              Primary Issue(s)              Participants

Barrick        How does Internal           * 41 graduate taxation
(2001)         Revenue Code section          students and
               knowledge affect taxation   * 31 tax professionals
               research performance?         from three Big 4 firms

Beeler et al.  What are the relationships  145 usable responses
(2001)         among perceived justice,    (11.72%) that resulted
               organizational              from email to 1,280
commitment,                 geographically dispersed
               and work performance        managerial accountants
               among managerial
               accountants?

Hodge          How does hyperlinking       47 local MBA students
(2001)         audited with unaudited      participated for $10 and
               information affect source   the chance to win $100 in
               credibility in a Web        a random drawing (see
               environment?                Hodge 2000)

Beeler and     How do contingent           73 geographically
Hunton (2002)  economic rents affect       dispersed audit partners
               auditor independence?

Herron and     What effect does time       70 local students who
Young (2002)   pressure have on ethical    volunteered or
               decisions in an audit-      participated for course
               related task?               credit

                                                     Relevant
                 Participant     Data Response     Experimental
Study             Recruiting        Method            Controls

Barrick        * Students--at    Internet       Physical presence of
(2001)           request of                     author during data
                 course                         collection
                 instructor.
               * Professionals
                 --at the
                 request of a
                 CPA firm
                 partner or as
                 a part of a
                 continuing
                 education
                 course

Beeler et al.  * Personal        Email on       1. One week data
(2001)           contact with    proprietary       collection window
                 participating   Association    2. Random ordering
                 Accountants'    intranet          of items within
                 Association                       instrument
               * Email on                          sections
                 Association's                  3. Match of IP
                 proprietary                       addresses to
                 intranet                          originating
                                                   client computer
                                                   IP address

Hodge          Personal contact  Internet or    Comparison of within-
(2001)         with local        paper-and-     laboratory versus
               M.B.A. program    pencil         out-of-laboratory
                                                responses

Beeler and     Personal contact  Internet       1. Password required
Hunton (2002)  with firm                           for access to Web
               offices                             site
                                                2. Match of IP
                                                   addresses to
                                                   originating client
                                                   computer IP
                                                   address
                                                3. One week data
                                                   collection window
                                                4. Random ordering
                                                   of items within
                                                   instrument
                                                   sections

Herron and     Instructor        Internet       1. Real-time random
Young (2002)   contact with                        assignment to
               local accounting                    conditions
               class                            2. Online clock
                                                   display and
                                                   computer
                                                   monitoring (to
                                                   create time
                                                   pressure)
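
Several of the studies summarized above relied on matching IP addresses to originating client computers as a control against multiple submissions from the same participant (an internal validity threat noted in Table 3 below). As a minimal sketch of such a screen, the Python program below, which assumes each logged data row ends with the submitting computer's IP address (a hypothetical layout), flags repeated addresses for the researcher's inspection. Because Internet Service Providers often assign IP addresses dynamically (see the Glossary), a match indicates only a possible duplicate, not a certain one.

import csv

def flag_duplicate_ips(path="experiment_data.txt"):
    """Report rows whose originating IP address was already seen earlier."""
    seen = {}
    with open(path, newline="") as f:
        for row_number, row in enumerate(csv.reader(f), start=1):
            ip_address = row[-1]  # hypothetical: IP address logged as the last field
            if ip_address in seen:
                print(f"Row {row_number} repeats IP {ip_address} "
                      f"(first seen at row {seen[ip_address]})")
            else:
                seen[ip_address] = row_number

if __name__ == "__main__":
    flag_duplicate_ips()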

TABLE 3
Validity Characteristics of Internet-Based Experiments
Compared with Laboratory Experiments

Type of Validity(a)           Characteristics of           Expected
                           Internet Data Collection        Effect on
                                                           Validity

Statistical Conclusion     1. Increased Sample Size and       ↑
Validity                      Statistical Power
(Covariation of Treatment  2. Decreased or Eliminated         ↑
and Outcome)                  Data Entry Errors
                           3. Increased Variability in        ↓
                              Experimental Setting
                           4. System Downtime                 ↓
                           5. Software Coding Errors          ↓
                           6. Internet Formatting             ↓
                              Differences

Internal Validity (Causal  1. Decreased Potential             ↑
Inference of Treatment        Diffusion of Treatment
Effect on Outcome)         2. Increased Participant           ↓
                              Drop-Out Rates across
                              Treatments
                           3. "Cheating"--Multiple            ↓
                              Submissions from the
                              Same Participant

Construct Validity         1. Decreased Demand Effects        ↑
(Generalizability from        and Other Experimenter
Constructs to Sample)         Influences
                           2. Decreased Participants'         ↑
                              Evaluation Apprehension

External Validity          1. Increased Heterogeneity of      ↑
(Generalizability to Other    Respondents
People, Tasks, Settings,   2. Increased Variability in        ↑
Treatments, and Times)        Times and Settings
                           3. Interactive Controls            ↑
                           4. Multimedia Capability           ↑
                           5. Real-Time Multiparticipant      ↑
                              Capability
                           6. Increased Participant           ↓
                              Self-Selection

(a) Adapted from Shadish et al. 2001. ↑ (↓) denotes an expected increase (decrease) in validity.


Thanks to Kenneth McGraw for advice and assistance, and to Anita Reed for research assistance. Thanks also to Frank Hodge, Steve Kaplan, Uday Murthy, Teresa Stephenson, Brad Tuttle, Chris Wolfe, an anonymous reviewer, and workshop participants at the University of California, Riverside, the University of South Florida, and Virginia Commonwealth University, for comments on earlier drafts. Professor Stone gratefully acknowledges the financial support provided by the Von Allmen School of Accountancy and the Gatton College of Business at the University of Kentucky. Author order is alphabetic; the authors contributed equally to this project.

This manuscript was commissioned by the editor.

GLOSSARY

ASCII--American Standard Code for Information Interchange is a standard seven-bit code created by the American National Standards Institute in 1968. ASCII code was established to promote compatibility in computer data processing.

Active Server Pages (ASP)--applications that interact with a user's input to create dynamic HTML Web pages.

Authorware[R][TM]--authoring software that researchers can use to create Internet-based experiments.

Client computer--a computer requesting information from a server on the Internet or a local network. See "Server computer" below.

Common Gateway Interface (CGI) scripts--programs originally designed to run in a UNIX environment but that also work (although more slowly) in Windows[TM] environments. CGI scripts facilitate interactive activity such as filling in a form on the Internet.

Cookie--a text file stored on a user's computer that is used to identify the user and provide that user with a customized Web page.

ColdFusion[R][TM]--a software development tool that allows for creation of Internet applications.

Comma delimited file--a text file in which attributes (fields) are separated by commas and entities (rows) are separated by line breaks. For example, one participant's record might appear as a single row of comma-separated values.

Generic script--a short computer program that automates and facilitates routine data gathering on the Internet.

Hypertext Markup Language (HTML)--the markup language in which Web pages are written so that browsers can display them.

Hypertext Transfer Protocol (HTTP)--"defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands" (http://www.webopedia.com, Jupitermedia 2003).

Internet--"an interconnected system of networks that connects computers around the world via the TCP/IP protocol" (http://www.gurunet.com, Atomica Corporation 1999-2003).

Internet-based experiment--an experiment delivered via the World Wide Web that manipulates at least one variable.

Internet Protocol (IP) address--the unique number assigned to each computer on the Internet. These numbers can be permanently assigned or dynamically assigned by an Internet Service Provider such as America Online[R].

Java--a programming language created by Sun Microsystems that is used to develop Internet applications that can operate on different operating system platforms (e.g., Linux, Windows[TM], and Apple Macintosh).

JavaScript--a scripting language created by Netscape[R].

LISTSERV--Mailing list management software from L-Soft International, Inc., Landover, MD (http://www.lsoft.com) that runs on mainframes, VMS, NT, and various UNIX machines. "LISTSERV scans email messages for the words 'subscribe' and 'unsubscribe' to automatically update the list" (Computer Language Company Inc. 2003a). The software also provides virus protection.

PERL--Practical Extraction and Report Language. "A programming language written by Larry Wall that combines syntax from several UNIX utilities and languages. Introduced in 1987, PERL is designed to handle a variety of system administrator functions and provides comprehensive string handling functions. It is widely used to write Internet server programs for such tasks as automatically updating user accounts and newsgroup postings, processing removal requests, synchronizing databases, and generating reports. Perl has also been adapted to non-UNIX platforms" (Computer Language Company Inc. 2003b).

Scripts--lines of code embedded in Web pages that perform a task, often delivered through an interface known as CGI (see above).

Server computer--a computer whose function is to provide information as requested by another computer (called a "client computer"--see above). For example, to download a form from the IRS, the user goes to the IRS Internet site and follows the link to locate and download the desired form. The forms are maintained at the IRS on a special "server" computer. The computer through which the user requests the form is a "client" computer.

Source code--the original program code in which an experiment is programmed.

TCP/IP--Transmission Control Protocol/Internet Protocol. A set of communication standards that allows the connection and transfer of information across the networks that, taken together, comprise the Internet.

Turnkey software--"off-the-shelf" software that is loaded onto a computer and used without customization by the user.

Uniform Resource Locator (URL)--"An Internet address (for example, http://www.hmco.com/trade/), usually consisting of the access protocol (http), the domain name (www.hmco.com), and optionally the path to a file or resource residing on that server (trade)" (Houghton Mifflin Company 1992).

VBScript--a scripting language created by Microsoft based on Visual Basic.

World Wide Web (WWW)--"the complete set of documents residing on all Internet servers that use the HTTP protocol, accessible to users via a simple point-and-click system" (http://www.gurunet.com, Atomica Corporation 1999-2003).

(1) Herron and Young (2000) and Alexander et al. (2003) also provide useful introductions to Internet-based data collection in BAR.

(2) Vasarhelyi (1977) was among the first BAR studies to collect data from participants using computers.

(3) As we performed this research, we noted several style variations in "Web" and "Web site." Following The Chicago Manual of Style (2003), we use the terms "Web site" and "Web" (and not "Website," "website," or "web").

(4) We distinguish between the terms "Internet" and "World Wide Web" as follows: "Internet" refers to the system of worldwide interconnected networks, while "World Wide Web" (or Web) refers to the set of documents residing on network servers throughout the world. See the Glossary.

(5) 2003 personal communication (email) from J. Barrick.

(6) 2003 personal communication (telephone) from B. Wier.

(7) Academic prices for ColdFusion[R][TM] Professional and Authorware 7.0 are $859 and $499, respectively (Macromedia customer service quote, 7/18/03).

(8) If the researcher assigns programming to a graduate student, then the issue arises as to whether the student should then be a coauthor. Fine and Kurdek (1993) argue that the level of the student's contribution to the project is the decisive criterion. For example, in Beeler and Hunton (2002) and Hodge (2001), the graduate students' involvement was limited to programming, and thus, the student was not a coauthor in either study. In other cases, where the student provides the original research idea or significantly contributes to the research design, the student should likely be a coauthor.

(9) We note that behavioral accounting doctoral students would benefit greatly by investing time in developing programming skills during their doctoral studies. There are a variety of online resources that provide a good starting point. For example, http://visualbasic.about.com/ contains information on learning Visual Basic (VB), VBScript, VB.Net, and ASP. Online tutorials for learning ASP can be found at http://www.w3schools.com/; www.kamath.com/tutorials/; and http://www.learnasp.com/learnasp/ (Bryant et al. 2003). An interactive tutorial for learning HTML can be found at http://www.davesite.com/webstation/html/.

(10) See http://www.4guysfromrolla.com/webtech/111500-1.shtml for the code for a script to hide (disable) the browser's "Back" button.

(11) AACCSYS-L and AECM are listservs focused on AIS teaching, theory, and practice. ISWORLD is an MIS list server.

(12) Personal communication, 2003, email message to one of the authors.

(13) Often, "generic scripts" (i.e., short, special-purpose computer programs that run on the server) automate the process of collecting data and transferring it to a database for analysis.

(14) These tasks are accomplished through programs executed on the server hosting the experiment (Schmidt 2000).

(15) Multimedia presentations, however, may introduce technological problems. Horswill and Coster (2001, 49) indicate that they encountered "technological difficulties with showing high quality video online due to the bandwidth, processing power, and memory required."

(16) Another software package used to develop online experiments is E-Prime[TM] (Windows-based, $695, http://www.pstnet.com/e-prime) (McGraw et al. 2000a).

(17) Novice programmers would require considerable time (perhaps 2 weeks) to learn to develop Authorware[R] experiments. Additionally, research participants must download (for free) a client version of Authorware Web player to run experiments on their computers.

REFERENCES

Alexander, R., A. Blay, and K. Hurtt. 2003. Internet-based experimental accounting research: Is this delivery method right for you? Working paper, University of California, Riverside.

American Institute of Certified Public Accountants (AICPA). 2002. Consideration of Fraud in a Financial Statement Audit. Statement of Auditing Standard No. 99. New York, NY: AICPA.

Awasthi, V., C. Chow, and A. Wu. 2001. Cross cultural differences in the behavioral consequences of imposing performance evaluation and reward systems: An experimental investigation. The International Journal of Accounting 36 (3): 291-309.

Ayers, B., C. B. Cloyd, and J. Robinson. 2003. "Read my lips ...": Does the tax rhetoric of presidential candidates affect security prices? Working paper available at: http://papers.ssrn.com/sol3/delivery.cfm/SSRN_ID382561_code030324500.pdf?abstractid=382561. Access date: 2003.

Baron, J., and M. Siepmann. 2000. Techniques for creating and using Web questionnaires in research and teaching. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 235-265. San Diego, CA: Academic Press.

Barrick, J. 2001. The effect of Code section knowledge on tax-research performance. The Journal of the American Taxation Association 23 (2): 20-34.

Beeler, J. D., D. Franz, and B. Wier. 2001. Perceptions of benefit, justice, and desired outcomes. Advances in Accounting Behavioral Research 4: 361-377.

--, and J. E. Hunton. 2002. Contingent economic rents: Insidious threats to auditor independence. Advances in Accounting Behavioral Research 5: 21-50.

Berg, J. E. 1994. Using experimental economics to resolve accounting dilemmas. Journal of Accounting and Economics 10 (Spring): 547-556.

Birnbaum, M., and D. Beeghley. 1997. Violations of branch independence in judgments of the value of gambles. Psychological Science 8: 87-94.

--. 1999. Testing critical properties of decision making on the Internet. Psychological Science 10: 399-407.

--. 2000a. Introduction to psychological experiments on the Internet. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, xv-xx. San Diego, CA: Academic Press.

--. 2000b. Decision making in the lab and on the Web. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 3-34. San Diego, CA: Academic Press.

--, ed. 2001. Introduction to Behavioral Research on the Internet. Upper Saddle River, NJ: Prentice Hall, Inc.

Bobek, D. D., and R. C. Hatfield. 2003. An investigation of the theory of planned behavior and the role of moral obligation in tax compliance. Behavioral Research in Accounting 15: 14-38.

Boylan, S. 2001. Experimental evidence on the relation between tax rates and compliance: The effect of earned vs. endowed income. The Journal of the American Taxation Association 23 (1): 75-91.

Bryant, S., and J. Hunton. 2000. The use of technology in the delivery of instruction: Implications for accounting educators and researchers. Issues in Accounting Education 15 (1): 129-162.

--, D. Stone, and B. Wier. 2003. Accountant's roles and responsibilities at the U.S. Army Corps of Engineers. Working paper, University of South Florida.

Burrell, O. K. 1929. An experiment in student and teacher rating. The Accounting Review 4 (3): 194-197.

Chicago Manual of Style. 2003. Available at: http://www.chicagomanualofstyle.org/cmosfaq.html.

Coalition Against Unsolicited Commercial Email. 2003. Pending legislation. Available at: http://www.cauce.org/legislation/index.shtml. Last updated May 12, 2003.

Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. Second edition. Hillsdale, NJ: Lawrence Erlbaum Associates.

Computer Language Company, Inc. 2003a. Computer desktop encyclopedia: Definition of LISTSERV. Available at: http://www.computerlanguage.com/index.htm. Cited 2003.

--. 2003b. Computer desktop encyclopedia: Definition of PERL. Available at: http:// www.computerlanguage.com/index.htm. Cited 2003.

Davis, J., G. Hecht, and J. Perkins. 2003. Social behaviors, enforcement, and tax compliance dynamics. The Accounting Review 78 (1): 39-69.

DeJong, D. V., R. Forsythe, and W. C. Uecker. 1985. The methodology of laboratory markets and its implications for agency research in accounting and auditing. Journal of Accounting Research 23 (2): 753-793.

--, R. J. Lundholm, R. Forsythe, and W. C. Uecker. 1985. A laboratory investigation of the moral hazard problem in an agency relationship. Journal of Accounting Research 23 (Supplement): 81-120.

Dillman, D. A. 2000. Mail and Internet Surveys: The Tailored Design Method. New York, NY: J. Wiley.

Doupnik, T., and M. Richter. 2003. Interpretation of uncertainty expressions: A cross-national study. Accounting, Organizations and Society 28 (1): 15-35.

Feltham, G. 2002. The interrelationship between estimated tax payments and taxpayer compliance. The Journal of the American Taxation Association 24: 27-46.

Fine, M. A., and L. A. Kurdek. 1993. Reflections on determining authorship credit and authorship order on faculty-student collaborations. American Psychologist 48 (11): 1141-1147.

Frick, A., M. Bachtiger, and U. Reips. 1999. Financial incentives, personal information and drop-out rate in online studies. Zurich: Online Press. Available at: dgof.de/tband99/. Cited 2003.

Herron, T., and G. Young II. 2000. E-research: Moving behavioral accounting research into cyberspace. Advances in Accounting Behavioral Research 3: 265-280.

--, and G. Young. 2002. Ethical decisions and the dilution effect: The impact of nondiagnostic information on ethical decisions. Research on Accounting Ethics 8: 145-166.

Hodge, F. 2000. Hyperlinking unaudited information to audited financial statements: Effects on investor judgments. Doctoral dissertation, Indiana University.

--. 2001. Hyperlinking unaudited information to audited financial statements: Effects on investor judgments. The Accounting Review 76 (4): 675-691.

Hogarth, R. M. 1991. A perspective on cognitive research in accounting. The Accounting Review 66 (2): 277-290.

Hooks, K. L., and J. L. Higgs. 2002. Workplace environment in a professional services firm. Behavioral Research in Accounting 14: 105-127.

Horswill, M., and M. Coster. 2001. User-controlled photographic animation, photograph-based questions and questionnaires: Three Internet-based instruments for measuring drivers' risk-taking behavior. Behavior Research Methods, Instruments, & Computers 33 (1): 46-58.

Houghton Mifflin Company. 1992. American Heritage[R] Dictionary of the English Language. Boston, MA: Houghton Mifflin Company.


Hutchinson, P. D., G. M. Fleischman, and D. W. Johnson. 1998. Email versus mail surveys: A comparative study. Review of Accounting Information Systems 2 (3): 43-55.

--, --, and --. 2001. Email surveys: Additional research insights. The Review of Business Information Systems 5 (Spring): 37-48.

Internet Software Consortium. 2003. Internet domain survey. Available at: http://www.isc.org/ds/. Cited 2003.

Kraemer, H. C., and S. Thiemann. 1987. How Many Subjects? Statistical Power Analysis in Research. Newbury Park, CA: Sage.

Krantz, J., and R. Dalal. 2000. Validity of Web-based psychological research. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 35-60. San Diego, CA: Academic Press.

Krantz, J. 2003. Psychological research on the net. Available at: http://psych.hanover.edu/Research/exponnet.html. Cited 2003.

Lindsay, R. M. 1995. Reconsidering the status of tests of significance: An alternative criterion of adequacy. Accounting, Organizations and Society 20 (January): 35-53.

McGraw, K., M. Tew, and J. Williams. 2000a. PsychExps: An online psychology laboratory. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 219-233. San Diego, CA: Academic Press.

--, --, and --. 2000b. The integrity of Web-delivered experiments: Can you trust the data? Psychological Science 11 (6): 502-506.

Musch, J., and U. Reips. 2000. A brief history of Web experimenting. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 61-87. San Diego, CA: Academic Press.

Nearon, B. H. 1999. Survey Reports CPA Computer and Internet Use. New York State Society of Certified Public Accountants. Available at: http://www.nysscpa.org/trustedprof/0799/tp19.htm. Cited 2003.

Nielsen//Net Ratings. 2003. Online usage at work jumps 17 percent year-over-year, driven by female office workers. Available at: http://www.Nielsen-netratings.com/news.jsp. Cited 2003.

Odom, M. D., M. Giullian, and M. Totaro. 1999. New technology in survey research: Does it improve response rates? Review of Accounting Information Systems 3 (2): 27-34.

Pany, K. 1987. Within- vs. between-subjects experimental designs: A study of demand effects. Contemporary Accounting Research 7 (Fall): 39-53.

Patel, C., G. Harrison, and J. McKinnon. 2002. Cultural influences on judgments of professional accountants in auditor-client conflict resolution. Journal of International Financial Management & Accounting 13 (1): 1-31.

Payne, J. W., J. R. Bettman, and E. J. Johnson. 1993. The Adaptive Decision Maker. New York, NY: Cambridge University Press.

Reips, U. 2000. The Web experiment method: Advantages, disadvantages, and solutions. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 89-117. San Diego, CA: Academic Press.

Rosenberg, M. J. 1969. The conditions and consequences of evaluation apprehension. In Artifact in Behavioral Research, edited by R. Rosenthal and R. Rosnow, 143-179. New York, NY: Academic Press.

Rosenthal, R. 1966. Experimenter Effects in Behavioral Research. New York, NY: Appleton-Century-Crofts.

--. 1976. Experimenter Effects in Behavioral Research. New York, NY: Irvington Publishers [Distributed by Halsted Press.]

Schmidt, W. 2000. The server side of psychology Web experiments. In Psychological Experiments on the Internet, edited by M. H. Birnbaum, 285-310. San Diego, CA: Academic Press.

Shadish, W., T. Cook, and D. Campbell. 2001. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin.

Smith, V. L., J. Schatzberg, and W. S. Waller. 1987. Experimental economics and auditing. Auditing: A Journal of Practice & Theory 7 (1): 71-93.

Spilker, B. C. 1995. The effects of time pressure and knowledge on key word selection behavior in tax research. The Accounting Review 70: 49-70.

University of Iowa, Tippie College of Business. 2001. Iowa electronic markets. Available at: http://www.biz.uiowa.edu/iem/index.html. Access date: 2003.

University of Mississippi. 2003. PsychExps: Psychological experiments on the Internet. Available at: http://psychexps.olemiss.edu/. Access date: 2003.

Vasarhelyi, M. 1977. Man-machine planning systems: A cognitive style examination of interactive decision making. Journal of Accounting Research 15 (Spring): 138-153.

Weick, K. 1983. Stress in accounting systems. The Accounting Review (April): 350-374.

Stephanie M. Bryant

University of South Florida

James E. Hunton

Bentley College

Dan N. Stone

University of Kentucky
COPYRIGHT 2004 American Accounting Association