The Effects of Small Monetary Incentives on Response Quality and Rates in the Positive Confirmation of Account Receivable Balances.

AU 330 of the AICPA Professional Standards (AICPA 1997) considers the confirmation of accounts receivable balances to be a generally accepted auditing procedure. A major drawback to the procedure is that auditors are often confronted with high nonresponse rates that can result in time-consuming alternative auditing procedures. Several research studies have found that response rates can be significantly improved with the use of small monetary incentives, and the AICPA has encouraged the use of incentives in a document titled Auditing Procedure Study: Confirmation of Accounts Receivable (AICPA 1996). However, auditing researchers have not studied the potential effects of monetary incentives on the "quality" of confirmation responses. This study addresses that issue by investigating the relationship between monetary incentives and two relevant dimensions of response quality: misstatement detection rate and self-reported completion time.

Data were gathered by mailing 7,200 positive confirmations to retail and classified advertising customers of four large metropolitan newspapers. The sample consisted of a cross-section of commercial businesses, governmental entities, and not-for-profit organizations. One-third of the confirmations contained a seeded error that overstated the account balance, one-third contained a seeded error that understated the balance, and one-third were not intentionally misstated. Each of the three groups was further subdivided such that one-third received a confirmation with a quarter incentive, one-third received a confirmation with a dollar incentive, and one-third received a confirmation with no incentive. Consistent with prior research, the use of monetary incentives significantly improved response rates, and a quarter incentive proved as effective as a dollar. However, when monetary incentives were used, there was a significant reduction in both dimensions of response quality, and the quality effects of the quarter and the dollar were not significantly different. This result occurred for both the understated and overstated accounts. Both response quality and the response rate were higher for overstated than for understated accounts.

While this study supports a significant body of prior research regarding the response-enhancing effectiveness of small monetary incentives, it has also produced some disturbing evidence related to the negative effects of monetary incentives on confirmation response quality. It must be emphasized that definitive conclusions, much less policy decisions, should not be made based upon the limited amount of existing research evidence. Until further research is completed, the auditing profession should take a "wait and see" approach regarding the use of monetary incentives in the audit confirmation process.

INTRODUCTION

The Auditing Standards Division of the American Institute of Certified Public Accountants (AICPA) has encouraged the use of incentives in an effort to improve confirmation response rates. Specifically, in a document titled Auditing Procedure Study: Confirmation of Accounts Receivable (AICPA 1996), the use of small monetary incentives is included among a variety of practical suggestions for improving the effectiveness and efficiency of the confirmation procedure.

While prior auditing research has produced evidence suggesting that confirmation response rates can be improved with small monetary incentives (e.g., Engle 1991), research in nonaccounting disciplines reports disturbing information suggesting that the quality of survey-based responses can deteriorate when small monetary incentives are used (e.g., Hansen 1980; Schneider and Johnson 1994, 1995). A limited number of studies have focused on response quality in an auditing confirmation context (e.g., Armitage 1990; Caster 1990), but none of these investigations examined the potential effects of small monetary incentives on quality. The current study was designed to provide information regarding this issue.

In the current study, four newspaper organizations mailed 7,200 confirmations to a representative cross-section of trade accounts receivable customers. The field experiment employed a three (no incentive, quarter incentive, dollar incentive) by three (no seeded error, seeded understatement error, and seeded overstatement error) fully-crossed, between-subjects design.

Consistent with prior auditing research, the results indicate that monetary incentives enhanced confirmation response rates, and significant response rate differences did not exist between the quarter and dollar incentive conditions. Also, consistent with prior studies, the misstatement detection rate was higher for overstated, as compared to understated, account balances. In contrast to prior research, the response rate was higher for overstated, as compared to understated, account balances. The most noteworthy findings are that the use of small monetary incentives reduced response quality and failed to close the quality gap between the overstated and understated accounts.

The current research makes three notable contributions to the auditing literature. First, this study investigates previously unexplored relationships between the use of small monetary incentives and two dimensions of confirmation response quality (i.e., misstatement detection rate and number of minutes expended in completing the confirmation). Second, the results extend existing knowledge with regard to the effects of small monetary incentives on response rates and response quality between overstated and understated balance conditions. Third, this study reinforces extant auditing research by investigating the response-enhancing effectiveness of small monetary incentives in a new context, using a large cross-section of trade accounts receivable customers. The findings have implications for accounting researchers, the AICPA, and practicing auditors, as evidence suggests that the use of small monetary incentives with the positive form of accounts receivable confirmations may not be warranted.

PRIOR RESEARCH

The confirmation procedure is a common tool used to achieve audit objectives in a wide variety of areas, particularly in the verification of account receivable balances during an independent financial statement audit. AU 330 of the AICPA Professional Standards (AICPA 1997) considers the confirmation of accounts receivable balances to be a generally accepted auditing procedure. While the procedure is not mandatory, there is a presumption that auditors will confirm receivables during a financial statement audit. A major drawback to the procedure is that auditors are commonly confronted with the problem of high nonresponse rates, which result in concerns over nonresponse bias and time-consuming alternative auditing procedures. Further, prior research has led to questions concerning the accuracy of confirmations in detecting misstatements.

Response Rates

Prior auditing research has examined response rates in the absence of monetary incentives. For example, Armitage (1990) reported that the positive confirmation response rate for understated balances was not significantly different from the response rate for overstated balances. Also, the response rate for correctly stated balances was not significantly different from misstated balance response rates. Research conducted by Caster (1990) also found that the response rates of overstated, understated, and correctly stated account balances were not significantly different from one another.

Monetary Incentives and Response Rates

Recent findings suggest that small monetary incentives significantly increase confirmation response rates. For example, Engle (1991) investigated the response-enhancing effectiveness of two monetary incentive levels (quarter and dollar) and prenotification letters when positive confirmation requests were sent to installment and commercial loan customers of a medium-size bank. His analysis revealed that the small monetary incentives proved effective with installment loans when incentives were used without a prenotification letter.(1) The quarter proved as effective as the dollar and there was no indication that the use of monetary incentives insulted the confirmation recipients or that the use of the incentives was viewed as unprofessional.

A substantial amount of nonaccounting research (e.g., Church 1993; Yu and Cooper 1983) has also produced evidence indicating that small monetary incentives can be effectively used to increase the response rate of mail surveys. For example, Church (1993) performed a meta-analysis of 38 studies that utilized various types of incentives to increase mail-survey response rates. The analysis led the researcher to conclude: "The use of prepaid cash rewards for completing surveys had the most significant impact on increasing response rates" (Church 1993, 75).

Marketing researchers (e.g., Armstrong and Yokum 1994) have cited the "norm of reciprocity" as a theoretical explanation for the response-enhancing effectiveness of monetary incentives. In essence, this explanation asserts that recipients of a small monetary incentive believe that they should "give something back" to the incentive provider. Accordingly, individuals are often motivated to complete and return a survey since this form of activity satisfies their reciprocation beliefs. The AICPA appeared to be describing a similar psychological process when it stated: "Why should an accompanying cash inducement of a nominal amount (such as a dollar) significantly increase response rates? Apparently, it is not necessarily the value of the inducement that increases the response rate, but rather the psychology of offering a reward to the respondent. If the recipient of a confirmation request receives a monetary reward, he or she is more likely to feel obligated to comply with the confirmation request" (AICPA 1996, 30).

While small monetary incentives appear to be a promising technique that auditors can use to increase confirmation response rates, there is a potential problem with the procedure. That is, researchers in nonaccounting disciplines have produced evidence suggesting that incentives can detract from the quality of responses.

Response Quality

Only a limited amount of research has focused on response quality in an audit confirmation context. Warren (1975) summarized five studies that involved the deliberate misstatement of information on confirmation requests and the measurement of reported detection rates (an indicator of quality) by confirmation recipients. Two salient discoveries were that positive confirmations appeared to be more reliable than negative confirmations and recipients were less likely to detect and/or report misstatements that were in their favor. More recently, research findings from Armitage (1990) and Caster (1990) supported Warren's (1975) summary.

While studies in an audit confirmation context have produced evidence indicating that several variables (e.g., misstatement direction and form of confirmation) may be affecting the misstatement detection effectiveness of confirmations, the potential effect of small monetary incentives on confirmation response quality has not been addressed by accounting researchers. Available evidence pertaining to this issue comes from nonaccounting sources and findings are mixed.

Monetary Incentives and Response Quality

Research in marketing and public opinion surveys provides evidence that the use of monetary incentives can affect several aspects of response quality relevant to the audit confirmation procedure. Some studies (e.g., McDaniel and Rao 1980; James and Bolstein 1990) found that incentives improved the quality of responses, while other investigations (e.g., Hansen 1980; Schneider and Johnson 1994) reported that the use of incentives detracted from response quality.

The previously described "norm of reciprocity" can be used to explain research findings that small monetary incentives improve response quality. Conversely, this explanation can also be used to provide a plausible reason for diminished response quality in the presence of incentives. That is, while individuals may believe that they should reciprocate as a way to "help" the incentive provider, some individuals may believe that the mere act of responding to the survey, irrespective of conscientious attention to answering survey items, is sufficient reciprocal behavior.

At present, the effect of monetary incentives on response quality is unclear since there is research evidence and theoretical support on both sides of the debate. In addition, most research studies addressing this issue have focused on surveys in marketing or public opinion polling environments. Due to a host of unique factors in the audit environment (e.g., types of respondents, purpose of eliciting responses, and vested interest in confirmed balances), the question of whether monetary incentives affect response quality in audit confirmation applications remains unanswered.

The primary objective of this research is to investigate the impact of small monetary incentives on the response quality of confirmation requests. Naturally, response rates are inextricably linked to response quality; thus, these two response factors are collected and examined simultaneously. Given the exploratory nature of this study, the following research question is examined:

RQ: To what extent will small monetary incentives impact the quality of responses in the positive confirmation of account receivable balances?

RESEARCH METHOD

Design and Procedures

Data were gathered by mailing 7,200 positive confirmations to accounts receivable customers of four independent, large metropolitan newspapers (each organization randomly selected 1,800 customers). The newspaper organizations were located in the Northeast, Southeast, Midwest, and West of the United States. The sample consisted of a representative cross-section of commercial businesses, governmental entities, and not-for-profit organizations. Organizations, as opposed to individuals, comprised the target population since organizations represent a relatively large proportion of the overall accounts receivable balances for many business enterprises. The random-selection process was designed to produce a sample representative of a nonstratified, trade accounts receivable population. The accounts receivable transactions represented the sale of retail and classified advertisements.

The researchers provided each of the four organizations with a detailed sampling plan. Each organization was instructed to confirm 1,800 accounts randomly assigned to three conditions: intentionally overstated, intentionally understated, no intentional misstatement. The understated (overstated) account balances omitted the latest sales transaction (sales volume discount) that was within the range of 5 to 10 percent of the customer's outstanding accounts receivable balance. This particular error range was selected by the sponsoring newspaper organizations, as they did not want the seeded errors to be so high that they would reflect poorly on the newspapers' accounting departments.(2)

Each condition was then subdivided (randomly) into three incentive levels: no monetary award (control group), a quarter incentive, and a dollar incentive. Each confirmation request was accompanied by a cover letter signed by the publisher of the newspaper. The letter stated the following:
   You have been randomly selected from a database of all classified charge
   customers to participate in an accounting study cosponsored by [Newspaper
   name] and the [University name]. Your participation in this study is
   completely voluntary. [Newspaper name] stands to learn a great deal from
   this study. As a result, we hope to more effectively target our marketing
   and promotional dollars. This is one of several programs designed to hold
   your advertising rate increases to an absolute minimum, as we have done in
   the past. In the following envelope you will find a request from our
   accounting department to confirm your account receivable balance as of
   [end-date]. Your participation in this study is greatly appreciated.


The positive confirmations were printed on the organization's letterhead. The confirmation design was a close adaptation of an example provided in Auditing Procedure Study: Confirmation of Accounts Receivable (AICPA 1996, 20). The confirmation form was modified slightly to reflect the fact that an independent audit was not the motivating factor for the request. Instead of referring to auditors, the confirmation stated that the organization's accounting department was updating its records regarding the recipient's account and that the confirmation should be returned to the accounting department.

The confirmation plan mandated that the four organizations conform to standardized mailing instructions. All subjects received a (1) cover letter, (2) confirmation, (3) detailed statement of their account activity, and (4) postage-paid return envelope addressed to the sponsoring organization's accounting department. In addition, incentive recipients received a sheet of paper containing the money. The money (either a quarter or a dollar) was taped to a plain white sheet of paper containing a short explanation of the money and its purpose. The message stated:
   Please accept the attached quarter (dollar). The money is not a payment for
   services, but rather a small token of appreciation for your consideration
   in spending a small amount of your valuable time completing and returning
   the accompanying confirmation form.
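
For illustration, the assignment and error-seeding plan described above can be sketched in a few lines of Python. The sketch is purely illustrative: the customer data frame, the "true_balance" column, and the simulated percentage error are hypothetical stand-ins, since the newspapers seeded actual sales transactions and volume discounts rather than percentage adjustments.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2001)  # arbitrary seed, for reproducibility only

    def assign_conditions(customers: pd.DataFrame) -> pd.DataFrame:
        """Randomly assign 1,800 customers of one newspaper to the 3 x 3 design."""
        sample = customers.sample(n=1800, random_state=2001).copy()

        # Fully crossed 3 x 3 between-subjects design: 200 accounts per cell
        # per newspaper (800 per cell across the four newspapers).
        cells = [(m, i) for m in ("none", "understated", "overstated")
                        for i in ("none", "quarter", "dollar")]
        order = rng.permutation(np.repeat(np.arange(9), 200))
        sample["misstatement"] = [cells[k][0] for k in order]
        sample["incentive"] = [cells[k][1] for k in order]

        # Seed an error of 5 to 10 percent of the outstanding balance. The study
        # omitted an actual sales transaction (understatement) or sales volume
        # discount (overstatement); a percentage adjustment stands in for that here.
        pct = rng.uniform(0.05, 0.10, size=len(sample))
        direction = sample["misstatement"].map(
            {"none": 0, "understated": -1, "overstated": 1})
        sample["stated_balance"] = sample["true_balance"] * (1 + direction * pct)
        return sample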


Response-Quality Metrics

In this study, one dimension of quality is defined as the seeded error detection rate of respondents. More specifically, the detection-rate metric is defined as the number of respondents who report an error (i.e., detectors) divided by the total number of respondents.(3) This metric was chosen as an important indicator of response quality for the following reason. The confirmation of receivables is typically part of a sampling plan. When auditors estimate the value of a client's accounts receivable balance, they base their inference on information derived from the following two sources: (1) confirmations, and (2) alternative auditing procedures performed on nonresponding accounts. The detection-rate metric in this study focuses on the former informational source. Since alternative auditing procedures are typically not performed for responding accounts, the quality of the information obtained from confirmation respondents directly impacts the accuracy of the auditors' inference about the fairness of the client's account receivable balance.(4) The second dimension of quality is the self-reported number of minutes expended in completing the confirmation request. This metric provides an indication of the respondents' level of effort.
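
Expressed as formulas, the primary metric is simply the ratio of detectors to respondents within the seeded-error conditions; footnote 3 describes an alternative dollar-weighted metric. The short sketch below computes both under assumed column names (reported_error, seeded_error_amount, true_balance); it is an illustration, not the authors' code.

    import pandas as pd

    def detection_rate(respondents: pd.DataFrame) -> float:
        """Primary metric: detectors / respondents within the seeded-error
        conditions (reported_error is an assumed 0/1 indicator)."""
        seeded = respondents[respondents["misstatement"] != "none"]
        return seeded["reported_error"].mean()

    def dollar_value_metric(respondents: pd.DataFrame) -> float:
        """Alternative metric (footnote 3): dollar value of detected seeded
        errors divided by the 'correct' accounts receivable balance."""
        seeded = respondents[respondents["misstatement"] != "none"]
        detected = (seeded["reported_error"] * seeded["seeded_error_amount"]).sum()
        return detected / seeded["true_balance"].sum()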

RESULTS

Descriptive Statistics

Table 1 presents descriptive statistics for the nine treatment conditions. Of the 7,200 mailed confirmations, 4,146 (58 percent) were returned. The mean accounts receivable balance was $3,253.72 and, on average, there was a $245.01 (7.53 percent) seeded misstatement in the account balances. Of the 4,800 account holders whose statements included seeded errors, 2,751 (57 percent) responded to the confirmation request and 1,480 (54 percent) of those respondents reportedly detected the errors.(5) The average number of transactions on each statement was 25.46 and there were 252 reported nonseeded errors (3.5 percent). On average, respondents spent 40.77 minutes completing the confirmation.(6)
TABLE 1
Descriptive Statistics

                                                Misstatement Direction
Incentive                         No                   Understated          Overstated
Condition                         Misstatement         Balance              Balance

None
  Sample size                     n = 800              n = 800              n = 800
  Mean balance                    $3,166.31            $3,201.56            $3,200.41
  Mean number of transactions     25.14                24.59                25.99
  Mean seeded error               0.00%                7.56%                7.55%
  Survey responses                (378/800) = 47.25%   (303/800) = 37.88%   (435/800) = 54.38%
  Seeded error detection          0.00%                (119/303) = 39.27%   (332/435) = 76.32%
  Nonseeded errors                (28/800) = 3.50%     (30/800) = 3.75%     (27/800) = 3.38%
  Mean number of minutes          15.34                34.95                54.20
  Mean days outstanding           18.81                18.76                19.06

Quarter
  Sample size                     n = 800              n = 800              n = 800
  Mean balance                    $3,319.22            $3,292.76            $3,330.13
  Mean number of transactions     25.40                26.20                24.50
  Mean seeded error               0.00%                7.55%                7.51%
  Survey responses                (517/800) = 64.63%   (437/800) = 54.63%   (581/800) = 72.63%
  Seeded error detection          0.00%                (136/437) = 31.12%   (383/581) = 65.92%
  Nonseeded errors                (25/800) = 3.13%     (26/800) = 3.25%     (28/800) = 3.50%
  Mean number of minutes          11.27                20.98                39.55
  Mean days outstanding           18.92                19.00                18.86

Dollar
  Sample size                     n = 800              n = 800              n = 800
  Mean balance                    $3,335.71            $3,251.84            $3,185.52
  Mean number of transactions     25.65                25.90                25.78
  Mean seeded error               0.00%                7.51%                7.46%
  Survey responses                (500/800) = 62.50%   (424/800) = 53.00%   (571/800) = 71.38%
  Seeded error detection          0.00%                (131/424) = 30.90%   (379/571) = 66.37%
  Nonseeded errors                (31/800) = 3.88%     (28/800) = 3.50%     (29/800) = 3.63%
  Mean number of minutes          10.82                21.19                39.08
  Mean days outstanding           18.97                18.82                19.24

Nonresponse Bias

Preliminary data analyses addressed the issue of nonresponse bias. Responses were arranged according to date received and then split into quartiles. A comparison of the early (first quartile) and late (fourth quartile) responders produced no significant differences on any of the variables collected in this study (all beta estimate p-values > .15). In addition, data availability allowed a comparison of respondents to nonrespondents based on location, accounts receivable balance, misstatement amount, number of transactions, and treatment condition. There were no significant differences (at p ≤ .10) between these two groups on any of these variables. The totality of evidence led the researchers to conclude that nonresponse bias was not a significant factor in this study.
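
A check of this form can be sketched as follows; the column names and the use of a Welch two-sample test are illustrative assumptions rather than the authors' exact procedure, which relied on regression parameter estimates.

    import pandas as pd
    from scipy import stats

    def early_vs_late(responses: pd.DataFrame, variable: str):
        """Compare first-quartile (early) and fourth-quartile (late) responders
        on one collected variable, a common proxy test for nonresponse bias."""
        ordered = responses.sort_values("date_received").reset_index(drop=True)
        q = len(ordered) // 4
        early = ordered.iloc[:q][variable]
        late = ordered.iloc[-q:][variable]
        return stats.ttest_ind(early, late, equal_var=False)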

Testing the Research Question

A logistic regression model was run to facilitate comparison of detection rates among treatment conditions (Table 2). The no-misstatement condition was not included, as there were no seeded errors to detect. Preliminary model testing indicated that detection rates between the quarter and dollar conditions were not significantly different in either the understated or overstated conditions; thus, the quarter and dollar conditions were collapsed into a single incentive category.
TABLE 2
Logistic Regression Results
Dependent Variable = Detection Rate

Panel A: Logistic Regression Model

n = 2,751 (Total number of respondents whose statements included seeded
errors); Model χ² = 374.68; Degrees of Freedom = 3; p-value < .001;
Pseudo R² = .120.

Variable                    Beta Estimate   χ² Statistic   p-value

Intercept                       -0.80          117.78        .001
Misstatement condition(a)        1.47          232.04        .001
Incentive condition(b)           0.36            6.87        .009
Misstatement X Incentive         0.14            0.52        .470

Panel B: Comparison of Mean Detection Rates(c)

Treatment       Detection   Treatment      Detection
Condition         Rate      Condition        Rate      t-statistic   p-value

Overstated       69.54%     Understated     33.76%        11.55        .001
No incentive     57.80%     Incentive       48.58%         3.51        .001

(a) Misstatement condition is coded as: 0 = understated, 1 = overstated
(b) Incentive condition is coded as: 0 = incentive, 1 = no incentive

Notes: Preliminary testing indicated no significant detection rate
differences between a quarter and dollar in either misstatement
condition; thus, for purposes of parsimony, these two incentive
conditions were collapsed into a single category.

The following covariates were initially included in the regression
model: outstanding balance amount, seeded error amount, days
outstanding, number of transactions. Since all covariates were
insignificant (p > .10), they were subsequently dropped from the model.

(c) The t-test comparisons were made using parameter estimates produced
by logistic regression models used to obtain the mean detection rate
for each condition. Each t-test includes a Bonferroni adjustment
(α = .05) to control for inflated Type I error, as described in
Mendenhall and Sincich (1996).

The logistic regression model results (Panel A) and mean detection rate comparisons (Panel B) indicate that the detection rate in the "overstated" condition is significantly higher than the "understated" condition. This finding is consistent with Armitage (1990), Caster (1990), and Warren (1975). However, our findings suggest that the use of monetary incentives did not close the detection rate gap between the understated and overstated conditions. Results also indicate that the detection rate is significantly higher in the "no-incentive" condition, as compared to the "incentive" condition.
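
To make the Table 2 specification concrete, the sketch below fits a logit of the detection indicator on the two dummy-coded conditions and their interaction using statsmodels; the data frame and column names are assumptions that mirror the coding in notes (a) and (b). The same formula, fit to all 7,200 accounts with a 0/1 response indicator as the dependent variable, corresponds to the response-rate analysis reported below.

    import statsmodels.formula.api as smf

    # seeded_respondents: assumed data frame of the 2,751 respondents whose
    # statements included seeded errors, with 0/1 columns:
    #   detected     (1 = seeded error reported)
    #   overstated   (1 = overstated, 0 = understated)       -- per note (a)
    #   no_incentive (1 = no incentive, 0 = quarter/dollar)  -- per note (b)
    logit_fit = smf.logit(
        "detected ~ overstated * no_incentive",  # main effects plus interaction
        data=seeded_respondents,
    ).fit()
    print(logit_fit.summary())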

An ANCOVA model was used to examine the second dimension of response quality (number of minutes spent completing the confirmation). The ANCOVA results are shown in Table 3, Panel A, and Scheffe's multiple pairwise comparison test results are shown in Panel B. Since preliminary testing indicated no significant difference in mean completion times between a quarter and dollar incentive across the three misstatement conditions, the quarter and dollar conditions were collapsed into a single category. Results shown in Table 3, Panel A indicate significant main effects for the misstatement and incentive conditions, while the interaction term is not significant. Multiple pairwise comparisons indicate monotonically increasing completion times across the "no misstatement," "understated," and "overstated" conditions. Also, completion time was significantly higher in the "no-incentive" condition than in the "incentive" condition.
TABLE 3
Analysis of Self-Reported Completion Time

Panel A: ANCOVA Model Results(a,b)

Variable                                Mean Square   F-Ratio  p-value

Number of transactions on statement(c)  1,185,291.00   859.45   .001
Misstatement condition                    148,910.10   107.97   .001
Incentive condition                        49,796.43    36.11   .001
Misstatement X Incentive                      221.42     0.16   .852
Error                                       1,379.12

n = 4,126 (20 respondents did not report completion time).

Panel B: Results of Scheffe's Multiple Pairwise Comparison Test (α = .05)

Main Misstatement Effects   Mean Minutes(d)

No Misstatement                  12.21
Understated                      24.69
Overstated                       43.38

Main Incentive Effects      Mean Minutes(d)

No Incentive                     35.81
Incentive                        24.64

(a) On an exploratory basis, four additional covariates were tested with
the following results: outstanding balance amount (p = .008),
misstatement amount (p = .290), number of days outstanding (p = .045),
and number of reported nonseeded errors (p = .942). Since the direction,
significance, and interpretation of study results do not change when the
two significant covariates (p < .10) are included in the model, and since
their inclusion cannot be defended on a theoretical or pragmatic basis,
they were omitted from the final ANCOVA analysis.
(b) Preliminary testing indicated no significant completion time
differences between a quarter and a dollar across the misstatement
conditions; thus, for purposes of parsimony, these two incentive
conditions were collapsed into a single category.
(c) Chosen as a model covariate because the amount of time it takes to
verify a customer statement is, in large part, a function of the number
of transactions on the statement, since each transaction represents a
separate advertising purchase that is supported by a purchase order.
Thus, in a thorough reconciliation, each transaction would be traced
back to supporting documentation.
(d) Within each main effect section, means are significantly different
from each other at α = .05.
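
The Panel A model can be written as an ordinary least squares regression of completion time on the transaction-count covariate and the two experimental factors. The statsmodels sketch below uses assumed column names and Type II sums of squares as one common choice; it is illustrative only and not necessarily the authors' software or exact specification.

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # respondents: assumed data frame of the 4,126 respondents who reported a
    # completion time, with columns: minutes, n_transactions (covariate),
    # misstatement (none / understated / overstated), and incentive
    # (none vs. the collapsed quarter/dollar category).
    ols_fit = smf.ols(
        "minutes ~ n_transactions + C(misstatement) * C(incentive)",
        data=respondents,
    ).fit()
    print(sm.stats.anova_lm(ols_fit, typ=2))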


Taken as a whole, results regarding detection rates and completion times exhibit similar patterns. That is, response quality was highest in the "overstated" condition, when compared to either the "understated" or "no-misstatement" conditions. However, both dimensions of quality measured in this study indicate that the use of small monetary incentives decreased the quality of audit evidence.

Testing for Response Rates

As mentioned previously, in this study response rates can be observed concurrently with detection rates. A logistic regression revealed statistically significant increases in response rates (p-value < .02) when monetary incentives were utilized, while the response-enhancing effects of the quarter and dollar were not significantly different from each other (p-value > .49). This response-rate pattern was consistent within each misstatement condition (i.e., no misstatement, understatement, and overstatement). Statistically significant differences in response rates were also present among the three levels of misstatement. The overstated accounts yielded the highest response rate, the understated accounts resulted in the lowest response rate, and the no-misstatement response rate was between the two extremes (p-value < .01).

While the significantly different response rates among the misstatement conditions (absent monetary incentives) contradict the results reported by Armitage (1990) and Caster (1990), the finding is intuitively appealing. That is, one would expect more account holders to respond in the overstated condition than in the understated condition, because an overstatement is economically detrimental to the confirmation recipient. Conversely, fewer account holders would be expected to respond when the misstatement is to their economic benefit.

While a definitive reconciliation of the inconsistent research findings is not possible, one plausible explanation might be found in the differing composition of sample respondents. In the current study, the sample consisted of three customer types (commercial businesses, governmental entities, and not-for-profit organizations) across four newspaper organizations. In contrast, the Armitage (1990) sample reflected individuals who were customers of a single manufacturing firm, and the Caster (1990) sample represented primarily commercial businesses that were customers of a single steel warehousing operation.

Post Hoc Observations

We analyzed the completion time for respondents who detected the seeded errors. Evidence revealed that within each misstatement condition, significantly less time was expended when a monetary incentive was present, and the times expended in the quarter and dollar conditions were statistically equivalent. Also, of the confirmations with monetary incentives, only 22 included comments referring to the receipt of money.(7)

DISCUSSION

The AICPA has suggested that the use of small monetary incentives should be considered as a means of enhancing the efficacy of the audit confirmation procedure (AICPA 1996). Response rates and response quality are two critical factors in the determination of the effectiveness of confirmation plans. This study was primarily motivated by the paucity of accounting research on this issue.

The results support prior accounting research (e.g., Engle 1991) in finding that the use of small monetary incentives significantly increased confirmation response rates, and that a quarter was as effective as a dollar. Additionally, this investigation demonstrated that small monetary incentives could effectively improve response rates with a recipient population consisting of a cross-section of commercial businesses, governmental organizations, and not-for-profit entities. This finding complements prior research demonstrating the response-enhancing effectiveness of small monetary incentives in other contexts.

Response rates for overstated accounts were higher than response rates for understated accounts. Further, response rates in the no-misstatement condition fell between the understated and overstated response rates within each incentive level. While precise determination of the underlying psychological nature of this phenomenon is beyond the scope of this project, "economic self-interest" may have played a significant role. Confirmation recipients with overstated balances were motivated to return their confirmations and note the errors in an effort to correct detrimental balances, while confirmation recipients with understated balances lacked the same economic motivation.

The first aspect of quality assessed in this study was the rate at which respondents detected the seeded errors. Detection rates were consistently higher in the overstated balance condition than in the understated condition. Within the understated and overstated balance conditions, detection rates in the incentive conditions were lower than in the no-incentive condition, and detection rates between the quarter and dollar conditions were not significantly different. These findings suggest that there may be a core of individuals who conscientiously complied with confirmation requests even in the absence of monetary incentives. In the presence of an incentive, the same core of individuals probably responded, along with another group of individuals who were motivated more by the incentive and less by a desire to conscientiously comply with the request. As a result, the use of monetary incentives dampened the overall detection rate. The second element of response quality, self-reported completion time, exhibited essentially the same pattern of results. It should be noted that self-reported completion time is an imprecise measure of effort; thus, the completion time findings should be interpreted cautiously.

The response-enhancing effectiveness of small monetary incentives indicated by this study is supported by marketing research (e.g., Yu and Cooper 1983), while the deterioration of response quality is consistent with some studies in marketing and public opinion polling (e.g., Hansen 1980; Schneider and Johnson 1994, 1995) and inconsistent with other studies (McDaniel and Rao 1980; James and Bolstein 1990). The contradictory results in this regard could be due to the different settings, participants, and tasks. Future research in this area is needed in order to better understand and reconcile such mixed findings.

While the findings provide prima facie evidence on the effectiveness of monetary incentives, it is important to recognize that the context of this study differed from an independent audit in several ways. First, confirmation recipients were responding to their newspapers' accounting departments to confirm accounts receivable balances, rather than replying to independent auditors. Second, confirmation recipients were told that the newspapers and a university were cosponsoring an accounting study. Finally, recipients were informed that their participation in the study was voluntary. Thus, future research is needed to establish the generalizability of the findings to the audit environment. Also, different types of recipients (e.g., individual vs. commercial), different types of confirmations (e.g., balance vs. invoice), and the question of whether "larger" errors can be detected are key issues for future research.

CONCLUSION

While this study supports a significant body of prior research regarding the response-enhancing effectiveness of small monetary incentives, it has also produced some disturbing evidence relating to the negative effects of incentives on confirmation response quality.(8) However, it must be emphasized that definitive conclusions, much less policy decisions, should not be made based upon the limited amount of existing research evidence. Future researchers should attempt to further validate the findings of this study, particularly in an audit context. Until then, it seems advisable for the auditing profession to take a "wait and see" approach regarding the use of monetary incentives in the audit confirmation process.

The authors thank two anonymous reviewers, Jere Francis, Gary Holstrum, Kathryn Kadous, Elaine Mauldin, Ted Mock, and participants of the Advanced Research Symposium (University of Amsterdam), Research Workshops (University of Missouri and Texas Tech University), 1999 Auditing Section Midyear Conference, and 1999 Accounting Association of Australia and New Zealand (AAANZ) annual conference for their helpful comments. Each author provided an equal contribution to this research project. Please do not cite, use, or reproduce any portion or all of this manuscript without written permission from both authors.

(1) The response-enhancing effect of monetary incentives proved statistically insignificant when the use of incentives was combined with a prenotification letter.

(2) The misstatement range of between 5 percent and 10 percent used in the current study is consistent with two misstatement levels (5 percent and 10 percent) used in the Armitage (1990) investigation.

(3) An alternative response-quality metric is the dollar value of detected errors (audited amount) divided by the "correct" accounts receivable balance. This metric would be particularly important had the dollar values of seeded errors and correct accounts receivable balances differed significantly across treatment conditions, which was not the case in this study. However, as a precaution, the researchers tested the metric just described. The direction, significance, and interpretation of study findings were qualitatively the same using either response-quality metric. Thus, the detection-rate metric articulated in the text is used, as it is conventional in studies of this nature.

(4) While the use of incentives may increase the absolute number of respondents who report errors, the ratio of detectors to respondents must increase from the no-incentive to the incentive conditions if small monetary incentives are deemed to enhance response quality. For example, assume that the dollar value of accounts receivable is overstated by 10 percent and that a sample of 800 account holders are asked to confirm their balances. In scenario one, assume that no incentives are used and the response rate is 40 percent (320), where 20 percent (64) of the respondents report the 10 percent overstatement. If the auditor's alternative procedures discover the 10 percent overstatement error in the remaining 60 percent (480) of the sample, then an estimated overstatement rate of 6.8 percent would be used as the basis for estimating the dollar value of the population error (68 percent of the sample accounts either reported or were discovered to be overstated by 10 percent). In scenario two, assume that incentives are used and the response rate rises to 70 percent (560), where 15 percent (84) of the respondents report the 10 percent overstatement. If the auditor discovers the 10 percent overstatement in the remaining 30 percent (240) of the sample, then an estimated overstatement rate of 4.05 percent would be used as the basis for estimating the dollar value of the population error (40.5 percent of the sample either reported or were discovered to be overstated by 10 percent). Hence, the effectiveness of the confirmation procedure in scenario two is negatively compromised because the ratio of detectors to respondents decreased and the proportion of sample information coming from the respondents increased.
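
The two scenarios in this footnote reduce to a single calculation, restated below as a check; the helper function is purely illustrative and not part of the study.

    def estimated_overstatement(sample, response_rate, reporting_rate, error=0.10):
        """Estimated overstatement rate when alternative procedures are assumed
        to catch the error on every nonresponding account."""
        respondents = sample * response_rate
        reported = respondents * reporting_rate      # found via confirmations
        discovered = sample - respondents            # found via alternative procedures
        return (reported + discovered) / sample * error

    print(estimated_overstatement(800, 0.40, 0.20))  # scenario one -> 0.068  (6.8%)
    print(estimated_overstatement(800, 0.70, 0.15))  # scenario two -> 0.0405 (4.05%)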

(5) The authors recognize that some respondents may have detected the seeded error, but chose not to report the misstatement.

(6) The following means were not significantly different (at p ≤ .10) across either treatment conditions or regions: (1) dollar balance, (2) number of transactions, (3) percentage seeded errors, (4) percentage nonseeded errors reported, (5) number of days outstanding, and (6) number of missing signatures. With regard to the above listed variables, tests for systematic differences across customer types (i.e., commercial, governmental, and not-for-profit entities) could not be performed due to data collection limitations.

(7) Twenty comments were positive and two were negative. The scarcity of negative comments is consistent with Engle (1991) and argues against the belief that auditors should not use incentives because they will insult confirmation recipients or be viewed as unprofessional.

(8) As reported, the monetary incentives used in this study were a quarter and a dollar. We are unsure if the response- and quality-rate measures would be different under a higher incentive, such as $10.00.

REFERENCES

American Institute of Certified Public Accountants (AICPA). 1996. Auditing Procedure Study: Confirmation of Accounts Receivable. Second edition (revised). New York, NY: AICPA.

--. 1997. AICPA Professional Standards. Volume 1. New York, NY: AICPA.

Armitage, J. L. 1990. Accounts receivable confirmation effectiveness. Internal Auditing (Summer): 15-24.

Armstrong, J. S., and J. T. Yokum. 1994. Effectiveness of monetary incentives--Mail surveys to members of multinational professional groups. Industrial Marketing Management (April): 133-136.

Caster, P. 1990. An empirical study of accounts receivable confirmations as audit evidence. AUDITING: A Journal of Practice & Theory (Fall): 75-91.

Church, A. H. 1993. Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly (Spring): 62-79.

Engle, T. J. 1991. Increasing confirmation response rates: Prenotifications, monetary incentives and addressee differences. Journal of Accounting, Auditing and Finance (Winter): 109-127.

Hansen, R. A. 1980. A self-perception interpretation of the effect of monetary and nonmonetary incentives on mail survey respondent behavior. Journal of Marketing Research (February): 77-83.

James, J. M., and R. Bolstein. 1990. The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly (Fall): 346-361.

McDaniel, S. W., and C. P. Rao. 1980. The effect of monetary inducement on mailed questionnaire response quality. Journal of Marketing Research (May): 265-268.

Mendenhall, W., and T. Sincich. 1996. A Second Course in Statistics: Regression Analysis. Fourth edition. New York, NY: West Publishing.

Schneider, K. C., and J. C. Johnson. 1994. Link between response-inducing strategies and uninformed response. Marketing Intelligence & Planning: 29-36.

--, and --. 1995. Stimulating response to market surveys of business professionals. Industrial Marketing Management (August): 265-276.

Warren, C. S. 1975. Confirmation reliability--The evidence. The Journal of Accountancy (February): 85-89.

Yu, J., and H. Cooper. 1983. A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research (February): 36-44.

Submitted March 1999

Accepted January 2000

Terry J. Engle is a Professor and James E. Hunton is an Associate Professor, both at the University of South Florida.