
The Effect of Staff Accountant Objectivity in the Review and Decision Process: A Tax Setting.

ABSTRACT

Prior studies report that less experienced staff accountants are often susceptible to confirmation bias in the evaluation of evidence. This bias results in nonobjective information evaluation by staff-level accountants. This study examines how the perceived objectivity of the staff accountant and the manager's own client advocacy affect the manager's use of the staff accountant's research report when formulating client recommendations. The results suggest that objectivity judgments made by partner-/manager-level accountants are influenced by whether the staff accountant's research report confirms their initial opinion. Further, the confirmatory nature of the research report affects the manner in which the report is incorporated into a client recommendation. Nonconfirming research reports were given more weight than confirming research reports. Preference for client-favorable outcomes was found to affect the weight given to staff accountant research reports as well.

Data Availability: Experimental data are available from the author upon request.

INTRODUCTION

Resolving tax issues through tax research is one of the primary responsibilities of tax professionals. Tax research involves searching through available evidence, evaluating relevant evidence, and determining the correct solution based on an understanding of the evidence. One hindrance to effective tax research is confirmation bias. Recent studies (Johnson 1993; Marchant et al. 1993; Cloyd and Spilker 1999; Hatfield 2000) find that tax professionals overweight evidence that confirms their initial opinion and/or confirms the client's preference.

Public accounting firms perform a great deal of their work within hierarchically structured groups. Typically, higher-level accountants review the work of lower-level accountants (hereafter staff accountants) and make decisions based on the information collected by staff accountants. Confirmation bias at the evidence-gathering stage may cause problems throughout the research task. If the evidence that is considered by the supervisor in formulating an opinion is biased, then the supervisor's estimation of risk and his/her eventual client recommendation may be biased as well. Conversely, review by experienced supervisors may identify and mitigate the bias. The main intent of this study is to examine whether the review process will detect and mitigate confirmation bias at the staff-accountant level. Although previous research has found that experience reduces the likelihood that tax professionals will exhibit confirmation bias (Kaplan and Reckers 1989), research has not considered whether experienced tax professionals will recognize and account for confirmation bias at the staff-accountant level. If the supervising accountant recognizes the possibility of confirmation bias, then his/her perceptions of source objectivity should be affected. Changes in the perceived objectivity of the evidence source should, in turn, affect the perceived diagnosticity of the evidence (Schum and DuCharme 1971).

The research question addressed by this study is divided into three components. First, does the possibility of confirmation bias by the staff accountant affect the supervisor's judgments of the staff accountant's objectivity? Second, does the possibility of confirmation bias reduce the weight given to the staff accountant's research report by the supervisor (cascaded inference)? Finally, does the preference for client-favored outcomes moderate the discounting effect prescribed by cascaded-inference theory?

In an experiment, manager-level accountants judged staff accountants who provided confirming research reports to be less objective than staff accountants who provided nonconfirming research reports. This demonstrates that supervising accountants are able to recognize the potential for confirmation bias by staff accountants. Further, these managers gave less weight to staff accountants' reports that confirmed the staff accountants' initial opinion than to nonconfirming reports. This finding provides support for the ability of the review process to mitigate confirmation bias in a tax research setting. Finally, this study provides further evidence that a preference for client-favorable positions influences the extent to which staff accountant research reports affect the manager's final recommendation. Research reports reaching the client-favorable position were given more weight than research reports with the opposite position. However, evidence that client-favorable research reports mitigated the discounting of potentially biased reports was not observed.

The remainder of this study is organized as follows. The following section develops the hypotheses concerning cascaded inference and client advocacy. The third section discusses the research method, including the experimental design. The fourth section contains the results from the experiment. General discussion and concluding comments are offered in the final section.

THEORY AND DEVELOPMENT OF HYPOTHESES

Most tasks in public accounting are performed in a hierarchical structure. A staff-level accountant performs a task at a supervisor's request. The staff accountant's work is then reviewed by the supervisor prior to being incorporated into the final output of the firm. This study examines a particular component of the task of making a client recommendation based on the available evidence (research task). In the research task, the staff accountant will search the existing evidence, and then make a report for the supervisor that includes a set of the relevant evidence. The supervising accountant first reviews this report, and then uses the report and the evidence to make a recommendation to the client. [1]

Confirmation Bias at the Staff-Accountant Level

Previous research has identified confirmation bias in a variety of settings, in both the search for and evaluation of evidence (Wason 1960; Batson 1975; Kaplan and Reckers 1989; Johnson 1993; Koehler 1993; Marchant et al. 1993; Hatfield 2000). Confirmation bias results in decision makers preferring information that confirms their prior beliefs or hypotheses. This preference could take the form of seeking confirming information or simply evaluating confirming information as better (i.e., more important, more relevant, more convincing, etc.). Johnson (1993), Marchant et al. (1993), and Cloyd and Spilker (1999) all found this type of bias in tax professionals. Marchant et al. (1993) found that subjects relied more heavily on an information source when the outcome of that information was consistent with the subject's own predicted outcome. Johnson (1993) found that subjects evaluated information as more relevant to their own client's situation when the outcome of the information was consistent with the client-favorable opinion. Cloyd and Spilker (1999) found that tax professionals' information searches emphasize cases with conclusions consistent with the desired client outcome. The implications of these findings could be serious for tax firms providing tax research services. [2]

Research has found that experience reduces the likelihood of confirmation bias (Biggs and Mock 1983; Kaplan and Reckers 1989; Kida 1984). However, a question that has received little attention is whether experienced supervisors will recognize and account for confirmation bias at the staff-accountant level. Barrick et al. (2000) provide mixed evidence. They report that research reports containing a balanced set of evidence had a greater influence on the supervisor's recommendation, and resulted in less additional rework being required of the staff by the supervisor, than research reports supporting the same conclusion but containing only supporting evidence. However, in their setting the proper tax treatment was unambiguous. Further, research reports containing only evidence supporting the wrong conclusion did influence the subjects' recommendations away from the correct answer. The current study considers an ambiguous scenario where supervisors can recognize the potential staff bias only from the consistency of the staff accountant's pre-search opinion with his/her eventual research report. [3]

The first goal of this paper is to determine if supervisors are cognizant of this potential lack of objectivity. As discussed earlier, prior research has documented a mitigating effect of experience on confirmation bias. The ability to correct the bias over time suggests that experienced professionals become aware of its influence. Awareness of one's own susceptibility may also lead to the ability to recognize its potential influence on staff. Hypothesis 1 provides a test for this expectation by considering the possibility that supervising accountants' perceptions of the objectivity of the staff accountant are affected by the consistency of the staff accountant's opinion with the resulting research report.

H1: Staff accountants who provide research reports containing evidence that confirms the staff accountant's pre-search opinion will be judged to be less objective than staff accountants providing nonconfirming research reports.

Source Reliability (Cascaded Inference)

Cascaded inference is a normative process characterized by additional information-processing steps necessary to incorporate the additional uncertainty generated by sources of information that are less than perfectly accurate or reliable (Schum and DuCharme 1971). Schum's (1980) didactic example of cascaded inference illustrates the idea in the context of jurisprudence. Suppose a juror is attempting to decide on the guilt or innocence of a defendant, based on information obtained from a witness. The event in question is whether the defendant was at the scene of the crime. The juror must pass through two levels of inference. First, what is the probability that the defendant is guilty given he was seen or not seen at the scene of the crime? Second, what is the probability that the witness's testimony is accurate?
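The two-stage structure of Schum's juror example can be made concrete with a small numerical sketch. All probabilities below are illustrative assumptions, not values from the study: the juror combines a prior on guilt, the link between guilt and presence at the scene, and the witness's hit and false-alarm rates.

```python
# Illustrative two-stage (cascaded) inference: a juror updates on an
# imperfect witness's testimony. All probabilities are assumed values.

def cascaded_posterior(prior_g, p_scene_g, p_scene_not_g, hit, false_alarm):
    """P(guilty | witness testifies 'defendant was at the scene').

    prior_g        -- prior probability of guilt
    p_scene_g      -- P(at scene | guilty)
    p_scene_not_g  -- P(at scene | not guilty)
    hit            -- P(testimony | defendant actually at scene)
    false_alarm    -- P(testimony | defendant not at scene)
    """
    # Stage 1: probability of the testimony under each hypothesis,
    # marginalizing over whether the defendant was really at the scene.
    p_t_g = hit * p_scene_g + false_alarm * (1 - p_scene_g)
    p_t_not_g = hit * p_scene_not_g + false_alarm * (1 - p_scene_not_g)
    # Stage 2: Bayes' rule over guilt, given the testimony.
    return prior_g * p_t_g / (prior_g * p_t_g + (1 - prior_g) * p_t_not_g)

# A perfectly reliable witness (hit = 1, false alarm = 0) collapses the
# two stages into ordinary single-stage updating.
perfect = cascaded_posterior(0.5, 0.9, 0.2, hit=1.0, false_alarm=0.0)
# An imperfect witness pulls the posterior back toward the prior.
imperfect = cascaded_posterior(0.5, 0.9, 0.2, hit=0.7, false_alarm=0.3)
```

With these assumed numbers the imperfect witness moves the juror far less (posterior of roughly .63 versus roughly .82), which is exactly the discounting that cascaded inference prescribes for unreliable sources.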

The supervisor and staff accountant working together on a task in public accounting are analogous to the juror and the eyewitness, respectively. The supervisor makes the final decision based on the evidence obtained by the staff accountant. Consider a simple case where the task is a tax-planning problem in which the deduction or capitalization of a particular item is in question. The event in question is whether current authority points to the deductibility of the item. However, the supervisor does not observe this event. Rather, the supervisor observes a report about the event (i.e., the results of the staff accountant's research efforts). The supervisor must decide whether the IRS will allow this item to be deducted.

Previous studies in the accounting literature have found that auditors' judgments are affected by the source of evidence. Bamber (1983) and Haynes (1999) both found that staff accountant competence affected the inferential value of evidence. Bamber (1983) found that, relative to a normative (Bayesian) model, audit managers excessively discounted evidence from a less than perfectly reliable source. Similarly, Haynes (1999) found that auditors were more affected by source reliability in an audit context than in a nonaudit task.

Errors resulting from sources that are determined to be incompetent will generally be random. However, another aspect of reliability is the objectivity of the source. A lack of objectivity will result in nonrandom (asymmetric) errors. Returning to the example of the juror and the witness, the juror may react differently to the testimony from a witness with bad eyesight (random error) than from a witness who is the defendant's mother (asymmetric error). This study focuses on how the objectivity of the staff accountant will affect the supervisor's use of staff research reports. Schum (1980) demonstrates the difference in the diagnosticity of evidence provided by an objective vs. a nonobjective source. This analysis relies on Bayesian conditional probability to construct likelihood ratios. [4]
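Schum's point about diagnosticity can be sketched with likelihood ratios. The error rates below are assumptions chosen for illustration: a source with symmetric (random) errors is compared to one whose errors lean toward reporting "deductible," as a confirmation-biased staff member's might.

```python
# Illustrative diagnosticity of a report concluding "deductible".
# LR = P(report "deduct" | truly deductible) / P(report "deduct" | not deductible).
# The error rates are assumed values, not estimates from the study.

def likelihood_ratio(p_report_if_true, p_report_if_false):
    return p_report_if_true / p_report_if_false

# Objective source with symmetric (random) errors: wrong 20% of the
# time in either direction.
lr_objective = likelihood_ratio(0.80, 0.20)

# Nonobjective source with asymmetric errors: tends to report "deduct"
# regardless of what the authority actually supports.
lr_biased = likelihood_ratio(0.90, 0.50)
```

Under these assumptions the biased source's report carries far less diagnostic value (a likelihood ratio of 1.8 versus 4.0), so a normative reviewer should revise less in response to it.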

Hirst (1994) examined objectivity as it relates to the independence of the source of explanations for an unexpected difference in inventory. As expected, audit-firm explanations were found to have greater inferential value [5] than were the exact same explanations from the client's CFO. The current study examines objectivity judgments as a reaction to the possibility of confirmation bias from the source of the information. If a confirming research report is seen as less objective than a nonconfirming report, as tested by H1, then the normative response, based on cascaded-inference theory, would be to discount the weight given to a confirming research report. Discounting the weight given to the research report simply means reducing the reliance on the research report when calculating a final recommendation to the client. However, if the staff accountant's report does not confirm his/her opinion, cascaded inferencing is not necessary since the report is considered reliable (or at least objective). The above discussion of cascaded inference in the review environment leads to research hypothesis 2.

H2: Ceteris paribus, a staff research report containing evidence that confirms the staff accountant's pre-search opinion will be given less weight by the supervisor than a research report containing nonconfirming evidence (i.e., the evidence in a possibly nonobjective report will be discounted).

Client Advocacy

Advocacy for the client is an important aspect of the public accountant's profession. A consequence of client advocacy is that accountants explicitly or implicitly consider the effect on the client in all the decisions made. Further, they must consider how the client will react as a result of these decisions. Client advocacy increases tax professionals' preference for information that defends the client-favored position (Shields et al. 1995). Johnson (1993) and Cuccia and McGill (2000) found that tax preparers who score higher on a taxpayer advocacy scale express significantly more aggressive (i.e., client-benefiting) opinions than those scoring lower on the scale. Cloyd and Spilker (1999) found that knowing the client-preferred position caused tax professionals to spend more time considering, and to rely more on, evidence that is consistent with that position. Several studies (e.g., Ayres et al. 1989; Duncan et al. 1989; Helleloid 1989; Cuccia 1994; Cuccia et al. 1995) have found that tax accountants attempt to recommend reporting positions that are favorable to clients. Hatfield (2000) found that the client-favored position affected the likelihood that staff accountants would bias their information evaluation toward their manager's opinion when held accountable to that manager.

This study considers the directional effect caused by a preference for the client-favorable position. A research question of interest is whether this general preference for client-favorable positions influences the weight given to the research report. This question is stated formally in research hypothesis 3.

H3: Ceteris paribus, a research report that recommends the position with the most favorable client outcome will be given more weight by the supervisor than a research report that recommends a less favorable client outcome.

Source Reliability in a Context of Client Advocacy

Schum (1977) considers the "contrast" effect of background information on the cascading inference process. Using Schum's (1977) example, the fact that a gun was found in the bedroom may influence a juror's weighting of evidence that a loud "crack" was heard by a witness (even if the witness is deemed unreliable). Although Schum (1977) considers prior evidence as the background against which reports are considered, a general preference for, or sensitivity to one of the hypotheses may also serve as a background for the decision maker who is choosing between hypotheses.

In previous studies, the effect of source reliability judgments has proven to differ across environments. Studies in the psychology literature find that subjects generally do not adequately consider the reliability of the source of information when making decisions (Schum et al. 1973; Peterson 1973). In contrast, as discussed above, Bamber (1983) and Haynes (1999) both show the reluctance of auditors making decisions in an audit environment (where type II errors can be very costly) to rely on evidence obtained from a less than perfectly reliable source. Auditors tend to excessively discount evidence obtained by less than perfectly reliable sources. [6] Haynes (1999) shows that these same auditors do not excessively discount similar evidence in a nonaudit task. The effects of source reliability judgments thus appear to depend on the context or situation.

In the current context, the general preference for client-favored solutions may act as a background against which the staff accountant's research report is considered. Specifically, the discounting that occurs due to the possible lack of objectivity may be less for client-favorable research reports relative to research reports not favoring the client. The expectation regarding the moderating effect of the client-favorable position is stated in research hypothesis 4.

H4: The amount of discounting for a possibly nonobjective research report will be less when the report recommends the favorable client outcome than when the report recommends the unfavorable client outcome.

METHODS

Experimental Procedures

Sixty-five tax managers from three different public accounting firms participated as subjects in this study. To ensure that subjects properly attended to the instrument, a partner from each firm was responsible for administering and collecting the instruments and informing the subjects of the importance of this study to the firm. Table 1 summarizes general characteristics of these subjects. Subjects completed a computer-based, simulated tax-planning task in which they worked with computer-simulated tax staff members. The fact pattern and tax issue used in this study (whether a planned bonus for the sole shareholder/president will be allowed as a salary deduction or will be classified as a constructive dividend) are the same as those developed and used by Johnson (1993).

Subjects first gave their initial opinion about the deductibility of the proposed bonus. Subjects were then assigned a hypothetical staff accountant who stated his/her opinion prior to performing the information search. Subjects were informed that the actions of the staff accountant in the experiment were patterned after actual staff accountants who performed a similar task on the exact same issue and facts. [7] The subjects then received a portion of a research report back from the staff member that represented the results of his/her search for and evaluation of evidence. Participants used this report to revise their recommendations to the client. The revision of the subjects' opinions was used to calculate the weight given to the research reports.

Research Design

The two independent variables of interest in this study are Report Consistency and Report Result. Report Consistency is a dichotomous variable that represents whether the staff accountant's research report is consistent or inconsistent with his/her original opinion. The outcome of the simulated staff accountant's information search and evaluation (Report Result) was manipulated at two levels, "deductible" or "not deductible." Subjects did not see a full research report. Rather, they saw only the conclusions of the staff, which were supported by alluding to the existence of unnamed and unexplained cases. [8] The portion of the experimental instrument that manipulates these two variables is included in the Appendix. [9] To control for the possibility that the results may be affected by the participants' initial opinions, a third independent variable, Initial Opinion, was added to the design. Initial Opinion was treated as a covariate and is a measured variable ranging from 1 (definitely not deductible) to 10 (definitely deductible). The resulting design is a 2 x 2 ANCOVA.

The dependent variable for H1 is the subject's rating of the staff accountant's objectivity. This variable was directly evaluated by asking subjects, "How objective do you feel the staff accountant was as he/she searched for relevant authority?" This question was answered on a Likert scale from 1 (very unobjective) to 10 (very objective). The dependent variable used for H2-H4 is a proxy for the weight given to the staff accountant's research report. This measure is based on the basic model of serial integration (Anderson 1982). [10]

R2 = R1 + w(E - R1), (1)

where:

w = the weight given to the staff member's report;

R1 = the participant's initial response (on a ten-point scale) to the client issue;

E = the value of new evidence (the staff's research report) on the same scale as the participant's responses R1 and R2; [11] and

R2 = the participant's response after receiving the staff member's research report.

The model describes how new evidence (E) affects the decision-maker's response. In a simple setting with two responses and one addition of evidence, the second response is a function of the movement away from the first response. The movement is determined by how much weight is given to the new evidence. For example, if w = 1, then R2 = E, which means that the new information was relied on completely when the decision maker updated his/her response. If w = 0, then R2 = R1, which means that the new information did not change the decision-maker's opinion. Solving for w in equation (1) above produces the following intuitive representation of weight: [12]

w = (R2 - R1)/(E - R1). (2)

In equation (2) above, the numerator is the raw movement from response one to response two brought about by the new evidence. [13] The denominator is the room to move. Therefore, if the decision-maker's movement on the response scale is equal to the available room to move, then w = 1.
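The weight measure can be sketched directly from the serial-integration model; the response values below are hypothetical illustrations on the ten-point scale, not data from the experiment.

```python
# Weight given to the staff report, per the serial-integration model:
# movement from the initial response divided by the room to move.
# All response values are hypothetical illustrations on the 1-10 scale.

def weight(r1, r2, e):
    """w = (R2 - R1) / (E - R1).

    r1 -- initial response, r2 -- revised response,
    e  -- scale value of the evidence (the report's conclusion).
    """
    return (r2 - r1) / (e - r1)

# A manager initially at 4 sees a report supporting "deductible" (E = 10)
# and revises to 7: half of the available movement was used.
w_half = weight(r1=4, r2=7, e=10)    # 0.5
w_none = weight(r1=4, r2=4, e=10)    # 0.0: the report was ignored
w_full = weight(r1=4, r2=10, e=10)   # 1.0: full reliance on the report
```

A negative value of w would indicate movement away from the report's conclusion, which is why such responses were treated as outliers in the analysis below.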

RESULTS

Hypothesis 1 predicts that supervisors' perceptions of staff accountant objectivity differ based on whether the research report is consistent with the staff accountant's initial opinion. Table 2 shows the ANCOVA results as well as the cell means [14] for the objectivity dependent variable. The main effect of Report Consistency was significant at the .004 level based on a directional F-test. The mean objectivity rating for a consistent report is 5.1, while the mean objectivity rating for an inconsistent report is 6.5 (on a scale from 1, "very unobjective," to 10, "very objective"). These means demonstrate that subjects perceived confirming research reports as less objective than nonconfirming research reports.

Hypothesis 2 is a test of the normative processes indicated by the theory of cascaded inference in a tax context where source reliability is limited to perceived source objectivity. Hypothesis 2 predicts a main effect of report consistency on weight. Three responses, considered to be outliers, were deleted due to shifts away from the research report result (negative shift resulting in a negative weight value), which do not make sense in this task. [15] Inclusion of these responses would bias the results in favor of supporting H2 since all three occurred in conditions where H2 predicted discounted weights. Table 3 displays the ANCOVA results that indicate a significant main effect for Report Consistency with a p-value of less than .004. [16] The weight given to inconsistent reports is .47, compared to a mean weight of only .25 for consistent reports. This result provides support for H2.

To ensure the robustness of this result, an additional test using the participant's final recommendation as the dependent variable was employed. Using this dependent variable, the main effect of Report Consistency is not expected to be significant since it represents only a magnitude effect: Report Consistency would simply cause the final recommendation to be more or less extreme (i.e., closer to the endpoints of the scale) in the direction suggested by the research report, due to the differing amount of weight given to the research report. However, a significant interaction between Report Result and Report Consistency would be consistent with H2. Table 4 reports the results of this analysis. The significant main effect of Report Result demonstrates the importance of the staff's research report to the participants' final recommendations. The interaction predicted by H2 is also significant, and the mean pattern is in the expected direction. The final recommendation for an inconsistent "deduct" report is 7.9 compared with 7.6 for a consistent "deduct" report. Conversely, the final recommendation for an inconsistent "do not deduct" report is 4.9 compared with 5.7 for a consistent "do not deduct" report. When subjects received inconsistent research reports, they had more extreme final recommendations. This finding corresponds to the main test of H2 using weight as the dependent variable.

Hypothesis 3 relates to a well-documented preference for client-favorable information and predicts that supervisors will give more weight to a research report supporting the position with the client-favorable outcome than they will give to a research report supporting the opposite outcome (independent of objectivity issues). This hypothesis is tested by examining the main effect of report result on the weight given to the research report. The result, displayed in Table 3, shows that the main effect is significant at the .03 level (based on a directional F-test). The F-test on the direction of the mean differences supports the hypothesis that the preference for the client-favorable outcome influences how staff research reports are incorporated into the supervisor's final client recommendation.

Since more of the subjects started with a "deductible" opinion (n = 33) than a "not deductible" opinion (n = 23), a confounding problem could exist: the results of the test of H3 could reflect confirmation bias by the subjects themselves. Subjects could prefer research reports that simply confirm their own original opinion, which would produce an overall preference for "deduct" research reports. An interaction between the subjects' Initial Opinion and Report Result would suggest such a confirmation bias at the manager level. Additional analysis in which Initial Opinion and Report Result were allowed to interact did not reveal a significant interaction. Further, the main effect of Report Result remains significant even after controlling for the possibility of manager-level confirmation bias.

Hypothesis 4 considers the possibility that the preference for client-favored positions may influence the effect of cascaded inference. The effect of cascaded inference is the discount given to a possibly biased report relative to a nonbiased report. Hypothesis 4 predicts that this discount will be smaller for research reports with "deduct" conclusions than for reports with "do not deduct" conclusions. Referring to Table 3, the discounting for "deduct" reports (.56 - .33 = .23) is the same as the discounting for "do not deduct" reports (.39 - .16 = .23). Therefore, the results do not support H4. In addition, it should be noted that simulated staff accountants started with an initial opinion that was either "deduct" or "do not deduct." The eventual research report was either consistent or inconsistent with this opinion. Given the research design, the interaction of Report Result and Report Consistency is confounded with the simulated staff's initial opinion. Therefore, any results regarding this interaction would need to be interpreted with caution.

SUMMARY AND CONCLUSIONS

This study considered whether the review process detects and mitigates confirmation bias in the tax research process. The research question for this study contained three key components. First, does the possibility of confirmation bias by staff accountants (evidenced through a confirming research report) affect the perceived objectivity of the staff accountant? Second, does the possibility of confirmation bias affect the supervisor's judgment as expected by cascaded-inference theory? Third, does client advocacy moderate the discounting of potentially biased research reports?

The results of this study support the first two research questions. First, the perceived objectivity of the staff accountant was diminished when the staff accountant provided a research report that confirmed his/her initial opinion (H1). This finding not only supports the intuition behind the source objectivity predictions, but also provides support for the construct validity of the report consistency manipulation. Second, the weight given to a confirming research report was less than the weight given to a nonconfirming research report (H2). This finding is consistent with cascaded-inference theory, which suggests that perceived source reliability (objectivity) should influence the judgments of supervising accountants. These findings support the expectation that the review process mitigates some of the bias introduced by the confirming tendencies of staff accountants.

The third hypothesis, which was also supported, suggests that client preference affects the weight given to a research report. Previous research has found that a preference for the client-preferred position (client advocacy) affects tax professionals' recommendations. The current study suggests that client preference also affects how staff research results are incorporated into the supervising tax professional's recommendations. The final hypothesis predicted that client advocacy would moderate the effect of source reliability. This hypothesis was not supported by the evidence.

The generalizability of this study may suffer from certain aspects of the task that were necessary for internal validity reasons. Examples of this include a scenario that forces the subject to choose all or none of the bonus as deductible; a research report that does not include the underlying documents or details; and a staff accountant who is simulated by the computer. However, the purpose of this experiment was to control the subject's environment to more accurately examine the effects of the key variables. Subsequent studies can examine the robustness of these results in a less controlled environment or in an experiment where some of these restrictions are relaxed. In addition, the main findings of this study are dependent on the assumption that confirming research reports proxy for possibly biased research reports. This assumption is, to some extent, supported by the findings regarding H1.

Research in the accounting literature has revealed problems associated with confirmation bias in audit as well as tax contexts. The results of the current study suggest that the review process can detect and mitigate some of these problems. Future research will be necessary to further understand how client advocacy may influence the review process and to determine if the effectiveness of the review process, in a tax research task, is dependent upon whether the outcome of the research report is consistent with client preferences.

Richard Charles Hatfield is an Assistant Professor at Drexel University.

This paper represents a portion of my dissertation at the University of Florida. I thank my dissertation committee, Gary McGill, Sandy Kramer, Doug Snowball, and Alan Sawyer for providing support and guidance. I also thank John Lynch, Donna Bobek, Kristin Wentzel, Chris Agoglia, Pierre Liang, and the two anonymous reviewers for their many helpful comments and Linda Johnson for the use of her case.

(1.)This task is very likely iterative. Supervisors may have staff accountants revise their search efforts as well as their research reports.

(2.)Supervisors making decisions with evidence provided by staff accountants may not accurately assess the risk associated with their recommendations. Ultimately, this bias may affect the costs faced by tax accounting firms for overly aggressive positions. Such costs include client audit, client penalties, nontax costs (e.g., contracting costs to restructure a failed transaction), preparer penalties, and costs associated with an unhappy client (e.g., loss of revenues or preparer litigation).

(3.)To recognize confirmation bias at the staff level, the supervisor must know the staff accountant's pre-research opinion. To ensure that this is descriptive of current practice, prior to this study, several managers and partners were interviewed to discuss this assumption. Every manager/partner interviewed stated that this was a realistic scenario and that they typically have some knowledge of the staff accountant's opinion after the initial meeting to discuss the research task.

(4.)While the directional effects on the diagnosticity of evidence consistent with this normative model are also expected in this study, the model is not tested here. It is not practical to provide subjects with the necessary conditional probabilities or expect them to combine the probabilities correctly.

(5.)Hirst (1994) calculated inferential value as the inferred Bayesian likelihood ratio. This calculation is not feasible in the current study since probabilities were not elicited from subjects.

(6.)These studies construct likelihood ratios based on Bayes's Theorem of conditional probabilities as a normative mechanism to determine if subjects adequately consider source reliability in their inferential judgments.
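The likelihood-ratio mechanism described in footnote 6 can be made concrete in a few lines of code. The sketch below is illustrative only and is not the model tested in these studies: it assumes a simple linear attenuation of the likelihood ratio toward 1 (no diagnostic value) as source reliability falls, and the function name is mine.

```python
def posterior_odds(prior_odds: float, lr: float, reliability: float) -> float:
    """Update the odds on a hypothesis using evidence from an imperfect
    source. The fully credible likelihood ratio `lr` is pulled toward 1
    as `reliability` (in [0, 1]) falls, so a less reliable source moves
    the posterior odds less -- a simple linear-attenuation assumption.
    """
    effective_lr = reliability * lr + (1.0 - reliability)
    return prior_odds * effective_lr

# A fully reliable source applies the full likelihood ratio ...
print(posterior_odds(2.0, 4.0, 1.0))   # -> 8.0
# ... while a half-reliable source moves the odds only part way.
print(posterior_odds(2.0, 4.0, 0.5))   # -> 5.0
```

Under this sketch, a source with zero reliability leaves the prior odds unchanged, which is the qualitative property that the cascaded-inference literature formalizes.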

(7.)In a separate study, staff accountants from the same firms, as well as other firms, were given the same information and performed an information evaluation task. The research report results provided to the manager subjects in this experiment are based on the staff accountant responses from the previous experiment.

(8.)Discussions with practitioners made it clear that it would be highly unrealistic to provide a report result without any rationale. Therefore, the reports used different subsets of issues, all of which were included in the client scenario, as justification for the report conclusion (employee skills and no comparable industry for the deduct research report; poor dividend history, lack of ceiling on bonus, and no increase in employee responsibility for the no-deduct research report). Presumably, because these justifications were already included in the client facts, they added no new information. However, it is possible that the relative relevance of these justifications could be confounded with the manipulation. This possibility was minimized by pre-testing the justifications with 30 tax professionals, who were asked to indicate which justification was most relevant. A binomial test failed to reject the null hypothesis that justifications for the "deduct" research report were no more or less likely to be chosen than justifications for the "do not deduct" research report (p-value > .5). Further, three justifications were used for the no-deduct research report versus two for the deduct research report, which biases (if any bias exists) against the predictions of H3 and H4.
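The pre-test in footnote 8 can be reproduced with the standard library alone. The sketch below uses an exact two-sided binomial test; the counts are hypothetical, since the footnote reports only that the 50/50 null was not rejected.

```python
from math import comb

def binom_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of every
    outcome no more likely than the observed count k."""
    pmf = [comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(n + 1)]
    threshold = pmf[k] + 1e-12          # small tolerance for float ties
    return min(1.0, sum(q for q in pmf if q <= threshold))

# Hypothetical: 16 of the 30 pre-test professionals pick a deduct-report
# justification as most relevant -- nowhere near rejecting a 50/50 null.
print(round(binom_two_sided(16, 30), 2))   # -> 0.86
```

With an even 15/15 split the p-value is 1.0, consistent with the footnote's report that the null could not be rejected.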

(9.)To verify that random assignment was successful, correlations between certain measured variables (e.g., general and specific experience and initial position) and the manipulated independent variables were analyzed. There were no significant correlations.

(10.)This model was chosen for two reasons. First, anchoring and adjustment has been found to be a relatively pervasive decision-maker heuristic and is more descriptive of human decision making than a more normative Bayesian model (Ashton and Ashton 1988; Fischhoff and Beyth-Marom 1983; Hogarth 1975; Slovic and Lichtenstein 1971). Second, solving for weight with this model provides a very intuitive measure. This model is very similar to Hogarth and Einhorn's (1992) estimation model, but is not dependent on the confirming/disconfirming nature of the evidence.

(11.)An important assumption here is that the stimulus and both responses can be measured on the same scale. Both responses as well as the stimulus were provided on a ten-point Likert scale from "definitely not deductible" to "definitely deductible." The stimulus was scored as either a 9 or a 2 on that scale, depending on the version, by having the simulated staff provide a response on the same scale based on the evidence they had viewed. The stimulus could also be scored as the endpoints of the scale. When this was done, the results for the main effects became more significant.

(12.)Six subjects had extreme initial positions that did not allow them any room to revise their opinion (resulting in a 0 in the denominator of equation (2)). Including these observations, by giving a 0 value to weight, did not alter the results. Therefore, the results reported in this study do not include these six subjects.

(13.)For example, if the first response was 4 and the research report suggested 9 (E = 9) and the second response is 7, then w is calculated as follows: (7 - 4)/(9 - 4) = .60.
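The weight computation in footnotes 12 and 13 is simple enough to state as code. The following is a minimal sketch (the function name is mine); it returns 0 when the denominator is zero, which is one of the treatments footnote 12 reports trying for subjects with extreme initial positions.

```python
def report_weight(initial: float, revised: float, report: float) -> float:
    """Weight given to the staff report under equation (2):
    w = (revised - initial) / (report - initial).

    A weight of 1 means full adoption of the report's position, 0 means
    the report was ignored, and a negative value means a shift away
    from the report's position."""
    denom = report - initial
    if denom == 0:
        # Extreme initial positions leave no room to revise toward the
        # report (footnote 12); score such cases as zero weight.
        return 0.0
    return (revised - initial) / denom

# Footnote 13's example: initial 4, report 9, revised 7.
print(report_weight(4, 7, 9))   # -> 0.6
```

A revision past the initial position in the direction opposite the report (e.g., initial 6, report 9, revised 5) produces a negative weight, the "negative shift" discussed in footnote 15.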

(14.)Due to the covariance analysis, the means discussed throughout the results section are adjusted for the covariate (least squares means).

(15.)An example of a negative shift would be a participant with an initial response of 7 (on the 1--10 scale) receiving a research report suggesting a 9 on that same scale and then giving a revised response of 5. According to equation (2) above, the calculated weight would be (5 - 7)/(9 - 7) = -1.0.

(16.)It is worth noting that the covariate, subject's initial opinion, was also significant. This main effect was unexpected. It is not apparent why the subject's initial opinion would directly affect this dependent variable.

REFERENCES

Anderson, N. H. 1982. Methods of Information Integration. New York, NY: Academic Press.

Ashton, A. H., and R. H. Ashton. 1988. Sequential belief revision in auditing. The Accounting Review 63: 623--641.

Ayres, F. L., B. R. Jackson, and P. A. Hite. 1989. The economic benefits of regulation: Evidence from professional tax preparers. The Accounting Review 64 (2): 300--312.

Bamber, E. M. 1983. Expert judgment in the audit team: A source reliability approach. Journal of Accounting Research 21: 396--412.

Barrick, J. A., C. B. Cloyd, and B. C. Spilker. 2000. Does the review process mitigate confirmation bias in tax research? Working paper, Brigham Young University.

Batson, C. D. 1975. Rational processing or rationalization? The effect of disconfirming information on religious belief. Journal of Personality and Social Psychology 32: 176--184.

Biggs, S. F., and T. J. Mock. 1983. An investigation of auditor decision processes in the evaluation of internal controls and audit scope decisions. Journal of Accounting Research 21 (1): 234--255.

Cloyd, C. B., and B. Spilker. 1999. The influence of client preferences on tax professionals' search for judicial precedents, subsequent judgments, and recommendations. The Accounting Review 74 (July): 299--322.

Cuccia, A. D. 1994. The effects of increased sanctions on paid tax preparers: Integrating economic and psychological factors. The Journal of the American Taxation Association 16 (1): 41--66.

-----, K. Hackenbrack, and M. Nelson. 1995. The ability of professional standards to mitigate aggressive reporting. The Accounting Review 70 (2): 227--248.

-----, and G. McGill. 2000. The role of decision strategies in understanding professionals' susceptibility to judgment biases. Journal of Accounting Research 38 (1): 419--435.

Duncan, W. A., D. W. LaRue, and P.M.J. Reckers. 1989. An empirical examination of the influence of selected economic and noneconomic variables in decision making by tax professionals. Advances in Taxation 2:91--106.

Fischhoff, B., and R. Beyth-Marom. 1983. Hypothesis evaluation from a Bayesian perspective. Psychological Review 90: 239--260.

Hatfield, R. C. 2000. The effect of accountability on the evaluation of evidence: A tax setting. Advances in Taxation 12: 105--125.

Haynes, C. M. 1999. Auditors' evaluation of evidence obtained through management inquiry: A cascaded-inference approach. Auditing: A Journal of Practice & Theory 18 (2): 87--104.

Helleloid, R. T. 1989. Ambiguity and the evaluation of client documentation by tax professionals. The Journal of the American Taxation Association 11 (2): 22--36.

Hirst, D. E. 1994. Auditor's sensitivity to source reliability. Journal of Accounting Research 32: 113--126.

Hogarth, R. 1975. Cognitive processes and the assessment of subjective probability distributions. Journal of the American Statistical Association 70: 271--289.

-----, and H. Einhorn. 1992. Order effects in belief updating: The belief-adjustment model. Cognitive Psychology 24: 1--55.

Johnson, L. M. 1993. An empirical investigation of the effects of advocacy on preparers' evaluations of judicial evidence. The Journal of the American Taxation Association 15 (1): 1--22.

Kaplan, S. E., and P. M. J. Reckers. 1989. An examination of information search during initial audit planning. Accounting, Organizations and Society 14: 539--550.

Kida, T. 1984. The impact of hypothesis-testing strategies on auditors' use of judgment data. Journal of Accounting Research 22 (1): 332--340.

Koehler, J. J. 1993. The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes 56: 28--55.

Marchant, G., J. R. Robinson, U. Anderson, and M. S. Schadewald. 1993. The use of analogy in legal argument: Problem similarity, precedent, and expertise. Organizational Behavior and Human Decision Processes 55: 95--119.

Peterson, C. 1973. Special issue on cascaded inference. Organizational Behavior and Human Performance 10.

Schum, D. A., and W. DuCharme. 1971. Comments on the relationship between the impact and the reliability of evidence. Organizational Behavior and Human Performance 6: 111--131.

-----, -----, and K. DePitts. 1973. Research on human multistage probabilistic inference processes. Organizational Behavior and Human Performance 10: 318--348.

Schum, D. A. 1977. Contrast effects in inference: On the conditioning of current evidence by prior evidence. Organizational Behavior and Human Performance 18: 217--253.

Schum, D. A. 1980. Current developments in research on cascaded-inference processes. In Cognitive Processes in Choice and Decision Behavior, edited by T. S. Wallsten, 179--210. Mahwah, NJ: Erlbaum.

Shields, M. D., I. Solomon, and K. D. Jackson. 1995. Experimental research on tax professionals' judgment and decision making. In Behavioral Tax Research: Prospects and Judgment Calls, edited by J. S. Davis. Sarasota, FL: American Taxation Association.

Slovic, P., and S. Lichtenstein. 1971. Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance 6: 649--744.

Wason, P. C. 1960. On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology 12: 129--140.
                   SUMMARY OF SUBJECT CHARACTERISTICS
                                            Standard
Characteristic                        Mean  Deviation  Min.  Max.
Years of Experience in Tax Practice   6.97     3.46     3     19
Percentage of Time Spent on Closely
 Held Corporate Clients              48.42    22.50     5     90
Percentage of Time Spent on
 Tax-Planning Issues                 28.81    22.20     0     80
Initial Position on Tax Issue
 (from 1: definitely constructive
 dividend to 10: definitely deduct)   6.38     2.02     1     10
                 THE EFFECT OF RESEARCH REPORT RESULTS
                AND THEIR CONSISTENCY WITH RESEARCHERS'
                    INITIAL OPINIONS ON SUPERVISORS'
                 PERCEPTIONS OF RESEARCHER OBJECTIVITY

                        Panel A: ANCOVA Results
                              d.f.  Sum of Squares  F-statistic  p-value
Research Report Result (RRR)    1        9.46           2.57     .06 [*]
Report Consistency (RC)         1       27.94           7.60     .004 [*]
Subject's Initial Opinion       1         .05            .01     .91
(RRR) X (RC)                    1         .50            .14     .36 [*]
Model                           4       37.86           2.57     .05
Error                          57      209.55

                      Panel B: Least Squares Means
                           (Grand Mean = 5.8)
                                    Research Report Result
                                   Deduct  Do Not Deduct  Marginal Means
Report       Consistent with         5.4        4.8            5.1
Consistency    initial opinion     n = 16     n = 12
             Inconsistent with       7.0        6.0            6.5
               initial opinion     n = 17     n = 17
             Marginal Means          6.2        5.4

(*) p-value is based on a directional test based on the hypothesized
direction of the mean difference. This test is equivalent to a contrast
comparing the appropriate cells with a one-tailed t-test.
                     THE EFFECT OF RESEARCH REPORT
                   RESULTS AND THEIR CONSISTENCY WITH
                    RESEARCHERS' INITIAL OPINIONS ON
                    THE WEIGHT GIVEN TO THE RESEARCH
                         REPORT BY SUPERVISORS

                        Panel A: ANCOVA Results
                              d.f.  Sum of Squares  F-statistic  p-value
Research Report Result (RRR)    1        .38            4.19     .023 [*]
Report Consistency (RC)         1        .70            7.67     .004 [*]
Subject's Initial Opinion       1        .62            6.80     .01
(RRR) X (RC)                    1        .00             .14     .97
Model                           4       1.54            4.23     .005
Error                          51       6.19

                      Panel B: Least Squares Means
                           (Grand Mean = .36)
                                    Research Report Result
                                   Deduct  Do Not Deduct  Marginal Means
Report       Consistent with         .33        .16            .25
Consistency   initial opinion      n = 13     n = 12
             Inconsistent with       .56        .39            .47
              initial opinion      n = 14     n = 17
             Marginal Means          .44        .28

(*) p-value is based on a directional test based on the hypothesized
direction of the mean difference. This test is equivalent to a contrast
comparing the appropriate cells with a one-tailed t-test.
               THE EFFECT OF RESEARCH REPORT RESULTS AND
                  THEIR CONSISTENCY WITH RESEARCHERS'
                 INITIAL OPINIONS ON SUPERVISORS' FINAL
                            RECOMMENDATIONS

                        Panel A: ANCOVA Results
                              d.f.  Sum of Squares  F-statistic  p-value
Research Report Result (RRR)    1       84.35          100.6     .0001
Report Consistency (RC)         1         .88            1.1     .31
Subject's Initial Opinion       1       89.95          107.2     .0001
(RRR) X (RC)                    1        4.08            4.9     .03
Model                           4      225.28          67.14     .0001
Error                          57       47.81

                      Panel B: Least Squares Means
                           (Grand Mean = 6.6)
                                    Research Report Result
                                   Deduct  Do Not Deduct  Marginal Means
Report       Consistent with         7.6        5.7            6.7
Consistency    initial opinion     n = 16     n = 12
             Inconsistent with       7.9        4.9            6.4
               initial opinion     n = 17     n = 17
             Marginal Means          7.7        5.3

(*) p-value is based on a directional test based on the hypothesized
direction of the mean difference. This test is equivalent to a contrast
comparing the appropriate cells with a one-tailed t-test.


APPENDIX

Subjects were given a self-contained, computer-driven instrument. Following are sections of this instrument illustrating the measurements used to calculate the dependent variable as well as the manipulations of the independent variables. The case materials used here are derived from the tax case of Johnson (1993). Prior to the following questions and manipulations, subjects received a set of client facts along with the code and regulations surrounding the issue of importance. The issue here is whether a client can deduct a bonus paid to a sole-shareholder employee or if the payment will be treated as a constructive dividend.

The Following Question Represents the Measure of the Participant's Initial Opinion

What recommendation would you make to the client based solely on the code and regulation provisions? That is, would you recommend that the bonus be:

(1) considered reasonable and therefore deductible, or

(2) considered not reasonable and therefore not deductible (i.e., treated as a constructive dividend).
  Definitely             Uncertain              Definitely
not Deductible                                  Deductible
      +         +  +  +      +      +  +  +  +       +
      1         2  3  4      5      6  7  8  9      10


*New Screen

Since the code and regulations do not definitively clarify how the payment of the bonus should be treated, you decide to have a staff member from your office look for some similar court cases to see how the courts have treated this issue. Assume now that you have a staff member from your office perform an information search at your firm's tax library. You will be assigned a staff member by the computer. The staff member you receive is based on actual staff members from Big 6 accounting firms who have participated in a separate exercise in which they made information search and evaluation decisions regarding the same client facts and issue that you have been given.

*New Screen

You have been assigned Staff E. Staff E has 12 months' experience with your firm. E has worked on a variety of clients and has not yet specialized in a single client area. You have not worked with E before, but you find that E's evaluations are similar to others with similar amounts of experience.

Prior to your meeting with E, E reads the code and regulations on this issue. During the meeting, you tell E the issue and ask E to find some court cases where taxpayers are faced with a similar situation. Before E goes to the tax library, E states the following opinion, "After looking at the code and regulations on this issue, I feel that the client will (will not) be able to deduct the bonus as compensation, since the IRS will not (will) likely treat the bonus as a constructive dividend."

*New Screen

Your assigned staff member leaves your office to research the issue. The simulated search results are based on actual search results of actual staff accountants who participated in a separate exercise. In that exercise, the staff participants examined a population of court cases on this issue. In that population of cases, half of the cases found for the taxpayer, while the other half found for the IRS. The participants chose a subset of cases from that population that they felt were most related to the client facts and issue. In answering the remaining questions, assume that the population of court cases examined by the staff participant represents the true state of judicial authority.

*New Screen

Below is a segment of Staff E's research memo.

I have located two court cases that had similar facts to our client's facts. These two cases appear to be the most similar to our client's facts and issue. These two cases point toward treating the bonus as a deductible salary expense (constructive dividend). In these cases, the issues that led to the taxpayer's victory (defeat) were the following: employees' skills were the key to the company's success and no comparable industry in which to find average salaries (poor dividend history, lack of a ceiling on the bonus, and no increase in employee responsibility).

After examining all of the court cases I found, I believe that if the decision were up to me alone, I would recommend to the client that the deduction of the bonus will (will not) be allowed if paid.

* New Screen

Staff E responded to the same question that you answered earlier regarding whether the bonus should be considered unreasonable compensation and therefore not deductible or reasonable compensation and therefore deductible. This response is shown below.
Definitely not                                          Definitely
  Deductible                     Uncertain              Deductible
      +            (X)     +  +      +      +  +  +  X      +
      1             2      3  4      5      6  7  8  9      10


* New Screen

What recommendation would you make to the client based on all of the information you have at this point? That is, would you recommend that the bonus be:

(1) considered reasonable and therefore deductible, or

(2) considered not reasonable and therefore not deductible (i.e., treated as a constructive dividend).
  Definitely                                        Definitely
not Deductible               Uncertain              Deductible
      +             +  +  +      +      +  +  +  +      +
      1             2  3  4      5      6  7  8  9      10
COPYRIGHT 2001 American Accounting Association

Author: Hatfield, Richard Charles
Publication: Journal of the American Taxation Association
Date: Mar 22, 2001