
Reply to Discussion of Impression Management with Graphs: Effects on Choices

The discussant's comments fall into two categories: (1) issues pertaining to the experimental design and (2) concerns about the research question. We answer each in turn.


One issue raised by the discussant relates to our choice of dependent variable. We acknowledge that our definition of accuracy is liberal. Nevertheless, it provides an appropriate means of testing our primary hypotheses. A stricter definition of accuracy, such as the one that the discussant suggests, would have confounded subjects' choices of the distorted company with their choices of the other suboptimal company. Such an approach would have inflated the apparent effect of our experimental treatment. In contrast, our dependent variable counts as errors only those subject responses that are clearly due to the experimental treatment (i.e., graphical distortion).

Consider the following example. Even when the graph is not distorted, a subject may still pick a "lowest" or "medium" company as the "best" company in which to invest. The predicted treatment effect, however, is that this error is more likely to occur when the graph is distorted. Hence, our test simply compares whether subjects are more prone to picking a lowest or medium company as the best when it is distorted than when it is not distorted. By doing so, we control for the base rate of inaccuracy (when the graph is not distorted) and test for the marginal effect of the treatment on inaccuracy. We agree with the discussant's concern that the accuracy rate reported in the results may be inflated (e.g., as his example illustrates, a choice of the "medium" company as the best in which to invest is interpreted as "accurate"). That inflation reflects measurement noise, but it does not bias the test in favor of the hypothesized effect of the distorted graphs, because our dependent variable is affected only when subjects choose the distorted company.
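The base-rate-controlled comparison described above can be sketched as a two-proportion test: compare the rate at which subjects pick a suboptimal company when the graph is distorted against the rate when it is not. The counts below are invented purely for illustration; they are not the data from our experiment, and the paper's actual analysis may differ.

```python
# Hypothetical illustration of the base-rate comparison; all counts are invented.
import math

def two_proportion_z(err_a, n_a, err_b, n_b):
    """One-sided two-proportion z-test: is the error rate under
    condition A (distorted graph) higher than under condition B
    (undistorted graph)?"""
    p_a, p_b = err_a / n_a, err_b / n_b
    p_pool = (err_a + err_b) / (n_a + n_b)          # pooled error rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail p-value
    return z, p_one_sided

# Hypothetical counts: an "error" = choosing the lowest or medium company.
z, p = two_proportion_z(err_a=30, n_a=100, err_b=12, n_b=100)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With these invented counts, the error rate under the distorted condition (30 percent) exceeds the base rate (12 percent), and the one-sided test rejects equality at conventional significance levels, illustrating how the base rate of inaccuracy is separated from the marginal effect of the treatment.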

The discussant also questions our use of students as subjects. We do not think this is a problem. The use of students as subjects is primarily a concern for research investigating the effects of experience or expertise on task performance. Students are appropriate subjects in research that seeks to understand basic decision-making processes (Ashton and Kramer 1980) and for experiments where the researcher focuses on subjects' responses to changes in generic stimuli (Swieringa and Weick 1982). Our study possesses both of those characteristics. We agree that students are not good surrogates for professional investors (i.e., analysts). Our research questions, however, do not involve predictions concerning the effects of task experience. Moreover, with recent advances in the Internet and computer technology, the accessibility of investment opportunities and markets to individual investors has surged. As a result, more "naive" (i.e., less sophisticated) investors are now participating in the marketplace. We cite a number of papers that suggest that students, especially accounting majors, possess at least as much knowledge as does the average investor. Therefore, our use of student subjects is appropriate for research investigating the basic question of the effect of graph design on choices made by non-expert decision makers.

The discussant also speculates that our results may have occurred because our subjects were either not aware of the distortions or considered them to be unintentional. We did not collect process data that would allow us to evaluate those speculations definitively. Neither possibility, however, undermines our conclusions. If subjects were not consciously aware that some of the graphs were distorted, yet made choices that were affected by those distortions, then that provides evidence that such manipulations are indeed a matter for concern. Similarly, if subjects' choices were affected by distorted graphs that they noticed but discounted as unintentional, then that still shows that graph design affects choice.

A related issue concerns our subjects' motivation. As we explain in the paper, subjects were provided with two incentives. First, they received extra credit, worth 2.5 percent of the course grade, for participating in the experiment. For many students, that is enough points to make a difference of a letter grade in the course. Second, subjects had the opportunity to earn $25 based on the quality of their choices. The experiment lasted less than one hour and took place in a college town; thus, this potential prize is considerably above the average hourly wage earned by most subjects.

One other experimental design issue raised by the discussant concerns the adequacy of the information provided to subjects. We wanted a realistic task while maintaining strict experimental control and internal validity. Providing subjects with the volume of cues and information that the discussant suggests would easily have compromised that control, resulting in a loss of not only internal but also external validity. We therefore chose an appropriate degree of realism while seeking to maximize experimental control and internal validity, a necessary condition for external validity.


Many of the discussant's comments concern the desirability of conducting a different experiment to address other research questions. We agree that it would be interesting and useful to investigate why subjects' choices are affected by improperly designed graphs. It would also be interesting and useful to investigate alternative methods for sensitizing people to the potential biases that improperly designed graphs might cause. As the discussant acknowledges, however, those are entirely different research questions from the one we investigate. We chose to focus our study directly on the question of whether improperly designed graphs affect choices. We believe that this is an appropriate initial question to ask. Prior research has documented the existence of improperly designed graphs; the next logical question is whether such graphs affect decision making. Our results suggest that improperly designed graphs can indeed affect choices. Given our findings, it is now appropriate to undertake additional research that seeks to understand why those effects occur and how to mitigate them.


Ashton, R. H., and S. S. Kramer. 1980. Students as surrogates in behavioral accounting research: Some evidence. Journal of Accounting Research 18 (Spring): 1-15.

Plumlee, R. D. 2002. Discussion of impression management with graphs: Effects on choices. Journal of Information Systems (Fall): 203-206.

Swieringa, R. J., and K. E. Weick. 1982. An assessment of laboratory experiments in accounting. Journal of Accounting Research 20 (Supplement): 56-101.
COPYRIGHT 2002 American Accounting Association

Authors: Vairam Arunachalam, Buck K. W. Pei, and Paul John Steinbart
Publication: Journal of Information Systems
Date: September 22, 2002