
Psychology's tangled web: deceptive methods may backfire on behavioral researchers.

In Marmion, Sir Walter Scott describes with memorable succinctness the unanticipated pitfalls of trying to manipulate others: "O, what a tangled web we weave, when first we practice to deceive."

The poet's tangled web aptly symbolizes the situation that some psychological researchers now find themselves struggling with.

Consider a study conducted recently by Kevin M. Taylor and James A. Shepperd, both of the University of Florida in Gainesville. Seven introductory psychology students took part in their pilot investigation, which measured the extent to which performance on a cognitive task was affected by experimenter-provided feedback after each of several attempts. Because of a last-minute cancellation by an eighth study recruit, the researchers asked a graduate student to pose as the final participant.

Only the graduate student knew beforehand that feedback was designed to mislead participants in systematic ways about their successes and failures on the task.

At the conclusion of the trials, an experimenter who had monitored the proceedings briefly left the room. Although they had been warned not to talk to one another, the seven "real" participants began to discuss the experiment and their suspicions about having been given bogus feedback. A brief comparison of the feedback they had received quickly uncovered the researchers' deceptive scheme.

When the experimenter returned, participants acted as though nothing had happened. The experimenter announced that the trials had included deception and asked students on three separate occasions if they had become suspicious of anything that happened during the laboratory experience. All of them denied having had any misgivings, in interviews as well as on questionnaires, and divulged nothing about their collective revelation.

At that point, the experimenter dismissed the students and expressed confidence that they had provided useful data. The graduate stand-in, who purely by chance had witnessed the participants' secret deliberations, unburdened him of that illusion.

In a letter published in the August 1996 American Psychologist, Taylor and Shepperd bravely fessed up to having had the tables turned on them by their own students. In the process, they rekindled a long-running debate about whether psychologists should try to fool research subjects in the name of science.

"Our seven participants do not represent all experimental participants," the Florida investigators concluded. "Nevertheless, their behavior suggests that, even when pressed, participants cannot be counted on to reveal knowledge they may acquire or suspicions they may develop about the deception or experimental hypotheses."

Deceptive techniques have gained prominence in psychological research, and particularly in social psychology, since the 1960s. Moreover, studies that place participants in fabricated situations featuring role-playing confederates of the investigator have attracted widespread attention.

Consider a 1963 investigation conducted by the late Stanley Milgram, which still pervades public consciousness. Although ostensibly recruited for a study of learning and memory, volunteers unwittingly took part in Milgram's exploration of the extent to which people will obey authority figures.

Many volunteers accepted the exhortations of a stern experimenter to deliver increasingly stronger electric shocks to an unseen person in an adjoining room every time that person erred in recalling a list of words. Nearly two-thirds of the participants agreed to deliver powerful shocks to the forgetful individual--a confederate of Milgram's who received no actual shocks but could be heard screaming, pounding the wall, and begging to leave the room.

Milgram's study inspired much debate over the ethics of deceiving experimental subjects and whether data collected in this way offer clear insights into the nature of obedience or anything else. It also heralded a growing acceptance of deceptive practices by social psychologists.

Researchers have tracked this trend by monitoring articles appearing in the Journal of Personality and Social Psychology, widely regarded as the field's premier publication. Only 16 percent of empirical studies published there in 1961 used deception. That proportion rose to nearly 47 percent in 1978, dipped to 32 percent in 1986, and returned to 47 percent in 1992. The investigators generally agree that an even larger proportion of studies published in other social psychology journals have included deceptive techniques.

Deception still occurs relatively frequently in social psychology studies, although less often than in 1992, holds psychologist James H. Korn of Saint Louis University, who has investigated the prevalence of these practices and written a book titled Illusions of Reality: A History of Deception in Social Psychology (1997, State University of New York Press). Dramatic cases of experimental manipulation like Milgram's obedience study rarely appear anymore, he adds.

Instead, deception now usually involves concealing or camouflaging an experiment's true purpose in order to elicit unguarded responses from volunteers. For instance, researchers running a study of the effects of misleading information on memories of a traffic accident presented in a series of slides may simply tell participants that they're conducting an analysis of attention. After completing the study, volunteers get a full explanation of its methods and goals from experimenters in a debriefing session.

In Korn's view, this mellowing of deceptive tactics partly reflects an injunction in the current ethical guidelines of the American Psychological Association that "psychologists never deceive research participants about significant aspects that would affect their willingness to participate, such as physical risks, discomfort, or unpleasant emotional experiences."

Yet, as Taylor and Shepperd's humbling experience demonstrates, even the softer side of deception can have rough edges. In fact, the clandestine knowledge of the Florida recruits underscores the need to do away entirely with deception in psychological research, argue economist Andreas Ortmann of Bowdoin College in Brunswick, Maine, and psychologist Ralph Hertwig of the Max Planck Institute for Human Development in Munich, Germany. Suspicions of being misled may affect subjects' responses and complicate interpretation of the results.

A number of like-minded critics have challenged the ethics of using deceptive techniques, whether or not they generate compelling findings. Ortmann and Hertwig take a more practical stand. They regard deceptive procedures--even in mild forms promulgated by a significant minority--as the equivalent of methodological termites eating away at both the reputation of all psychological researchers and the validity of their findings.

People in general, and the favorite experimental guinea pigs--college undergraduates--in particular, have come to expect that they will be misled in psychology experiments, Ortmann and Hertwig argue. Each new experiment in which participants are deceived and debriefed sets off another round of extracurricular discussions about psychologists' sneaky ruses, they say.

This process transforms interactions between a researcher and a participant into a real-life episode of a repeated prisoner's dilemma game, the researchers contended in the July 1997 American Psychologist. In these games, two or more people choose either to cooperate or to pursue self-interest in some task over many trials.

A selfish choice by one person yields a big payoff to that player and virtually nothing for everyone else; cooperation by all players moderately benefits everyone; and multiple selfish moves leave everyone with little or nothing. Cooperation in these games quickly unravels when an identifiable player consistently opts for self-interest (SN: 3/28/98, p. 205).
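To make that payoff logic concrete, here is a minimal sketch in Python of a two-player repeated prisoner's dilemma of the kind the article invokes. The specific payoff values and the tit-for-tat and always-defect strategies are illustrative assumptions, not details given by Ortmann and Hertwig; the point is simply that cooperation collapses once one player reliably pursues self-interest.

```python
# Sketch of a repeated prisoner's dilemma; payoff numbers are illustrative.
PAYOFFS = {  # (my move, partner's move) -> (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: moderate benefit for both
    ("defect", "cooperate"): (5, 0),     # selfish move: big payoff for defector, nothing for the other
    ("cooperate", "defect"): (0, 5),
    ("defect", "defect"): (1, 1),        # multiple selfish moves: little for everyone
}

def tit_for_tat(history):
    """Cooperate on the first trial, then copy the partner's previous move."""
    return "cooperate" if not history else history[-1][1]

def always_defect(history):
    """A consistently self-interested player."""
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated trials and return each player's total payoff."""
    history_a, history_b = [], []  # each entry: (own move, partner's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    # Against an identifiable defector, cooperation unravels after one trial
    # and both players end up with little: (9, 14) over 10 rounds.
    print(play(tit_for_tat, always_defect))
    # Two cooperators sustain the moderate mutual benefit: (30, 30).
    print(play(tit_for_tat, tit_for_tat))
```

In the analogy, a participant who expects to be deceived behaves like the player facing a known defector: cooperation stops early, and the loss may go unnoticed by the experimenter.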

Likewise, participants who have reason to believe that they will somehow be deceived by psychologists are likely to turn uncooperative in the laboratory--and in ways that a researcher may not notice, according to Ortmann and Hertwig.

"Psychologists have good intentions and follow their ethical manual, but they still foster mistrust in their subjects when they use deception," Ortmann says. "This is a question of clean research design, not just ethics."

Most experimental economists avoid using deceptive methods because of concerns about their corrosive effects on the discipline's reputation among potential research participants, the Bowdoin researcher holds. Economics studies depend on highly structured games, such as the prisoner's dilemma, which require volunteers to enact an explicit scenario; investigators monitor changes in behavior over a series of trials. Each participant receives a monetary payment based on his or her overall performance in the experiment.

In contrast, Ortmann and Hertwig assert, psychologists usually do not ask volunteers to assume a specific role or perspective in performing mental tasks, do not conduct multiple trials, and pay participants a flat fee or nothing. Recruits get a poor feel for the purpose of these experiments and are likely to second-guess the scientist's intentions, especially if they suspect that a study includes deception.

Such suspicions flare up all too easily in studies of ongoing social interactions in which one participant secretly plays a fixed role at the researcher's behest, notes psychologist William Ickes of the University of Texas at Arlington. Volunteers who try to talk spontaneously with such a confederate often note an unnatural or bizarre quality to the conversation and become wary of the entire experiment, Ickes says.

The Texas scientist, who studies empathic accuracy (SN: 3/23/96, p. 190), focuses on undirected and unrehearsed encounters between volunteers.

Deception-free methods do not sift out all experimental impurities. Economics research, for instance, places individuals in abstract, potentially confusing situations that may have limited applicability to real-life exchanges of money and goods, Ortmann says. Still, the use of deceptive techniques would stir up mistrust among research subjects and throw the whole enterprise off course, he argues.

Critics of Ortmann and Hertwig's call to outlaw all forms of experimental deception defend its use in judicious moderation. Three psychologists elaborate on this view in the July American Psychologist.

"The preponderance of evidence suggests that deceived participants do not become resentful about having been fooled by researchers and that deception does not negatively influence their perceptions about psychology or their attitudes about science in general," states Allan J. Kimmel of the Ecole Superieure de Commerce de Paris, France.

Several surveys of people who have participated in psychological studies that included deceptive tactics find that, compared to their counterparts in nondeceptive experiments, they report having enjoyed the experience more and having learned more from it, Kimmel says.

Deceptive studies that include careful debriefing sessions preserve psychology's reputation, adds Arndt Bröder of the University of Bonn in Germany. In his own department, Bröder notes, the researchers explain the nature and necessity of experimental deceptions to all participants during the debriefing, and most participants agree to take part in further studies.

Sometimes researchers have no alternative but to hide their intentions from participants, contends Korn. A total ban on deception would obstruct certain types of work, he says, such as explorations of how people form and use ethnic and religious stereotypes.

Positive attitudes expressed on surveys and continued willingness to show up for experiments do not reassure Ortmann and Hertwig that participants accept experimental situations and researchers' directions at face value.

It is not necessarily deception when a researcher fails to tell participants the purpose of an experiment, they say, but it is always deception if a researcher tells falsehoods to participants in the course of an experiment.

At that point, a tangled web of social interactions may begin to trip up scientific progress. Participants who unravel a scientific fib may feel that they should not know more about an experiment than the researcher tells them and that such knowledge may invalidate their responses, maintain Taylor and Shepperd. As a result, perceptive volunteers zip their lips and make nice for the investigator.

All sorts of unspoken inferences by participants can intrude on the best-laid research plans, even if they exclude deception, argues Denis J. Hilton of the Ecole Superieure des Sciences Economiques et Commerciales in Cergy-Pontoise, France.

In the September 1995 Psychological Bulletin, Hilton analyzed how volunteers' assumptions about the meaning of experimental communications in several areas of psychological research can affect their responses.

For example, participants often read more into response scales than experimenters had intended. In a study noted by Hilton that asked volunteers how often they had felt irritated recently, those given a scale ranging from "several times daily" to "less than once a week" reported relatively minor irritations, such as enduring slow service at a restaurant. Those given a scale ranging from "several times a year" to "less than once every 3 months" cited more extreme incidents, such as a marital fight. The time frame provided by the scales shaped the way in which irritating episodes were defined and tallied.

Further complications in interpreting responses ensue when participants mistrust a researcher's objectives, Ortmann asserts.

"The question of whether deception matters deserves further inquiry," he remarks. "Too often, we as scientists don't think carefully about methodological issues and take for granted our experimental conditions."
By Bruce Bower, Science News, June 20, 1998
COPYRIGHT 1998 Science Service, Inc.