
The study of politics: what does replicability have to do with it?

The purpose of this rejoinder is twofold. The first is to question the call to extend a "replication standard" to qualitative research in political science (see Golden 1995a). The second is to question how useful the focus on "replicability" is for research in political science in general (see King 1995, and the debate on "Verification/Replication" in PS 28(3), 1995: 443-99).

What underlies a replication standard for qualitative research? I believe that the call for a replication standard for qualitative research is applicable to only certain kinds of research and that, even within those limits, it is probably misguided. For the most part, when proponents of a replication standard have anticipated dissension over the requirement of replication for qualitative research, they have pointed to the logistical, pragmatic, and privacy burdens that such a requirement may impose on researchers.

In contrast, my main contention concerns the very conceptualization of research as replicable. I argue that the assumptions underlying a replication standard are strongly and narrowly positivist. The replication standard itself is reductionist, and its use would marginalize a portion of the qualitative research now being conducted in political science.

The notion of a replication standard presumes that the researcher is adhering to narrow positivist principles. Positivist research agendas delimit research in terms of verifiable, observable, and individual factors. Likewise, methodological individualism delimits the focus to observable action, observable in the sense of individual, verifiable, rational units of analysis. For example, rational choice studies understand social phenomena as the consequence of deliberate, individualist action. In her article, "Replication and Non-Quantitative Research," Miriam Golden discusses the archival placement of all interview materials and field notes that constituted a particular research project, and from which presumably a "replication experiment" could be performed. From her perspective, data collection appears as the agglomeration of individual, observable referents by which social phenomena are constituted and analyzed.

However, part of contemporary qualitative research in political science (as in other disciplines) contests these positivist assumptions, in particular the proof-related, intentionalist, and individualist biases.(1) For example, in my own work on the political process of citizenship politics, I examine three different dimensions: political agency, the interactions of institutions and agents, and the constructed nature of political processes. The process-oriented framework in which my field work and interviews were carried out contradicts the assumptions upholding the notion of replication. From my perspective, social phenomena are neither the aggregate of a series of deliberative actions nor simply evidenced by verifiable observable action.

In my research, field notes, background interviews, interview sets, content analysis of press reports, government documents and other primary sources, and survey data combine as indicators of a constellation of factors (institutional, ideological, structural, and collective as well as individual-level factors) shaping contemporary citizenship politics. From this perspective, field notes and interview sets could not be reconstituted as sites from which to conduct potential replication experiments. This critique is not simply an issue of "decontextualizing" data but of taking the standard literally and considering whether a replication of the research results could be carried out on the basis of the archived materials.

These reservations about the assumptions and applicability of a replication standard highlight another, related problem with the argument that we should "conceptualize our research projects from the outset as potentially replicable" (Golden 1995b, 13). Namely, such an argument sets up the standard to encourage reductionist and narrowly positivist outcomes.

What is encouraged is research that is confined to a series of data points or entries, which are easily archived and superficially verifiable. Such data, which can include standardized interviews and surveys, may appear more systematic but, as Albert Hirschman has noted, can be sadly lacking in necessary insight and accuracy as compared to narratives. Narratives are not easily reduced to data entries, nor are they readily replicable in the ways alluded to by proponents of the standard.(2) By encouraging graduate schools, journals, and tenure/promotion committees to adopt "replication" standards, the proponents of a replicability standard are, in effect, weighting the scales in favor of only certain kinds of research and marginalizing other kinds of research anew.

The arguments thus far presented have not been about the impracticalities of instituting a replication standard, but about the necessarily narrow and positivist conceptualization of research as replicable. Even if one's research falls within the parameters of "potentially replicable" work, the arguments for the replication standard bear questioning. So, let us step back into the confines of positivist research, and imagine a positivist foraging in the field. What use is a replication standard?

Again, according to Golden, the standard most importantly "would improve our own research techniques in the field . . . we will design our research more carefully, select respondents more systematically, and record interviews more fully" (1995b, 13). Methodological clarity and responsibility appear central as well to Gary King's concerns; he notes at the outset of his article in PS that scholars should be able to answer questions such as "How were the respondents selected? Who did the interviewing?" (1995, 444).

The argument for the standard appears to be justified in large part on methodological grounds. Yet how the standard would help rectify methodological problems is not self-evident, nor is it an economical way of doing so. Most of the proponents of a replicability standard are clearly committed to strengthening, standardizing, and clarifying the methodological aspects of comparative work. So, why be roundabout and talk in terms of replication? In my own field - comparative politics - focused discussion on methodology would certainly be beneficial to all ranks of comparative scholars, from students embarking on their dissertation research to senior researchers. There is no doubt that "methods courses" for field work have always been a weak area in our discipline.(3) Yet, the debate on methodology should not be subsumed under the replicability question. Whereas a focus on methodology would force all of us to question our methodological premises and aims, the debate on replication simply assumes positivist methodologies, as evidenced by much of the commentary on replication/verification in the September 1995 issue of PS.

Of course, the focus on replication has been only partly about methodology. It has also been about "replication" itself. It is at this point that I move to encompass both quantitative and qualitative research in my comments. The purpose or usefulness of a replication standard for research in political science appears to be severalfold. Beyond the methodological aims, Gary King (PS, 1995, 444-47) mentions verification, evaluation, and pedagogical experimentation as reasons to institute a replication standard. Verification is, most dramatically, the check for falsified data and academic fraud, what I call the "white mice chase." If medical researchers can paint their mice white and behold albino mice, certainly we political scientists can be as ignoble and imaginative. Verification is an important, though certainly not central, aspect of our work.

Even here, it should be noted that, for the most part in the sciences, verification is commonly understood as the replication of experiments, not as replication from data sets.(4) To focus on the replication of research results from data sets, whether quantitative or qualitative, is actually to return to methodological considerations (e.g., how were variables chosen and coded; how were interviews chosen and coded). Or, it is to check for deliberately fraudulent data collection, which is probably quite rare.(5) If replication is instead to refer to the research problem itself, it is admittedly much more difficult for political scientists to replicate our "experiments" than it is for many scientists, notwithstanding that the "world is our laboratory"!

The second purpose, evaluation, appears to cover both the quality and the kind of work. The use of evaluation based on the replication standard speaks directly to the kind of research legitimated and encouraged by the replication proponents. It encourages, for the most part, strictly positivist research agendas. Finally, justifying the replication standard on the ground that it affords possibilities for pedagogical experimentation is clearly not economical. If that were one's aim, it would be smarter to target interesting and appropriate data collections directly.

Replication is not, in fact, a key issue for research in political science. Yes, there are methodological problems in political science, in both qualitative and quantitative research. But, a direct discussion of our methods is the solution, not a discussion of replication. Defining the debate in terms of replication displaces a frank exchange about methodology and, even more insidiously, dismisses an important portion of our research. Yes, there are verification and fraud problems in political science. But, the "white mice" issue should not be blown so far out of proportion as to require the depositing of all field notes, data sets, and interviews. Perhaps there are even interesting pedagogical and scholarly possibilities in the existence of data collection archives.(6) But, the costs to the field are great. So, let us put aside the call for a replication standard, and move to act more directly on the actual problems facing the discipline.

Notes

1. There is a large body of literature in the social sciences which critiques positivist methodologies and offers alternative frameworks. See, for example, Paul Rabinow and William Sullivan, eds., Interpretive Social Science: A Second Look (University of California Press, 1988). My point here, however, is not to recount alternative methodologies in detail but to take note of their existence and position.

2. L. Sandy Maisel makes a similar point on the kind of research that would be discouraged by a replication standard. See his article, "On the Inadequacy and Inappropriateness of the Replication Standard," PS: Political Science & Politics 28(3): 467-70, especially p. 468.

3. The paltry literature on and lack of attention paid to methodology and field work in political science stands in contrast to other social science disciplines. Consider, for example, the large body of literature on the methodologies of oral history, ethnography, and historical sociology.

4. With regard to the (natural) sciences, David Goodstein, in his article, "Scientific Fraud" (The American Scholar, vol. 60, no. 4, Autumn 1991, pp. 505-15) has noted that while, in fact, it is rare for experiments to be replicated, "the idea runs deep that things are causally related in a relatively straightforward way and are therefore reproducible" (p. 513).

5. None of the commentary on replication/verification in the September 1995 PS suggests that there is a high incidence of academic fraud in the discipline. For his part, Goodstein points out that the incidence of deliberate scientific fraud in the (natural) sciences is actually quite low (p. 512).

6. For example, the research that could be conducted on the paradigms shaping any one body, field, or era of work is conceivably very rich. But, this kind of interpretive research on data collections is not what motivates replication proponents.

References

Golden, Miriam. 1995a. "Replication and Non-Quantitative Research." PS: Political Science & Politics 28(3): 481-83.

-----. 1995b. "Comments on Replicability and the Study of Comparative Politics." APSA-CP 6(2): 13.

King, Gary. 1995. "Replication, Replication." PS: Political Science & Politics 28(3): 444-52.

About the Author

Miriam Feldblum is a senior research fellow in the Division of Humanities and Social Sciences, Caltech, and assistant professor of politics (on leave) at the University of San Francisco.