Institutional Review Board Mission Creep: The Common Rule, Social Science, and the Nanny State

In this article, I scrutinize the process by which scientific research on human subjects is regulated by Institutional Review Boards (IRBs). At the outset, let us agree that at least some biomedical scientific research on human subjects must be externally monitored and that whether government should sometimes be involved in that process is at least an open question. We simply cannot forget the lessons learned from Nuremberg and Tuskegee. My argument, however, is that although the IRB process may have been at least marginally well suited to serve its original mission (to protect federally funded biomedical research subjects from physical harm), that process has become buried in an avalanche of new and unrelated socially constructed mandates. Today, the IRB process consumes an inordinate amount of time, energy, and resources in attempting to prevent a growing list of imagined harms, minor harms, or highly unlikely harms. Consequently, IRBs no longer serve their original mandate well. Worse, they have surreptitiously undermined legitimate and useful social science, science education, and freedom of inquiry. Despite a growing body of scholarly criticism, seasoned with IRB horror stories, the beat goes on ("Communication Scholars' Narratives" 2005).

Mission drift denotes a devolutionary process familiar to most scholars who study the history of public institutions: the process of co-opting a successful and well-conceived process (or in this case a marginally successful process), then gradually and mindlessly expanding it until it is no longer capable of performing its original function--the familiar Peter Principle, as applied to institutions (Peter and Hull 1969). The gradual expansion of public schools from relatively simple, locally administered educational institutions to complex socioeconomic institutions remotely controlled by a web of local, state, and federal agencies is a prime example of mission drift. Mission creep, the term I prefer here, signifies a more deliberate, sneaky, and nefarious form of devolutionary change than the more unintentional, randomized "drift" evident in other government institutions.

During the past thirty years, the IRB has devolved to become an ineffective means of regulating the diverse activities that the government ambiguously calls "scientific research on humans." Moreover, the government's continued reliance on monopolistic, one-size-fits-all institutionalized solutions, such as the IRB process, clearly threatens the future of behavioral science, if not of biomedical science, by overloading the system with paperwork and by wasting the time, effort, and resources of everyone involved, including researchers, board members, students, teachers, and government officials. Even more troubling, the process undermines science education and the last vestiges of "academic freedom."

We may well recognize that some IRBs at some institutions are less overworked, more efficient, and less intrusive than others, and therefore are less likely to elicit controversy (Ferraro et al. 1999). Some colleges and universities focus more on teaching than on research, and many of those institutions do not rely on the IRB process to regulate students' behavioral research. In addition, significant puzzles surround the social dynamics that emerge between local IRBs and researchers (Keith-Spiegel 2005). Finally, across the board, many researchers imperceptibly employ IRB avoidance and deliberately design their own research and their students' research to minimize IRB scrutiny. In light of the foregoing considerations, it is difficult to assess scientifically either researchers' satisfaction with the IRB process or the costs and benefits associated with the process as a whole. Prevention of imaginary harms can be especially tough to quantify!

Nevertheless, I argue from a utilitarian standpoint that the protection afforded research subjects across the social-science disciplines by the IRB program is now far outweighed by the costs of implementing it. These costs include not only sacrificed time and energy on the part of government, researchers, and IRB members, but also a variety of long-term, hidden costs, most notably, the undermining of the teaching of social science in colleges and universities.

History of IRB Mission Creep

The history of IRB mission creep is fraught with mind-boggling complexity. The 1960s marked the first rumblings of committee review of federally funded scientific research on human beings. In 1966, Surgeon General William Stewart issued the first federal policy statement on the protection of research subjects in research funded by the Public Health Service. This policy called for "prior review of the judgment of the principal investigator or program director by a committee of his institutional associates" (Levine 1988, 353). This peer review was to monitor the investigators' "judgments" about whether a research project might harm research subjects.

Since 1966, numerous revisions and emendations of that original concept have come forth. In the 1970s, the need for large-scale governmental oversight of biomedical research seemed justified by a series of highly publicized scandals, such as the one related to the Tuskegee Syphilis Study. Back in 1963, behavioral research had its own quasi-scandals, most notably Stanley Milgram's highly deceptive, but nevertheless harmless, experiment on obedience and individual responsibility. In 1974, amidst a firestorm of research-related concerns (including fetal research), Congress passed the National Research Act. This law required the Department of Health, Education, and Welfare (which later became the Department of Health and Human Services [DHHS]) to issue regulations, administered via IRBs, for all research the department funded, and it created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (Chadwick and Dunn 2000). This regulatory net would gradually be extended to research not funded by government.

Indeed, one of the early symptoms of mission creep is the proliferation of these politically charged "fact-finding commissions" and their subsequent "findings," which, most often, lead to new bureaucratic flow charts that merely reshuffle the structural relationships among watchdog agencies, commissions, and committees within the various levels of government. (The most recent example of such "flow-chart reform" in the United States was the creation of the Department of Homeland Security.)

During the congressional hearings held in the early 1970s, considerable debate occurred concerning the locus of control over scientific research involving human subjects. Would it be at the federal level or at the institutional level? The compromise solution to the control problem was to empower local IRBs to regulate research within their own respective institutions, with the DHHS providing "guidance." Federal "guidance" was initially supplied by the Belmont Report (1979), which stipulated a rights-based moral structure designed to focus IRB concerns on securing the informed consent of research subjects. That moral framework gradually devolved into a mechanical checklist of "do's and don'ts" expressed in increasingly more complex verbiage and convoluted rules. Hence, we have the initial phase of IRB mission creep from the application of deontological moral principles to institutionalized interpretation of those principles embedded in codified rules and procedures.

Meanwhile, the Food and Drug Administration (FDA) developed its own IRB structure that extended not only to federally funded research but to any research involving drugs, biologics, and eventually medical devices. Henceforth, there would be two separate sets of IRB guidelines: one administered by the FDA, the other by the DHHS. In 1981, some of the major differences between these agencies were ironed out, but only the FDA regulations specifically defined the IRB's role: to "assure the protection of the rights and welfare of the human subjects" (21 Code of Federal Regulations, sec. 56.102[g]; Chadwick and Dunn 2000). Today, that mandate has been obscured by numerous "flow-chart reforms" and the persistent inability of the National Institutes of Health, the FDA, institutions in general, and researchers to communicate effectively with one another.

Beginning in 1991, seventeen federal agencies adopted the "Common Rule" as the basis for their regulation of research. This rule was stated in the text of 45 Code of Federal Regulations Pt. 46, with references to numerous internal documents of the various agencies. As Hamilton observes, "The contrasting language and organization of these documents demonstrates that between 1979 and 1991, regulation became much more specific yet less decipherable, less doable, and even less discoverable" (2005, 192).

Nevertheless, throughout the 1990s, most colleges and universities voluntarily adopted the Common Rule as the basis for regulating both federally funded and non-federally funded research at their institutions. Today, most have institutionalized their own IRBs, which oversee not only federally funded research but all research that produces "generalizable knowledge"--that is, not only biomedical research, but also harmless behavioral research, and not only faculty research, but also student research. Thus, we have the second major phase of IRB mission creep.

In December 2000, the National Bioethics Advisory Commission (NBAC) drafted a report that recommended sweeping changes in the structure and process of IRBs (NBAC 2001). These recommendations portended a whole new level of mission creep by proposing that all research involving human subjects be reviewed, regardless of the locus of funding; that all such regulation be brought under a single agency; and that IRB committee members be certified. Most of these recommendations remain currently locked away in bureaucratic limbo. But beware!

Finally, in 2003 came the promulgation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, which regulates how health plans and providers may use and disclose patient information. Based on patients' right to privacy over all medical information, regardless of the harms that might be associated with disclosure, this rule has the potential for saddling IRBs with yet another phase of mission creep and even more regulatory complexity (Holt 2003). In the world of science, where researchers are expected to replicate each other's research and to share risk-based information in a timely fashion, any policy that undermines transparency threatens the whole scientific enterprise. As the HIPAA rule spreads throughout the institutions of social science, we can expect less replicable science and more outright scientific fraud.

Over the years, the IRB regulatory structure has been subject to numerous revisions, restructurings, and elaborations, but the overall drift of these changes has always been toward the expansion of IRBs' scope and authority (AAUP 2001). Recent expansions include new rules for the regulation of "clinical trial websites" and proposed new rules for "the Registration of IRBs and Independent Ethics Committees" (McDaniel, Baker, and Lansink 2002, 32). As Chadwick and Dunn sum up the situation, "Like many highway projects, the IRB system was sound when it was designed, but became out-of-date and overloaded almost from the start" (2000, 21). Interestingly, most of the actual overload was initiated by individual institutions and local IRBs, not by federal mandates.

Despite the numerous structural and procedural changes, and despite radical changes in the nature of biomedical and social-science research, the Common Rule itself has proved to be institutionally resistant to systemic change. This resilience can be readily attributed to the fact that it now governs the research of twenty different, turf-protecting federal agencies. Meanwhile, the nature of scientific research has evolved significantly. In the 1970s, most research was conducted by single researchers with only a few research subjects, lower financial stakes, fewer lawyers and politicians tinkering with the system, and therefore fewer overt conflicts of interest (Hamilton 2005, 193). Today, however, large-scale federally funded research projects are conducted on many different institutional sites, which creates jurisdictional puzzles for local IRBs, increased regulatory expense, and high cost-benefit ratios (Burman et al. 2001).

During this same period, the amount of federal money distributed across disciplines has grown exponentially, even in the social sciences. Today, most public and private research institutions and many corporations rely heavily on federal research dollars. Institutional success in research now hinges on researchers' ability to "bring home the bacon" in the form of lucrative federal research grants. As the sheer volume of federally funded research increases, major research institutions invariably end up with overworked IRBs, bureaucratic delays, and outright mistakes. When colleges and universities voluntarily began to submit their non-federally funded research and student research for IRB oversight, the floodgates were opened wide.

To complicate the process even more, IRBs are "courts of last resort": there is no external monitoring of IRB decisions and no appeals process. As institutionalized monopolies, these committees are shielded from external scrutiny, immune from assessment, and therefore systematically unaccountable for their decisions. If the IRB disapproves a scientist's research or demands substantial protocol revisions, he is simply out of luck.

Conceptual Problems with the IRB Regulatory System

The problems associated with the IRB regulatory system are well documented (see Peckman 2001; Hamilton 2002, 2005). Emanuel and associates (2004) classify them as structural, review-procedure, and performance-assessment problems. The IRB process is also rife with conceptual ambiguity. I focus here on four socially constructed conceptual oddities that contribute substantially to IRB mission creep: the system's overly broad definition of research as "generalizable knowledge"; its failure to distinguish clearly between biomedical and behavioral risk; its overreliance on the concept of "vulnerable populations"; and its systematic failure to distinguish between "conducting scientific research" and "teaching scientific research."

Research as "Generalizable Knowledge"

Federal regulations define research as "a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge" (45 Code of Federal Regulations, sec. 46.102[d]). Any research that does not meet this standard is "excluded" from IRB scrutiny. The basic problem is how broadly any particular IRB might construe the concept of "generalizable knowledge." Part of the puzzle certainly springs from the differences in how the various scientific disciplines arrive at their generalizations. These methodological differences are often represented by the terms quantitative research, which is typical of biomedical research, and qualitative research, which is typical of behavioral research.

In the narrow sense, the term generalizable might be interpreted reasonably as synonymous with quantifiable. This category would seemingly include any research that employs statistical analysis of collected data. It would certainly include all surveys, questionnaires, and so forth. It would seemingly exclude all journalistic or historical research that involves interviewing a single person. However, if researchers interview two persons and compare their answers, are they not, in a sense, generalizing? So, if we construe generalizable in the broadest sense, any research that makes generalizations apparently falls into this category. Consequently, the malleability of the concept "generalizable" has made it difficult to decide whether all, some, or none of the research in journalism, communication, ethnology, and history comes under the jurisdiction of the Common Rule. One of the recent squabbles over IRB regulation, for example, involved federally funded oral-history research.

During the late 1990s, the American Historical Association (AHA) and the Oral History Association (OHA), local IRBs, and the government wrestled with the question of whether oral-history research falls within the jurisdiction of IRBs under the Common Rule definition of research as "generalizable knowledge." The problem was magnified no doubt by oral history's growing popularity and by the proliferation of oral-history projects in the United States. Throughout the 1990s, history research was gradually enveloped by the drift of IRB regulatory zeal. Although such research was often granted "exempt" status, many researchers found themselves mired in IRB red tape. Many oral historians argued that oral-history research ought to be entirely excluded from IRB scrutiny. The growing number of complaints to the AHA and OHA led to political action.

Oral-history research is based on interviewing research subjects and archiving those conversations as transcripts or recordings for subsequent use by other historians. In other words, many oral-history projects separated data collection from generalization. Moreover, these conversations by their nature are interactive because the interviewer's questions are shaped by the interviewee's previous answers. When oral historians began to submit their research protocols to IRBs, the boards were usually composed of natural scientists and biomedical researchers. One of the first problems to surface was that many local IRBs required oral historians to submit detailed questionnaires prior to conducting interviews and to destroy the tapes and transcripts. Other problems encountered included a host of privacy issues that stemmed from archiving the conversations (Shopes 2000).

Two politically charged issues were at stake in the oral-history debacle. First, what set of rules would be adopted to ensure informed consent and privacy for research subjects? Second, would professional associations or local IRBs be responsible for articulating and enforcing these rules? Historians decided strategically to base their argument in favor of professional control on the simple idea that oral-history interviews do not qualify as research under the government's vague definition. For the AHA and the OHA, this strategy created a troubling dilemma. These professional organizations would have either to admit that what they do is not really "research," at least as defined by the government, which would undermine the scientific status of historical research, or to admit that it is research and thus have to submit to onerous IRB oversight, which would almost certainly hamstring the oral-history movement. Of course, they could have pursued other defensive strategies, such as seeking substantive changes in the Common Rule, but that quest would have required the cooperation of at least twenty government agencies and years of procedural rigmarole.

In August 2003, the professional associations' efforts resulted in an ad hoc ruling by the Office of Human Research Protection that granted "exclusion" status to most oral-history research, provided that the results of the interviews are not "generalized" and that the researcher does not intend to quantify the results. This ruling will almost certainly open the door to ad hoc "exclusion" status for other qualitatively oriented disciplines, such as ethnology, communications, journalism, and cultural anthropology. Of course, one might legitimately question how the determination of generalizability relates to the protection of research subjects.

It is amazing that a simple governmentally instituted conceptual ambiguity such as that inherent in "generalizable knowledge" can lead to jurisdictional conflicts among government, IRBs, and professional associations. It is equally remarkable that the resolution of these conflicts tends to generate ad hoc rulings. Conceptual ambiguity not only contributes to mission creep, but also tends to propagate concatenations of these ad hoc rulings.

Biomedical and Behavioral Risk

"Risk" is a complex teleological (goal-directed) concept that builds on a foundation of other concepts, most notably "harm," which itself is a moving target etymologically and subject to cultural drift. Most utilitarian ethicists employ a hedonistic calculus in assessing risk: they treat harm not as a free-standing concept, but as a ratio between potential costs (pains) and benefits (pleasures). This approach requires estimation of the magnitude (greater-lesser) of those harms, the probability (probable-improbable) of suffering them, and their duration (longer-shorter). Then researchers formulate a cost-benefit ratio. In theory, as the magnitude, probability, and duration rise, the more salient "informed consent" becomes.

In the real world, some observers might judge the assumption of particular risks to be objectively irrational and unacceptable, whereas others might regard the assumption of those risks to be rational and acceptable. For example, risks involving harms of low magnitude, low probability, and short duration would seem to be much easier to justify rationally than harms of high magnitude, high probability, and long duration. For the protection of research subjects, however, the initial focus must be on the magnitude of the initial harm. Magnitude, however, is inexorably contextual. Many desperate biomedical research subjects, for example, are already suffering from major harms, such as fear of imminent death, excruciating pain, or major disabilities, so they are often rationally willing to take greater risks.

Therefore, in the real world of scientific research, risk assessment by third parties on behalf of research subjects is notoriously imperfect because it must take into account these highly individualized and variable contexts. Unfortunately, risk assessment will always be imperfect, and unanticipated consequences will always plague research on humans. However, as good science progresses, unanticipated consequences become anticipated, and risk assessment becomes increasingly reliable.

Another problem with the review process is that the IRBs themselves may not be professionally qualified to assess risks, nor does the law empower them to formulate objective cost-benefit ratios. Instead, they tend simply to follow the ambiguous checklists expounded by the Common Rule. These guidelines, however, institutionalize a culturally based zero-risk preference; that is, IRBs interpret the Common Rule as a mandate to identify and to prevent any imaginable risk, regardless of the magnitude, likelihood, or duration of the possible harms. Zero-risk preference has increased not only IRBs' workload, but also the time and expense of conducting and teaching social-science research. Even more significant, zero-risk preference has led to another common practice: IRB avoidance, or scientists' tendency to choose research topics and methodologies excluded or exempt from IRB scrutiny. Unfortunately, this tendency usually means avoiding any interesting, useful, or remotely controversial research that might conflict with a college's mission statement, elicit a lawsuit, or offend generous alumni. In the light of endemic mission creep, however, IRB avoidance itself has become increasingly difficult to execute.

Many notoriously fuzzy distinctions involving harm show up in the IRB literature. The National Research Council, for example, has identified six categories of possible harm for which research subjects might be at risk: physical, psychological, social, economic, legal, and dignitary (Citro, Ilgen, and Marrett 2003). However, the most salient distinction is that between biomedical risks, which are physical, and behavioral risks, which are psychological, social, economic, legal, and dignitary (Labott and Johnson 2004).

In general, the magnitude, probability, and duration of harms tend to be more objective and measurable in biomedical research than in behavioral research. A good example of the regulation of biomedical research is the FDA's requirement of clinical trials to determine the safety and effectiveness of new drugs and medical devices. Risk assessment in this context entails anticipating potential physical harms, such as death, pain, and disability, as well as potential physical benefits, such as the prevention of death or the alleviation of pain and disability. Moreover, biomedical harms also tend to involve benefits and harms of greater magnitude than the harms normally associated with typical behavioral research. In short, biomedical risks are, at least in certain respects, more objective. (1)

Behavioral risks can be classified into psychological risks and social risks. Psychological risks include depression, altered self-concept, increased anxiety, decreased confidence in others, guilt, shame, fear, embarrassment, boredom, frustration, the reception of unpleasant information about oneself, and inconvenience. Social risks include stigma, decreased opportunities, and negative changes in relationships. Labott and Johnson (2004) conclude that social risks are less tangible, that bearing them offers the subject no potential benefits, that the probable harms and benefits are difficult to estimate, and that the absence of physical risk is often taken to imply the absence of any risk. Overall, behavioral risks are obviously much more difficult to specify, let alone quantify.

Although IRBs tend to subject behavioral research to less scrutiny, via "exempt" and "expedited" status, it is ultimately up to individual IRB chairs to decide whether a project falls into one of those categories. Therefore, almost all social-science research must be submitted to IRBs and exposed to the paternalistic instincts of individual chairs and their committees. This procedure adds substantially to researchers' and IRB members' workloads, especially at major research institutions.

Another important distinction between biomedical and behavioral research is that research subjects recruited for behavioral research usually have little or nothing to gain from participation, whereas many patients enrolled in biomedical research studies receive, at the least, free health care or access to experimental drugs (Labott and Johnson 2004). In the absence of financial incentives, the more bureaucratic hoops that research subjects are exposed to and the more cumbersome and time-consuming the IRB process, the more difficult it becomes to recruit a sufficient number of subjects for behavioral research. It takes much less regulatory zeal to destroy behavioral research than it does to destroy biomedical research.

One seemingly promising way to get at the distinction between biomedical and behavioral risk is the government's concept of minimal risk. The Code of Federal Regulations defines minimal risk as a situation in which "the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those encountered in daily life or during the performance of routine physical or psychological examinations or tests" (1991, 45 CFR 46.102[i]). The standard to be applied refers to the kinds of risks we all encounter in our daily lives, such as driving to work, crossing the street, or answering questions over the telephone (NBAC 2001). (Of course, it is well known that driving to work in the United States entails bearing substantial risks.) Once a line of research is designated as involving "minimal risk," the IRB chair can personally subject it to an "expedited review" without convening the entire committee. In principle, this option would seem to be a reasonable way to ease the burden on IRBs and researchers, but the concept of "minimal risk" in the real world is far from clear, and different IRB chairs often interpret it differently.

Moreover, in the information age the preservation of privacy has become a major political concern. Unfortunately, the concept of confidentiality is itself socially constructed and often viewed through the lens of deontological (rights-based) theory. The right to confidentiality, therefore, is often asserted as an absolute claim, independent of cost-benefit scrutiny. As paternalistic IRBs seek to enforce this zero-risk concept of unbounded confidentiality on behalf of research subjects, it becomes more difficult for researchers to construct protocols and to share information with other researchers.

Finally, IRBs and institutions themselves are highly contextualized. Some institutions with deep pockets are highly conservative and especially wary of lawsuits, whereas others are much less so. One IRB chair might classify all surveys, questionnaires, and interviews as involving minimal risk, whereas another, more paternalistic chair might imagine a host of risks. The basic problem is that even under ideal scientific conditions, it is extremely difficult to predict the magnitude, probability, and duration of behavioral risks. In our litigious society, this ambiguity has bred what one critic calls "the brave new world of research surveillance" (Nelson 2004) as well as windfall profits for trial lawyers and public-relations departments. However, litigation associated with behavioral research is conspicuous by its absence.

Vulnerable Populations

Another conceptual problem with the IRB regulatory mechanism is that it encourages committees to filter informed consent through the lens of vulnerable populations; that is, IRBs are required to decide whether the research subjects under their protection are members of designated classes of persons with diminished decision-making capacity: prisoners, children, fetuses, pregnant women, the mentally ill, and the elderly. Designation as a member of a vulnerable population implies diminished capacity to consent and therefore transfers consent from individual research subjects to paternalistic IRBs. The problem is that any research that involves vulnerable populations, regardless of the magnitude and probability of the risks, automatically brings the convoluted IRB application form before the full board, which usually leads to the manufacturing of imagined harms, overly paternalistic committee decisions, and revised protocols.

The concept of a "vulnerable population" is surprisingly elastic. Much of that elasticity springs from failure to specify the vulnerabilities. Kipnis, for example, recognizes six classes of vulnerability: cognitive (capacity to decide), juridic (subject to authority of others), deferential (willingness to defer decision making to others), medical (serious health-related condition), allocational (lack of goods--for example, health care--being offered by researcher), and infrastructural (researcher's ability to conduct the research safely and effectively) (2001, G6). Under the banner of the ever-expanding concept of "vulnerability," research projects on disaster victims, prostitutes, homosexuals, and disgruntled employees have raised IRB red flags of vulnerability and provoked IRB creativity.

Unfortunately, anthropologists and ethnologists have discovered how easily the vulnerability label can be applied to virtually any cultural group. Even the simple act of observing behavior can be construed as harmful to some primitive tribes. Moreover, the application of the IRB mechanism in these contexts is laughable at best, given that nonliterate primitive tribes cannot begin to understand the medico-legal jargon typical of IRB forms. In at least one case, an IRB rejected an anthropological research project involving a violent primitive tribe because the researcher was deemed vulnerable (Boster 2006).

By focusing inquiry on the classification of research subjects, IRBs tend to pay more attention to the research subjects' decision-making capacity relative to these elastic groupings than to a determination of the actual magnitude, probability, and duration of the harms to which the subjects are said to be vulnerable. Given that behavioral risks are extremely difficult to quantify, many IRBs overscrutinize harmless research on those vulnerable populations. Of course, in the long run this conservative regulatory approach will have a negative impact on the quantity and quality of research conducted on vulnerable populations--ironically, the very groups most likely to benefit from the research. As Yan and Munir point out, children and individuals with developmental disabilities are not only at risk as research subjects, but also at risk of being excluded from scientific research by overzealous protectionism:
 The parens patriae doctrine by legal guardians and IRBs should not
 only work in the direction of protection by exclusion, but by
 protection through inclusion. Often the risks are minimal, and the
 arguments that such participants are unable to consent are
 overstated. Furthermore, the conflicts of commitment by IRBs also
 may inadvertently prioritize institutional precautions and legal
 concern.... As it stands, urgent action is needed as most children
 and individuals with DD [developmental disabilities] receive less
 mental health care, poorer quality of care, and are underrepresented
 in mental health research. (2004, 45)


Moreover, as Whittle and associates (2004) point out, IRBs are not sufficiently instructed on how to distinguish adequately between the decision-making capacity of older children, who are capable of exercising informed consent to participate in research, and younger children, who are not.

Critics of the IRB process argue that board actions are not really aimed at protecting vulnerable research subjects from dangerous research, but at protecting vulnerable institutions from potential lawsuits and public-relations fiascos hastened by a growing cultural obsession with zero-risk lifestyles, an ever-drifting concept of harm, and growing regulatory tentacles.

Conducting Research and Teaching Research

Back in the 1990s, most U.S. colleges and universities voluntarily adopted the Common Rule as a means of regulating biomedical and behavioral studies performed on their campuses, regardless of whether the government provided funding for that research. In many institutions, this system also came to encompass research conducted by students. In social science, however, the goals of conducting research differ from those of teaching research. Overzealous IRB scrutiny of harmless student research can easily delay the completion of student projects, erode the student-teacher relationship, diminish student interest in scientific research, and systematically stifle behavioral research at the undergraduate and graduate levels. Much of what I have to say here is based on my own experience.

Teaching college students how to conduct behavioral research has as much to do with motivating them to want to do research as it does with teaching them how to do it. Students are usually motivated to conduct research on topics in which they have a passionate interest. Some student research projects, however, are not doable within course time constraints or with the student's knowledge base, and of course some student topics are simply ill conceived. Good teaching seeks to minimize the production of ill-conceived student research, but sometimes students can learn a great deal about how science works from less-than-perfect projects. In fact, all research is imperfect. Science ultimately has to do with the discovery of its imperfections through an extended process of trial and error. So most teachers try to balance student motivation with instruction of the fundamentals of research. IRBs tend to interfere with striking this balance by reducing science to "nuts and bolts" and to conformity to the Common Rule, often at the expense of interactive student-teacher relationships.

Moreover, when individual IRBs construct their own forms, they invariably interpret the Common Rule differently, which generates a great deal of systemic variation. Sufficient training for IRB members, teaching faculty, or students is rarely provided. These conditions often foster an unwelcome element of surprise in one's dealings with an IRB. It is also professionally and personally embarrassing when an IRB disapproves a teacher-approved student project.

When IRBs strike down or force modifications in harmless student research because of what some IRB members consider to be flawed research designs, students become discouraged and are deprived of the opportunity to learn from firsthand experience. Of course, most teachers do not appreciate time-consuming, fastidious IRB interference when they are trying to teach forty students how to conduct social-science research. Unfortunately, when the teacher is a junior faculty member and the overly paternalistic IRB chair is a senior professor, complaints are rare.

My IRB experience with graduate student projects on leadership was eye-opening. A colleague and I taught the course. We spent hours checking student IRB forms, and half the semester was consumed in getting their protocols past the committee chair. All of these projects involved harmless interviews and questionnaires to be done in the workplace. The overwhelming majority of the students' employers not only supported their research, but in many instances were paying for them to attend graduate school. All of my students found the IRB debacle to be nitpicking nonsense. Many of them ultimately received an "incomplete" for the course. It would be convenient simply to blame our IRB chair for this debacle. However, that person was not only a highly competent IRB chair and an established social scientist, but also an extraordinarily cooperative friend of mine. In short, the IRB fiasco is not about persons, but about a system.

After that initial experience, the program redefined the project so that all students could get IRB approval by providing the same answers on the form. This adaptation made IRB compliance less onerous, but it severely limited the students' choice of topics and deprived them of the opportunity to do real science. Since then, the course has introduced a whole new kind of research option for students that avoids IRB involvement. I surmise that in most educational settings, the demands of IRB compliance have led to requiring topics and projects that are easier to get past boards.

Alternatives to IRBs

It must be possible to protect research subjects in behavioral research without the IRB bureaucracy's involvement. The central issue is the locus of control: Who should be responsible for monitoring social-science research: an extraneous IRB, an academic department chair, a professional association, or an individual researcher? Scientists and their respective professional associations surely might get together and develop something more useful than the current system. Several piecemeal solutions seem promising, at least on the surface. One obvious reform that many colleges and universities have already adopted entails retooling the prevailing IRB structure by distinguishing between IRBs that regulate biomedical research and those that regulate behavioral research. The social-science board presumably would have at least a few social scientists as members, which would help to ameliorate some of the confusion between quantitative and qualitative research. That provision alone, however, cannot solve the economic problems associated with often unpaid, overworked IRBs, nor can it prevent overcautious, risk-averse social-science boards from manufacturing imaginary harms.

The current system, which has gradually devolved into a legal bulwark to protect deep-pocketed institutions from liability, has elevated collective responsibility over individual responsibility. Why sue a poverty-stricken graduate student in ethnology for asking embarrassing questions when you can sue a well-endowed university? Therefore, a more radical approach would be to transfer oversight of social-science research from the traditional IRB to academic departments and thereby reempower department chairs to regulate the research conducted by their own faculty, undergraduates, and graduate students without the added IRB burden. Ultimately, even IRBs must rely on the individual researcher's integrity. Realignment of responsibility admittedly would probably require substantial tort reform in order to protect colleges and universities from deep-pocket liability. It is still not at all clear, though, whether the liability risk associated with social-science research is real or imagined. I know of no such litigation.

Nevertheless, individual responsibility might be supplemented with the creation of a required course for both new faculty and students on behavioral-research ethics and the laws that govern informed consent and privacy. These new courses would emphasize the researcher's responsibility to comply with laws that protect research subjects. More important, the courses might also be used to acculturate the concept of scientific research as a form of personal expression on par with artistic expression. This effort might help to revive the lost right to scientific expression as a constitutionally protected activity, balanced by the scientist's duty to minimize objective harm to research subjects. Once the courses take root, perhaps the next generation of social scientists will be less willing to subjugate their research interests to the whims of omnipotent, external committees and more likely to cultivate responsible research more dedicated to freedom of inquiry. The reassertion of individual responsibility might also contribute to more useful and innovative social-science research in the future. Unfortunately, the cultural, political, and legal environment that currently envelops scientific research has become so group conscious and risk averse that we may have already "crossed the Rubicon"; in other words, the devolutionary forces that threaten the foundations of scientific culture in the United States may have already taken their toll.

At present, the IRB bureaucracy seems so entrenched, the ideology so pervasive, and the social scientists so weak-kneed that substantial reform appears unlikely. Social scientists' passive response, thus far, to the rising tide of censorship is certainly problematic, and, as Fish points out, often involves "divided loyalties." "However," he argues, "if social scientists do not stand up to fight the relentless institutional encroachments on academic inquiry, nothing of substance will remain open to their inquiry" (2005, 383). A single disgruntled, courageous researcher and an army of civil-rights lawyers may be enough to file a lawsuit in defense of the last vestiges of academic freedom. As Philip Hamburger (2004) has observed, however, the Supreme Court may itself be ill equipped to protect social science from this "new censorship."

Some signs suggest that the academic community may be poised to confront the IRB juggernaut. In June 2006, a subcommittee of the American Association of University Professors (AAUP) issued a report highly critical of the government's regulation of human subjects. The report calls for a national conference, coordinated by the AAUP, to consider the possibility of joint action. The committee concludes its report with the following warning: "[I]t cannot be strongly enough stressed that unless a focused strategy is adopted, and concrete steps taken, nothing will change. Indeed, it is possible that the requirement of advance IRB approval will come to be imposed even more broadly than it currently is" (AAUP 2006). Nevertheless, even if the AAUP manages to generate a unified front to resist the IRB juggernaut, the prospects for success seem dim, given the prevailing cultural environment.

Conclusion: The Rise of the Nanny State

IRB regulation's mission creep clearly reflects a much larger cultural shift in our understanding of moral responsibility. It involves a subtle movement toward the institutionalization of rule-driven collective responsibility at the expense of individual responsibility on the part of researchers and research subjects. For trial lawyers engaged in the liability industry, this movement almost universally signals a parallel shift from the relatively shallow pockets of individual researchers to the more lucrative deep pockets of institutions. As educational institutions circle the wagons in self-defense against an almost boundless liability threat, we may confidently anticipate an explosion of risk-free, politically correct, and mostly irrelevant scientific research. For powerless, rationally self-interested social-science researchers--often junior faculty members in pursuit of promotion and tenure--the best survival strategy will always be IRB avoidance, steering clear of all research that might be remotely associated with even the most ephemeral harms and avoiding politically charged or potentially offensive research topics. Why waste valuable time, energy, and resources on topics that might be sucked into an IRB black hole? For undergraduate and graduate social science students, avoidance of these black holes will lead not only to a decline in their interest in social science, but also to less research experience for them and thus to a dim future for social science in the United States.

Perhaps the most unsettling feature of the IRB regulation of scientific research is that it feeds our growing cultural obsession with a zero-risk public life. In the post-9/11 era, our unrealistic expectations for security and protection from remote harms, minor harms, and even personal inconvenience have greatly increased the government's powers. The gradual expansion of watchdog institutions--IRBs, ethics committees, advisory commissions, and presidential councils--has a cumulative effect not only on our personal liberty, but also on the nature and quality of scientific research. In the end, these watchdog commissions, which tend to change every four years, invariably become more political than moral. As government agencies continue to usurp political control over science through the expansion of governmentally sponsored research funding and overlapping regulatory commissions, we must continue to ask leadership's most important question: Who is watching the watchers?

When we consider the politics of balancing our collective interests in security, personal liberty, and the advancement of science, we must admit that these interests often seem to be at odds. Hypothetically, we can imagine a full-fledged Nanny State. On the surface, it seems to be an extraordinarily safe state in which to live, a place where paternalistic legislatures and efficient, omniscient, watchdog agencies regulate all personal risk taking: no more driving faster than ten miles per hour, no more smoking, no more fast food, no more offensive language, no more "wardrobe malfunctions," no more violent video games, no more lotteries, and so forth.

The fallacy of the Nanny State, however, is that in the absence of reliable scientific research, all risks are simply unknown and hence equally unacceptable. The Nanny State does not really make us any safer; only rigorous scientific research can do so by revealing the magnitude, probability, and duration of the potential harms that accompany human activities. The Nanny State does, however, make our lives very inoffensive, unobtrusive, and boring. So as our collective skins grow thinner in the face of an ever-increasing intolerance of unknown risks, we must be wary of the growth of insidious forms of bureaucratic control. The Nanny State not only encroaches on our personal liberty, but also undermines our fragile scientific institutions. Indeed, scientists themselves may soon find themselves on the government's official list of vulnerable populations.

Acknowledgments: Special thanks to Ann Hamilton for her valuable critique and editorial assistance in the preparation of an earlier version of this article, which was presented at the 2005 meeting of the Association for Politics and the Life Sciences in Washington, D.C.

References

American Association of University Professors (AAUP). 2001. Protecting Human Beings: Institutional Review Boards and Social Science Research. Washington, D.C.: AAUP. Available at: http://www.aaup.org/AAUP/About/committees/committee+Repts/commA/protecting.htm. Retrieved January 7, 2007.

--. 2006. Research on Human Subjects: Academic Freedom and the Institutional Review Board. Washington, D.C.: AAUP. Available at: http://www.aaup.org/AAUP/About/committees/committee+repts/CommA/ResearchonHumanSubjects.htm. Retrieved January 7, 2007.

Boster, J. 2006. Toward IRB Reform. Anthropology Newsletter 47: 21-22.

Burman, W. J., R. R. Reves, D. Cohn, and R.T. Schooley. 2001. Breaking the Camel's Back: Multicenter Clinical Trials and Institutional Review Boards. Annals of Internal Medicine 134: 152-57.

Chadwick, G. L., and C. M. Dunn. 2000. Institutional Review Boards: Changing with the Times? Journal of Public Health Management and Practice 6: 19-27.

Citro, C. F., D. R. Ilgen, and C. B. Marrett, eds. 2003. Protecting Participants and Facilitating Social and Behavioral Sciences Research. Washington, D.C.: National Academies Press.

Code of Federal Regulations. 1991. 45 CFR 46.102[i].

Communication Scholars' Narratives of IRB Experiences. 2005. Journal of Applied Communication Research 33: 204-30.

Emanuel, E. J., A. Wood, A. Fleishman, A. Bowen, K. Getz, C. Grady, C. Levine, et al. 2004. Oversight of Human Participants Research: Identifying Problems to Evaluate Reform Proposals. Annals of Internal Medicine 141: 282-91.

Ferraro, F. R., E. Szigeti, K. Dawes, and S. Pan. 1999. A Survey Regarding the University of North Dakota Institutional Review Board: Data, Attitudes, and Perceptions. Journal of Psychology 133: 272-80.

Fish, J. M. 2005. Divided Loyalties and the Responsibility of Social Scientists. The Independent Review 9, no. 3: 375-87.

Hamburger, P. 2004. The New Censorship: Institutional Review Boards. Supreme Court Review, 271-354.

Hamilton, A. 2002. Institutional Review Boards: Politics, Power, Purpose, and Process in a Regulatory Organization. Ph.D. diss., University of Oklahoma. Available at: http://members.cox.net/annhamilton/index.htm. Retrieved May 4, 2006.

--. 2005. The Development and Operation of IRBs: Medical Regulations and Social Science. Journal of Applied Communication Research 33: 189-203.

Higgs, Robert. 1994. Banning a Risky Product Cannot Improve Any Consumer's Welfare (Properly Understood), with Applications to FDA Testing Requirements. Review of Austrian Economics 7: 3-20.

Holt, E. 2003. The HIPAA Privacy Rule, Research, and IRBs. Applied Clinical Trials (June): 48-66. Available at: http://www.actmagazine.com/appliedclinicaltrials/articleDetail.jsp?id=80209. Retrieved May 4, 2006.

Keith-Spiegel, P. 2005. The IRB Paradox: Could the Protectors Also Encourage Deceit? Ethics and Behavior 15: 339-49.

Kipnis, K. 2001. Vulnerability in Research Subjects: A Bioethical Taxonomy. In Ethical and Policy Issues in Research Involving Human Participants, vol. 2, compiled by the National Bioethics Advisory Commission (NBAC), G1-G13. Bethesda, Md.: NBAC. Available at: http://georgetown.edu/research/nrcbl/nbac/pubs.html. Retrieved May 4, 2006.

Labott, S. M., and T. P. Johnson. 2004. Psychological and Social Risks of Behavioral Research. IRB: Ethics and Human Research 25: 11-15.

Levine, R. J. 1988. Ethics and Regulation of Clinical Research. 2d ed. New Haven, Conn.: Yale University Press.

McDaniel, D., M. Baker, and J. Lansink. 2002. IRB Accreditation and Human Subject Protection. Applied Clinical Trials (January): 32-38.

National Bioethics Advisory Commission (NBAC), comp. 2001. Ethical and Policy Issues in Research Involving Human Participants. Vol. 2. Bethesda, Md.: NBAC. Available at: http://georgetown.edu/research/nrcbl/nbac/pubs.html. Retrieved May 4, 2006.

Nelson, C. 2004. The Brave New World of Research Surveillance. Qualitative Inquiry 10: 207-18.

Peckman, S. 2001. Local Institutional Review Boards. In Ethical and Policy Issues in Research Involving Human Participants, vol. 2, compiled by the National Bioethics Advisory Commission (NBAC), XXX-XXX. Bethesda, Md.: NBAC. Available at: http://georgetown.edu/research/nrcbl/nbac/pubs.html. Retrieved May 4, 2006.

Peter, L. J., and R. Hull. 1969. The Peter Principle: Why Things Always Go Wrong. New York: William Morrow.

Shopes, L. 2000. Institutional Review Boards Have a Chilling Effect on Oral History. Perspectives (September). Available at: http://www.historians.org/perspectives/issues/2000/0009/0009viel.cfm. Retrieved May 4, 2006.

Whittle, A., S. Shaw, B. Wilfond, G. Gensler, and D. Wendler. 2004. Institutional Review Board Practices Regarding Assent in Pediatric Research. Pediatrics 113: 1747-52.

Yan, E. G., and K. M. Munir. 2004. Regulatory and Ethical Principles in Research Involving Children and Individuals with Developmental Disabilities. Ethics and Behavior 14: 31-49.

(1.) Editor's note: For an argument that even in biomedical regulation, optimal risk bearing for each individual ultimately remains a subjective matter involving costs and benefits that are inaccessible to third parties, however well intentioned they may be, see Higgs 1994.

Ronald F. White is a professor of philosophy at the College of Mount St. Joseph.