
Academic Integrity and Legal Scholarship in the Wake of Exxon Shipping, Footnote 17

INTRODUCTION

Midway through his majority opinion in Exxon Shipping Co. v. Baker, (1) Justice Souter inserted a short, three-sentence footnote:
   The Court is aware of a body of literature running parallel to
   anecdotal reports, examining the predictability of punitive awards
   by conducting numerous "mock juries," where different "jurors" are
   confronted with the same hypothetical case. See, e.g., C. Sunstein,
   R. Hastie, J. Payne, D. Schkade, W. Viscusi, Punitive Damages: How
   Juries Decide (2002); Schkade, Sunstein, & Kahneman, Deliberating
   About Dollars: The Severity Shift, 100 Colum. L. Rev. 1139 (2000);
   Hastie, Schkade, & Payne, Juror Judgments in Civil Cases: Effects
   of Plaintiff's Requests and Plaintiff's Identity on Punitive Damage
   Awards, 23 Law & Hum. Behav. 445 (1999); Sunstein, Kahneman, &
   Schkade, Assessing Punitive Damages (with Notes on Cognition and
   Valuation in Law), 107 Yale L.J. 2071 (1998). Because this research
   was funded in part by Exxon, we decline to rely on it. (2)


To the casual reader, Footnote 17 seems innocuous enough. But to members of the scholarly and legal communities it is anything but bland. Perhaps not since Footnote 4 in United States v. Carolene Products (3) or Footnote 11 in Brown v. Board of Education (4) has a footnote in a Supreme Court opinion generated so much debate. (5)

Why has this little footnote created such a big stir? Part of the answer is that Footnote 17 is akin to the beast in the tale of the blind men and the elephant. In an effort to "see" the elephant, each man touches a different part--and only one part: the elephant's leg, its tail, or its trunk. As a result, the men can't agree on what the elephant looks like: a pillar, a rope, or a tree.

And so it goes with Footnote 17. When experimentalists read it, they tend to focus on the first sentence and express resentment at Justice Souter's use of a second set of quotes around the word "jurors." To them, the scare quotes mock experiments for their failure to capture the experience of serving on a real jury. (6)

To legal academics working in the area of punitive damages, the second sentence of Footnote 17--the string citation--moves to the fore. In their corner of the world, the footnote reignites fierce debates over the various studies Justice Souter cites. (7) Representing one side, Jeffrey Rachlinski argues that "[t]he Justices say they did not rely on the [studies] Exxon funded, even though it is actually very good work AND they seem to have been affected by it." (8) On the other side, David Hoffman--among other important academics (9)--has expressed agreement with the Court's "skepticism" about the cited works. (10)

The third and final sentence of Footnote 17 ("Because this research was funded in part by Exxon, we decline to rely on it.") also has been the subject of much discussion, especially among those who conduct empirical research. But while the first two sentences of Footnote 17 have generated debate and even resentment, the reaction to the third sentence has been almost uniformly positive. Even those who disagree over the value of the cited studies praise sentence three. As Rachlinski wrote, the Justices "disparage funded studies in Footnote 17. Now, I agree funded studies can be suspect ... so maybe that is a good move." (11) Hoffman put it this way: "I was ... fairly shocked to see the Court acknowledge the problem of deep capture in such an open way." (12) Hoffman and Rachlinski are hardly alone. When we looked over the existing commentary, we came across only one scholar who questioned the Court's refusal to rely on the cited studies; (13) the other commentaries all expressed enthusiasm for the Court's approach. (14)

For the reasons we lay out in Parts I and II, the exuberance over Footnote 17 may be understandable, but at bottom we think it is misguided: courts and attorneys should not assess the integrity of academic work solely by whether an interested party funded it. Rather, the legal community should adopt the standards already in place in the scientific community. Part III explains the most prominent of these standards--reliability, validity, and transparency--and demonstrates how, with the help of empirically minded law scholars, appellate court judges and litigators could apply them.

I. THE OVERWHELMING RESPONSE TO FOOTNOTE 17, SENTENCE 3: "GOOD FOR THE COURT"

When it comes to the last sentence of Footnote 17, the response we have heard (and read) time and time again is: "Good for the Court!" (15) And indeed this was our first response. Coincidentally enough, when Exxon Shipping came down, one of us (Epstein) was leading a workshop on conducting empirical legal scholarship, and had just finished a lecture on how hypothesis testing differs from representing a client. While a lawyer helps make the best case for his client by dismantling the opposition's arguments, a scientist strengthens her hypothesis by making the strongest possible case against it. (16)

To the students in the workshop, Footnote 17 embraced this very idea--that research is, or at least should be, different from lawyering. The footnote reflects the intuition that money exerts a pull on perspective: when people pay you, your work product tends to reflect their interests.

And, in fact, this is not mere intuition; substantial evidence exists to support the claim that conclusions follow from the money. (17) As Nature, the science journal, summarizes it,
   [E]vidence [on commercial sponsorship of research] is consistent
   with the truism that although, in principle, science may be
   objective and its findings independent of other interests,
   scientists can be imperfect and subjective. There are circumstances
   where selection of evidence, interpretation of results or emphasis
   of presentation might be inadvertently or even deliberately biased
   by a researcher's other interests. (18)


Doubtless this is true. As all data analysts know, every empirical project requires the researcher to make scores of decisions--from how to draw a random sample to the particular regression models, of the hundreds estimated, to present--all of which could go either way. Even seemingly straightforward tasks confront the researcher with a nearly uncountable number of choices.

By way of illustration, suppose we wanted to study the amount of sponsored research undertaken by legal scholars. We might start with a stack of law review articles and then, for each article in which the author thanks an industry, government agency, or organization for funding support, we would enter a "1" on our spreadsheet. Articles that do not explicitly acknowledge such support would receive a "0." (This is known as "coding data," which is the process of translating properties or attributes of the world (i.e., variables) into a form that researchers can systematically analyze and is a part of nearly every empirical project.) (19)
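
To make the protocol concrete, here is a minimal sketch of how such a coding scheme might be implemented. It is offered purely as an illustration: the keyword list, the sample acknowledgments, and the helper function are hypothetical, not the rules we actually applied.

```python
# A minimal sketch of the 1/0 coding protocol described above.
# The FUNDING_CUES list and the sample acknowledgments are hypothetical
# illustrations of the idea, not our actual coding rules.
FUNDING_CUES = ("funded by", "support from", "grant from", "financial support")

def code_funding(acknowledgment: str) -> int:
    """Return 1 if the acknowledgment suggests funding support, 0 otherwise."""
    text = acknowledgment.lower()
    return int(any(cue in text for cue in FUNDING_CUES))

articles = [
    {"title": "Article A", "ack": "We thank the XYZ Foundation for financial support."},
    {"title": "Article B", "ack": "For helpful comments, we thank our colleagues."},
]

for article in articles:
    print(article["title"], code_funding(article["ack"]))
# Article A 1
# Article B 0
```

Even in this toy version, the hard part is not the mechanics but the coding rules themselves, exactly the sort of judgment calls the next paragraph describes.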

Sounds simple, right? Wrong. For this article, we attempted to implement this very protocol and immediately ran into problems. What, for example, do we do with an author who works for the National Bureau of Economic Research (NBER)? While the author's acknowledgment may not mention funding for the particular article in question (and so would not count as a funding disclosure under our coding system), isn't his remuneration a form of support? Maybe yes, maybe no. Then there is the author employed by the National Center for State Courts. If he does not acknowledge support from the National Center, we confront the same problem as we did for the NBER, plus yet another complication: the National Center receives contributions from numerous corporations, including 3M, Dow Chemical, Ford, and so on. (20) This raises the question of what to do about indirect funding--even if the author acknowledges support from, say, the National Center for State Courts, ought we to dig deeper and check sponsorship records or even the affiliations of board members?

The point is that many of the decisions we made in this small-scale study--as in all research--could have gone either way. Did we try to make them without regard to the conclusions we would ultimately reach? Sure, but humans tend to be biased in small, imperceptible ways--and, just as surely, they may allow their funder to affect their choices. If our benefactor were concerned with exposing the role of industry in legal research, perhaps we would have scoured the records of the various organizations. If, on the other hand, our funder were an industry interested in minimizing the role of corporate support, we may have adopted more restrictive coding rules. Either way, even this seemingly small decision--just one of many we would make over the course of the research--could have a big impact on our conclusions. As Nature explains, "[i]t would be naive to think that such incentives have no effect on what gets published, and indeed several studies of the clinical literature have demonstrated associations between the conclusions of a study and the source of its funding." (21)

II. ANOTHER RESPONSE: "A MISGUIDED APPROACH TO ACADEMIC RESEARCH"

All of this goes to a simple point: treating funded research with some degree of skepticism is not without merit. For this reason, it is easy to imagine why many would agree with Barday's sentiment: the type of "hired-gun" research condemned in Exxon Shipping is "problematic even if the results [are] accurate because ... it creates an appearance of bias." (22)

On reflection, though, Barday's reaction and, more importantly, Footnote 17 itself push the point way too far, evincing a misunderstanding of empirical research. With regard to Footnote 17, this problem manifests in two ways: a failure on the part of the Justices, first, to consider the troubling implications of their approach to sponsored research; and second, to contemplate (much less apply) existing norms within the scientific community. (23) Both claims deserve some elaboration.

A. The (Troubling) Logic of Footnote 17

Of the troubling aspects of the majority's view, two are worthy of attention. First, regardless of whether we take a narrow or wide view of Footnote 17's breadth, (24) under current circumstances no one--no judge, Justice, attorney, or organization--can implement the Court's command with any degree of certainty. The reason is simple: the vast majority of law journals lack conflict of interest policies, (25) meaning that it is entirely possible that studies the Court cites favorably and without disclaimer were funded by an industry with an interest in the case or even by the party itself.

Actually, we need not speculate because this occurred in Exxon Shipping. In one spot, Justice Souter cited (with approval) (26) a study that drew on "the jury verdict database constructed by the Institute for Civil Justice (ICJ) at RAND." (27) It takes only a few clicks on the Internet to learn that RAND's clients and contributors have included GlaxoSmithKline, Pfizer Inc., Aetna, and other companies that we might reasonably expect to desire limits on punitive damages. (28)

Then there is an article by Eisenberg et al., which the Exxon Shipping Court also used--or misused, as Eisenberg relayed to Epstein (29)--with approval. By all accounts, this is a methodologically sound study, but it is nonetheless worth noting that four of the seven co-authors are affiliated with the National Center for State Courts. (30) As it turns out, among the Center's corporate sponsors is none other than the ExxonMobil Foundation, which contributed $10,000 or more for at least twenty years. (31) It seems absurd to believe that the Court should have "decline[d] to rely" on the Eisenberg et al. research, but Footnote 17 could suggest as much.

From this perspective, one might argue (although we do not take this position) that the biggest mistake made by the authors of the Footnote 17 studies was not taking the money to do the research, but simply saying they took it. This raises a second problem with Footnote 17: it sets up a perverse incentive structure. If researchers accept funding and disclose their source, a judge is free to ignore their work. If researchers accept funding and do not disclose it, they may not be violating any formal policy of the law journals (though perhaps they would be violating their funders' policies), (32) but some in the legal community--ourselves included--would be deeply troubled.

Alternatively, under Footnote 17's logic, researchers could decline funding altogether even if it meant forgoing the project due to a lack of resources. The problem here is all too obvious: rejecting funding for important research may not be in the public's best interest. As the editors at Nature explain: "[We do not want to suggest] that commercial research is a bad thing per se, or that financial conflicts of interest are leading to a widespread undermining of research integrity. In general, we believe that good business goes hand-in-hand with good science, and the basic research community has clearly benefited greatly from the academic-industrial links that have grown so rapidly over the past few years." (33)

Indeed, we can only imagine the consequences of medical researchers declining industry money or of medical journals rejecting articles solely because the authors received commercial support. Surely many valuable studies would never see the light of day (that is, if they were undertaken at all). By way of example, consider a recent volume of the New England Journal of Medicine in which two of the three original articles received industry support. The first, the lead article, demonstrated how dosages of the drug simvastatin (which works by slowing the production of cholesterol in the body) affect people with certain genetic markers differently. (34) Merck, a producer of a popular brand of simvastatin, Vytorin, was among the study's funders. The second article evaluated the efficacy and safety of adalimumab for use in the treatment of juvenile rheumatoid arthritis. (35) The researchers concluded that adalimumab therapy seems to be "an efficacious option." (36) The brand name of adalimumab is Humira, which is made by Abbott Labs, a supporter of the study and the employer of three of the researchers. (37)

Would proponents of Footnote 17's last sentence have wanted the New England Journal of Medicine to reject these studies because of "an appearance of bias"? Would they "decline to rely" on the studies' findings because the investigators took money from the very industries that stood to benefit from them? We think not--especially not if they had high cholesterol or a child with rheumatoid arthritis, in which case they might be worse off if the studies had never seen the light of day.

More generally, these two articles are just the tip of the iceberg. Industry money is widespread in medical and even basic science research. An analysis of biomedical studies (out of Massachusetts), for example, found that one out of every three authors had financial interests in the results. (38)

Industry funding of empirical legal studies seems to be less frequent (though without disclosure policies it is hard to develop even a ballpark estimate), (39) but that is beside the point. Discouraging commercial sponsorship of legal research should be no more attractive than it is for medical research. As Hasen puts it: "[T]here will be cases ... in which there are no extant studies on an empirical question at the heart of a case. At that point, it makes sense for litigants to fund such research." (40) Judicial decisions may cause huge sums of money to change hands, may grant individuals freedom or cut short their lives, and may alter or preserve our most important social, political, and economic institutions--and judges often base these decisions on academic research. Footnote 17 threatens to cut off the flow of information to judges, and it is hard to believe that this will result in better decision-making. In short, legal academic research is simply too important to be underfunded.

B. The (Wrong) Way to Assess Empirical Research

These concerns with the logic of Footnote 17, however serious, may be the least of it. More irksome is the Court's ignorance (or blatant disregard) of the time-tested standards--most notably, reliability, validity, and transparency (the subjects of Part III below)--that the academic community uses to assess empirical research. By dropping the footnote, the Exxon Shipping Court implies that, even if research is reliable, valid, and transparent, it is still biased or otherwise lacking integrity if parties with a direct interest in the lawsuit (41) support it. Apparently, the Court believes automatic disqualification is the only solution.

To us, this borders on an ad hominem attack--and one fueled, most unfortunately (and however inadvertently), by some in the legal academy. Exhibit 1 is a critique of one of the works cited in Footnote 17 that starts this way: "This article addresses tort reform claims made by [Sunstein et al.] in Punitive Damages: How Juries Decide and related articles, research that was largely underwritten by the Exxon Corporation." (42) The snarkiness is qualified in a footnote, (43) but other writers have been quick to repeat it. Barday, for example, cites it to support her claim that "some scholars" are now questioning "whether industry money has infiltrated the sanctity of the legal academy and begun to distort legal scholarship." (44)

More to the point, in much the same way Footnote 17 is jarring, we find it odd to spearhead a scholarly critique by taking a stab at the funding source. A better approach is the one taken by Richard Lempert. In a critique of another study funded by Exxon, he wrote that it "would be a mistake" to dismiss the study "because of Exxon's sponsorship alone.... The research reports of these scholars and the policy recommendations they derive from their research deserve careful attention, regardless of who has paid for the study." (45)

In other words, judging research by who funded it is simply not the way the academic community proceeds. Kevin Quinn, a professor at Berkeley, is exactly right when he writes:
   While it may well be the case that some research that is funded by
   interested parties is bad research (and thus should be ignored
   because it is bad research), it is not necessarily the case that
   all, or even most, such research is biased or inaccurate. One
   should be able to determine the quality of the research based on
   things other than who funded the research. Competently done
   empirical research should follow certain rules.... (46)


Most notably, the research must be reliable, valid, and transparent.

By this, neither we nor Quinn mean to imply that funding is irrelevant. Certainly, if a party with a direct or even indirect interest--economic, social, or otherwise--in the litigation supports the research, more skepticism may be in order in light of evidence that funding can affect research conclusions. What we do mean to suggest is that the Footnote 17 approach is wrong: we should not reject research based solely on the funding source. Simply because research is funded by industry does not make it biased in favor of the funder; and simply because research is not funded by industry does not make it unbiased. It is probably the case that money is more likely to follow views than views are to follow money. Indeed, as James Lindgren relayed to us, it is hard to identify scholars who have changed their views because they received financial support from a corporation, organization, or foundation. (47) Interested groups do not seem to select scholars at random; they select scholars who are doing the type of work they wish to further. Only by applying the criteria of reliability, validity, and transparency (outlined below) can we reach conclusions about the integrity of the work.

III. EDUCATING THE COURTS

The takeaway so far is straightforward enough: Money is important, but not all-important. Funding provided by an interested party may--or should--engender a healthy skepticism about the quality of the research, but the Court's solution of rejecting research merely because an interested party supported it is not just simple-minded; it is misguided.

Rather than continue to express frustration in response to the Court's approach, empirical researchers should use Footnote 17 as an opportunity to educate appellate judges and lawyers about the appropriate standards for judging the integrity of research, as well as to urge the academic community--especially the legal academy--to enhance existing practices and procedures.

In practice, this means explaining the criteria empirical scholars use to assess research--the clearest, simplest in conception, and time-tested of which are reliability, validity, and transparency (48)--and providing suggestions on how to implement them through easy-to-follow signaling devices: reliability via replication, validity via peer review, and transparency via disclosure policies. (49)

A. Implementing Reliability through a Replication Standard

A threshold requirement of empirical research is reliability: the extent to which it is possible to replicate a measurement, reproducing the same value (regardless of whether it is the right one) on the same standard for the same subject at the same time. If any one of us stepped on the same bathroom scale 100 times in a row, and if the scale were working reliably, it would register the same weight 100 times in a row (even if the weight was not accurate). (50)

If the bathroom scale produced a different weight each time we stepped on it, we would conclude that the scale was unreliable, and either take it to a repair shop or throw it out. We would do exactly the same with unreliable measures (or measurement procedures) in empirical research. If we asked two colleagues to reproduce our study on funding sources in the law reviews using our procedures (51) and they coded the National Center for State Courts as an "industry" and we coded it as an "organization," we would deem our procedures unreliable and try to develop new ones.
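
One common way to check a procedure of this sort is to have two coders classify the same sources independently and then compute how often they agree. The sketch below is a generic illustration of such an intercoder reliability check, not part of our study; the labels and the use of Cohen's kappa (a chance-corrected agreement statistic) are our own assumptions for the example.

```python
# A sketch of an intercoder reliability check: two coders independently
# classify the same sources, and we compute their agreement. The labels
# below are hypothetical.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of items on which the two coders assign the same label."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for the agreement expected by chance alone."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

coder_a = ["industry", "organization", "government", "organization", "industry"]
coder_b = ["industry", "industry", "government", "organization", "industry"]

print(percent_agreement(coder_a, coder_b))        # 0.8
print(round(cohens_kappa(coder_a, coder_b), 2))   # 0.69
```

Low agreement, as with the second source here (coded "organization" by one coder and "industry" by the other), would tell us that our written rules need sharpening before we rely on the resulting data.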

Unreliable measurement procedures are of concern for many reasons, but especially relevant here is that they may provide evidence that the researcher, intentionally or otherwise, has biased a measure in favor of her pet hypothesis. That is one reason why researchers take many steps to ensure the reliability of their studies--mostly, they attempt to minimize the role of human judgment in measurement, which typically entails writing down very precise rules for coders to follow.

Obviously we cannot expect judges and lawyers to retrace or even review all our precautionary steps. What academic researchers can do instead is signal their commitment to reliability in a very straightforward way: by adhering to the replication standard. This standard commits researchers to supplying enough information about their study--including their data--that a third party could replicate their results "without any additional information from the author." (52)

How might we signal our intent to follow this standard? One approach is to convince the law journals to adopt a replication policy. Here is an example from Political Analysis, the most cited (53) journal in political science:
   Political Analysis adheres to a simple replication standard. Unless
   otherwise noted, appropriate replication materials, including, at a
   minimum, sufficient data and computer code to allow any reader to
   reproduce the results of the article, will be permanently posted on
   the Society's Political Analysis Web site.... (54)


Given law reviews' obsession with documenting textual material--unpublished conference papers, e-mail communications, blogs, and the like--we cannot imagine that they would have much trouble implementing a replication policy. In fact, it is hard to understand why they haven't done so already. Data used by researchers in an article to advance key claims should be treated with as much (if not more) care and respect than, say, a blog posting cited in a footnote.

B. Signaling Validity Through Peer Review

The second criterion, validity, is the extent to which a reliable measure reflects the underlying concept being measured. Suppose a professional football player stepped on the best scale in the world--the "gold standard" scale--and it registered 300 pounds. Further suppose that he then stepped on and off an ordinary bathroom scale 100 times in a row and it registered only 75 pounds each time. We would say that the bathroom scale was reliable but not valid. Only if the scale registered 300 pounds, say, 100 times in a row would we conclude that the scale was reliable and valid. Put another way, a reliable measure is not necessarily valid, but a valid measure must be reliable.

For this reason, academic researchers strive to create measurement procedures that are both reliable and valid. Attaining and assessing validity, though, is much harder than it is for reliability, primarily because legal scholars typically lack gold standards. To determine whether, for example, the National Center for State Courts is better classified as an industry or something else, there is no universally recognized authoritative source we can consult.

This is not to say that researchers fail to assess the validity of their measures. They do, typically by adhering to established criteria such as face validity, lack of bias, and efficiency. (55) What we want to suggest instead is that we cannot expect lawyers and judges to analyze all the precautions investigators have taken to ensure valid measures, just as we cannot expect them to conduct their own reliability analyses of our work (or even inspect the work we have done). Instead, we must signal our commitment to validity.

One method for so doing, however imperfect, is peer review. (56) While it may have its detractors, we believe its benefits far outweigh any perceived costs. As Richard Lempert once suggested, it is a form of cross-examination of our work, (57) and we couldn't agree more. Indeed, it is such an important form that many empirical legal scholars look skeptically at non-peer reviewed research in much the same way that Chief Justice Rehnquist questioned studies that never faced cross-examination at trial. In his dissent in Atkins v. Virginia, the Chief criticized the majority's reliance on certain surveys that did
   not indicate why a particular survey was conducted or, in a few
   cases, by whom, factors which also can bear on the objectivity of
   the results. In order to be credited here, such surveys should be
   offered as evidence at trial, where their sponsors can be examined
   and cross-examined about these matters. (58)


We believe peer review, a form of cross-examination conducted by experts in the relevant field (coupled with the disclosure policies discussed in the next section), can serve the interests of truth-finding at least as well as traditional cross-examination at trial.

Along these lines, one of the great ironies of Exxon Shipping is that the Court had no misgivings about citing studies that apparently did not undergo peer review. (59) Footnote 13 may provide a particularly egregious example. Justice Souter cites the RAND Institute for Civil Justice, D. Hensler & E. Moller, Trends in Punitive Damages--a study that has the following note on its cover: "The RAND unrestricted draft series is intended to transmit preliminary results.... Unrestricted drafts have not been formally reviewed or edited. The views and conclusions expressed are tentative. A draft should not be cited or quoted without permission of the author, unless the preface grants such permission." (60) In the preface, the authors further underscore the point: "This paper presents the preliminary results of ... an analysis of trends in punitive damage awards. This research ... is still underway. This draft has not been peer-reviewed, and some numbers and text may be revised before final publication." (61) It is simply inexplicable that the Exxon Shipping Court cited this study with approval but "declined to rely" on a book published by the University of Chicago Press (Sunstein et al.'s Punitive Damages: How Juries Decide).

Some legal publications--including the Journal of Law, Economics & Organization and the Journal of Empirical Legal Studies--have sought to eliminate the problem by instituting peer review, and we should applaud them for doing so. (62) Peer review is a hallmark of serious academic research. It is a requirement for full membership in the Association of American University Presses, a regularized part of the National Science Foundation and the National Institutes of Health grant review process, and a requirement of the top journals in the sciences and social sciences. (63)

As for the traditional, student-edited law reviews, many in the academy have argued, and argued for decades, that they should move to peer review systems. Perhaps Footnote 17 provides additional rationale for their project, but at the end of the day, we think it is probably quixotic. (64) If law reviews have not moved to peer review by now, they may never.

What empirical legal researchers should do instead is commit themselves to publishing their work in peer reviewed journals. This might entail creating new journals along the lines of the Journal of Law, Economics & Organization or the Journal of Empirical Legal Studies; (65) it may even result in a division of specialization, in which the law reviews publish traditional doctrinal or interpretive work and the peer reviews publish more data-driven studies. Either way, committing ourselves to cross-examination by our peers would go some distance toward signaling our commitment to producing valid research.

C. Transparency and the Importance of Disclosure Policies

Finally, let us consider the practice Footnote 17 most explicitly implicates: transparency. Just as reliability and validity in our methods are crucial, so too is transparency about our competing interests. This is obvious for the consumers of academic research. In evaluating research, readers should look more closely, even more skeptically, at the work if financial incentives are involved, as we have stressed throughout. But transparency is equally valuable for the producers of academic research. Were we to commit to disclosing any competing interests, it would have the effect of making all our work more credible. As Nature Neuroscience explains, "[i]f financial disclosure in scientific publication becomes the norm, this will increase public confidence in the integrity of the research enterprise--a result that will be to everyone's benefit." (66)

To signal their commitment to transparency, Nature and many other science and medical journals have instituted various disclosure policies. (67) Some legal publications, most notably the Journal of Empirical Legal Studies (JELS), have followed suit: "Authors are expected to disclose any commercial or other associations that might pose a conflict of interest with a submitted article."

This is a step in the right direction. No one reading an article in JELS can make the claim that the authors took money for their research that they did not disclose.

Unfortunately, though, JELS is the exception. We learned as much by asking the top twenty law reviews whether they had competing interest disclosure policies; none did. (68) This is a striking fact. As Shari Diamond pointed out to us, we in the legal community are punctilious to the point of obsession about conflict-of-interest policies. (69) The current law review model is thus inconsistent with the legal profession's own approach to conflicts, and we must convince our law reviews of the irony.

There is yet another complication with instituting disclosure policies in the law reviews: discerning exactly what counts as a conflict of interest. Note that the JELS policy speaks of commercial or other associations. The "or other" is especially crucial in legal research--a point Exxon Shipping highlights. When we were trying to discover what motivated Justice Souter to write Footnote 17, Linda Greenhouse suggested to us that the footnote might have had more to do with the Second Amendment case, District of Columbia v. Heller, (70) than with Exxon Shipping. (71) Apparently one or more of the Heller dissenters may have harbored anger over the majority's use of historical studies written by authors with ties to gun groups. (72)

If this is so, then Heller and Exxon (73) suggest the need to develop disclosure policies that make transparent not only financial ties but organizational affiliations as well. Along these lines, Nature's policy might serve as a starting point for legal publications:
   Competing interests are defined as those of a financial nature
   that, through their potential influence on behaviour or content or
   from perception of such potential influences, could undermine the
   objectivity, integrity or perceived value of a publication....

      It is difficult to specify a threshold at which a financial
   interest becomes significant, but note that many US universities
   require faculty members to disclose interests exceeding $10,000 or
   5% equity in a company. Any such figure is necessarily arbitrary,
   so we offer as one possible practical alternative guideline: Any
   undeclared competing financial interests that could embarrass you
   were they to become publicly known after your work was published.
   (74)


A detractor might argue that this "embarrassing" standard is vague and unworkable, but it is better to err on the side of inclusiveness. Not much harm will come from a researcher disclosing a fairly nebulous conflict, whereas concealment of even the most tenuous connection might raise eyebrows. Of course, policies of this sort occasionally lead to very long disclosures. (75) But that should pose no difficulty for our law journals, which, after all, pride themselves on careful documentation in footnotes.

CONCLUSION

Reliability, validity, and transparency. These are the most prominent standards empirical researchers use to assess the integrity of research. Clearly the legal academy has some work to do to better signal its commitment to them. But just as clearly these standards--not the Court's approach--are the best available criteria to detect bias in research.

Which leaves only one question: Do the rejected Footnote 17 studies meet these standards? That is to say, are they reliable? Valid? Transparent? We leave these questions to the experts on juries and punitive damages--our fellow elephant-touchers--to address. What we can say is that these are the questions the Justices should have posed in Exxon Shipping, and the ones the legal academy should insist they pose in the future.

(1.) 128 S. Ct. 2605 (2008) (holding that punitive damages awards in maritime cases generally should not exceed a one-to-one ratio to compensatory awards, thus reducing the punitive damages Exxon was to pay from $2.5 billion to $507 million). It seems likely that the decision will have implications beyond the admiralty context. See, e.g., Jeffrey L. Fisher, The Exxon Valdez Case and Regularizing Punishment, 26 ALASKA L. REV. 1 (2009).

(2.) Exxon Shipping Co., 128 S. Ct. at 2626 n.17.

(3.) 304 U.S. 144, 152 (1938). For more on "the most famous footnote in constitutional law," which announced the idea of various levels of judicial scrutiny, see, e.g., Felix Gilman, The Famous Footnote Four: A History of the Carolene Products Footnote, 46 S. TEX. L. REV. 163 (2004).

(4.) 347 U.S. 483, 494 (1954). The footnote cited social science studies to support the claim that school segregation harms black children. For various controversies surrounding Footnote 11, see, for example, Jack M. Balkin, Rewriting Brown: A Guide to the Opinions, in WHAT BROWN V. BOARD OF EDUCATION SHOULD HAVE SAID 50-53 (Jack M. Balkin ed., 2001).

(5.) See, e.g., Adam Liptak, From One Footnote, a Debate Over the Tangles of Law, Science and Money, N.Y. TIMES, Nov. 24, 2008, at A16; Tony Mauro, Souter Causes Stir with Footnote in 'Exxon' Case, LEGAL TIMES, July 8, 2008, available at http://www.law.com/jsp/scm/PubArticleSCM.jsp?id=1202422815132; Election Law Blog, http://electionlawblog.org/archives/011086.html (June 25, 2008); Posting of Ashby Jones to Wall Street Journal Law Blog (July 7, 2008), http://blogs.wsj.com/law/2008/07/07/justice-souter-and-the-mystery-of-footnote-17; Posting of Dave Hoffman to Concurring Opinions, http://www.concurringopinions.com/archives/2008/06/capture_academi.html (June 25, 2008).

(6.) Thanks to Shari Diamond for making this point to us. For discussion of methodological concerns associated with mock juries, see Brian H. Bornstein, The Ecological Validity of Jury Simulations: Is the Jury Still Out?, 23 L. & HUM. BEHAV. 75 (1999); Robert M. Bray & Norbert L. Kerr, Use of the Simulation Method in the Study of Jury Behavior: Some Methodological Considerations, 3 L. & HUM. BEHAV. 107 (1979); David L. Breau & Brian Brook, "Mock" Mock Juries: A Field Experiment on the Ecological Validity of Jury Simulations, 31 L. & PSYCHOL. REV. 77 (2007).

(7.) For a glimpse into these debates, see, for example, Neal R. Feigenson, Can Tort Juries Punish Competently? 78 CHI.-KENT L. REV. 239 (2003) (book review); Steven Graber, Commentary, Punitive Damages and Deterrence of Efficiency-Promoting Analysis: A Problem Without a Solution?, 52 STAN. L. REV. 1809 (2000); Richard Lempert, Juries, Hindsight, and Punitive Damage Awards: Failures of a Social Science Case for Change, 48 DEPAUL L. REV. 867, 868-71 (1999); Robert J. MacCoun, Epistemological Dilemmas in the Assessment of Legal Decision Making, 23 LAW & HUM. BEHAV. 723 (1999); Catherine M. Sharkey, Punitive Damages: Should Jurors Decide?, 82 TEX. L. REV. 382 (2003) (book review); Neil Vidmar, Experimental Simulations and Tort Reform: Avoidance, Error, and Overreaching in Sunstein et al.'s Punitive Damages, 53 EMORY L.J. 1359 (2004).

(8.) Email from Jeffrey J. Rachlinski, Professor of Law, Cornell Law School, to Lee Epstein, Professor of Law, Northwestern University School of Law (Aug. 1, 2008) (on file with authors).

(9.) See, e.g., Lempert, supra note 7; Vidmar, supra note 7.

(10.) Hoffman, supra note 5; see also David A. Hoffman, How Relevant is Jury Rationality?, 2003 U. ILL. L. REV. 507 (2003) (book review).

(11.) Rachlinski, supra note 8.

(12.) Hoffman, supra note 5.

(13.) Hasen, supra note 5.

(14.) See, e.g., Hoffman, supra note 5; Liptak, supra note 5; Rachlinski, supra note 8; Shireen A. Barday, Note, Punitive Damages, Remunerated Research, and the Legal Profession, 61 STAN. L. REV. 711 (2008).

(15.) See, for example, Barday, supra note 14, for criticism of remunerated research in the wake of Exxon Shipping.

(16.) This follows from Lee Epstein & Gary King, The Rules of Inference, 69 U. CHI. L. REV. 1 (2002). For a more detailed discussion of the "distinct structural paths" taken by scientists and lawyers to "arrive at their respective truths," see Sheldon Krimsky, The Funding Effect in Science and Its Implications for the Judiciary, 13 J.L. & POL'Y 43, 46-48 (2005).

(17.) The academic literature covering the effects of research funding on outcome is quite extensive. For the findings of eight recent studies--all showing a "funding effect"--see Krimsky, supra note 16, at 57-58.

(18.) Nature.com, Competing Financial Interests, http://www.nature.com/authors/editorial_policies/competing.html (last visited Apr. 5, 2010).

(19.) See Lee Epstein & Andrew Martin, Coding Variables, in 1 ENCYCLOPEDIA OF SOCIAL MEASUREMENT 321 (Kimberly Kempf-Leonard ed., 2005).

(20.) A list of the Center's sponsors is available at https://www.ncsconline.org/D%5FDev/fotc/fotc_contribute.htm#leaders-10kplus.

(21.) Editorial, A New Policy on Financial Disclosure, 4 NATURE NEUROSCIENCE 961, 961 (2001).

(22.) Barday, supra note 14, at 712.

(23.) In an email to Epstein, Kevin Quinn, a law professor at Berkeley, put it this way:
   Taken at face value, the Court's reasoning here would seem to imply
   that either they don't understand the rules of inference and hence
   can't judge the quality of empirical work (and thus use the rough
   rule of thumb articulated below [i.e., funding equals bias]) and/or
   they are concerned that fraud was committed so that they can't trust
   the authors' descriptions of their work.

E-mail from Kevin Quinn, Professor of Law, U.C. Berkeley School of Law (Aug. 26, 2008) (on file with the authors).

(24.) For example, whether the Court will "decline to rely" on studies funded by the parties to the suit or any third parties with an interest in the litigation. For more on this point, see infra note 71.

(25.) See infra Part III.

(26.) Exxon Shipping Co. v. Baker, 128 S. Ct. 2605, 2625 n.13 (2008).

(27.) Eric K. Moller, Nicholas M. Pace & Stephen J. Carroll, Punitive Damages in Financial Injury Verdicts, 28 J. LEGAL STUD. 283, 307 (1999).

(28.) RAND Corporation, Major Clients and Grantors, http://www.rand.org/about/clients_grantors.html (last visited Apr. 5, 2010).

(29.) The Court relies on Theodore Eisenberg et al., Juries, Judges, and Punitive Damages: Empirical Analyses Using the Civil Justice Survey of State Courts 1992, 1996, and 2001 Data, 3 J. EMPIRICAL LEGAL STUD. 263, 278 (2006) to support the proposition that jury punitive damages awards are unpredictable, although Eisenberg is of the opinion that those awards are reasonably predictable. See, e.g., Theodore Eisenberg & Martin T. Wells, The Predictability of Punitive Damages Awards in Published Opinions, the Impact of BMW v. Gore on Punitive Damages Awards, and Forecasting Which Punitive Awards Will Be Reduced, 7 SUP. CT. ECON. REV. 59 (1999).

(30.) Paula L. Hannaford-Agor is a Principal Court Research Consultant; Robert C. (Neil) LaFountain is a Senior Court Research Analyst; G. Thomas Munsterman is a Principal Court Management Consultant; Brian J. Ostrom is a Principal Court Research Consultant. See National Center for State Courts, Division Staff, http://www.ncsconline.org/D_RESEARCH/staff.html (last visited Apr. 5, 2010).

(31.) National Center for State Courts, Contributors, https://www.ncsconline.org/D_Dev/fotc/fotc_contribute.htm#leaders_10kplus (last visited Apr. 5, 2010).

(32.) For example, "[u]nless otherwise provided in the grant," the National Science Foundation requires acknowledgement of research support "in any publication (including Web pages) of any material based on or developed under [the grant], in the following terms: 'This material is based upon work supported by the National Science Foundation under Grant No. (NSF grant number).'" THE NAT'L SCI. FOUND., PROPOSAL AND AWARD POLICIES AND PROCEDURES GUIDE, at IV-4; IV-9 (2009), available at http://www.nsf.gov/pubs/policydocs/pappguide/nsf09_29/nsf0929.pdf.

(33.) Editorial, supra note 21.

(34.) SEARCH Collaborative Group, SLCO1B1 Variants and Statin Induced Myopathy--A Genomewide Study, 359 NEW ENG. J. MED. 789 (2008).

(35.) Daniel J. Lovell et al., Adalimumab with or Without Methotrexate in Juvenile Rheumatoid Arthritis, 359 NEW ENG. J. MED. 810 (2008).

(36.) Id. at 810.

(37.) Id. at 819-20.

(38.) Sheldon Krimsky et al., Financial Interests of Authors in Scientific Journals: A Pilot Study of 14 Publications, 2 SCI. ENG'G ETHICS 395 (1996) (cited in Editorial, supra note 21).

(39.) Barday, supra note 14, at 718-19, examined thirty-three law review articles on punitive damages that included a financial disclosure. Of the thirty-three, fourteen were industry-funded (thirteen by Exxon, one by G.D. Searle & Co.).

For this article, we conducted a small-scale study of the 499 articles published in 2008 in the law reviews at the top twenty-five schools (as ranked by U.S. News & World Report) and in six peer-reviewed legal journals (Journal of Empirical Legal Studies; Journal of Legal Studies; Journal of Law, Economics & Organization; Journal of Law and Economics; Law & Society Review; Law & Social Inquiry). We included articles only, not book reviews, symposia introductions, etc. Finally, to determine whether (and from where) the author(s) received research support, we looked at the first few footnotes in the article. Thank-yous and acknowledgments counted as support; the author's affiliation did not. The dataset is available at: http://epstein.law.northwestern.edu/research/Footnote17.html.

In only 26.45% of the 499 articles did the authors disclose one or more funding sources, but only 7% of the total sources were foundations or industries; the rest were government agencies and universities. (Of course it is possible that we have underestimated industry and foundational support because universities and their centers often receive funding from these sources. We leave that matter to another day.) Breaking down the numbers by law reviews versus peer-review journals presents a somewhat different picture. Of the 156 articles in the six refereed journals--two of which (Journal of Law, Economics & Organization and Journal of Empirical Legal Studies) have explicit disclosure policies--12% disclosed industry or foundation support; that figure is only 5% for the traditional law reviews (p < .05). This may reflect a norm among social scientists to disclose funding sources even in the absence of a stated policy, the fact that two of the peer-review journals have disclosure policies (and the traditional law reviews do not), or simply a lack of financial support for papers published in the traditional, student-edited law reviews.

(40.) Election Law Blog, supra note 5.

(41.) Or even indirect interest, depending on how one reads Footnote 17. See infra note 71.

(42.) Vidmar, supra note 7, at 1359-60 (emphasis added). In fairness, we should point out that in the balance of the article, Vidmar does offer a serious methodological critique of the Sunstein et al. studies. Still, we were struck by the way he started his review.

(43.) In note 2, Vidmar writes:
   I raise here the role of Exxon in funding the studies. However, I
   leave for another day, and perhaps for other writers, exploration
   of issues involving the role of Exxon in vetting which studies
   would be funded and which would not, the use of the studies in
   Exxon's own litigation, and the fact that most of the experiments,
   albeit not all, were published in student-edited law reviews rather
   than in peer-reviewed social science journals. These issues deserve
   more extensive treatment. However, in the remainder of this article
   I limit myself to addressing on their own merits the research
   experiments and the authors' conclusions from those experiments.


Id. at 1360 n.2 (citations omitted).

(44.) Barday, supra note 14, at 714-15.

(45.) Lempert, supra note 7, at 868-69.

(46.) E-mail from Kevin Quinn, supra note 23.

(47.) Conversation with James Lindgren, Professor of Law, Northwestern Univ. Sch. of Law, in Chi., Ill. (Sept. 3, 2009).

(48.) For other criteria, see Epstein & King, supra note 16.

(49.) We draw some of the material that follows from id. at 83-87; Lee Epstein & Andrew D. Martin, Quantitative Approaches to Empirical Legal Research, in OXFORD HANDBOOK ON EMPIRICAL LEGAL STUDIES (Peter Cane & Herbert Kritzer eds.) (forthcoming 2010).

(50.) As we explain momentarily, a scale that is both reliable and valid will give a reading that is both the same and accurate 100 times in a row.

(51.) See supra note 39.

(52.) See Gary King, Replication, Replication, 28 PS: POL. SCIENCE & POL. 444 (1995).

(53.) See Oxford Journals, Political Analysis, http://oxfordjournals.org/our_journals/polana/isi2009.html (last visited Apr. 5, 2010) (citing the 2008 ISI Journal Citation Reports).

(54.) Political Analysis, Information for Authors, http://www.oxfordjournals.org/our_journals/polana/for_authors/general.html (last visited Apr. 5, 2010).

(55.) See Epstein & King, supra note 16, at 89-95.

(56.) Of course, this tracks Daubert's approach, under which peer review is one indicator of "good science." As Justice Blackmun wrote, "[a]nother pertinent consideration is whether the theory or technique has been subjected to peer review and publication. Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability.... But submission to the scrutiny of the scientific community is a component of 'good science,' in part because it increases the likelihood that substantive flaws in methodology will be detected." Daubert v. Merrell Dow Pharm., 509 U.S. 579, 582 (1993).

(57.) Lempert, supra note 7, at 869-70.

(58.) Atkins v. Virginia, 536 U.S. 304, 327-28 (2002).

(59.) E.g., Neil Vidmar & Mary R. Rose, Punitive Damages by Juries in Florida, 38 HARV. J. ON LEGIS. 487, 492 (2001), cited in Footnote 14 of Exxon Shipping.

(60.) Deborah Hensler & Erik Moller, Trends in Punitive Damages: Preliminary Data from Cook County, Illinois and San Francisco, California (RAND Working Paper No. DRU-1014-ICJ 1995), available at http://www.rand.org/pubs/drafts/2008/DRU1014.pdf (emphasis added).

(61.) Id.

(62.) For a list of publications that have instituted peer review, see http://www.lexisnexis.com/lawschool/prodev/lawreview/nonstudent200611.pdf.

(63.) See Association of American University Presses, AAUP Membership Benefits and Eligibility, http://aaupnet.org/membership (last visited Apr. 5, 2010); U.S. Department of Health and Human Services, Peer Review Process, http://www.grants.nih.gov/grants/peer_review_process.htm#Overview (last visited Apr. 5, 2010). For an argument as to "the inherent limitations of the system [of peer-review] as a quality-control mechanism," see Susan Haack, Peer Review and Publication: Lessons for Lawyers, 36 STETSON L. REV. 789 (2007). But see Alan W. Tamarelli, Jr., Daubert v. Merrell Dow Pharmaceuticals: Pushing the Limits of Scientific Reliability--The Questionable Wisdom of Abandoning the Peer Review Standard for Admitting Expert Testimony, 47 VAND. L. REV. 1175, 1198 (1994) ("[C]ross-examination is not as effective a filter as peer review. Neither courts, parties, nor juries have the time, expertise, or money to evaluate independently the degree to which each piece of testimony is rooted in the scientific method.").

(64.) On the other hand, some of the more traditional journals are apparently moving in this direction. See, e.g., John P. Zimmer & Jason P. Luther, Peer Review as an Aid to Article Selection in Student-Edited Legal Journals, available at http://www.sclawreview.org/peerreview/peerreviewessay.pdf.

(65.) Full disclosure: Epstein is a co-editor of the Journal of Law, Economics & Organization.

(66.) Editorial, supra note 21. The editor-in-chief of Science put it this way: "For whatever reasons, society is now concerned about possible sources of bias and seeks assurance through disclosure that the data or opinions presented are those of a disinterested party." Donald Kennedy, Disclosure and Disinterest, 303 SCIENCE 15 (2004).

(67.) See, e.g., Bernadette M. Broccolo & Jennifer S. Geetter, Today's Conflict of Interest Compliance Challenge: How Do We Balance the Commitment to Integrity with the Demand for Innovation?, 1 J. HEALTH & LIFE SCI. L. 1 (2008).

(68.) See also Barday, supra note 14, at 730 ("[C]urrently law reviews do not require that authors disclose sources of support for their work at all, but they should....").

(69.) See, e.g., MODEL RULES OF PROF'L CONDUCT R. 1.7-.11 (2009) (covering conflicts of interest). Barday, supra note 14, at 731 makes this point as well ("The legal profession has recognized the importance of avoiding conflicts of interest in other contexts.").

(70.) 128 S. Ct. 2783 (2008).

(71.) Linda Greenhouse covered the Supreme Court for the New York Times between 1978 and 2008. In an email (on file with Epstein), she wrote that during a private conversation with her, a justice who voted in dissent in Heller "mentioned with evident distaste that a number of historical studies on the meaning of the Second Amendment had been funded by the N.R.A."

If Ms. Greenhouse's explanation is right, Footnote 17 may have especially broad implications. It means that the Court could reject any research funded by any group with an interest in the litigation--not just industry and not just economic interests. As Quinn, supra note 23, put it: "Is National Science Foundation-funded research going to be ignored by the Court if the United States is a party to the suit?"

(72.) For example, one of the historical studies relied upon by the majority, Clayton E. Cramer & Joseph Edward Olson, What Did "Bear Arms" Mean in the Second Amendment?, 6 GEO. J.L. & PUB. POL'Y 511 (2008), was co-authored by Professor Olson, who sits on the board of the National Rifle Association--certainly a party with an interest in the case. See Hamline University School of Law, Joseph Olson, http://law.hamline.edu/node/784 (last visited Apr. 5, 2010).

(73.) Recall our earlier discussion of the Eisenberg et al. and RAND studies, both cited with approval in Exxon Shipping.

(74.) Nature.com, supra note 18.

(75.) Here's an example from a New England Journal of Medicine article on Humira, a medication for the treatment of rheumatoid arthritis:
   Dr. Lovell reports receiving consulting fees from Abbott
   Laboratories, Amgen, Bristol-Myers Squibb, Centocor, Pfizer,
   Hoffmann-La Roche, Novartis, Regeneron, and Xoma and speakers' fees
   from Amgen and Wyeth Pharmaceuticals. Dr. Reiff reports receiving
   consulting and lecture fees from Amgen, Wyeth Pharmaceuticals,
   Pfizer, and Abbott Laboratories and serving as a clinical
   investigator for Abbott Laboratories, Amgen, and Regeneron. Dr.
   Jung reports receiving consulting fees from Pfizer and lecture fees
   from Talecris. Dr. Sandborg reports receiving grant support from
   Immunex and Abbott Laboratories, as well as unrestricted continued
   medical education grants from Abbott Laboratories, Barr
   Pharmaceuticals, Bristol-Myers Squibb, Centocor, Hoffmann-La
   Roche, and Amgen. Dr. Vehe reports receiving lecture fees from
   Abbott Laboratories. Dr. Horneff reports receiving consulting and
   lecture fees from Wyeth Pharmaceuticals. Dr. Huppertz reports
   receiving consulting fees from Wyeth Pharmaceuticals, Essex
   Pharmaceuticals, and Abbott Laboratories; lecture fees from Wyeth
   Pharmaceuticals and Essex Pharmaceuticals; and support for
   attendance at scientific meetings from Wyeth Pharmaceuticals and
   Essex Pharmaceuticals. Dr. Mouy reports receiving consulting fees
   from Abbott Laboratories. Dr. Olson reports receiving grant support
   from Abbott Laboratories. Dr. Giannini reports receiving consulting
   fees from Abbott Laboratories, Genzyme, Regeneron, Bristol-Myers
   Squibb, Hoffmann-La Roche, Pfizer, and Centocor; lecture fees from
   Genzyme; and grant support from Amgen, Abbott Laboratories,
   Bristol-Myers Squibb, Genzyme, and Pfizer. Drs. Medich,
   Carcereri-De-Prati, and McIlraith report being employed by Abbott
   Laboratories. No other potential conflict of interest relevant to
   this article was reported.


Daniel J. Lovell et al., supra note 35, at 819-20.

Lee Epstein *

Charles E. Clarke, Jr. **

* Lee Epstein is the Henry Wade Rogers Professor at Northwestern University School of Law. This Article was prepared for the Stanford Law & Policy Review's Symposium on Academic Integrity. For advice and comments along the way, we thank Robert Bennett, Shari Diamond, Theodore Eisenberg, Linda Greenhouse, Chris Guthrie, Gary King, William M. Landes, James Lindgren, Kevin Quinn, Richard A. Posner, Jeffrey Rachlinski, Jeffrey A. Segal, and Nancy Staudt. For research support, we thank Northwestern University School of Law. This is a revised, expanded, and updated version of the keynote address Epstein delivered at the 2008 annual Conference on Empirical Legal Studies, held at Cornell Law School.

** Charles E. Clarke, Jr. is a J.D. candidate at Northwestern University School of Law, Class of 2011.
