Academic fraud today: its social causes and institutional responses.


In this Article, I shall address a topic that we should all like to sweep under the rug if it were possible: academic integrity and its converse, academic fraud. As an initial observation, it is clear that academic fraud can occur outside the university setting at other institutions conducting research on matters of scientific importance. It is equally clear that the risk of academic fraud is not confined to the physical or biological sciences, but can also arise in research in the social sciences and the humanities. At this point, however, I shall not dwell on these differences; rather, I will concentrate on the fundamental risks to the integrity of the research mission of scholars and scientists, both inside and outside universities.

Defining this topic in this broad fashion gives ample clues as to why its mere mention is commonly the source of acute discomfort and anxiety. We do not develop institutional procedures and safeguards to celebrate scholarly integrity, which is surely the norm in research enterprises both within the academy and beyond it. We develop these systematic procedures to deal with the recurrent risk and troublesome cases of academic fraud wherever and however they appear. Our hope is that the presence of these procedures will act as a modest deterrent against the commission of these actions. But when that hope is dashed, we want to have standing procedures in place so that we do not have to improvise under pressure. Fortunately, these cases are rare, but the risks that they carry with them must never be minimized. Each documented incident of academic fraud--indeed each unproven allegation of academic fraud--saps the confidence of the public and the profession in the soundness of the research mission, which exposes every researcher whose work meets the highest standards of scientific integrity to undue scrutiny.

As I hope to show, only a comprehensive approach can meet the challenge that these cases present. The first task in dealing with academic fraud is to identify those activities that present a sufficient threat to the research mission that they should be singled out for special sanction. In other words, just how do we define academic or scientific fraud? The second, or remedial, task is every bit as important as the first. What procedures should be used to examine the cases that fall within this definition? And what additional procedures should be used to identify the extent of the fraud once its existence has been established?

This is a subject with which I have some hands-on experience. One of my most important administrative tasks at the University of Chicago was to chair the Committee on Academic Fraud in 1984, which, to the best of my knowledge, assembled the first set of systematic rules and procedures in the country for dealing with this problem. (1) Academic fraud is a relatively infrequent occurrence that has shattering implications when it occurs. In contrast, conflicts of interest are all-pervasive. I sometimes joke that the best definition of a conflict of interest is "two people" because any two people, almost by definition, always have divergent interests and goals.


In dealing with the wrongs of academic fraud, lawyers know what other academics tend to forget: that remedial issues loom just as large as, and maybe larger than, the definition of the underlying wrongs that are logically prior to them. Starting with that question, the definition of academic fraud cannot be dismissed as some arcane philosophical debate devoid of institutional consequences. In response to that question, the University of Chicago Report of Academic Fraud adopted a standard but narrow definition of the concept:
   Academic Fraud involves a deliberate effort to deceive and is
   distinguished from an honest mistake and honest differences in
   judgment or interpretation. Academic fraud is defined as
   plagiarism; fabrication or falsification of evidence, data, or
   results; the suppression of relevant evidence or data; the
   conscious misrepresentation of sources; the theft of ideas; or the
   intentional misappropriation of the research work or data of
   others. (2)

A number of key points about this definition cry out for attention. The first is the observation that different forms of academic fraud have very different social consequences. When the deceit goes to the soundness of the data and the inferences drawn from it, the consequences reverberate at the core of the scientific mission. The rest of the scientific or research community is now asked to work with false information, which could easily result in endless efforts to replicate results that never occurred, or to start off on new ventures that would never have been undertaken had the truth been known. The confusion goes to the supposed knowledge on which further advances depend.

In contrast, incidents of plagiarism and misappropriation do not alter for the worse the nature of human understanding or the direction of scientific research since the truth of the underlying information is not in doubt. No one wants to steal work or ideas that are unsound. Instead what these actions do is undermine the system of incentives and awards in research by allowing one individual to take credit for the work that is done by another. The undeserving person receives recognition, promotion, and rewards that in truth should go to another.

It could therefore be asked why these two kinds of wrongs should be grouped together, to which there are three answers. First, it appears in fact that the two kinds of academic fraud often go together. The people who fabricate data are the ones who are likely to misappropriate the work of others. Second, the conscious violation of the rights of others shows a form of moral dereliction that requires strong social sanction, which is best administered by a unified system that can look at both types of problems at the same time. Third, the types of expertise that are called upon to deal with both problems are sufficiently similar that there is no need to develop separate procedures to respond to each individually.

The second point of debate addresses not the coverage of academic fraud, but the kinds of wrongful acts that fall within its scope. On this issue, it is important to note that the definition that I have used refers to academic fraud, not to some generalized notion of academic or scientific misconduct that could conceal a multitude of sins. In that definition, our report defined academic fraud in opposition to honest mistakes and honest differences in judgment, which are its obvious antithesis. The actual typology of wrongs is somewhat more complex because of all the intermediate states of mind that might, in principle, be relevant to the inquiry. In one sense, it is useful to examine this definitional question within the framework of the customary classification of wrongful mental states that is used, for example, to organize the law of tort. That inquiry starts off with intentional wrongs--the deliberate fabrication or omission of data. It then progresses to recklessness--the preparation of data without knowing or caring whether it is true or false. Both of these states of mind mesh well together because of the explicit mental attitude that individuals take toward truth or falsity.

It is instructive to look at tort cases dealing with actions for deceit, in which this exact equivalence has been made. The standard definition of deceit requires that a plaintiff prove that the defendant knowingly made false statements of fact on which the plaintiff relied to her detriment. One key problem in defining this wrong deals with the mental element of the cause of action. In the highly influential English case of Derry v. Peek, which involved an action for deceit for wrongful statements in a prospectus, the operative standard of liability was held to be fraud, which "is proved when it is shewn that a false representation has been made (1) knowingly, or (2), without belief in its truth, or (3) recklessly, careless of whether it is true or false." (3) The entire point of this exercise was to reject the notion that ordinary negligence was to be regarded as a variant on fraud. That restrictive definition carries over in modern law to two other significant contexts. The first of these is modern securities regulation, where the Supreme Court, in Ernst & Ernst v. Hochfelder, took exactly this approach by refusing to equate negligence with fraud. (4) Although the precise status of recklessness was left open in Ernst & Ernst, lower courts have not hesitated to fill the gap by uniformly holding that fraud encompasses recklessness and not negligence. (5) The same line is drawn in defamation actions brought by public officials and public figures, where the malice standard embraces reckless disregard of the truth, but not any form of negligence. (6)

Yet beyond this equivalence, most courts are not prepared to go. Thus, it is virtually impossible to find support for the view that gross negligence--which is a major deviation from standard research protocol without any intention or desire to get false results--should count as a species of academic fraud. Gross negligence may supply some evidence of reckless behavior, but, strictly speaking, it should never be confused with it. Rather, gross negligence bears a closer resemblance to negligence, or the want of ordinary care in carrying out research protocols. Unfortunately, there is always lots of negligence, and all of it has harmful consequences. But the personal taint is not there, and it diminishes the moral seriousness of fraud to lump negligence in with it under some amorphous standard of academic or scientific misconduct. And of course, results that are just erroneous are the ordinary stuff of science. We positively want to encourage speculation that could be wrong in order to get researchers to take the risks that could lead to major breakthroughs in their field. Formal investigations that reveal honest mistakes are not just unwise; they are positively counterproductive.

The hard question, therefore, is whether we can come up with any theoretical explanation as to why we place the key line between recklessness and gross negligence. At root, I think that the moral intuition has the following utilitarian foundation: we deplore academic fraud because what people do when they commit it is spend real resources to make the academic and scientific communities worse than they ought to be. That is quite different from any form of negligence, where the wrong is the failure to spend sufficient resources to make scientific research better than it was. Gross negligence is evidence of recklessness, admissible as such. But at most it supports an inference of recklessness. It does not create a prima facie case of recklessness, or even allow a finder of fact to find recklessness unless the deviation is so huge that no innocent explanation for it is possible. The point may seem small, but matters often get blurred in the heat of actual disputes. Therefore, it is best to keep the conceptual lines clear in advance.

From what has been said, it is clear what should happen at the opposite end of the spectrum. Ironically, conclusive proof of ordinary negligence by a researcher is generally sufficient to dispel any inference of academic fraud. All this is not to make light of gross or even ordinary negligence. There are enough informal sanctions to deal with the negligence side of the line without hauling out the heavy lumber of a fraud proceeding that could, if findings of fraud are made, result not only in dismissal from a current position, but near-certain excommunication from the research community. In order to impose sanctions that severe, an institution must be very confident of the grounds of the academic offense. The position that we took back in 1984, and which was affirmed by the Gamwell Committee that looked at the matter in 1996, seems on reflection still to be correct.


Once the definition is settled, the next issue is what should be done to address the remedial side of this particular problem. And this, in many ways, is the more important question. This issue's importance is evidenced by the controversy that is currently swirling around the revelations of extensive data manipulation at the Climate Research Unit in East Anglia, where it appears that there are no settled procedures for investigating what looks from the outset to be a comprehensive and systematic fraud on one of the most sensitive issues of our time. This controversy demonstrates an appalling lack of judgment. Even in cases far less grave than this one, it is unwise to tolerate cozy or informal arrangements in which friends or associates of the scientists under investigation are allowed to participate in the case. There need to be established and settled institutional arrangements to vet any issue as massive and complicated as the Climate Research incident, no matter what the ultimate scientific verdict turns out to be on the underlying issue of academic fraud.

Designing the right procedures was a key issue that we faced in 1984 at the University of Chicago. At that time, there was some sentiment in favor of adopting informal procedures largely confined within departments or sections as the proper approach to an academic fraud investigation. My own views, which eventually prevailed, insisted on much more durable and complex procedures. In particular, in order to avoid any risk of bias, the appointment of the individuals to conduct the investigation had to be done by persons who were not in the department, or even within the discipline, of the accused person. Toward that end, we eventually devised a standing committee of eminent scholars whose job was to appoint the committee of field experts that would take charge of the initial investigation. (7) Once that was done, the Standing Committee would then oversee its work under an appeals process that gave it power to review the applicable legal standards applied in the hearing and to set aside any findings of fact that were deemed to be clearly erroneous.

The first part of this procedure hearkens back to some of the procedures for choosing juries in early England, whereby one jury would select a second jury, whose members would select a third jury, who would then hear the case. The implicit judgment that undergirds this practice is that it is imperative to guard against the risk of undue influence. The second part of the procedures puts the Standing Committee in a different role, which deals with appellate review of the underlying case to give some continuity in the rulings and practices that govern this area. In making this into a formal procedure, the question arose as to whether to allow lawyers in the proceedings. We hit on a compromise solution that seems to have worked well in practice: any individual charged with academic fraud has the option to bring a lawyer with him or her to the meeting. (8) If that option is exercised, the University will bring in its own lawyer; if the person charged declines to bring one, the University will likewise proceed without counsel. In practice, the dominant, perhaps exclusive, outcome is for individuals charged to rely on the advice of counsel prior to a hearing, but not to bring lawyers into the room. I am aware of no complaints about this result and think that others would be well-advised to follow it.

Many people have asked why it is necessary to introduce all this extra formality into an otherwise collegial setting. There is an answer: formal procedures are necessary largely because of historical evidence as to what happens when formality is disregarded. In 1981, several years before the formation of our committee to develop rules on academic fraud, the New York Times Magazine published an article by Morton Hunt, A Fraud that Shook the World of Science. (9) I consider myself much in Hunt's debt for chronicling what happens when those safeguards are not in place. His vivid account impressed upon my mind how small frauds--if that word is ever appropriate--can lead to the most disastrous consequences when only sloppy and ad hoc procedures are available for dealing with serious charges that crop up from the most unlikely of places. Hunt's story reveals an incredible set of background friendships and power relationships which, if left to run their course, would have resulted in major embarrassments for first-class institutions. Yet these consequences can be avoided by establishing a firm institutional response that kicks into action at the first whiff of fraud. Our committee concluded that its first principle must be that all power is immediately taken out of the hands of anyone who is connected to the persons who are charged with the fraud, including supervisors, colleagues, and coauthors. To reach this goal, our Chicago procedures make it a duty on the part of any official who learns of fraud to report these matters to an appropriate Dean or Chair, or, if need be, the Provost, (10) who can run a brief inquiry to determine whether there is reason to believe that such fraud may have been committed. In addition, that administrator can seek advice from people in research administration or on the Standing Committee on how to proceed. Our position was that no one should be forced to navigate these treacherous waters alone.
But once there is reason, any reason, to believe that some fraud might have been committed, that administrator must refer matters over to the Standing Committee.

This preliminary investigation involves a delicate balance of considerations, for no investigation of this magnitude should be undertaken lightly. Yet by the same token, the initial administrative official is not supposed to make any judgment about guilt or innocence. In criminal law parlance, the inquiry is only whether there is some evidence to believe that further investigation is warranted. It is here that the greatest peril lies. If the complainant and first line officials decide to short-circuit the inquiry, the formal institutions cannot kick in. The importance of following protocol therefore must be drilled in from the outset.

In this regard, it is important to fight against natural inclinations. Most working bench scientists do not ponder at length the proposition that all lawyers understand: that no one should be a judge in his own cause. Rather, scientists often labor under the conceit that, as able people endowed with common sense, they can resolve the matter on their own. Their seniors, who dislike these hearings, may urge them to take the informal route. But it is all a mistake. Scientists and scholars are not able to navigate these dangerous waters without firm navigational guides. They are the greater fools for thinking that their intellect protects them from their biases. One per se reason for disqualifying anyone who is close to the scene of the wrong from sitting in judgment is that he is silly enough to think that he is up to the job. Smart scientists and scholars will voluntarily bail out from direct responsibility at the first sign of trouble. It is their less thoughtful colleagues who have to be shown out the door before it is too late. These procedures have to start right to end right. No exceptions.


A. The Soman Fraud

This conclusion seems inescapable once you follow Hunt's story in the New York Times, which I will recount in painful detail. This case starts out simply enough. Dr. Helena Wachslicht-Rodbard was a junior researcher in endocrinology at the National Institutes of Health. Her work specialized in the study of anorexia nervosa, a sometimes deadly condition in which (typically) women suffer huge weight losses because of an abnormal dislike of food. Taken to an extreme, these women can literally starve themselves to death. Strong clinical intervention is almost always required. Wachslicht-Rodbard, Hunt reports, had discovered through some experiments that one marker of the progression of anorexia was that the blood cells of anorexia patients tended to bind more frequently to insulin than those of people without anorexia. Tracing the reduction in binding therefore gives a rough measure of progress toward cure. Wachslicht-Rodbard had submitted an article containing these findings called Insulin Receptors in Anorexia Nervosa, along with a simple mathematical model to analyze their impact, to the New England Journal of Medicine (NEJM) in November 1978.

As is common in science, all plausible submissions are sent out for referee reports to experts in the field for evaluation. No one doubts that these referee reports are the glue that makes science work. Without them, there is no peer-reviewed work. But unfortunately, expertise and rivalry often come bound together in a single package. The referees who are asked to review articles often are working in the same area. Indeed, they are often working on the same topic, often in competition with each other. It is a brute fact of academic life that promotion in the research sciences depends on publication of work of sufficient originality. Priority matters in academics just as it does in patent law.

Those competitive forces drove the course of events that Hunt recounted. Dr. Arnold Relman, then Editor-in-Chief of the NEJM, sent the article for review to three readers, one of whom was Dr. Philip Felig, who, at age 43, was a rising star in endocrinology at Yale Medical School. Unlike law professors, who tend to be lone wolves, research physicians often work in groups or in teams in which the eminent senior researcher oversees the work of aspiring juniors within the same field. That relationship allows them to leverage their expertise across multiple projects. At the same time, it tempts them to spread themselves too thin. In this instance, Felig delegated his review of the Wachslicht-Rodbard article to one of his junior researchers, Dr. Vijay Soman, whose promising research record in India had earned him a coveted place in Felig's labs. Soman wrote a negative report, recommending rejection, which Felig sent on to the NEJM after a cursory review. Felig knew, obviously, that Soman was working on the same project, but did not take the obvious precaution, which was to assign the review of Wachslicht-Rodbard's paper to someone else in his lab. Nor did Felig hear any warning bells when Soman's recommendation to reject came back. It is always easy to overlook what seem to be small conflicts of interest.

The two other reviews of Wachslicht-Rodbard's paper, however, came back more favorable, so Relman took the sensible course of asking her to revise and resubmit. As she was doing her revisions, the referee process worked in reverse. Soman and Felig submitted their paper to the American Journal of Medicine. And this time the editorial call went out to Dr. Jesse Roth, who headed the Diabetes Research Laboratory in which Dr. Wachslicht-Rodbard worked. Roth was a childhood friend of Felig's. Following standard practice, he delegated the responsibility for the initial review to Wachslicht-Rodbard, who was "dumbfounded," as Hunt reports, to find that about sixty words from her article had been incorporated into the Soman-Felig paper, along with her mathematical model. (11)

Within the framework of the University of Chicago guidelines, a discovery such as the one made by Wachslicht-Rodbard positively counts as a loud signal that should trigger a full-scale investigation. But in this instance, those procedures were not in place. So Wachslicht-Rodbard's fierce complaints of plagiarism and misappropriation did not have the effect they would have had under formal procedures. It is impossible to know just how much friendship played a role in the lack of action, but regardless, the first opportunity to refer this to formal proceedings was irretrievably lost.

It is important to note that no one in Roth's lab had done anything wrong. Proper procedure dictates that the matter should have been referred to someone at Yale who had not been involved in the process. The likely persons were Dr. Samuel Thier, who was Chairman of the Yale Department of Medicine, or Dr. Robert Berliner, who was Dean of the Yale Medical School. Either man could have referred the matter to a standing committee, if only one had existed. But since none was in place, Berliner should have entrusted the review to someone who was not involved in the underlying incident in order to create some measure of separation between the investigator and the investigated parties.

In this case, subsequent events confirmed the soundness of this view. Dr. Wachslicht-Rodbard did what Jesse Roth would not do. She wrote to Dean Berliner, demanding a medical audit. But he did exactly the wrong thing: he referred the matter down, rather than up, and asked Felig for a report. Felig repeated the same mistake. He asked Soman to review the allegation of fraud and report on the matter. As Hunt reports:
   Soman brought him a list of the patients' names and dates (but not
   their hospital charts), plus data sheets bearing figures that, he
   said, were averages compiled from the blood studies of the six
   patients. Felig looked no further; he told Dean Berliner that the
   work had been done as represented, and Berliner
   wrote to Dr. Wachslicht-Rodbard, "There is no question that the
   studies of Soman and Felig were done as described in their
   manuscript," adding that he hoped she would now consider the matter
   closed. (12)

It is hard to imagine a dumber set of responses to an insistent inquiry. Felig swept all the evidence of plagiarism and misappropriation under the rug. Even a nonscientist could figure out that someone should look at the original data, and that this someone should not be the joint author of the paper whose veracity and integrity were called into question. But lax procedures made this way out all too easy. Wachslicht-Rodbard again expressed her outrage, and threatened to "denounce" both Felig and Soman if they presented their data at a forthcoming meeting of the American Federation of Clinical Research that was scheduled for May. (13) This outburst brought forth another ad hoc response, whereby Felig and Roth decided to appoint an outside auditor to take a fresh look at the evidence. This response marked a (small) step forward because it broke the cliquish nature of the investigation, even if only slightly. The ground rules for the inquiry were utterly formless, as the initial agreement between friends did not indicate who was chosen, explain why only one person was chosen, specify when the audit should be done, or say how the results should be reported. It was all hopelessly ad hoc.

Ad hoc, however, has its uses when the effort is to slow the train down. However, there never seemed to be a convenient time for the audit to take place in the months that followed. (It is interesting to note that Hunt does not name the selected auditor, which would have been instructive on the question of how independent that person was; silence often alerts the senses that something could be amiss.) Soon after, perhaps in frustration, Wachslicht-Rodbard resigned her research position to take a medical residency. In a world without formal oversight, the disappearance of the gadfly could easily be misconstrued as a sign that the problem had worked itself out. That must have been the conclusion of Felig and Soman, who, with horrible judgment, published their joint paper in January 1980, even though Felig had agreed to an informal understanding that their paper would wait until Wachslicht-Rodbard's paper was published.

At this point, matters only turned for the worse. Just as publication took place, Felig was being considered for a position as the Chairman of the Department of Medicine at Columbia's Physicians and Surgeons (P&S). Unfortunately, Felig tripped up on another institutional norm, which is that individuals who have skeletons in their closets disclose them to potential employers as part of the vetting process. Felig did not reveal any information regarding the investigation to the search committee at P&S. Instead he recommended that Soman receive an assistant professorship in the Department of Medicine once Felig took over as its head.

In understanding the enormity of the problem, it is important to set the institutional context. Although lawyers are accustomed to institutions that are small scale, the typical department of medicine is a huge operation that could have more than a dozen sections and five hundred doctors on a bewildering set of academic appointments. Thus imagine the hoary political image: the Chairman of Medicine may not be the King of France, but he is surely the Duke of Burgundy, with vast powers over his own domain. The diplomatic connections are extensive and ornate. The rewards for success are enormous, and the pitfalls are everywhere.

At this point the two issues--the P&S appointment and the Felig/Soman article incident--fused into one. Wachslicht-Rodbard saw the publication and called Roth with unrestrained fury. Roth realized that the matter could not be contained any longer, and thus called Felig--again the wrong move--to make arrangements for the appointment of another independent auditor. This time, the right man was selected: Dr. Jeffrey Flier of the Harvard Medical School, now its Dean. Flier promptly undertook his duties as the auditor, and uncovering the fraud turned out to be child's play. Flier asked Soman to show him the raw data, to give the names of the patients, and to explain why all his data points lined up on the regression line. As it turned out, one of the patients in the study was missing, and there was no graph of the original data. Flier quickly realized that the data points formed a curve that did not conform to the theory. Within three hours, the investigation was over, and Soman confessed to the fraud. Clearly in this situation, the definitional issues surrounding academic fraud were not at play; rather, the institutional structures were what mattered. It was an insider culture of acceptance and a resistance to outside influence that kept things bottled up as long as they were.

The breakdown in the system had many collateral consequences. Two are worth mentioning here. The first is what should have happened once it was understood that Soman had committed fraud in this research paper. Under the guidelines established at the University of Chicago, it would have been necessary to appoint a second panel to undertake the grisly task of investigating the extent of the fraud. (14) This is no trivial exercise, especially in the medical community. Doctors are prolific. Small advances can generate several papers. Larger ones can generate a dozen or more. Most medical research relies on team production, so that four or five doctors and scientists, or more, collaborate on a given paper. Sometimes each person writes his or her own section, with little knowledge of what the authors responsible for the other sections have done. Sometimes, the work is more genuinely collaborative. In other cases, a senior scientist oversees the work of a junior scientist. Thus, the grim task of sifting through the wreckage could easily mean reading dozens of papers. If no fraud is found, so much the better. But if serious errors are found, serious and confounding questions are raised: Was there fraud in this case, or in some other case? If so, which of the many people on the joint papers should be implicated? If these are persons from other institutions, who then conducts the investigation? Should papers be withdrawn, in whole or in part, because the authorship of one person taints the rest? Should others rely on papers that are withdrawn? Owing to the prevalence of multiple authorship, the entire process can be long and complex.

The faulty mechanism also created a mass of collateral consequences when Felig's appointment as Chairman of Medicine at P&S unraveled. After Flier uncovered Soman's fraud, Felig's actions were extremely difficult to defend. His delegation to Soman was unwise, his lack of review of the negative report more so. The decision to publish before the matter was resolved was hasty and ill-considered. The decision to conduct the investigation of Soman himself was inexcusable. His decision to keep mum during the interview process at Columbia was less than candid. When the Columbia faculty went through its painful reassessment of the appointment, Felig was forced to resign. Even though Felig's actions did not amount to academic fraud, gross negligence in supervision is not a qualification for a major academic position. Thereafter, with some reluctance, Felig was taken back at Yale, but stripped of his chair. I leave it to others to decide whether the old-school tie was sufficient to warrant that result. In the end, however, the failure to attend properly to academic fraud has vast institutional implications.

B. The Darsee Fraud

As I was preparing for the work on the Academic Fraud Committee back in 1984, I came across another fraud, less dramatic than Soman's but still serious in its implications. This time the fraud involved a young research scientist, Dr. John Darsee, who had come to Harvard after some years at Emory. (15) The same pathologies at work in the Soman case were at work in this fraud as well. Darsee was a star in cardiology, but an incident of fraud would bring about his fall. In May 1981, Darsee was caught faking dates on his reports in order to make it appear that studies which had taken him only a couple of hours to run had been conducted over several weeks. No formal review of this obvious fabrication was convened. Instead the matter was referred to Dr. Eugene Braunwald, the head of cardiology and a distinguished scientist in his own right, who terminated Darsee's fellowship but accepted the plea that this was an isolated incident. There was no independent committee, and no report to the National Institutes of Health (NIH), which might have taken on the job of investigation itself. Instead Braunwald and his laboratory director, Robert Kloner, reported the matter to Dean Daniel Tosteson, who let the matter run its course. Braunwald and Kloner reviewed the incident themselves and found no other evidence of fraud.

It turned out that they were wrong. Darsee was a repeat offender. Years ago, when I reviewed the files, I was struck by what seemed to be literally dozens of articles that had to be withdrawn by Darsee's senior authors. An entire NIH study was ruined, and Harvard returned its grant money, as if that made the difference. For our purposes, the key lesson is clear: informal procedures by insiders will not cut it. Senior faculty and researchers need to follow procedures not only for the protection of their institutions, but also for their own professional protection.

C. Climategate

When I accepted this invitation to speak, Climategate had not yet become a serious issue. There had been, of course, endless debates over whether global warming, or climate change, posed a serious threat to mankind that required prompt human action on a massive scale, or whether it was a natural phenomenon that ordinary mortals could not avoid by reducing the amount of carbon dioxide emitted from tailpipes. (16) In my view, the alarmist thesis is less compelling than many believe. (17) But the charges of fraud for activities at the East Anglia Climatic Research Unit do not depend on holding that view. It is enough that critics have raised serious concerns about the data. (18) Critics have pointed to efforts to remove evidence of both the Little Ice Age and the Medieval Warm Period. They have also presented evidence that some researchers have culled current data to remove colder readings from the overall calculations. The issue of academic fraud is real.

In general, however, the salience of this scientific evidence is of little relevance in dealing with the academic fraud question. The definition of academic fraud in the Chicago procedures covers the standard ways of manipulating data with an eye to concealing the truth. It is, however, no defense to academic fraud charges to demonstrate that the conclusions of the charged individuals were in fact true. That defense rivals the one that Judge Martin Manton of the Second Circuit offered when he was convicted of bribery in 1939. Manton claimed that he took money only from the side that he agreed with in principle. And so it is here: every single substantive contention on climate change propounded by the individuals who doctored the data could be true, and it would not affect the fraud inquiry one whit. The ultimate soundness of the position does not excuse the fraud along the way. Think of it this way: most frauds are committed by individuals who think that they are right and that the pesky data is just being uncooperative. If they are correct in their supposition, then the fraud is less likely to be uncovered, because other researchers will be able to replicate the results for which our fraudster could claim priority. But fraud it is nonetheless, and so it is here.

At this point the question turns to how to cope with a fraud investigation on a massive scale such as Climategate. From the general reports, it looks as though this matter does not have the benefit of strong procedures within the university, let alone across the full range of institutions that are involved. One proposed investigator, Philip Campbell, the editor of Nature, resigned after charges that he had defended the East Anglia scientists prior to the investigation. (19) Why a general editor of even a prestigious journal, with no subject matter expertise in climate science, was appointed is perhaps the greater source of concern. Even more disturbing than high-profile panel members is the risk that the investigation will be run by political rather than scientific actors, unconstrained by any antecedent procedures. How this risk can be controlled is anyone's guess, but my own reaction is that there will be no quiet unless and until we can have confidence in the committee that is chosen. But unlike the usual frauds on technical matters that occurred with Soman and Darsee, here the political swords are drawn, so that it will be difficult indeed to find a squad of examiners who are both knowledgeable and impartial. Owing to the likely extent of the potential fraud, there will be huge difficulties in deciding what data to review, which individuals to have testify, and so on down the line.

The scale of the upcoming investigations is daunting. Our creaky University of Chicago procedures were not, and are not, meant to handle any inquiry of this magnitude. Nor is it clear whether any set of procedures internal to one university can be assembled to resolve the question now, so deep in the thick of the rising scandal. My own recommendation is that it is probably best for the investigators to make public any and all data so that everyone can have a crack at its interpretation, which can then be submitted to some inquiry board for further review. There is enough public information to make this approach seem sensible. Even without formal public hearings, an examination of this sort has been pursued in the past, leading to the discovery of anomalies in the data, particularly in connection with surface temperatures: witness the exhaustive work of the Canadians Steve McIntyre and Ross McKitrick in exposing the mistakes that the Climatic Research Unit made in its measurements of surface and air temperatures across the earth. Nor is it clear what should be done politically until the dust has cleared, which could take years. The risks of fraudulent science have now come home to roost. My own clear preference is to put all major global warming initiatives on hold until some consensus is reached on the interpretation of the data. The issue here is of a magnitude that has never been seen before, and that hopefully will never be seen again.

D. Fraud in Autism and Vaccines

Recently, the issue of academic fraud arose again in a context that has touched the lives of many individuals. There has been a constant outpouring of complaints that vaccines should be regarded as a cause of autism, a rightly dreaded condition that diminishes the lives of too many people. But all the reputable science discounts the connection between vaccines and autism. Yet one shrill voice can often fuel a public panic that drives saner voices to one side. This scenario is not an idle one. In 1998, Andrew Wakefield and his coauthors published a study in The Lancet, a prestigious British journal, that purported to establish a connection between the vaccine for measles, mumps and rubella (MMR) and a deadly duo of dangerous conditions: regressive autism and inflammation of the colon. (20) Twelve years after the study was published, it was retracted by The Lancet. (21) The long delay between publication and retraction suggests once again that the remedial system for dealing with academic fraud is badly broken, for it means that no home institution of the multiple authors of the study saw fit to conduct an examination of its own, notwithstanding the outpouring of objections to the study. The Lancet, too, was asleep at the switch.

The lengthy delay mattered, for it led to serious human consequences that extended far beyond the laboratory. As related by David Gorski, the tale is almost too horrible for words. With the publication of his article, "Wakefield managed to drive MMR vaccination rates in the U.K. below the level of herd immunity, from 93% to 75% (and as low as 50% in some parts of London)." (22) The levels of measles that Great Britain had regarded as contained in 1994 continued to rise until, by 2008, the disease was once again declared "endemic." Some fraction of measles, mumps and rubella patients die, which means that in this instance fraudulent research resulted in death. I see no reason why, as a matter of general tort theory, some victims should not be able to prevail in a private suit, perhaps in the form of a class action, against Wakefield and company on a common law theory of deceit. The falsity of the statements he made was manifest. The decline in vaccination rates is too large to be ignored. The resulting injuries are too specific to deny. There are, to be sure, the usual difficulties with probabilistic causation, given that the background rate of any disease can never be driven to zero. But the large increase in probability justifies the recovery of at least some damages for each loss, even if some discount must be made for the background probability of natural occurrence.
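The probabilistic-causation point can be made concrete with a back-of-the-envelope calculation. The rates below are hypothetical, chosen only to illustrate how a damages award might be discounted for the background probability of natural occurrence; they are not drawn from the actual epidemiology or litigation.

```python
# Hypothetical illustration of probabilistic causation. If the background
# disease rate is b (per 100,000) and the observed rate after the
# fraud-driven vaccination decline is o, then the fraction of observed
# cases attributable to the decline is (o - b) / o, and a damages award
# is discounted accordingly. All numbers are invented for illustration.

def attributable_fraction(background_rate, observed_rate):
    """Share of observed cases attributable to the excess risk."""
    return (observed_rate - background_rate) / observed_rate

def discounted_damages(total_damages, background_rate, observed_rate):
    """Damages discounted for the background probability of natural occurrence."""
    return total_damages * (observed_rate - background_rate) / observed_rate

# Suppose (hypothetically) measles rose from 2 to 10 cases per 100,000
# after vaccination rates fell: 80% of observed cases are attributable,
# so a $100,000 loss yields $80,000 in recoverable damages.
print(attributable_fraction(2, 10))           # 0.8
print(discounted_damages(100_000, 2, 10))     # 80000.0
```

This is the standard attributable-fraction logic; actual litigation would of course confront contested epidemiological inputs rather than clean hypothetical rates.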

In examining the Wakefield fraud, the deception should have been blatant since, as Gorski reports, the science was sloppy beyond belief. No reputable scientist could ever replicate the results, which is a dead tip-off of deep irregularity. In addition, the case reeks of the worst conflict of interest violations imaginable. Evidently, Wakefield had applied for a patent on an alternative vaccine less than a year before he published his article in The Lancet. (23) Furthermore, Wakefield had been paid substantial sums to do this research by plaintiff law firms that wanted to use it as ammunition in lawsuits against various vaccine manufacturers. (24) Needless to say, none of these financial connections were disclosed at the time of publication. (25)

The litigation risk illustrated in this case is not confined to MMR vaccines. There has been an extensive effort to show that Thimerosal, a mercury-based compound used as a preservative in multi-dose vaccine vials, has been the cause of autism. The judicial response has been to reject the connection after an exhaustive review of the evidence, both in civil litigation (26) and under the various schemes of no-fault compensation for child victims of vaccines. (27)


Thus far, the main discussion of academic fraud has concentrated on ensuring that those who are responsible for serious misconduct are subject to necessary sanctions. But these procedures also have a second function, which is to guard against the risk that someone will bring false charges of academic or scientific fraud. More concretely, there is an ever-present risk that resentful or jealous colleagues might lodge these charges. Putting formal procedures in place is likely to make people think twice before bringing groundless charges because they know full well that the report that exonerates the person charged could also contain criticism of the persons who brought those charges.

Any decision to avoid the use of formal procedures already in place is therefore a strong sign of substantive irregularities. One lesson of what can go wrong when established procedures are not followed arose in the midst of the contentious Vioxx tort litigation. The tale begins in 2000 with the publication of an article in the New England Journal of Medicine that dealt with the combined risks of cardiac and intestinal complications of Vioxx. (28) The article issued a generally favorable report on Vioxx before Merck withdrew it from the market because of cardiac complications in September 2004. After the withdrawal, litigation began in earnest against the company, much of which centered on the preparation of the earlier NEJM Vioxx article.

The most dramatic incident arose in December 2005, when the New England Journal of Medicine announced on its website "an expression of concern" on the eve of its deposition in the Vioxx litigation brought by various plaintiffs against Merck. (29) An "expression of concern" is a polite way of accusing a company of academic fraud. Both the independent scientists and the Merck scientists answered the first round of charges, as well as a second round that followed. (30) In this instance, on the advice of its public relations advisor, the NEJM posted the attack online the night before its deposition out of concern that it would appear "lax" in its supervision, because its own editorial review of the original Vioxx article had not found serious negative effects from the use of Vioxx. (31) The gist of the dispute concerned a decision to use different end points in the clinical trials for the cardiac and intestinal parts of the study. The former ended earlier than the latter to allow more time for the assessment of the results, an assessment that is more difficult in cardiac cases. All these decisions were made in advance and approved by outside auditors. (32) However, three adverse events had occurred in the Vioxx group (and one in the control group) after the close of the cardiac arm of the trial, which would have changed the results slightly. But why this counted as fraud rather than good science was never explained. For these purposes, the most telling point is that the editors of the NEJM did not follow their own procedures, which required that such fraud charges be referred back to the authors' home institutions for examination. (33) There was of course no health reason to rush their dubious conclusions into print, because Merck had pulled Vioxx from the market voluntarily some fifteen months earlier. The sole motivation was commercial. And for what end?
A recent comparative examination of the toxicities of Vioxx before it was pulled in 2004 and other NSAIDs (nonsteroidal anti-inflammatory drugs) yields the following results, where celecoxib is Celebrex and rofecoxib is Vioxx. (34)

                 IRR             IRR
              (SERIOUS CV    (SERIOUS CV

Naproxen         0.88            0.91
Ibuprofen        1.18            1.14
Diclofenac       1.27            1.38
Celecoxib        1.03            0.99
Rofecoxib        1.19            1.07

These results show some elevation for Vioxx, but if Vioxx should be taken off the market, then surely ibuprofen should be as well, as its added risks are in the aggregate slightly higher. But the full story must also take into account gastrointestinal risk, and when those numbers are added in, the gaps narrow. Note also that the numbers here take into account only side effects, and do not measure effectiveness, which is high for many people on Vioxx. Regardless of how these numbers are sliced, the case for the removal of Vioxx looks in retrospect to be very thin, and the case for liability, when measured against the alternatives, looks far thinner than the $4.85 billion settlement that Merck reached with the plaintiffs suggests. (35) Vioxx remains off the market today, even though many think that it should be available for use, at least in hospitals, for the control of post-operative bleeding. It does of course remain available for sale in many other countries, where its continued use has not generated any new claims of untoward danger or scientific fraud.
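The arithmetic behind the ibuprofen comparison can be checked directly against the table above. Treating the two IRR columns as commensurable and simply summing them is my own simplification for illustration, not the underlying study's method.

```python
# Quick check of the claim that ibuprofen's added risks, summed across the
# two IRR columns in the table above, slightly exceed rofecoxib's (Vioxx's).
# Summing the columns is a crude simplification made only for illustration.

irr = {
    "naproxen":   (0.88, 0.91),
    "ibuprofen":  (1.18, 1.14),
    "diclofenac": (1.27, 1.38),
    "celecoxib":  (1.03, 0.99),
    "rofecoxib":  (1.19, 1.07),
}

aggregate = {drug: round(sum(values), 2) for drug, values in irr.items()}

print(aggregate["ibuprofen"])   # 2.32
print(aggregate["rofecoxib"])   # 2.26
print(aggregate["ibuprofen"] > aggregate["rofecoxib"])  # True
```

On this crude measure, ibuprofen (2.32) does indeed edge out rofecoxib (2.26), which is the comparison the text relies on.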


What, then, can be learned from these accounts? The first and most important lesson is that no group is immune from the danger of succumbing to academic fraud. The perpetrators of the frauds recounted here include bench scientists desperate to advance their careers, physical scientists with strong ideological commitments to global warming, and a physician who hoped to reap rewards from patent applications and from the plaintiffs' lawyers who secretly funded his research. The most conspicuous omission from this list of academic fraudsters is any corporation. The explanation, I think, does not lie in the intrinsic high-mindedness of corporations and their leaders. Rather, it rests on the fact that they are quite exposed to dramatic reputational losses whenever their products malfunction. One need only look at the immense efforts that Toyota has made in recent months to try to calm public concern about the quality of its products, even in a case that does not contain a whisper of fraud, and in which the liability concerns are small relative to the other costs of handling a recall and rebuilding a tattered image. Fraud by any such organization carries with it collateral consequences that are at least as great as, if not greater than, the liability and regulatory sanctions to which it could be exposed.

What, then, should we conclude from this survey of academic fraud and the postscript on conflicts of interest? The major point is that, like all other systems of social control, our institutions must be designed to deal with two kinds of error. It cannot be assumed at the outset that commercial firms are either bad or good; on these matters, both scenarios are possible. Likewise, it should not be assumed that academics are free from suspicion because they lack direct and immediate financial motives. All sorts of other reasons, from personal advancement to political ideology, can fuel various forms of academic and scientific fraud. As the stakes grow larger, the amount of administrative care needed to respond to them must increase. All the traditional values of neutrality, impersonality, and the rule of law carry over to this area more than one might suppose. In the end, the only blanket judgments that we can make concern the quality of our institutional arrangements. Specific charges against specific groups have to be proven on matters of academic integrity just as they do everywhere else. Those who think that they can short-circuit that hard work are only deceiving themselves and opening the way to greater amounts of fraud and misconduct than any responsible society should tolerate.

(1.) For my account of the process at the time, see Richard A. Epstein, On Drafting Rules and Procedures for Academic Fraud, 24 MINERVA 344 (1986).

(2.) UNIV. OF CHI., POLICY ON ACADEMIC FRAUD [section] 1 (1998). I use the definitions from the current policies, which track those of the original 1984 report of the committee that I chaired.

(3.) (1889) 14 App. Cas. 337, 375 (H.L.).

(4.) 425 U.S. 185, 199 (1976).

(5.) See, e.g., Ottmann v. Hanger Orthopedic Group, Inc., 353 F.3d 338 (4th Cir. 2003).

(6.) See New York Times v. Sullivan, 376 U.S. 274 (1964).

(7.) UNIV. OF CHI., supra note 2, [section] 2.

(8.) Id. [section] 4(b)(2).

(9.) Morton Hunt, A Fraud That Shook The World of Science, N.Y. TIMES MAG., Nov. 1, 1981, available at a-fraud-that-shook-the-world-of-science.html?&pagewanted=print.

(10.) UNIV. OF CHI., supra note 2, [section] 3(A).

(11.) Hunt, supra note 9.

(12.) Id.

(13.) Id.

(14.) UNIV. OF CHI., supra note 2, [section] 5(A).

(15.) See Claudia Wallis et al., Medicine: Fraud in a Harvard Lab, TIME, Feb. 28, 1983, available at,9171,955142,00.html.

(16.) See Massachusetts v. EPA, 549 U.S. 497 (2007).

(17.) See Richard A. Epstein, Carbon Dioxide: Our Newest Pollutant, 43 SUFFOLK U. L. REV. (forthcoming 2010).

(18.) See, e.g., Marc Sheppard, IPCC: International Pack of Climate Crooks, AMERICAN THINKER, Feb. 4, 2010,; Marc Sheppard, UN Climate Reports: They Lie, AMERICAN THINKER, Oct. 5, 2009, The much-maligned IPCC is the Intergovernmental Panel on Climate Change, an organization that operates under the auspices of the UN.

(19.) Climate Change Investigator Resigns over Interview Defending Researchers, TELEGRAPH, Feb. 12, 2010, available at earth/environment/ climatechange/7219070/Climate-change-investigator-resigns-over-interview -defedingresearchers.html.

(20.) A.J. Wakefield et al., Ileal-Lymphoid-Nodular Hyperplasia, Non-Specific Colitis, and Pervasive Developmental Disorder in Children, 351 LANCET 637 (1998).

(21.) Editors, Retraction-Ileal-Lymphoid-Nodular Hyperplasia, Non-Specific Colitis, and Pervasive Developmental Disorder in Children, 375 LANCET 455 (2010).

(22.) Posting of David Gorski to Science-Based Medicine, (Feb. 8, 2009).

(23.) For the details, see Brian Deer, The Wakefield Factor, (last visited Mar. 24, 2010).

(24.) Brian Deer, MMR Doctor Given Legal Aid Thousands, TIMESONLINE, Dec. 31, 2006,

(25.) Id.

(26.) Doe v. Ortho-Clinical Diagnostics, Inc., 440 F. Supp. 2d 465 (M.D.N.C. 2006).

(27.) See Pafford v. Sec'y of Health & Human Servs., 451 F.3d 1352 (Fed. Cir. 2006) (en banc) (rejecting causal connection to systematic juvenile rheumatoid arthritis); see also Ctrs. for Disease Control & Prevention, Thimerosal in Seasonal Influenza Vaccine, (last visited Mar. 24, 2010) ("There is no convincing evidence of harm caused by the low doses of thimerosal in vaccines, except for minor reactions like redness and swelling at the injection site.").

(28.) Claire Bombardier et al., Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis, 343 NEW ENG. J. MED. 1520 (2000).

(29.) I have written in two other places at length on what I regard as a serious breach of professional ethics in the Vioxx litigation. See RICHARD EPSTEIN, OVERDOSE: HOW EXCESSIVE REGULATION STIFLES PHARMACEUTICAL INNOVATION 209-18 (2006); Richard Epstein, Conflicts of Interest in Health Care: Who Guards the Guardians?, 50 PERSP. IN BIOLOGY & MED. 72, 84-88 (2007).

(30.) See Gregory Curfman et al., Expression of Concern: Bombardier et al., "Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis," 353 NEW ENG. J. MED. 2813 (2005); Gregory Curfman et al., Expression of Concern Reaffirmed, 354 NEW ENG. J. MED. 1193 (2006); Claire Bombardier et al., Response to Expression of Concern Regarding VIGOR Study, 354 NEW ENG. J. MED. 1196 (2006); Alise Reicin & Deborah Shapiro, Response to Expression of Concern Regarding VIGOR Study, 354 NEW ENG. J. MED. 1196 (2006).

(31.) See David Armstrong, Bitter Pill: How the New England Journal Missed the Warning Signs on Vioxx, WALL ST. J., May 15, 2006, at A1. It is worth noting that Armstrong's title misses the real story as well.

(32.) For Merck's thorough account of these and other events, see Report from the Hon. John S. Martin, Jr. to the Special Comm. of the Bd. of Dirs. of Merck & Co. 71-80 (2006), available at

(33.) See Int'l Comm. of Med. Journal Editors, Uniform Requirements for Manuscripts Submitted to Biomedical Journals, publishing_2corrections.html (last visited Mar. 24, 2010):

The second type of difficulty is scientific fraud. If substantial doubts arise about the honesty or integrity of work, either submitted or published, it is the editor's responsibility to ensure that the question is appropriately pursued, usually by the authors' sponsoring institution. Ordinarily it is not the responsibility of the editor to conduct a full investigation or to make a determination; that responsibility lies with the institution where the work was done or with the funding agency.

(34.) See Sue Hughes, Naproxen Best NSAID for Heart-Disease Patients, HEARTWIRE, May 28, 2009,

(35.) Lewis Krauskopf, Merck Agrees to Pay $4.85 Billion in Vioxx Settlement, REUTERS, Nov. 9, 2007, available at idUSL0929726620071110.

Richard Epstein, NYU; James Parker Hall Distinguished Service Professor of Law, the University of Chicago; the Peter and Kirsten Bedford Senior Fellow, The Hoover Institution. This article is a much revised and expanded version of the keynote address that I gave to the Conference on Academic Integrity sponsored by the Stanford Law & Policy Review on February 5, 2010.
COPYRIGHT 2010 Stanford Law School
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2010 Gale, Cengage Learning. All rights reserved.

Author:Epstein, Richard A.
Publication:Stanford Law & Policy Review
Date:Jan 1, 2010