Patently conflicted: medical schools have gotten into the health technology business in a big way, but in the wake of litigation, institutions may find themselves accused of conflicts of interest.
The field of healthcare occupies a singular position with regard to the Bayh-Dole Act. Medicine may be the one field in which the principles of Bayh-Dole have hit a wall of controversy--a controversy that has become more pronounced as a number of voluntary standards have emerged that seem to cut against the grain of the federal law.
It all has to do with conflicts of interest. Recent trends may indicate an inherent contradiction in the Bayh-Dole Act's policies, at least vis-a-vis medical care. Since a number of tragic deaths in the late 1990s, professional medical organizations have been devising conflict-of-interest rules for healthcare-related research, and in March 2003, the federal government followed suit when the U.S. Department of Health and Human Services (www.dhhs.gov) published voluntary guidance on the subject. Last fall, the trend came full circle as the Association of American Medical Colleges (www.aamc.org) published its standards for institutional conflicts of interest in biomedical research.
Problem is, there are a lot of dollars attached to the patent interests in that research. According to the most recent license survey by the Association of University Technology Managers (www.autm.net), in the past 10 years, federal government research expenditures at U.S. hospitals and medical research institutions have nearly tripled, rising from just over half a billion dollars in 1991, to $1.47 billion in 2001. Patents filed by these institutions rose from 416 in 1991, to 1,212 in 2001, according to the survey.
American medical schools, some say, seem to have an inherent financial conflict whenever they host clinical trials on technology in which they hold a patent interest.
The Evolution of a New Standard?
Let's step back a bit, for a refresher course on the Bayh-Dole Act itself.
The 1980 Act gives research institutions the right to seek a patent interest in discoveries made with federal funds. Institutions generally proceed to license the technology at a fairly early stage, thereby garnering more investment dollars to conduct clinical trials (eventually, on human subjects), as a drug, device, or discovery gradually makes its way from the level of basic science all the way to the marketplace.
And the law places other requirements on those institutions which choose to patent a discovery funded with federal money, reminds William Tew, assistant provost and assistant dean at Johns Hopkins School of Medicine (MD). "It requires that we seek licensees, that we show preference for small companies; and that we share the income with the inventor, for the inventor's personal use," he says. "Like some other institutions, Johns Hopkins shares 35 percent of the net value with the inventor," he adds.
Still, it is because of this patent interest that some analysts theorize research institutions and their employees have a financial conflict of interest, which gives rise to two concerns in research: 1) the objectivity of the research itself, and 2) especially in the healthcare field, the protection of the human research subjects. Medical advances usually must undergo clinical testing on significant populations of human subjects before the FDA allows them to be marketed, but these clinical trials may be conducted at the patent-owning institution, at another institution, or independently by a "contract research organization" (CRO).
Since the tragic death of Jesse Gelsinger, a healthy teenager who participated in a gene therapy trial and died in 1999, it is the issue of human participant protection which has gotten the most attention. The financial interest of the individual inventors, as pointed out by Tew, has received regulatory attention for some time. By federal law, those conducting a clinical trial on human participants must disclose financial interests above a certain threshold amount ($10,000 or 5 percent equity, according to the U.S. Public Health Service regulations).
But it is not merely the share of the license interest that is relevant here, onlookers insist. Doctors running a clinical trial may also receive funding from the trial's sponsor (the pharmaceutical company, which may have licensed the drug from the university). And in some circumstances, the doctor may receive speaking honoraria or other compensation from the trial sponsor. Yet, in all of this, the question of the independence of the research institution itself often went unaddressed. Even after the death of Jesse Gelsinger (as the American Society of Gene Therapists, the American Society of Clinical Oncology, and the Association of American Medical Colleges devised frameworks for regulating individual conflicts of interest in clinical research), nobody was really asking whether the institution itself could be objective.
But research institution administrators and academics maintain they are indeed sensitive to institutional culpability. Some of the voluntary frameworks of the professional organizations even surpass the federal rules, they insist. "We don't allow clinical investigators to accept any money at all from sponsors," says Lisa Bero, professor of Clinical Pharmacy and Health Policy at the University of California-San Francisco. UCSF holds the patents for the research that led to the hepatitis B vaccine, and (along with California's Stanford University), the Boyer-Cohen cloning technique, making it something of an 800-pound gorilla in the field, according to UCSF Executive Vice Chancellor Regis Kelly.
Clearly, there have been other decisions made in favor of removing institutional conflicts of interest during clinical trials. "I have direct experience whereby in the presence of a financial conflict, an institution has said, 'This is not the place to conduct a Phase III trial,'" reports Susan Ehringhaus, vice chancellor and general counsel for the University of North Carolina at Chapel Hill. Ehringhaus is also incoming associate general counsel in Regulatory Affairs for the AAMC. "The AAMC taskforce on conflicts felt that it was extremely important for academic medical centers, teaching hospitals, and schools to take a leadership role in defining the standard for conflicts of interest," she says. "They proceeded in an attitude of some skepticism regarding the ability of science to police itself. The standards which the AAMC urge are intended to supplement the government standard," she asserts.
And indeed, in October 2002, the AAMC did supplement its policy on individual conflicts with one on institutional conflicts. It suggested, among other things, that institutions ought to examine their involvement when they hold an equity interest in a nonpublicly traded sponsor, or an interest worth $100,000 or more in a publicly traded one. In March 2003, HHS issued its own draft guidance, which offers an analysis regarding institutional conflicts. "The standard the AAMC offers is consistent with the government standard," Ehringhaus says.
What the Rules Provide
High-level talk about reforming the clinical trial process is nothing new. For all its success, the American drug and technology discovery process has always been at the center of a firestorm of controversy. When, for instance, it has not responded to the need for a streamlined process to make treatments for previously untreatable fatal conditions available to the public, the government has tried to come up with new approaches. This time, however, there is even more momentum behind government efforts--and that's because there is concern both about the reliability of data and the well-being of enrollees.
The series of policies, rules, and guidelines issued since 2000 vary in their provisions, details, and, importantly, in their exceptions, but they all have one thing in common: all four of those issued as of this writing call for greater responsibility and heightened scrutiny in reviewing potential conflicts of interest for the research institution, the clinical investigators, and any other collaborators, including those from the healthcare and pharmaceutical industries.
Financial conflicts in government-sponsored research have actually gotten some level of governmental attention for years: the U.S. Public Health Service conducts an annual survey of potential conflicts and acts to mitigate them. (Those institutions found guilty of conflicts of interest face possible government sanctions, including withdrawal of PHS funding.) Confounding matters, each government agency that provides funding may have its own regulation. (The PHS regulations, for example, view investigator equity interests of $10,000 or more as potentially problematic.) For those clinical trials that are not government funded, the critical level of equity interest (for trials which must be submitted to the FDA before the drug or device under investigation may be marketed to the public) is $50,000.
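The dollar thresholds scattered through these overlapping regimes can be summarized in a brief sketch. This is purely illustrative: the function name and structure are hypothetical, the real rules turn on far more than a single dollar figure, and nothing here should be read as the text of any regulation.

```python
# Illustrative sketch of the disclosure thresholds described above.
# Hypothetical simplification -- actual PHS and FDA rules involve many
# more factors than a single dollar amount or equity percentage.

PHS_INTEREST_DOLLARS = 10_000   # PHS: interests of $10,000 or more
PHS_EQUITY_PERCENT = 5.0        # ...or a 5 percent equity stake
FDA_INTEREST_DOLLARS = 50_000   # non-government-funded, FDA-submitted trials


def must_disclose(interest_dollars: float,
                  equity_percent: float,
                  government_funded: bool) -> bool:
    """Return True if an investigator's financial interest crosses the
    threshold that (per the article) triggers disclosure concerns."""
    if government_funded:
        return (interest_dollars >= PHS_INTEREST_DOLLARS
                or equity_percent >= PHS_EQUITY_PERCENT)
    return interest_dollars >= FDA_INTEREST_DOLLARS
```

Under this toy model, a $5,000 interest with a 5 percent equity stake in a government-funded trial would trigger disclosure, while a $40,000 interest in a privately funded trial would not.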
And while the FDA does not directly prohibit investigators from having financial interests in the outcome of the research, the agency will take into account any financial conflicts in analyzing the reliability of the data such a study produces. A clinical trial prepared by investigators with hefty bonuses at stake will probably receive more intense government scrutiny as the numbers come in.
Interestingly, the government regulations at this time do not consider the question of institutional conflicts, except to explicitly provide in the PHS regulations that institutions have an obligation to manage conflicts on the part of their employees. The trend of institutions serving as both investigator and patent holder has continued, despite the Gelsinger and similar tragedies.
In April of 2000, for example, in the wake of such well-publicized events, the American Society of Gene Therapy issued a policy on financial conflicts that specifically applied to financial interests in companies sponsoring trials, yet no mention was made of nonprofit academic research institutions or the like. While noting that the "society is not a regulatory body and it should beware from becoming one [sic]," the Ethics Committee of the Society noted with approval the standards of the PHS, as well as the National Institutes of Health (www.nih.gov) and the National Science Foundation (www.nsf.gov), and concluded that "[A]ll investigators and team members directly responsible for patient selection, the informed consent process and/or clinical management in a trial must not have equity, stock options or comparable arrangements in companies sponsoring the trial." Essentially, the Society announced a policy of zero tolerance. In its view, even de minimis conflicts were not acceptable. Despite the hortatory nature of the policy, the Society's opinion could serve as evidence of an industry standard, in a court of law. And there was more to come.
In December 2001, the AAMC released a long-awaited policy on individual conflicts, which provided that individuals with conflicts surpassing the PHS-designated guidelines should be presumed to be conflicted out of the research, even if the study was not government-funded. But the policy, which specifically cited the Bayh-Dole Act, also suggested that individuals ought to be able to present to a conflict-of-interest (COI) committee evidence that, under compelling circumstances, the conflict had been adequately managed. (It then barred the payment of outcome-contingent funds to the investigators.)
The AAMC policy was followed in April of 2003 by action from the American Society of Clinical Oncology (www.asco.org), which updated its 1996 policy to require any investigator to disclose receipt of more than $100 from a clinical trial sponsor, and prohibited lead investigators from receiving any income from a sponsor. Though the ASCO policy provides a level of exception for uniquely talented individuals, the penalty for investigators who do not disclose is to be barred from publication in the ASCO journal or presentation at the society's meetings. (And this type of penalty could indeed become part of federal policy down the road.)
In October 2002, the AAMC produced its second report on the subject, which addressed for the first time the question of whether it is possible for an American medical research institution to be presumed guilty of financial conflict. The association concluded that under certain circumstances, an institution should also be conflicted out of the research. Those circumstances included: 1) when the institution receives a royalty interest of more than $100,000; 2) when institutional leadership is personally conflicted; or 3) when the institution has received significant gifts from trial sponsors. The association suggested that an institutional COI committee should hear evidence that the presumption should not apply, and encouraged the separation of clinical administrative functions from technology transfer offices, as well as use of multicenter trials, independent oversight, recusal, and other measures. The association also suggested that hospitals affiliated with a given medical school ought to be regarded as distinct entities that do not automatically share the school's conflict.
Finally (lest anyone labor under the impression that the other associations' policies were paper tigers), HHS followed up with its own restatement of the matter with a draft guidance in 2003. The guidance, which received public comment until May 2003, encouraged the formation of institutional conflict committees, independent oversight, and independent management of a conflicted institution's financial interests in the research. More detail, however, may be forthcoming.
Is Conflict Present? Is It Inevitable?
That brings us to these two questions: 1) Does your medical school, academic medical center, or research institution have a financial conflict of interest looming in the background of its clinical trials activities? And, 2) Have we come full circle from the Bayh-Dole vision of 1980?
One interesting way of looking at these questions, some experts say, is to consider the notion of conflicts in science and research.
"I think the AAMC policy is an overreaction," says Dennis Liotta, Samuel Candler-Dobbs professor of Chemistry at Emory University (GA). "In the area I work in, which is the very earliest part of drug discovery, the chances of me or the university being able to do anything inappropriate to manipulate the data are practically nil. I make compounds: either the compound is what I say it is, or it is not," he maintains.
Yet, as far back as 2000, the ASGT policy noted, "An extreme case would be that of a clinical reagent, be it a small molecule, a protein or a gene transfer vector, that is manufactured by a company wholly or partly owned by the Principal Investigator conducting the trial. ... The guiding principle is clear: Clinical investigators must be able to design and carry out clinical research studies in an objective and unbiased manner, free from conflicts caused by significant financial involvement with the commercial sponsors of the study."
Yet there may be instances in which an institution is reluctant to disqualify an investigator, even if a conflict seems to exist. "If you have a very early-stage invention, there may still be a little tweaking to be done on a compound or on a device, and the inventor often has the most subtle knowledge," says Julie Gottlieb, associate dean for Policy Coordination at Johns Hopkins School of Medicine.
So what is an institution to do, if it has a significant patent interest in a particular research product? Given that the whole problem with conflicts seems to arise only when the product enters clinical trials, it would appear that the solution is merely to send the product elsewhere for the trials. Not so fast.
"One advantage of keeping the research in-house is that sometimes an institution might want to do research it can't get a private group to take up," says Bero at UCSF, noting that institutions are unlikely to refer products to one another for clinical trials, given the intense competition between universities--a climate of competition that the Bayh-Dole Act helped to create. "I don't think institutions would go to a contract research organization (CRO) either," she continues, "because universities have lost a lot of business to CROs already. CROs have fewer rules and regulations, whereas universities have a good track record," she adds.
But Bero (chair of the conflicts committee at UCSF, and researching the effect of financial interests on the objectivity of research) maintains that even small amounts of money have been shown to affect results. "There are many things you can do to fudge research," she says. "You can ask a question to which you already know the answer; you can make decisions regarding the design of a study; and then there's the question of whether the study is actually published."
Still, the AAMC rules are a step in the right direction, she concedes. "They're very similar to the federal regulations, but the subject of institutional conflicts is a whole new area. The AAMC was ahead of the pack, and a lot of institutions are just now developing their policies."
John Otrompke is a Chicago-based law grad.
Research & Business | Feb. 1, 2004