Why is there no de facto gold standard for genetic-testing controls? Where we are now, and where we can go.
The European approach
In contrast, the European Union (EU) has taken a more systematic and progressive approach, starting with the establishment of viable standards that then provide a clear and accepted roadmap for the development of new assays. The EU also recognizes that it is not always possible, or wise, to rely on a single source for the development of these clinical reference controls. Through EuroGentest, a continent-spanning organization funded by EU member states, experts determine which areas are in most urgent need of standards development and which governmental agency is best suited to handle each of them. The stated goal is to provide "the harmonization of standards and practice in all these areas throughout the EU and beyond."
The current situation in the United States
Ideally, quality reference controls would provide the gold standard against which all clinical assays could be judged, and would serve as the starting point for the enhancement and development of new genetic tests. Currently, assay developers in the United States are forced to approach the situation backwards: They develop the test and then create the standard.
In practice, this means that what can be known about the assay is limited to what can be gleaned from the relatively small set of samples used during development. Even more detrimental, once the assay becomes available for more widespread clinical use, there is no broadly accepted standard against which laboratories can measure accuracy and reproducibility when testing actual patient samples.
Aside from the minimal standards established under the Clinical Laboratory Improvement Amendments (CLIA) for validating research-grade materials (intended only for academic-research purposes), the industry is essentially left to police itself--which leaves laboratories facing the question: If an assay is going to be used for clinical purposes, and the government is not taking the lead in providing clinical reference controls to match it against, what are the alternatives?
Limitations for laboratories
Laboratories bear the burden of validating reference-control material or risk being cited for a violation during a CLIA inspection. Given the choice, most labs obviously would choose to practice genetic testing using the highest-quality standards. But they face a roadblock: there is no readily available source for the appropriate (which is to say, clinical-quality) DNA controls.
In fact, while one arm of the government regulatory apparatus--the Centers for Disease Control and Prevention (CDC)--is actively advancing the use of research-grade controls, another--the Food and Drug Administration (FDA)--has not enforced regulatory control over the marketing of reference controls that fail to meet even minimal FDA guidelines. This leaves labs with a dilemma: either use controls obtained without the proper ethical consent of the donor, or use controls known to be created under lesser quality standards.
In the first instance, laboratories routinely will retain a blood sample that is left over from patient tests, purify it, and use it as a control for as long as the sample lasts. Since only a small amount of blood is typically needed for any individual assay, one tube can supply enough DNA for anywhere from 2,000 to 10,000 additional tests.
In addition to the fact that this requires the constant recreation of reference "standards" from different patients, this procedure is ethically questionable. There is no informed consent from the patients whose blood samples are used; they are almost certainly unaware that such a use of their bodily fluids is even possible. Nothing in current guidelines specifically addresses or allows this process, but no regulatory agency has stepped in to stop it.
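The "2,000 to 10,000 tests" figure above can be sanity-checked with a rough back-of-the-envelope calculation. The tube volume, DNA yield, and per-assay DNA input used below are illustrative assumptions chosen for this sketch, not values from the article:

```python
# Rough sanity check of the "2,000 to 10,000 tests per tube" figure.
# All numbers below are illustrative assumptions, not values from the article.

TUBE_VOLUME_ML = 10.0        # assumed volume of a standard blood-draw tube
DNA_YIELD_UG_PER_ML = 30.0   # assumed genomic-DNA yield from whole blood
NG_PER_ASSAY_LOW = 25.0      # assumed DNA input per PCR-based assay (low end)
NG_PER_ASSAY_HIGH = 100.0    # assumed DNA input per assay (high end)

# Total DNA recovered from one tube, converted from micrograms to nanograms.
total_dna_ng = TUBE_VOLUME_ML * DNA_YIELD_UG_PER_ML * 1000

tests_high = total_dna_ng / NG_PER_ASSAY_LOW   # small input -> more tests
tests_low = total_dna_ng / NG_PER_ASSAY_HIGH   # large input -> fewer tests

print(f"~{tests_low:,.0f} to {tests_high:,.0f} tests per tube")
# prints: ~3,000 to 12,000 tests per tube
```

Under these assumptions a single tube yields on the order of several thousand assays' worth of DNA, which is consistent with the range cited above.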
Many laboratories, then, turn to the second choice: using a DNA source that is approved for research, not clinical, use. As mentioned above, such controls are readily available and are advertised on government-agency websites as a viable source of such materials.
Ironically, although this source has met a valid research need, it was never intended for widespread clinical application. Unfortunately, the result is control material "manufactured" in a way that does not meet the minimum requirements applied to all the other testing components that go into a typical PCR-based genetic test. What users end up with is the complete antithesis of a true gold standard.
Where do we go from here?
Given the current regulatory landscape and financial implications, there is no incentive for the laboratory industry to switch from research-grade to clinical-grade material on its own. CLIA allows the use of research-grade reference controls, leaving unresolved the question of when only FDA-cleared products should be used. It is also more cost-effective for labs to keep doing what they have been doing--especially since, unlike the assays themselves, the control material is not currently reimbursed by insurance providers.
This unfortunate situation exists despite the fact that everyone agrees on the need for up-to-date standards that explicitly address clinical reference controls. Such standards would provide a range of benefits:
* For assay developers, government regulation can be a driver of innovation--within a defined regulatory framework, companies know what is permitted and endorsed, so they have a clear playing field for development.
* For laboratories, established standards offer improved consistency, reliability and quality--not to mention fewer false positives and negatives.
* For the general public, concrete guidelines help establish a level of trust and confidence that is necessary to overcome any initial misgivings about genetic testing.
Those in the genetic-testing industry cannot afford to stumble out of the gate with unreliable, inconsistent tests. Based on the European example, it is possible for a governmental agency (even one operating across international borders) to develop sensible standards that provide a clear roadmap for the creation of new assays, and instill a high level of confidence that clinical applications are being judged against a true gold standard.
It is time for government to play a similar leadership role in the United States, so that the promise of genetic testing can more effectively be fulfilled.
Michael P. Murphy, MSc, is president and CEO of ParagonDx (www.paragondx.com), located in Research Triangle Park, NC. His company was the first to bring FDA-cleared human genomic quality controls to market, and it offers molecular diagnostic reagents, including analyte-specific reagents. Murphy was previously the founder and CEO of Gentris Corp., his third start-up, and in 1997 he formed Intek Labs, the first international pharmacogenomics company. He was trained in pharmacology and molecular biology at the University of California, San Diego Cancer Center and The Salk Institute. With more than 24 years of scientific and business experience in this field, Murphy lectures and publishes frequently on pharmacogenomics, with invited reviews in Pharmacogenomics, Drug Discovery World, Genetic Engineering News, IVD Technology, and Expert Review of Molecular Diagnostics.
By Michael Murphy, MSc
Publication: Medical Laboratory Observer
Article Type: Cover story
Date: May 1, 2008