
Meta-analysis: apples and oranges, or fruitless?

Of increasing interest to researchers, clinicians, and health policy planners is the use of a controversial statistical method known as meta-analysis. The term, coined by Gene V. Glass, a psychologist now at Arizona State University, describes a process of synthesizing results from separate but similar experiments. [1] In its earliest applications, meta-analysis was used primarily by educators and social scientists to analyze studies with continuous outcomes or to extract simple recommendations from a heterogeneous body of literature. Early meta-analyses reviewed such topics as the use of intelligence quotients [2] and social welfare programs. [3] Its proponents in medicine claim meta-analysis can be used to determine effectiveness in a timely manner without additional trials when sufficient, yet diverse, research exists. [4]

While the method has its critics, its application to medical literature is receiving widespread recognition in the lay and scientific press. Recent reviews in the New York Times [5] and in Science [1] debate the strengths and weaknesses of meta-analytic techniques, citing examples of harmful treatments that might have been withdrawn years sooner had existing data been subjected to the rigors of meta-analytic scrutiny. Further evidence of heightened interest in meta-analytic procedures is reflected in the proliferation of articles in the MEDLINE database containing the key word "meta-analysis," from zero in 1976 to 28 in 1986 [6] to 272 in 1990. [7] Editorial comments [8] accompanying the publication of a particularly elegant meta-analysis [9] specifically welcome high-quality meta-analyses to the prestigious Annals of Internal Medicine as a "new class" of articles that, at their best, "knit clinical insight with quantitative results in a way that enhances both."

As a genre, meta-analysis appears to be here to stay. Because of its potentially strategic influence as a tool for data synthesis in such highly active fields as technology assessment and outcomes research, meta-analysis may affect health care policy in a profound manner. It is therefore important that health care planners and clinicians become proficient appraisers of the meta-analyses they will encounter with increasing frequency in the medical literature.

Purpose

Designers of medical meta-analytic methodology usually restrict use of the term to methods directly used for contrasting and combining results of different studies. [10-12] The primary purpose for combining or "pooling" results is to achieve the greater statistical power inherent in a larger study. [13]

It has been demonstrated that small, single-center randomized clinical trials (RCTs) are particularly susceptible to false negative results. [14] Furthermore, most current clinical research in such areas as cancer and heart disease attempts to evaluate treatments that offer relatively small increments of improvement. The smaller the increment of change, the larger the placebo-controlled RCT must be in order to yield statistically significant results. Richard Peto, director of the Cancer Studies Unit at Oxford University, cites the example of a new treatment for heart disease that reduces the occurrence of heart attack by 20 percent. To detect this 20 percent reduction in mortality among the half million people in the United States who die of heart attacks each year, however, a placebo-controlled RCT would require an enrollment of more than 200,000 subjects. [1] Meta-analysis, by combining many small studies, can reveal the effectiveness of a treatment previously obscured by numerous falsely negative trials. Likewise, a falsely positive result (one that arises through random sampling error) is more likely to occur when a large number of small trials test an ineffective treatment. Such errors, if they are truly random, tend to cancel out when data from a large group of small RCTs are correctly combined in a meta-analysis. [13]
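The power argument above can be illustrated with a small, self-contained sketch. The trial counts below are invented purely for illustration (they are not drawn from the article or from any real study), and the normal-approximation z test is a deliberately simple stand-in for the more careful methods real meta-analyses use:

```python
import math

def two_proportion_z(events_t, n_t, events_c, n_c):
    """Normal-approximation z statistic for a difference in event rates
    between a treatment arm and a control arm."""
    p_t, p_c = events_t / n_t, events_c / n_c
    p_pool = (events_t + events_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_c - p_t) / se

# Ten identical hypothetical small trials, 100 patients per arm:
# 12 events on treatment vs. 18 on control (a 33 percent relative reduction).
trials = [(12, 100, 18, 100)] * 10

# Any single trial falls short of the conventional z > 1.96 threshold...
z_single = two_proportion_z(*trials[0])

# ...but pooling the counts across all ten trials crosses it easily.
# (Summing raw counts is defensible here only because the trials are
# identical; real meta-analyses weight per-trial effects instead.)
totals = [sum(col) for col in zip(*trials)]
z_pooled = two_proportion_z(*totals)

print(round(z_single, 2), round(z_pooled, 2))  # -> 1.19 3.76
```

Each trial taken alone is a "false negative" in exactly Freiman's sense: the effect is real by construction, but the sample is too small to show it.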

Meta-analysis is most useful, then, when used to overcome "noise" caused by small sample size and random error in a large group of studies. Within this restricted definition, certain components have come to be expected of formal medical meta-analytic strategies (see figure 1, right). [11]

Controversies and Solutions

Is meta-analysis a "black box"? An ill-advised attempt to compare apples with oranges? A camouflage technique, disguising "garbage in, garbage out" with fancy statistics? [15] Four major areas of potential weakness in the meta-analytic process can be summarized as follows:

* Oversimplifying results--combining the results of studies for the purpose of discovering an overall effect may minimize the importance of interactive or mediating effects.

* Ignoring the possible impact of study quality--including results of "poorly" designed studies with those of good ones is thought by some to lead inevitably to uninterpretable results.

* Attempting to combine studies when they are few in number and when their results are heterogeneous--many question the use of meta-analysis to resolve "conflicting" trial results.

* Applying statistical techniques of unproven validity for the pooling of trial results--some investigators believe that the application of quantitative methods to meta-analyses has proceeded too rapidly, precluding adequate evaluation of the underlying statistical theory. [11,16]

Carefully designed meta-analyses overcome most of these problems. Visual components often provide a unique sense of the complexity of the intervention being considered, dispelling any illusions of oversimplification. Meta-analyses typically employ graphs that illustrate sample size, confidence intervals, and temporal sequence of the studies. Such graphs provide a sense of what is being combined, while highly detailed tables illustrate how evaluation and inclusion criteria were applied to the literature. [8] The arrangement of similar subgroups from various studies may highlight an effect that is not observable in the individual studies.

However, the dangers of attempts to combine results of a few heterogeneous studies continue to elicit concern. Highly controversial issues cannot be resolved through quantitative methods of review. Some users enlist meta-analysis primarily when there is reason to suspect that an effect may be masked by noise or bias or when there is a need to evaluate the consistency of a body of evidence. David Eddy, Professor of Health Policy and Management at Duke University, suggests that meta-analytic methods are best reserved for that "middle ground" of research in which adequate numbers of relatively high-quality studies are available, but their combined results are not intuitively obvious. Statistical methods can then be used to make weighting adjustments for variations in sample size (Mantel-Haenszel-Peto) or to explore the implication of both multiple biases across studies and varying levels of uncertainty about these biases (Confidence Profile Method). [17]
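The sample-size weighting mentioned above can be sketched with the classic Mantel-Haenszel pooled odds ratio. This is a minimal illustration, not a full implementation of the Mantel-Haenszel-Peto machinery, and the 2x2 tables are hypothetical:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 tables.
    Each table is (a, b, c, d): treated events, treated non-events,
    control events, control non-events."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical trials of different sizes; each alone suggests
# benefit (odds ratio < 1) without being decisive.
big_trial = (10, 90, 20, 80)   # 200 patients, per-trial OR = 0.44
small_trial = (5, 45, 9, 41)   # 100 patients, per-trial OR = 0.51

pooled = mantel_haenszel_or([big_trial, small_trial])
print(round(pooled, 2))  # -> 0.46
```

Note that the pooled estimate of 0.46 falls between the two per-trial odds ratios but lies closer to that of the larger trial, which carries more weight in the sums; this is precisely the sample-size adjustment the method provides.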

Conclusion

How can one recognize a high-quality meta-analysis? Interested readers will wish to consider formal evaluation criteria (figure 2, above) and basic texts describing statistical theory and models, [2,16,18] as well as the track record of the group performing the meta-analysis. However, the more casual reader will do quite well by asking three questions:

* Does the question asked by the meta-analysis make sense from a clinical or biologic standpoint?

* Have important differences between studies been overlooked?

* In generating an aggregate estimate of treatment effect, have the authors considered both statistical and clinical significance? [13]

References

[1] Mann, C. "Meta-Analysis in the Breech." Science 249(4968):476-80, Aug. 3, 1990.

[2] Light, R., and Pillemer, D. The Science of Reviewing Research. Cambridge, Mass.: Harvard University Press, 1984.

[3] Smith, M., and Bissel, J. "The Impact of Head Start." Harvard Educational Review 40(1):51-104, Feb. 1970.

[4] Wortman, P., and Yeaton, W. "Using Research Synthesis in Medical Technology Assessment." International Journal of Technology Assessment 3(4):509-22, Oct.-Dec. 1987.

[5] Altman, L. "Meta-Analysis." New York Times, Aug. 21, 1990, p. C1.

[6] Wolf, F. "Meta-Analysis (Letter)." New England Journal of Medicine 317(9):576, Aug. 27, 1987.

[7] MEDLINE search by the author, March 1991.

[8] Goodman, S. "Have You Ever Meta-Analysis You Didn't Like?" Annals of Internal Medicine 114(3):244-6, Feb. 1, 1991.

[9] Callahan, C., and others. "Oral Corticosteroid Therapy for Patients with Stable Chronic Obstructive Pulmonary Disease." Annals of Internal Medicine 114(3):216-23, Feb. 1, 1991.

[10] Greenland, S. "Quantitative Methods in the Review of Epidemiologic Literature." Epidemiologic Reviews 9:1-30, 1987.

[11] L'Abbe, K., and others. "Meta-Analysis in Clinical Research." Annals of Internal Medicine 107(2):224-32, Aug. 1987.

[12] Sacks, H., and others. "Meta-Analysis of Randomized Controlled Trials." New England Journal of Medicine 316(8):450-5, Feb. 19, 1987.

[13] Naylor, C. "Two Cheers for Meta-Analysis: Problems and Opportunities in Aggregating Results of Clinical Trials." Canadian Medical Association Journal 138(10):891-5, May 15, 1988.

[14] Freiman, J., and others. "The Importance of Beta, the Type II Error, and Sample Size in the Design and Interpretation of the Randomized Control Trial: Survey of 71 'Negative' Trials." New England Journal of Medicine 299(13):690-4, Sept. 28, 1978.

[15] Wachter, K. "Disturbed by Meta-Analysis?" Science 241(4872):1407-8, Sept. 16, 1988.

[16] Wolf, F. "Meta-Analysis: Quantitative Methods for Research Synthesis." In: Quantitative Applications in the Social Sciences. London, England: Sage Publications, 1986.

[17] Eddy, D. The Statistical Synthesis of Evidence: Meta-Analysis by the Confidence Profile Method. Book and software in press.

[18] Hedges, L., and Olkin, I. Statistical Methods for Meta-Analysis. Orlando, Fla.: Academic Press, 1985.

Joan B. Vatz, MD, is a Fellow, Science and Technology, American Medical Association, Chicago, Ill. She thanks Sona Kalousdian, MD, MPH, for her thoughtful review of this article and helpful suggestions. The opinions in the article are those of the author and do not necessarily represent policy positions of the AMA.
COPYRIGHT 1991 American College of Physician Executives

Article Details
Title Annotation: statistical method for systematic synthesis of health care management and policy research findings
Author: Vatz, Joan B.
Publication: Physician Executive
Article Type: column
Date: May 1, 1991
Words: 1525