
Expert opinion's role in assessment.

How many articles start off by extolling the randomized clinical trial as the pinnacle of scientific evidence? The next sentence typically confronts the practical reality that randomized trials are simply not feasible in most cases. In the absence of definitive trials, technology assessment methodologies have been forced to rely on various types of observational studies--prospective cohort, case-control, retrospective, cross-sectional, and case studies--and a certain component of expert opinion/group judgment, as expressed either in the published medical literature or by the assessors themselves.

Technology assessments are used as a tool for physician education, as a guide to coverage decisions, or as the basis of health policies, review criteria, or practice parameters. Obviously, one would like those assessments to be as objective and reproducible as possible, and expert opinion is frequently seen as a compromise of that objectivity. However, given the limitations of the randomized trial, expert opinion is a fundamental part of the technology assessment methodologies of a variety of entities: the federal government (e.g., Office of Technology Assessment), physician groups (e.g., American College of Cardiology, American Medical Association), and private organizations (e.g., RAND Corporation). Although expert opinion is undeniably subjective, how explicitly it is elicited and incorporated into an assessment is a major variable among these methodologies.

The Office of Health Technology Assessment (OHTA), the technology assessment arm of the Health Care Financing Administration (HCFA), has a technology assessment process based primarily on literature review. As part of the process, OHTA requests comments from the relevant medical specialty societies and manufacturers. In addition, it announces its intent to undertake a specific technology assessment in the Federal Register and solicits comments. All comments become part of the "literature" upon which assessments are based. Although the process is described as open, OHTA does not explain how the comments are evaluated and incorporated into the final assessment. Does a comment from a large specialty society carry the same weight as one from an individual private practitioner, a patient, or a manufacturer? How are conflicting comments resolved?

The American College of Cardiology (ACC) and the American Heart Association (AHA) have a joint technology assessment/physician education program that relies on the judgment of a panel of physicians. A literature review is initially prepared by a staff person; the panel then reviews and discusses the literature and formulates its recommendations, which fall into three categories:

* Class I--General agreement that the medical service is needed.

* Class II--Divergence of opinion.

* Class III--General agreement that a medical service is unnecessary.

A unique feature of the ACC/AHA process is that the panel must unanimously agree on the final classification. The dynamics of the group process--how the panel reaches unanimity and what compromises are inherent in this process--are an interesting component of the methodology.

The Diagnostic and Therapeutic Technology Assessment (DATTA) program of the American Medical Association takes a different approach to integrating expert opinion into the final technology assessment. The DATTA program consists of two basic components: a literature review composed by a physician staff member and an analysis of that review by an outside panel of physicians nominated by the relevant specialty societies. DATTA solicits comments from its panel by mail. The panel is asked to categorize a specific technology as established, promising, investigational, doubtful, or unacceptable. Unanimity is not a goal of the DATTA process. Instead, the methodology includes a detailed statistical analysis of the responses to produce mean and median ratings.
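
To make the DATTA summary step concrete, here is a minimal sketch in Python. The numeric mapping of the five categories onto a 1-5 scale is an assumption made for illustration--the article does not specify the scale DATTA actually uses--and the function name is hypothetical.

    import statistics

    # Hypothetical 1-5 mapping of the five DATTA categories; the scale DATTA
    # actually uses is not specified in the article.
    SCALE = {"established": 5, "promising": 4, "investigational": 3,
             "doubtful": 2, "unacceptable": 1}

    def summarize(responses):
        """Reduce the mailed-in panel categorizations to mean and median ratings."""
        scores = [SCALE[r] for r in responses]
        return statistics.mean(scores), statistics.median(scores)

    # Example: a split panel with no unanimity, summarized statistically.
    mean, median = summarize(["established", "promising", "promising",
                              "investigational", "doubtful"])
    print(f"mean={mean:.1f}, median={median}")  # mean=3.6, median=4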

The methodology of the RAND Corporation relies on the group judgment of a panel of nine that is sent a literature review composed by a RAND staff member. The panel is asked to rate the "appropriateness" of a technology on a scale of one to nine. The panel then meets, and members are given a printout indicating how their ratings compare with those of their peers. A discussion of the literature review follows, and the panel is asked to rerate the technology. If one compares the second-round ratings to the original responses, a tendency toward convergence to the median is apparent. How and why the group judgment process alters the panel members' responses is an interesting methodologic question. For example, panel members who see that their responses are outliers may be swayed to move them closer to those of other panel members. The composition of the panel may also have an influence--a particularly vocal member with a dominant personality may sway responses.
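
That convergence is simple to quantify. The sketch below measures each round's spread as the mean absolute deviation from the panel median; the two sets of ratings are hypothetical and the function name is illustrative only.

    import statistics

    def spread_around_median(ratings):
        """Mean absolute deviation from the panel median--a simple spread measure."""
        med = statistics.median(ratings)
        return statistics.mean(abs(r - med) for r in ratings)

    # Hypothetical first- and second-round ratings from a nine-member panel.
    first_round  = [2, 3, 5, 5, 6, 6, 7, 8, 9]
    second_round = [4, 5, 5, 6, 6, 6, 6, 7, 8]

    print(spread_around_median(first_round))   # about 1.67: wide spread before discussion
    print(spread_around_median(second_round))  # about 0.78: ratings converge on the median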

Unlike the DATTA methodology, where the responses undergo statistical analysis, the RAND methodology has established four definitions of agreement and disagreement that turn the varying responses into "appropriateness" criteria. The four definitions of agreement, for example, are as follows:

* All nine ratings fall within a single three-point region (1-3, 4-6, 7-9).

* All nine ratings fall within any three-point range.

* After one extreme high and one extreme low are discarded, the remaining seven ratings all fall within one of the three-point regions.

* After the extremes are discarded, the remaining seven ratings fall within any three-point range.

By determining whether there is agreement and where on the nine-point scale that agreement falls, one can rate the appropriateness of the technology.
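
To make the four definitions concrete, here is a minimal sketch in Python of how the tests could be coded for a nine-member panel; the function names and output format are illustrative and are not drawn from RAND's own materials.

    REGIONS = [(1, 3), (4, 6), (7, 9)]  # the fixed three-point regions

    def in_single_region(ratings):
        """All ratings fall within one fixed region (1-3, 4-6, or 7-9)."""
        return any(lo <= min(ratings) and max(ratings) <= hi for lo, hi in REGIONS)

    def in_any_three_point_range(ratings):
        """All ratings fall within some three-point window (e.g., 2-4)."""
        return max(ratings) - min(ratings) <= 2

    def drop_extremes(ratings):
        """Discard one extreme high and one extreme low rating."""
        return sorted(ratings)[1:-1]

    def agreement(ratings):
        """Report which of the four definitions the nine ratings satisfy."""
        assert len(ratings) == 9, "RAND panels have nine members"
        trimmed = drop_extremes(ratings)
        return {
            "all nine in a single region": in_single_region(ratings),
            "all nine in any three-point range": in_any_three_point_range(ratings),
            "trimmed seven in a single region": in_single_region(trimmed),
            "trimmed seven in any three-point range": in_any_three_point_range(trimmed),
        }

    # Example: a panel with one low outlier agrees only after trimming extremes.
    print(agreement([1, 7, 7, 8, 8, 8, 9, 9, 9]))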

Given the inherent subjectivity of expert opinion, it is unknown which of these approaches best approximates the elusive "truth" or the "right answer." However, reliance on expert opinion will certainly continue. It behooves all those using technology assessments to understand the underlying methodologies and the contribution of expert opinion in the assessments.
COPYRIGHT 1993 American College of Physician Executives

Article Details
Title Annotation: medical assessment
Author: Brown, Elizabeth
Publication: Physician Executive
Date: Mar 1, 1993