The relationship between adjusted hospital mortality and the results of peer review.

Two methods are commonly used to assess the quality of hospital care. The traditional method involves reviewing the medical record to evaluate the process of clinical decision making. Clinical experts then judge whether the process documented in the medical record meets the usual standards of medical care. The second method is to review the outcome of care. Hospitals with higher than expected rates of bad outcomes may be providing lower-quality care.

Both of these methods have been used to evaluate the quality of care for Medicare patients. The first method is the responsibility of peer review organizations (PROs). These organizations were created in each state and territory by the Tax Equity and Fiscal Responsibility Act of 1982 and are regulated by a bureau in the Health Care Financing Administration (HCFA). The second method has been used by HCFA since 1986. HCFA uses routinely collected billing data and information from the Social Security Administration to compute for all acute care hospitals an adjusted annual 30-day mortality rate for Medicare patients. The results of these analyses have been made available directly to all hospitals involved and are released yearly to the public in a report (Medicare Hospital Mortality Information) that often receives considerable press coverage (Pear 1987).

It is much less expensive to derive the HCFA adjusted mortality rate than to measure quality with PRO review, but the value of the HCFA adjusted mortality rate has been criticized on the grounds that (1) mortality adjustment may be based on inaccurate data (Hsia, Krushat, Fagan, et al. 1988); (2) mortality adjustment may require more detailed clinical or socioeconomic information than is available in the HCFA discharge summary data (Eastaugh 1986; National Association of Public Hospitals 1987; Dubois, Rogers, Moxley, et al. 1987; Park, Brook, Kosecoff, et al. 1990); (3) mortality may not be an appropriate measure of quality of care (Lohr 1988); and (4) differences among hospitals may be random (Park, Brook, Kosecoff, et al. 1990).

Two studies of specific diseases evaluated the relationship between adjusted mortality rates similar to those used by HCFA and quality of care as determined by physician review. One study found that the adjusted mortality rates were related to the results of implicit review criteria (Dubois, Rogers, Moxley, et al. 1987), but not to explicit review criteria. The PRO method of evaluating the process of care was also criticized for lacking criteria applicable in a uniform fashion (Dippe, Bell, Wells, et al. 1989).

In this study we determined the relationship between the two methods for evaluating the quality of care. If both methods are assessing quality of care, there should be a relationship. Therefore, to some extent the results from each method can be used to validate the other.

DATA SOURCES

The three data sources are (1) the 1988 HCFA mortality study (using 1987 Medicare data) on U.S. acute care hospitals treating Medicare patients; (2) PRO reviews completed between July 1, 1987 and June 30, 1988; and (3) the American Hospital Association's (AHA) 1986 Annual Survey of Hospitals.

Results from the HCFA hospital mortality study have been described in the December 1988 HCFA release, Medicare Hospital Mortality Information. The HCFA data used for our present study included the observed and predicted 30-day hospital mortality rates of Medicare patients. The predicted hospital mortality rates were derived by HCFA from data on the UB-82 billing form and data from the Social Security Administration. These data included dates of admission and discharge, one principal discharge diagnosis, up to four secondary diagnoses, age, sex, race, whether the patient had been transferred from another hospital, and the date of death. Adjusted mortality is the difference between the observed and predicted mortality for a given hospital. Hospitals with a positive adjusted mortality had mortality rates greater than expected, and hospitals with a negative adjusted mortality rate had lower than expected mortality rates.
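
For illustration, the arithmetic of the adjustment can be sketched as below; the hospital names and rates are hypothetical, not drawn from the HCFA file.

```python
# Hypothetical observed and HCFA-predicted 30-day mortality rates (not real data).
hospitals = {
    "Hospital A": {"observed": 0.142, "predicted": 0.121},
    "Hospital B": {"observed": 0.098, "predicted": 0.115},
}

for name, rates in hospitals.items():
    adjusted = rates["observed"] - rates["predicted"]
    # Positive: mortality greater than expected; negative: lower than expected.
    label = "higher than expected" if adjusted > 0 else "lower than expected"
    print(f"{name}: adjusted mortality {adjusted:+.3f} ({label})")
```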

The PRO data were obtained from 38 PROs. There are 54 PROs in the United States, one in each state and each territory and one in Washington, DC. The PRO review process to evaluate the quality of care is as follows: (1) About 25 percent of Medicare records are selected for review by a number of criteria screens. (2) Once an admission is selected, a nurse reviewer examines the record against process criteria established by HCFA. The 20 standard process criteria are listed in the Appendix; an additional 21st "general quality" criterion is used by some states, although its content varies among them. This general quality category has the highest failure rate in some of the states that use it, since it may cover a broad range of potential problems. (3) If the medical record fails any of these criteria, the record is generally referred to a physician adviser, although in some states the nurse can make a final determination in cases such as inadequate discharge planning. (4) The physician then confirms or does not confirm the problem found by the nurse reviewer. Generally, the physician review focuses on the problem identified by the nurse reviewer, but the physician may reexamine the entire medical record.
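
Read schematically, the four steps amount to the decision flow sketched below; the screen eligible for a nurse's final determination and the stubbed physician judgment are illustrative assumptions, not HCFA rules.

```python
# Schematic sketch of the four-step PRO review path described above.
# The nurse-final screen set and the physician stub are assumptions for illustration only.

NURSE_FINAL_SCREENS = {"inadequate discharge planning"}

def physician_confirms(record):
    # Stand-in for the physician adviser's judgment in step 4.
    return record.get("physician_agrees", False)

def review_outcome(record):
    if not record.get("selected_for_review"):        # step 1: ~25% of records selected by screens
        return "not reviewed"
    failed = set(record.get("failed_screens", ()))    # step 2: nurse applies the process criteria
    if not failed:
        return "passed screens"
    if failed <= NURSE_FINAL_SCREENS:                 # step 3: nurse may finalize certain failures
        return "confirmed problem (nurse determination)"
    return ("confirmed problem" if physician_confirms(record)   # step 4: physician confirms or not
            else "problem not confirmed")

print(review_outcome({"selected_for_review": True,
                      "failed_screens": ["abnormal test result not addressed"],
                      "physician_agrees": True}))
```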

The PROs differ in how they record the physician's judgment. For 21 of the 38 PROs reported in this study, the PRO records a severity level for the quality problem, ranging from inadequate documentation to care that threatens the life of the patient. The remaining PROs do not use this scale and record only whether or not the physician detected a quality problem.

For the purpose of this study, all 54 PROs were instructed by HCFA in July 1988 to submit a tape of their data to our research unit. Thirty-eight responded in a timely manner. Only PRO reviews that were completed from July 1, 1987 to June 30, 1988 were included in this study. The date July 1, 1987 was chosen since the PROs revised their reporting system under the second scope of work, which should have been implemented for all PROs by July 1987.

The results of the PRO review were used to derive a measure of the quality of care at a hospital. This measure was the percentage of records reviewed at a given hospital that had a quality problem found on screening by the nurses and confirmed by a physician adviser.
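
A minimal sketch of this hospital-level measure, using hypothetical counts:

```python
def confirmed_problem_rate(records_reviewed, confirmed_problems):
    """Percentage of PRO-reviewed records with a nurse-flagged problem
    that a physician adviser confirmed (counts below are hypothetical)."""
    return 100.0 * confirmed_problems / records_reviewed

print(confirmed_problem_rate(records_reviewed=1200, confirmed_problems=54))  # 4.5 (percent)
```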

STATISTICAL METHODS

To test the association between hospital adjusted mortality rates and confirmed problem rates within each PRO, we ranked the hospitals on each of these measures and then computed the Spearman rank correlation. In calculating the correlation we weighted each hospital by the number of cases the PRO screened at the hospital. To calculate an overall significance test it was necessary to transform the correlation (r) from an individual PRO with n hospitals to a Z-statistic (Snedecor and Cochran 1980). An overall Z value was obtained by averaging the Z statistics from the PROs, each weighted by the inverse of the variance of its Z (the number of hospitals for that PRO minus 3). This overall Z value was then transformed back to a correlation coefficient. Significance testing was performed on the weighted Z value, which under the null hypothesis has a normal distribution with mean zero and variance equal to the inverse of the sum of the weights.
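
A compact reconstruction of this procedure is sketched below, assuming hypothetical per-hospital arrays and reading the case-weighting as a weighted Pearson correlation on the ranks (the exact weighting formula is not specified above); it is an illustration, not the authors' original program.

```python
import numpy as np
from scipy.stats import rankdata, norm

def weighted_spearman(problem_rates, adjusted_mortality, cases_screened):
    """Spearman rank correlation with each hospital weighted by the number
    of cases the PRO screened there (weighted Pearson on the ranks)."""
    rx, ry = rankdata(problem_rates), rankdata(adjusted_mortality)
    w = np.asarray(cases_screened, dtype=float)
    mx, my = np.average(rx, weights=w), np.average(ry, weights=w)
    cov = np.average((rx - mx) * (ry - my), weights=w)
    sx = np.sqrt(np.average((rx - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((ry - my) ** 2, weights=w))
    return cov / (sx * sy)

def combine_pro_correlations(per_pro):
    """per_pro: list of (r, n_hospitals) pairs, one per PRO.  Each r is
    Fisher z-transformed, averaged with weights n - 3 (the inverse of the
    variance of z), back-transformed, and tested against the null of no
    association."""
    z = np.arctanh([r for r, _ in per_pro])
    wts = np.array([n - 3.0 for _, n in per_pro])
    z_overall = np.sum(wts * z) / np.sum(wts)
    r_overall = np.tanh(z_overall)
    test_stat = z_overall * np.sqrt(np.sum(wts))  # ~ N(0, 1) under the null
    p_value = 2.0 * norm.sf(abs(test_stat))
    return r_overall, p_value
```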

RESULTS

The average confirmed problem rates and the observed mortality rates for each PRO reporting its data are shown in Table 1 (tabular data omitted). The states are ordered by the number of PRO-reviewed admissions. There is considerable variation in the confirmed problem rates among the PROs. Two PROs (New Jersey, Montana) reported very low confirmed problem rates. These PROs apparently reported the number of sanctions against physicians rather than the number of physician-confirmed problems. Another PRO, Puerto Rico, had a very high confirmed problem rate (38 percent). For the remainder of the PROs the confirmed problem rate varied from under 0.5 percent to nearly 10 percent. Little variation was noted among the states with respect to the observed mortality rates, which averaged 12.4 percent and ranged from 9 percent to 14 percent.

The predicted mortality rate was very close to the observed mortality. The difference between the observed and predicted mortality ranged from -1.48 percent to 1.32 percent and was less than 1 percent for 30 of the 38 states. The good agreement between the predicted and observed mortality suggests that the method used by HCFA for obtaining the predicted mortality rate is not greatly biased when applied at the state level.

The rank correlations between confirmed problem rates and adjusted mortality rates are given in the right-hand column. For example, among the 367 hospitals in California the correlation between confirmed problem rates and mortality rates was .29 (p < .001). Note that the six PROs that reviewed the most cases all had correlations greater than or equal to the average correlation of .19. Fourteen of the PROs had correlations that were significant at the p < .05 level.

As shown in Table 2, considerable variation existed in the PRO confirmed problem rates associated with each of the individual quality criteria screens. For example, for screen 2 the state at approximately the 75th percentile (28 states had a lower confirmed problem rate for this screen) had a confirmed problem rate of .15 percent, five times the rate of .03 percent for the state at the 25th percentile (nine states had a lower confirmed problem rate for this screen). For every screen the confirmed problem rate for the state at the 75th percentile was more than three times the confirmed problem rate for the state at the 25th percentile. The overall confirmed problem rate for states at the 75th percentile was 3.8 percent, more than four times the confirmed problem rate of .87 percent for states at the 25th percentile.

The screens with the highest failure rate were screens 5 and 21. Screen 5 includes abnormal diagnostic tests that were not addressed in the record. This includes positive cultures that were not followed by negative cultures or antibiotics and abnormal x-rays that were not followed by other diagnostic tests or a physical examination. Screen 21 was performed by only ten of the PROs and it differed among these PROs.

TABULAR DATA OMITTED

Not only did the failure rates vary among the states, but substantial variation also appeared in the correlations between the two quality measures. To identify what types of PROs had higher correlations, we tested the following PRO characteristics: the number of cases reviewed, the number of hospitals reviewed, the average number of cases per hospital, the average confirmed problem rate, the variation in the confirmed problem rate, the observed mortality rate, and the predicted mortality rate. The only characteristic evaluated that was significantly associated with the size of the correlation was the average number of cases reviewed per hospital (r = .43, p < .01).

We also tried to determine whether the correlations were comparatively stronger for certain types of hospitals. To investigate this hypothesis we examined subsets of hospitals for the six PROs that had reviewed more than 100,000 cases. Although New Jersey had more than 100,000 cases, it was omitted from this analysis because its confirmed problem rate was not comparable to that of the other PROs. For California, New York, and Ohio, limiting the analysis to hospitals in very large metropolitan statistical areas (MSAs) resulted in substantially higher correlations, but this was not true for the other states. In addition to the high rank correlations for the metropolitan areas shown in Table 3, there was a high correlation for Washington, DC (r = .74, p < .02), as shown in Table 1.

The correlations were also high and positive for teaching hospitals with the exceptions of those in Texas and Pennsylvania. The correlations were much higher for public hospitals in New York and Illinois than for other hospitals in these states, but there was only a weak correlation for the 40 public hospitals in Texas. Limiting the analyses to those hospitals that had more than 500 cases reviewed did not substantially increase the mean correlation (.29 versus .25).

DISCUSSION

This study examined the reliability and validity of two methods for measuring the quality of hospital care: the PRO measure of the process of care and the HCFA adjusted mortality rate. We found a weak but highly significant relationship between the two quality measures that was strengthened for certain subsets of hospitals.

The results in this study differ from the results of previous studies that did not find a relationship between adjusted mortality rates and the results of physician review using explicit review criteria (Park, Brook, Kosecoff, et al. 1990; Dubois, Rogers, Moxley, et al. 1987). Compared to previous studies this study had more cases per hospital; examined many more hospitals; and included all patients, not only patients with specific diseases. Thus, this study could detect relationships that might be missed in smaller studies.

We assessed the reliability of each quality measure by examining the variation of the measure across states. As shown in Table 1, the HCFA predicted mortality rate was an accurate predictor of mortality for all states, even those with a higher or lower mortality rate than the national average. This high reliability suggested that any errors in billing data used by HCFA for its calculations are uniform for all states. On the other hand, the PRO review process is clearly not applied uniformly, as shown by the dramatic variation across the states in Table 1 and Table 2. This variation is probably due to the subjective nature of the review process.

To assess the validity of the quality measures, we tested whether the PRO measure was related to the HCFA adjusted mortality rate. Thus, each measure was used to validate the other. The measures were independent, since the physicians reviewing the medical records were unaware of the adjusted hospital mortality rate, which was not computed until December 1988, whereas the PRO reviews were completed by July 1988. In addition, the errors detected by the PRO physician reviewers were only rarely considered to be related to patient death. Unfortunately, neither measure is a gold standard, although each is likely to have some validity: the PRO method because it is based on physician review (Dippe, Bell, Wells, et al. 1989) and the HCFA adjusted mortality rate because it has been shown to give results that are very similar to mortality rates adjusted using detailed clinical data (Krakauer, Bailey, Skelton, et al. 1992).

Both the confirmed problem rate and the mortality rate may also measure hospital characteristics unrelated to quality. For example, adjusted mortality rates will vary among hospitals because of variations in patient severity of illness that are not taken into account by the HCFA adjustment method. Also, the confirmed problem rate may vary among hospitals in part because of variation in record keeping or in the aggressiveness with which certain types of hospitals are reviewed.

Within a homogeneous group of hospitals, however, the relative value of each quality measure should depend more on the quality of care and less on other hospital characteristics that are uniform. Therefore, the correlation of the two measures may be greater in a homogeneous group of hospitals in part because both measures have a stronger relationship with quality than they do in a heterogeneous group of hospitals. The homogeneity of the hospitals may explain why the correlation between the two measures was high for hospitals within large MSAs (r = .36) and public hospitals (r = .42).

In summary, our results suggest that both the HCFA adjusted mortality rate and the PRO confirmed problem rate may measure quality to some extent. Within homogeneous groups of hospitals that have fewer factors affecting outcome, these rates may be a better measure of relative quality.

REFERENCES

Dippe, S. E., M. M. Bell, M. A. Wells, W. Lyons, and S. Clester. "A Peer Review of a Peer Review Organization." Western Journal of Medicine 151, no. 1 (1989): 93-96.

Dubois, R. W., W. H. Rogers, J. H. Moxley, D. Draper, and R. H. Brook. "Hospital Inpatient Mortality: Is It a Predictor of Quality?" New England Journal of Medicine 317, no. 26 (1987): 1674-80.

Eastaugh, S. R. "Hospital Quality Scorecards, Patient Severity, and the Emerging Value Shopper." Hospital & Health Services Administration 31 (November/December 1986): 85-102.

Hsia, D. C., W. M. Krushat, A. B. Fagan, J. A. Tebbutt, and R. P. Kusserow. "Accuracy of Diagnostic Coding for Medicare Patients under the Prospective Payment System." New England Journal of Medicine 318, no. 6 (1988): 352-55.

Krakauer, H., R. C. Bailey, K. J. Skelton, J. D. Stewart, A. J. Hartz, E. M. Kuhn, and A. A. Rimm. "Evaluation of the HCFA Model for the Analysis of Mortality following Hospitalization." Health Services Research 27, no. 3 (1992): 317-35.

Lohr, K. N. "Outcome Measurement: Concepts and Questions." Inquiry 25 (Spring 1988): 37-50.

Medicare Hospital Mortality Information 1987. Health Care Financing Administration. HCFA publication no. 00646. Washington, DC: U.S. Government Printing Office, 1988.

National Association of Public Hospitals. "Statement of NAPH on the Release of 1987 Medicare Hospital Mortality Data." Washington, DC, December 16, 1987.

Park, R. E., R. H. Brook, J. Kosecoff, J. Keesey, L. Rubenstein, E. Keeler, K. L. Kahn, W. H. Rogers, and M. R. Chassin. "Explaining Variations in Hospital Death Rates. Randomness, Severity of Illness, Quality of Care." Journal of the American Medical Association 264, no. 4 (1990): 484-90.

Pear, R. "Mortality Data Released for 6000 U.S. Hospitals." New York Times (18 December 1987): 5.

Snedecor, G. W., and W. G. Cochran. Statistical Methods. 7th ed. Ames: Iowa State University Press, 1980.

Address correspondence and requests for reprints to Arthur J. Hartz, M.D., Ph.D., Associate Professor, Division of Biostatistics/Clinical Epidemiology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226. Mark S. Gottlieb, Ph.D. is Assistant Professor in Family Medicine, Evelyn M. Kuhn, Ph.D. is Assistant Professor in the Division of Biostatistics/Clinical Epidemiology, and Alfred A. Rimm, Ph.D. is Professor and Chief of the Division of Biostatistics/Clinical Epidemiology, Medical College of Wisconsin, Milwaukee. This article, submitted to Health Services Research on September 6, 1990, was revised and accepted for publication on April 1, 1992.