
The relationship between Medicare's process of care quality measures and mortality.

Using Medicare inpatient claims and Hospital Compare process of care quality data from the period 2004-2006, we estimate two model specifications to test for the presence of correlational and causal relationships between hospital process of care performance measures and risk-adjusted (RA) 30-day mortality for heart attack, heart failure, and pneumonia. Our analysis indicates that while Hospital Compare process performance measures are correlated with 30-day mortality for each diagnosis, after we account for unobserved heterogeneity, process of care performance is no longer associated with mortality for any diagnosis. This suggests that the relationship between hospital-level process of care performance and mortality is not causal. Implications for pay-for-performance are discussed.


Medicare spending totaled $374 billion in 2006 and is expected to grow at a rate of almost 8% over the next decade (Kaiser Family Foundation 2007). The massive outlays for Medicare and well-documented deficiencies in health care quality in the United States (Institute of Medicine [IOM] 2000, 2001; McGlynn et al. 2003) have raised pressing concerns about the value of medical care received by Medicare beneficiaries. In an attempt to address these concerns, Congress enacted legislation as part of the 2005 Deficit Reduction Act calling for hospital value-based purchasing (VBP), a combination of pay-for-performance (P4P) and public quality reporting, to be implemented for Medicare hospital care by fiscal year 2009 (U.S. Congress 2005). While this deadline was missed, a recent report by the Senate Finance Committee shows strong, continued support for VBP (U.S. Senate 2009).

A critical question is how health care quality will be assessed in VBP. Process of care performance measures, which attempt to assess whether "what is now known to be 'good' medical care has been applied" (Donabedian 1966), have been the cornerstone of Medicare's existing hospital-based P4P demonstration, the Premier Hospital Quality Incentive Demonstration (PHQID), and its public quality reporting program, Hospital Compare; they are likely to be prominent, or possibly exclusive, metrics of quality in VBP. This analysis evaluates the usefulness of process performance measures in VBP by assessing the correlational and causal associations between process of care measures and 30-day mortality, a prominent and commonly used measure of patient outcomes.

The potential relationships between hospital processes and outcomes of care may vary substantially by condition and by the nature of the evidence base supporting the process measures. Since our focus is Medicare VBP, this analysis employs data that are part of Medicare's current quality measurement initiatives. Consequently, our study is limited to examining hospital care for acute myocardial infarction (AMI), heart failure, and pneumonia, the clinical conditions for which data are widely reported in Hospital Compare and which are part of Medicare's current pay-for-reporting program.

Background

Quality in health care is frequently defined by process, outcome, and structure measures (Donabedian 1966), with process and outcome measures being the most common. Process of care measures are often preferred to outcome measures on the grounds that providers have greater control over their performance on these measures, and that the measures offer "actionable" information for quality improvement (Birkmeyer, Kerr, and Dimick 2006; Mant 2001). However, the attribution of "quality" to performance on process of care measures is contingent on the nature of the measures. Performance on process of care measures that exhibit minimal clinical significance or a weak causal relationship with patient health will be of little value, even as effort is expended in achieving and documenting them. Absent a causal relationship between process performance and patient outcomes, process measures are compromised both as a metric of individual provider improvement and as a means of selective referral to high-quality providers, since patients who respond to the measures by choosing high-scoring providers would not realize better outcomes.

There are several specific reasons why performance on the Hospital Compare process of care measures may not be causally related to patient mortality: 1) compliance with the process measures may not reduce mortality; 2) the processes captured by the measures may be implemented effectively by some providers but ineffectively by others; 3) processes captured by the measures that are in fact positively associated with patient health may have been rendered obsolete by advances in clinical practice (Porter and Teisberg 2006); 4) use of a set of process measures covering only a small array of diseases may divert clinical efforts toward those processes and away from unmeasured processes, resulting in negative consequences for patients; 5) the true relationship between process of care measures and patient mortality may be altered due to measurement error, enhanced record-keeping, or gaming (henceforth referred to jointly as "enhanced record-keeping").

In the context of Hospital Compare, the fifth point is particularly likely because hospitals self-report their quality performance and are able to improve reported quality scores through nonclinical activities (such as devoting greater effort to documenting quality performance). Hospitals also have discretion to exclude patients from the denominator of their performance measures (known as exception reporting)--an action without sufficient risk of auditing to deter it. Evidence from the 2005-2006 Community Tracking Study indicates that the quality reporting programs (including Hospital Compare) implemented by the Centers for Medicare and Medicaid Services (CMS) and the hospital accreditation agency, the Joint Commission, resulted in hospitals dedicating substantially more resources to chart abstraction and data review, though the effect on quality improvement initiatives is unclear (Pham, Coughlan, and O'Malley 2006).

Survey research confirms that a substantial proportion of hospitals' response to CMS' Hospital Compare program has been dedicated to documenting quality: in 2005, hospitals dedicated, on average, 4.8 full-time equivalent employees (FTEs) to quality improvement activities and 2.5 FTEs to quality reporting (Mathematica Policy Research 2005). A similar emphasis in the allocation of labor toward administrative activities has been seen in the United Kingdom's quality incentive program (Galvin 2006). Further evidence from the United Kingdom's effort to pay family practice providers based on process measure performance showed that exception reporting was associated with higher measured performance and was more extensive for conditions that promised greater financial rewards (Doran et al. 2006). (1)

In addition to enhanced record-keeping, flaws in the distributional qualities of the process measures may obscure the association between measured processes of care and outcomes. Figure 1 shows the distribution of the 2006 "starter set" (explained later) for AMI process performance measures among acute care hospitals that reported at least 10 patients for a given measure. Four of the five measures are strongly right-censored, or top-coded, at the maximum process score of 100% compliance. The lack of variation among the higher-performing hospitals may decrease the ability of process measures to proxy for quality in a cross-sectional context. The near-maximum scores also limit the amount of absolute improvement possible for high performers, perhaps attenuating the within-hospital correlation between process performance and mortality. These top-coded measures may be retained in a system to ensure against slippage, but may not be useful in a VBP performance enhancement environment.

Estimating the Relationship between Process Performance and Mortality

In addition to the reasons why compliance with process measures may not be associated with underlying quality of care, estimating the relationship between process measures and health outcomes is subject to a number of difficulties. To illustrate, assume that health care outcomes are a function of patient characteristics, health care facility characteristics (including technology), provider characteristics (including physician skill), and processes of care.

This relationship could be estimated in a regression model of the following form:

$$\text{Outcome} = b_0 + b_1\,\text{patient characteristics} + b_2\,\text{health care facility characteristics} + b_3\,\text{provider characteristics} + \delta\,\text{measured processes of care} + v \quad (1)$$

where $v = b_4\,\text{unmeasured processes of care} + u$, and $u$ is a normally behaved error term.

Even assuming that accurate and appropriate measures are available for patient characteristics, health care facility characteristics, provider characteristics, and measured processes of care, estimating equation 1 using linear regression will likely result in a biased estimate of the relationship between measured processes of care and the outcome. This is because the error term, v, is likely to be correlated with the independent variables in the model, particularly measured processes of care, resulting in omitted variable bias (Wooldridge 2006, p. 96).

To summarize, process performance measures may not be causally related to outcomes because the measures themselves are inappropriate proxies for outcomes, or because the process of measurement adversely impacts provider behavior. In addition, estimating the relationship between measured processes and outcomes without accounting for unmeasured processes will likely result in a biased estimate of the causal relationship.

Review of Studies Examining the Relationship between Process and Outcome Measures

A number of recent studies have examined the association between process and outcome measures, including mortality, in the context of care for AMI (Bradley et al. 2006; Werner and Bradlow 2006; Jha et al. 2007; Granger et al. 2005; Peterson et al. 2006), heart failure (Fonarow et al. 2007; Luthi et al. 2004; Luthi et al. 2003; Werner and Bradlow 2006; Jha et al. 2007), and pneumonia (Werner and Bradlow 2006; Jha et al. 2007). The Werner and Bradlow and Jha et al. studies are the most similar to the current investigation because they examine the association between processes and outcomes using the same clinical conditions (AMI, heart failure, and pneumonia) and the same data (Medicare claims and Hospital Compare) over similar time periods. These prior studies both concluded that greater performance for process measures was associated with lower mortality for AMI, heart failure, and pneumonia. (2) However, both the Werner and Bradlow and Jha et al. investigations employed cross-sectional designs, and did not account for the omitted variable bias that we assert is likely present in these analyses.

[Figure 1 omitted. It showed the distribution of the 2006 starter-set AMI process performance measures among acute care hospitals reporting at least 10 patients per measure.]

As noted by Werner and Bradlow (2006) and Werner, Bradlow, and Asch (2008), many of the process performance measures employed in Hospital Compare have in fact been shown to reduce mortality in clinical trials, the ideal experiment. However, the fundamental problem in translating results from clinical trials to a real-world setting like Hospital Compare relates to the level of analysis: even if the relationship between process performance and mortality is causal at the individual level, it does not follow that process performance is causally associated with mortality at the hospital level. This could be a result of hospital-level efforts employed to improve process performance that are not associated with decreased mortality, such as improving record-keeping and excluding patients unlikely to achieve success on certain measures. It also could be the result of multi-tasking, where hospital effort is diverted toward achieving measured objectives and away from unmeasured ones (Eggleston 2005; Holmstrom and Milgrom 1991). Thus, hospital-level process performance may not be causally related to hospital-level outcomes. Add to this the confounding from hospital unobservables that occurs in cross-sectional analysis with a limited set of controls, and cross-sectional models are likely to yield biased estimates of the causal relationship between hospital-level process performance and mortality.

Data

The analysis was conducted using Medicare inpatient claims data, the Medicare beneficiary denominator file, and Hospital Compare data from the period 2004-2006, as well as the Medicare Provider File in 2006. Inpatient claims were used to identify the primary diagnoses for which beneficiaries were admitted, secondary diagnoses and type of admission for risk adjustment, and discharge status to exclude transfer patients. Transfer patients were excluded because outcomes for these patients may depend on the quality of care received at multiple hospitals, potentially confounding the relationship between process performance and mortality at a given hospital. The Medicare denominator file was used to add additional risk adjusters and to determine mortality. Hospital Compare data were used to create measures of hospital process performance. Data from the Medicare Provider File were employed to control for hospital structural characteristics in the estimated models. Table 1 shows descriptive statistics of the hospital-level variables employed in the analysis.

Methods

This analysis examines the relationship between process and mortality in the context of Hospital Compare, a voluntary, Internet-based public quality reporting program for hospital care implemented by CMS in 2003. The analysis seeks to distinguish between the correlational and causal associations between observed process performance and mortality for diagnoses of acute myocardial infarction, heart failure, and pneumonia. The analysis also explores whether improvement in hospital process performance can be attributed to exception reporting, and whether exception reporting and top-coding of process measures impact the relationship between process of care performance and mortality.

Measuring Quality

Two approaches were taken toward creating the process performance composite measures. Both composite measures were constructed from the 10 "starter set" CMS/Joint Commission process quality measures. In 2003, CMS asked hospitals to report the number of patients who received recommended care for 17 process measures related to pneumonia, AMI, and heart failure, and the number of patients eligible for inclusion in the denominator of each quality measure. The process measures were selected for inclusion in Hospital Compare after being endorsed by the National Quality Forum (NQF). After low rates of voluntary reporting, CMS made the 2004 update for Medicare payments to hospitals conditional on reporting for 10 of the 17 indicators--the starter set--which increased reporting on these indicators dramatically. For AMI, the starter set measures are aspirin at admission, aspirin at discharge, use of an angiotensin-converting enzyme (ACE) inhibitor, beta-blocker (β-blocker) at admission, and β-blocker at discharge. For heart failure, the starter set measures are assessment of left ventricular function (LVF) and use of an ACE inhibitor. For pneumonia, the starter set measures are oxygenation assessment, timing of initial antibiotics, and pneumococcal vaccination. Only process measures from the pay-for-reporting starter set are employed in this analysis because those are the ones on which hospitals consistently reported performance from 2004 to 2006.

The first composite measure (Composite 1) was calculated as the z-score of the weighted sum of z-scores for process measures corresponding to each diagnosis. (3) The individual process measures each were transformed by the z-score in order to avoid bias in the composite measure resulting from the positive correlation between the likelihood of reporting on a measure and performance on that measure. The sum of the weighted z-scores then was transformed by the z-score to facilitate interpretation. If a hospital had fewer than 10 cases for an individual process measure, the measure was not included in the calculation of the composite. However, as long as a hospital reported a denominator of at least 10 patients for at least one measure, the hospital had a composite score. Composite 1 is similar to the composite measure calculated by Werner and Bradlow (2006).

An alternative measure (Composite 2) was calculated as the z-score of the unweighted sum of each process measure for each diagnosis for hospitals that reported a denominator of at least 10 patients for each measure. (4) Composite 2 is similar to the composite measure calculated by Jha et al. (2007).

As a result of the z-score transformations, Composite 1 and Composite 2 both have a mean of 0 and a standard deviation of 1. While the Jha et al. and Werner and Bradlow studies both used a cut-off of 25 patients for inclusion of a process performance measure, this study used 10 as a cut-off primarily because the lower cut-off allows us to include more hospitals in the analysis and does not adversely affect our results. This is because our regression analysis weights each observation based on the number of Medicare patients in each hospital, with each diagnosis, in each year. Consequently, hospitals with a small number of patients did not have undue influence on the analysis, yet were able to be included in the analysis.
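To make the construction concrete, the sketch below computes both composites in Python (the paper's analysis used Stata 10.0). It assumes a hypothetical pandas DataFrame holding one condition's measures, with one row per hospital-measure-year and columns hospital, year, measure, score (0-100 percent compliance), and denom (the measure denominator); these names and this layout are illustrative, not the authors' actual data structure.

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    """Standardize a series to mean 0, standard deviation 1."""
    return (s - s.mean()) / s.std()

def composites(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of Composite 1 (z-score of the denominator-weighted sum of
    measure z-scores) and Composite 2 (z-score of the unweighted sum)."""
    d = df[df["denom"] >= 10].copy()  # drop measures with fewer than 10 cases

    # Composite 1: z-score each measure, weight by the measure's share of
    # the hospital-year's reported denominators, sum, then re-standardize.
    d["z"] = d.groupby("measure")["score"].transform(zscore)
    d["w"] = d["denom"] / d.groupby(["hospital", "year"])["denom"].transform("sum")
    c1 = (d["z"] * d["w"]).groupby([d["hospital"], d["year"]]).sum()

    # Composite 2: unweighted sum of raw scores, restricted to hospital-years
    # reporting at least 10 cases on *every* measure for the condition.
    n_measures = df["measure"].nunique()
    full = d.groupby(["hospital", "year"])["measure"].transform("nunique") == n_measures
    c2 = d.loc[full].groupby(["hospital", "year"])["score"].sum()

    return pd.DataFrame({"composite1": zscore(c1), "composite2": zscore(c2)})
```

Hospital-years reporting at least one measure receive a Composite 1 score, while Composite 2 is missing unless all of the condition's measures clear the 10-patient cutoff, mirroring footnotes 3 and 4.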

Risk Adjustment for 30-day Mortality

Hospital-level risk-adjusted (RA) 30-day mortality for each diagnosis was the dependent outcome variable. RA mortality was calculated by taking the ratio of observed mortality to expected mortality for each hospital and each diagnosis, and multiplying this ratio by the population mean mortality for the respective diagnosis. Expected mortality was estimated by generating predicted probabilities of 30-day mortality from patient-level logit models where mortality was regressed on age, gender, race, 30 dummy variables for the Elixhauser comorbidities (Elixhauser et al. 1998), type of admission (emergency, urgent, elective), and season of admission. These predicted probabilities then were summed over the patients in a hospital to generate hospital-level scalars of expected mortality.
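A minimal sketch of this indirect standardization, assuming a hypothetical patient-level DataFrame pat with a died30 indicator, a hospital identifier, and the risk adjusters already coded as numeric columns; the names are illustrative, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def risk_adjusted_mortality(pat: pd.DataFrame, covars: list[str]) -> pd.Series:
    """Indirect standardization: (observed / expected) deaths per hospital,
    scaled by the population mean mortality for the diagnosis."""
    # Patient-level logit of 30-day death on age, gender, race, the Elixhauser
    # comorbidity dummies, admission type, and season (passed via covars).
    X = sm.add_constant(pat[covars])
    logit = sm.Logit(pat["died30"], X).fit(disp=0)
    pat = pat.assign(p_hat=logit.predict(X))  # predicted death probabilities

    g = pat.groupby("hospital")
    observed = g["died30"].sum()   # observed deaths per hospital
    expected = g["p_hat"].sum()    # expected deaths: summed predicted probabilities
    return (observed / expected) * pat["died30"].mean()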

Modeling the Relationship between Process Performance and Mortality

The first specification estimates the relationship between performance and the log of RA mortality. This model is similar to what has been estimated previously in the literature:

$$\ln(\text{RA mortality rate}_{jt}) = b_0 + b_1 Z_{jt} + b_2\,\text{year}_t + b_3\,Z_{jt} \times \text{year}_t + \delta_1\,\text{process}_{jt} + \delta_2\,\text{process}_{jt}^2 + e_{jt} \quad (2)$$

where j indexes hospitals and t indexes years. The equation was estimated separately for each diagnosis. In this specification, Z is a vector of hospital characteristics (ownership, number of beds, teaching status, urbanicity, the ratio of residents to average daily census, and the percentage of patients insured by Medicare); process is one of the two composite measures of process performance for each diagnosis; and year is a vector of year dummies for 2005 and 2006. A negative sign on the marginal effect for process ($\delta_1 + 2\delta_2\,\text{process}$) would indicate that process measure performance was associated with lower mortality.
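Concretely, the marginal effect follows from differentiating equation 2 with respect to process, and the log transformation gives it an approximate percentage-change reading:

$$\frac{\partial \ln(\text{RA mortality rate}_{jt})}{\partial\,\text{process}_{jt}} = \delta_1 + 2\delta_2\,\text{process}_{jt}, \qquad \%\Delta\,\text{RA mortality} \approx 100\,(\delta_1 + 2\delta_2\,\text{process})\,\Delta\,\text{process}$$

Because the composites are standardized, setting $\Delta\,\text{process} = 1$ corresponds to a one standard deviation increase in process performance.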

The second specification accounts for time-invariant factors at the hospital level through the inclusion of hospital fixed effects:

$$\ln(\text{RA mortality rate}_{jt}) = b_2\,\text{year}_t + b_3\,Z_{jt} \times \text{year}_t + \delta_1\,\text{process}_{jt} + \delta_2\,\text{process}_{jt}^2 + h_j + e_{jt} \quad (3)$$

where h is a vector of hospital-specific fixed effects. The vector Z varies minimally within hospitals and over time and is absorbed by the hospital fixed effects, while the interaction between Z and year is time-varying and included in the specification.

The inclusion of hospital-specific effects controls for unobserved time-invariant factors at the hospital level (e.g. physician skill, physician experience, coordination of care, technology, hospital management interest in quality improvement) that may confound the relationship between processes of care and outcomes. Fixed-effects methods are designed to adjust for any important factors that are constant over time at the hospital level, even if these factors are not observed. Fixed-effects models estimate whether within-hospital variation in process performance is associated with within-hospital variation in mortality. As a result, this specification more clearly identifies the causal relationship between process performance and mortality.
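The logic can be illustrated with a toy simulation (ours, not the authors'): a time-invariant hospital trait such as physician skill raises process performance and lowers mortality, while process itself has no causal effect. Pooled OLS picks up the confounding; demeaning each hospital's data (the within transformation that fixed effects perform) removes it.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_hosp, n_years = 500, 3

# Time-invariant confounder: raises process performance, lowers log mortality.
skill = np.repeat(rng.normal(size=n_hosp), n_years)
process = 0.8 * skill + rng.normal(size=n_hosp * n_years)
log_mort = -0.5 * skill + 0.0 * process + rng.normal(scale=0.3, size=n_hosp * n_years)
df = pd.DataFrame({"hosp": np.repeat(np.arange(n_hosp), n_years),
                   "process": process, "log_mort": log_mort})

# Pooled OLS: spuriously negative (about -0.24 in expectation here).
pooled = sm.OLS(df["log_mort"], sm.add_constant(df["process"])).fit()

# Within transformation: demean by hospital; the confounder drops out.
within = df.groupby("hosp").transform(lambda s: s - s.mean())
fe = sm.OLS(within["log_mort"], within["process"]).fit()

print(f"pooled: {pooled.params['process']:+.3f}, within: {fe.params['process']:+.3f}")
```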

For each model, we performed an F-test that all hospital fixed effects are equal to 0: if the null is rejected, then the linear specification is omitting important fixed effects, making it biased and inconsistent. For each model, the consistency of random effects also was examined by a test of the overidentifying restrictions of random effects: if the test is rejected, random effects estimation is inconsistent, and fixed effects must be used (Schaffer and Stillman 2006).

As a result of the log transformation of the dependent variable in both equations 2 and 3, the marginal effects of process performance were interpreted as the percentage change in RA mortality associated with a one standard deviation increase in the composite measure. Equations 2 and 3 also were estimated using the vector of individual process measures and their squares corresponding to each condition instead of the composite measures of process performance.

Supplemental analysis was performed to estimate whether the relationship between process performance and mortality varies according to two factors hypothesized to moderate these relationships: measure top-coding and hospital exception reporting. To evaluate the potential effects of top-coding, we tested the difference in the marginal effect of process performance between the top and bottom quartiles of hospitals on process performance for each model specification. Evidence of top-coding would appear as a stronger association between process performance and mortality among the bottom quartile of performers.

To evaluate the effects of exception reporting, we created an indicator, called the reporting ratio, for individual process measures and for the process composite measures. The reporting ratio estimates the extent to which hospitals exclude patients from the denominator in their reported process measure performance. For individual process measures, the reporting ratio was defined as the number of patients included in the denominator for a given measure, multiplied by the percentage of Medicare patients in the hospital's total caseload, divided by the Medicare hospital caseload for that diagnosis. The aggregate condition reporting ratio (henceforth referred to as the reporting ratio) was defined as the average number of patients included in the denominator across the measures for each diagnosis, multiplied by the percentage of Medicare patients in the hospital's total caseload, divided by the Medicare hospital caseload for that diagnosis. (5) A lower reporting ratio means that a hospital has excluded a greater proportion of patients from the calculation of its performance on a measure, which is a crude indicator of potential gaming. Hospitals that reported sampling patient data to obtain process performance scores were excluded from the analysis of exception reporting.
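Under the same hypothetical hospital-measure-year layout used above, the aggregate reporting ratio could be computed as below; pct_medicare is assumed to be the Medicare share of the hospital's total caseload expressed as a proportion, and medicare_cases the hospital's Medicare caseload for the diagnosis (both illustrative names).

```python
import pandas as pd

def reporting_ratio(df: pd.DataFrame) -> pd.Series:
    """Aggregate condition reporting ratio: average measure denominator,
    times the Medicare share of total caseload, divided by the Medicare
    caseload for the diagnosis (see footnote 5)."""
    avg_denom = df.groupby(["hospital", "year"])["denom"].mean()
    g = df.groupby(["hospital", "year"]).first()  # hospital-year constants
    ratio = avg_denom * g["pct_medicare"] / g["medicare_cases"]
    return ratio.clip(upper=1.0)  # ~1% of ratios exceed 1 and are set to 1
```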

To evaluate whether the reporting ratio is associated with higher process performance, the following model was estimated:

$$\text{Composite 1}_{jt} = b_1\,\text{year}_t + b_2\,\text{year}_t \times Z_j + \delta_1\,\text{reporting ratio}_{jt} + \delta_2\,\text{reporting ratio}_{jt}^2 + h_j + e_{jt} \quad (4)$$

A negative sign on the marginal effect of the reporting ratio indicates that as hospitals exclude a greater proportion of patients from their calculation of process performance, their process performance increases. The relationship between the reporting index and process performance was estimated only with the fixed-effects specification.

To evaluate whether exception reporting weakens the association between process performance and mortality, equations 2 and 3 were re-estimated with the inclusion of the reporting index and interactions between the reporting index and the Composite 1 terms. Then, marginal effects of process performance were evaluated at the 25th and 75th percentiles of the reporting index. If the marginal effect of process performance is greater (more negative) at the 75th percentile of the reporting index, that suggests the association between process performance and mortality is stronger for hospitals that exclude fewer patients from their quality measures, consistent with exception reporting attenuating the relationship.

Two sources of heteroskedasticity could arise from model specifications estimated in this analysis. First, multiple observations from the same hospitals over time give rise to potential group-level heteroskedasticity. Second, the hospital-level RA mortality rates vary in their precision as a result of the number of patients in the denominator of the calculation. To treat these two forms of heteroskedasticity, hospital-level cluster-robust standard errors were estimated (Williams 2000) and analytical weights, based on the number of Medicare claims for the respective diagnoses, were employed (Gould 1994). The use of analytical weights has the effect of placing greater emphasis on hospitals that have more patients, and thus have more precise estimates of mortality and process performance. All analyses were performed using Stata 10.0.
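The full estimation strategy can be summarized in a short sketch, again in Python with statsmodels rather than Stata; the variable names are illustrative, and the Z-by-year interactions are omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_models(df: pd.DataFrame):
    """Sketch of equations (2) and (3): weighted least squares with
    hospital-clustered standard errors, one row per hospital-year,
    weights equal to the hospital's Medicare cases for the diagnosis."""
    df = df.assign(process_sq=df["process"] ** 2)
    clustered = dict(cov_type="cluster", cov_kwds={"groups": df["hospital"]})

    # Equation (2): pooled model with hospital characteristics Z.
    pooled = smf.wls(
        "log_ra_mort ~ process + process_sq + C(year) + C(ownership)"
        " + beds + teaching + urban + resident_ratio + pct_medicare",
        data=df, weights=df["n_cases"]).fit(**clustered)

    # Equation (3): hospital dummies implement the fixed effects and
    # absorb the time-invariant Z vector (the within estimator).
    fe = smf.wls(
        "log_ra_mort ~ process + process_sq + C(year) + C(hospital)",
        data=df, weights=df["n_cases"]).fit(**clustered)

    # Marginal effect of a one-SD increase at median process performance,
    # read as an approximate percentage change in RA mortality.
    for name, m in [("pooled", pooled), ("fixed effects", fe)]:
        me = m.params["process"] + 2 * m.params["process_sq"] * df["process"].median()
        print(f"{name}: {100 * me:+.1f}% per SD")
    return pooled, fe
```

A joint Wald test that the C(hospital) coefficients are all zero would then play the role of the F-test of the fixed effects described above.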

Results

All analyses were conducted with both composite measures of process performance and with individual process measures in place of the composite measures. Because the results were qualitatively similar across the specifications, only results using Composite 1 are reported in the main regression results (results from models using Composite 2 and individual process measures are available from the authors).

Table 2 shows the marginal effects of Composite 1 from the linear regression and fixed-effects specifications, estimated at the 25th percentile, 50th percentile, and 75th percentile of process performance for each diagnosis. In the linear regression specification, a one-standard-deviation increase from median process performance was associated with a reduction in mortality of 9.0% for AMI (p < .01), 1.5% for heart failure (p < .05), and 1.9% for pneumonia (p < .01). However, when hospital fixed effects were added, this association disappeared: a one-standard-deviation increase from mean process performance was associated with an increase in mortality of .2% for AMI, a decrease in mortality of .5% for heart failure, and an increase in mortality of .5% for pneumonia, none of which were significant at p < .10. For both model specifications, the marginal effects of process performance did not vary substantially over levels of process performance. For each model, tests that all fixed effects equal 0 were rejected, indicating that the results from linear regression models are biased, and tests of the overidentifying restrictions of random effects were rejected, indicating that the results from random effects models (not shown) are biased. (6) Results from the linear regression and fixed-effects models estimated with individual process measures instead of Composite 1 mirror the results in Table 2.

As a sensitivity check, we reran each of the pooled cross-section and fixed-effects models, excluding, separately for each condition, hospitals in the bottom quartile of within-hospital variation. In another iteration, we excluded hospitals in the bottom half of within-hospital variation. Model inference was unchanged in both the pooled cross-section and fixed-effects models when hospitals with limited within-unit variation were excluded. As another sensitivity check, we reran the linear regression and fixed-effects models over three separate time periods: 2004-2005, 2005-2006, and 2004-2006. The results were qualitatively similar across these periods: process performance was significantly and inversely associated with mortality in the pooled cross-sectional models and was not associated with mortality in the fixed-effects models.

Descriptive evidence of hospital improvement for process performance and RA mortality further supports the results from the linear regression and fixed-effects models. Table 1 shows that, while process performance improved steadily from 2004 to 2006 for each diagnosis, mortality changed very little. This supports the results from the fixed-effects models that within-hospital variation in process performance was not associated with within-hospital variation in mortality. Also, among hospitals that reported at least 10 patients for process performance measures and had at least 10 Medicare patients in 2004, 2005, and 2006, within-hospital process performance on Composite 1 in 2004 and 2006 was correlated at r = .65 for AMI, r = .77 for heart failure, and r = .68 for pneumonia. Further, despite substantial improvement in process performance from 2004 to 2006, the relative rankings of hospitals changed very little: among hospitals in the bottom quartile of process performance in 2004, 65% remained in the bottom quartile in 2006 for AMI, 67% for heart failure, and 60% for pneumonia. As a result, levels of process performance remained correlated with mortality over the observation period, as seen in the linear regression results.

Effects of Exception Reporting

Table 3 shows descriptive statistics of the reporting ratios for individual process measures and on the aggregate for each diagnosis. It shows that apart from the initial antibiotic measure for pneumonia, reporting ratios increased for all process measures and for the aggregate ratio from 2004 to 2006. Table 3 also shows that reporting ratios varied substantially across process measures within a diagnosis: the reporting ratio for ACE inhibitor for both the AMI and heart failure diagnoses was approximately one-third the size of the reporting ratio for the other measures within these diagnoses.

Table 4 shows the results from the fixed-effects models in which process performance was regressed on the reporting ratios. Negative marginal effects are taken to be evidence that exception reporting increased process performance. Table 4 shows evidence that within-hospital increases in the reporting ratios were negatively associated with process performance for AMI. The marginal effect of the AMI reporting index was significantly negative (p < .05) at the 75th percentile of the AMI reporting index; the marginal effects of the reporting index for β-blocker at admission, β-blocker at discharge, and use of ACE inhibitor were significant at p < .10 for at least one of the percentiles. Worthy of note is that the marginal effect of the reporting ratio for ACE inhibitor for AMI was very large (27.2 at the 50th percentile), and that the overall level of the reporting ratio tended to be very low for ACE inhibitor, around .15 (see Table 3). Apart from initial antibiotic for pneumonia, Table 4 shows little evidence that within-hospital variation in reporting ratios for heart failure and pneumonia was associated with within-hospital variation in process performance.

Table 5 shows the results from linear regression and fixed-effects models where mortality was regressed on Composite 1 and its square, hospital characteristics, the reporting ratio and its square, and interactions between the Composite 1 and reporting ratio terms. Marginal effects were calculated at the 25th and 75th percentiles of the reporting index. If exception reporting attenuated the association between process performance and mortality, then the marginal effects of process performance would be of a larger magnitude when the reporting ratio was high (at the 75th percentile) relative to when it was low (at the 25th percentile). Table 5 shows that the opposite is true: the marginal effects of process performance showed a stronger inverse relationship with mortality at the 25th percentile relative to the 75th percentile for both the linear regression and fixed-effects specifications for each of the diagnoses. The differences in marginal effects between the 25th and 75th percentiles were significant for the AMI linear regression model (p < .01), the AMI fixed-effects model (p < .10), and the pneumonia linear regression model (p < .05).

Overall, the analysis of exception reporting provides some evidence that exception reporting is related to process performance for AMI, but not for heart failure and pneumonia, and that greater exception reporting does not attenuate the relationship between process performance and mortality for any diagnosis.

Discussion

The results from the linear regression and fixed-effects models indicate that performance on the starter set of Hospital Compare process measures is inversely correlated with risk-adjusted 30-day mortality for AMI, heart failure, and pneumonia, but that within-hospital changes in process performance are not associated with within-hospital changes in mortality. This suggests that while levels of hospital process performance may roughly approximate levels of mortality performance, process performance is not causally related to the mortality outcome, and instead is a proxy for unobserved factors related to mortality (such as physician skill or hospital interest in quality improvement).

Descriptive evidence supports the findings from the regression analysis. Process performance improved substantially over the study period while risk-adjusted mortality changed little, lending face validity to the finding from the fixed-effects models that improvement in process performance is not associated with improvement in mortality. In addition, evidence that improvement in process performance did not substantially alter the relative rankings of hospitals supports the finding that the correlation between process performance and outcomes observed in the linear regressions is not undermined by process improvement unrelated to mortality.

The hypothesis that process measure top-coding may moderate the relationship between process performance and mortality was not supported: the relationship between process performance and mortality was not stronger among hospitals in the bottom quartile of process performance (which were not subject to top-coding) than among those in the top quartile (which may have been subject to top-coding).

Analysis of the effects of enhanced record-keeping or gaming, as measured by hospitals' strategy of excluding patients from the denominator of process performance calculations, indicates that exception reporting appears to increase process performance scores for AMI, particularly for the ACE inhibitor measure, but not for heart failure and pneumonia. However, exception reporting does not attenuate the relationship between process performance and mortality for AMI, heart failure, or pneumonia, and in some cases, appears to strengthen the relationship. This is the opposite of what was expected. A possible explanation is that the reporting ratio itself is actually strongly inversely associated with mortality (not shown): hospitals that exclude fewer patients have much lower mortality rates. Therefore, it is possible that the reporting ratio is a proxy for hospital quality, and that the inverse association between process performance and mortality is attenuated for higher quality hospitals. This could result from the fact that higher quality hospitals already have higher process scores and lower mortality rates, and that the marginal effect of process improvement therefore will be smaller. Overall, these findings suggest that exception reporting is not largely responsible for the absence of a within-hospital association between process performance and mortality. Instead, it is likely that improvements in record-keeping, unobserved in this study but documented in recent examinations (Mathematica 2005; Pham, Coughlan, and O'Malley 2006), are responsible for hospital increases in process performance that did not result in decreases in mortality.

The inverse relationships between process performance and mortality for AMI, heart failure, and pneumonia, observed in the linear regression models, are similar to those reported in the Werner and Bradlow and Jha et al. studies. However, the finding that process performance is not causally related to mortality has not previously been documented in the literature, and conflicts with the assertion made by Jha et al.: "By demonstrating that the results [evaluating the effect of process performance on mortality] do not change meaningfully [as a result of controlling for hospital-level factors], our findings suggest a robust relationship between HQA [Hospital Compare] measures and outcomes that is less likely to be attributable to unmeasured confounders" (p. 1109). In fact, it is Jha et al.'s failure to account for unobserved hospital-level factors that resulted in their spurious correlations between process performance and outcomes.

This is also the first study to examine the effect of exception reporting on process performance in the Hospital Compare program, and to evaluate the effect of top-coding and exception reporting on the relationship between process performance and mortality. Future research should attempt to identify the effect of other similar hospital behaviors--such as the allocation of resources toward the documentation of performance--on the relationship between process performance and mortality.

The findings from this study should be interpreted in light of several relevant limitations. First, the individual-level risk adjustment is limited in its reliance on secondary diagnoses, instead of a more comprehensive set of adjusters (including "present on admission" indicators, laboratory values, and Part B claims [Pine et al. 2007]). As a result, measured process performance may be confounded with unobserved health status if patients who are sicker are more likely to receive the recommended care because physicians think that these patients have a higher probability of an adverse event. However, to the extent that patients' unobserved health status is constant over time within hospitals, this potential bias would be addressed in the models including hospital fixed effects. Also, while hospital process performance was evaluated using data from all payers, mortality was evaluated using only data from Medicare beneficiaries. Patient-level process performance data would help to further elucidate the relationship between processes of care and patient outcomes.

In addition, process performance was evaluated using only a small set of measures for three major clinical conditions, which likely does not accurately reflect true hospital process quality for these conditions. However, because many of the measures in the starter set will likely be used in future hospital quality interventions such as value-based purchasing, and given that they are among the small group of NQF-endorsed measures, the relationship between these measures and mortality is relevant to policy. Further, while not all acute care hospitals reported process quality data, approximately 95% of hospitals reported the required measures (Werner 2006; CMS 2009). Because the analysis was weighted by the number of admissions within hospitals and because supplemental analysis (not shown) indicated that nonreporting hospitals tended to be much smaller than reporting hospitals, the impact of nonreporting hospitals on the results was likely minimal.

Another limitation of the study was reliance on mortality as the only outcome evaluated. Mortality has inherent limitations as an outcome measure. While mortality is the "ultimate outcome," every patient eventually dies of some condition, and risk adjustment is never complete, making the assignment of causality to hospital process performance fraught with difficulty. The relationship between process performance and other health outcomes may differ from the relationship observed here. However, despite the noise in the mortality measures, results from the linear regression models showed that process performance is significantly correlated with mortality. As a result, imprecise measurement of true mortality performance is not solely responsible for the absence of an observed causal relationship.

A final limitation of the study pertains to the extent of variation in the process performance measures within hospitals over time, and the effect of this variation on the consistency and efficiency of the marginal effects in the fixed-effects models. As noted by Wooldridge (2002, p. 266), consistent estimation of fixed-effects parameters is conditional on some within-unit variation, but does not require a designated "large" amount of variation. However, fixed-effects parameters will be inefficient with limited within-unit variation. Table 1 indicates that the ratio of between-hospital to within-hospital variation for Composite 1 is 2.53 for AMI, 2.74 for heart failure, and 2.07 for pneumonia. While rules of thumb are hard to come by, simulation evidence suggests that these ratios do not have large efficiency implications (Plumper and Troeger 2007). This was confirmed by the observation that the standard errors of the marginal effects in the fixed-effects models were less than one-and-one-half times larger than those in the linear regression models. Even if the standard errors were the same in the fixed-effects models as in the linear regression models, the marginal effects were so small (less than .8% for each diagnosis) and generally positive in the fixed-effects models that the inference that process performance was significantly associated with reduced mortality could not be made. Further, sensitivity checks that excluded hospitals with lower within-unit variation on process performance and that estimated the models over different combinations of years yielded identical inference. Finally, if the true causal relationship derived from the fixed-effects models were at the negative end of their 95% confidence intervals, the inverse relationship between process performance and mortality would still be very small.

Future research should examine correlational and causal relationships between process of care performance and health outcomes under a variety of circumstances. For instance, do the relationships observed in this paper differ as more, and potentially better, measures of process performance are integrated into Hospital Compare? Are the relationships observed in this analysis consistent for other diagnoses? Do the relationships observed in this paper hold for other outcomes, such as the AHRQ Patient Safety Indicators or the Brailer et al. (1996) comorbidity-adjusted complications indices? Will these findings remain robust over additional years of data as value-based purchasing incentives and longer periods of public reporting further change hospital behavior?

Conclusion

By not accounting for unobserved confounders, recent studies have failed to establish a credible causal relationship between process of care performance and mortality. The current investigation provides the strongest evidence to date that the Hospital Compare process measures for AMI, heart failure, and pneumonia are not causally related to mortality for these diagnoses. While the correlation between process performance and mortality may support the use of process measures for public reporting (as patients may be steered toward higher quality providers), the absence of a causal relationship casts serious doubt on the validity of current process performance measures as a metric of hospital quality improvement.

To improve the value of Medicare spending and to create incentives for quality improvement in a payment system, quality of care must be measured accurately and meaningfully. This study suggests that the process performance measures currently employed for this task are inadequate. CMS' approach of including only NQF-endorsed measures has the advantage of seeking stakeholder consensus for political purposes, but has the disadvantage of limiting the universe of potential measures for the program. Further, the lengthy consensus building and endorsement of process performance measures assures that the set of measures employed as quality metrics are unlikely to improve in the short term. As Medicare moves to a value-based purchasing model, policymakers must create incentives for providers to change their patterns of care to extend the lives and improve the well-being of seniors, not just create incentives for providers to check a box and comply with standardized process metrics.

References

Birkmeyer, J. D., E. A. Kerr, and J. B. Dimick. 2006. Improving the Quality of Quality Measurement. In Performance Measurement: Accelerating Improvement, Institute of Medicine. Washington, D.C.: National Academies Press.

Bradley, E. H., J. Herrin, B. Elbel, R. L. McNamara, D. J. Magid, B. K. Nallamothu, Y. F. Wang, S. L. T. Normand, J. A. Spertus, and H. M. Krumholz. 2006. Hospital Quality for Acute Myocardial Infarction Correlation among Process Measures and Relationship with Short-term Mortality. Journal of the American Medical Association 296:72-78.

Brailer, D. J., E. Kroch, M. V. Pauly, and J. P. Huang. 1996. Comorbidity-Adjusted Complication Risk--A New Outcome Quality Measure. Medical Care 34(5):490-505.

Centers for Medicare and Medicaid Services (CMS). 2009. Reporting Hospital Quality Data for Annual Payment Update. http://www.cms.hhs.gov/HospitalQualityInits/08_HospitalRHQDAPU.asp#TopOfPage. Accessed April 22, 2009.

Donabedian, A. 1966. Evaluating the Quality of Medical Care. Milbank Quarterly 44:166-203.

Doran, T., C. Fullwood, H. Gravelle, D. Reeves, E. Kontopantelis, U. Hiroeh, and M. Roland. 2006. Pay-for-Performance Programs in Family Practices in the United Kingdom. New England Journal of Medicine 355(4):375-384.

Eggleston, K. 2005. Multitasking and Mixed Systems for Provider Payment. Journal of Health Economics 24:211-223.

Elixhauser, A., C. Steiner, D. R. Harris, and R. N. Coffey. 1998. Comorbidity Measures for Use with Administrative Data. Medical Care 36(1): 8-27.

Fonarow, G. C., W. T. Abraham, N. M. Albert, W. G. Stough, M. Gheorghiade, B. H. Greenberg, C. M. O'Connor, K. Pieper, J. L. Sun, C. Yancy, and J. B. Young. 2007. Association between Performance Measures and Clinical Outcomes for Patients Hospitalized with Heart Failure. Journal of the American Medical Association 297:61-70.

Galvin, R. 2006. Pay-For-Performance: Too Much Of A Good Thing? A Conversation With Martin Roland. Health Affairs 25:w412-419.

Gould, W. 1994. Clarification on Analytic Weights with Linear Regression. Stata Technical Bulletin 20:2-3.

Granger, C. B., P. G. Steg, E. Peterson, J. Lopez-Sendon, F. Van de Werf, E. Kline-Rogers, J. Allegrone, O. H. Dabbous, W. Klein, K. A. A. Fox, and K. A. Eagle. 2005. Medication Performance Measures and Mortality following Acute Coronary Syndromes. American Journal of Medicine 118(8):858-865.

Hibbard, J. H., J. Stockard, and M. Tusler. 2003. Does Publicizing Hospital Performance Stimulate Quality Improvement Efforts? Health Affairs 22:84-94.

Holmstrom, B., and P. Milgrom. 1991. Multitask Principal Agent Analyses--Incentive Contracts, Asset Ownership, and Job Design. Journal of Law Economics and Organization 7:24-52.

Institute of Medicine. 2000. To Err is Human: Building a Safer Health System, L. T. Kohn, J. M. Corrigan, and M. S. Donaldson, eds. Washington, D.C.: National Academies Press.

--. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academies Press.

Jha, A. K., E. J. Orav, Z. H. Li, and A. M. Epstein. 2007. The Inverse Relationship between Mortality Rates and Performance in the Hospital Quality Alliance Measures. Health Affairs 26:1104-1110.

Kaiser Family Foundation. 2007. Medicare Spending and Financing Fact Sheet. http://www.kff.org/medicare/upload/7305-02.pdf. Accessed January 29, 2008.

Luthi, J. C., W. D. Flanders, S. R. Pitts, B. Burnand, and W. M. McClellan. 2004. Outcomes and the Quality of Care for Patients Hospitalized with Heart Failure. International Journal for Quality in Health Care 16(3):201-210.

Luthi, J. C., M. J. Lund, L. Sampietro-Colom, D. G. Kleinbaum, D. J. Ballard, and W. M. McClellan. 2003. Readmissions and the Quality of Care in Patients Hospitalized with Heart Failure. International Journal for Quality in Health Care 15(5):413-421.

Mant, J. 2001. Process Versus Outcome Indicators in the Assessment of Quality of Health Care. International Journal for Quality in Health Care 13:475-480.

Mathematica Policy Research Inc. 2005. Hospital Responses to Public Reporting of Quality Data to CMS: 2005 Survey of Hospitals. Final Report.

McGlynn, E. A., S. M. Asch, J. Adams, J. Keesey, J. Hicks, A. DeCristofaro, and E. A. Kerr. 2003. The Quality of Health Care Delivered to Adults in the United States. New England Journal of Medicine 348:2635-2645.

Peterson, E. D., M. T. Roe, J. Mulgund, E. R. DeLong, B. L. Lytle, R. G. Brindis, S. C. Smith, C. V. Pollack, L. K. Newby, R. A. Harrington, W. B. Gibler, and E. M. Ohman. 2006. Association between Hospital Process Performance and Outcomes among Patients with Acute Coronary Syndromes. Journal of the American Medical Association 295:1912-1920.

Pham, H. H., J. Coughlan, and A. S. O'Malley. 2006. The Impact of Quality-Reporting Programs on Hospital Operations. Health Affairs 25:1412-1422.

Pine, M., H. S. Jordan, A. Elixhauser, D. E. Fry, D. C. Hoaglin, B. Jones, R. Meimban, D. Warner, and J. Gonzales. 2007. Enhancement of Claims Data to Improve Risk Adjustment of Hospital Mortality. Journal of the American Medical Association 297(1):71-76.

Plumper, T., and V. Troeger. 2007. Efficient Estimation of Time-invariant and Rarely Changing Variables in Finite Sample Panel Analyses with Unit Fixed Effects. Political Analysis 15:124-139.

Porter, M. E., and E. O. Teisberg. 2006. Redefining Healthcare: Creating Positive-Sum Competition to Deliver Value. Boston: Harvard Business School Press.

Schaffer, M. E., and S. Stillman. 2006. xtoverid: Stata Module to Calculate Tests of Overidentifying Restrictions after xtreg, xtivreg, xtivreg2 and xthtaylor. http://ideas.repec.org/c/boc/bocode/s456779.html. Accessed July 7, 2008.

Smith, P. 1995. On the Unintended Consequences of Publishing Performance Data in the Public Sector. International Journal of Public Administration 18(2-3):277-310.

StataCorp. 2007. Stata Statistical Software: Release 10. College Station, TX: StataCorp LP.

U.S. Congress. 2005. House Report 109-362, Deficit Reduction Act of 2005. http://thomas.loc.gov/cgi-bin/cpquery/R?cp109:FLD010:@1(hr362). Accessed January 29, 2008.

U.S. Senate Finance Committee. 2009. Transforming the Health Care Delivery System: Proposals to Improve Patient Care and Reduce Health Care Costs. http://finance.senate.gov/sitepages/leg/LEG%202009/042809%20Health%20Care%20Description%20of%20Policy%20Option.pdf. Accessed August 10, 2009.

Werner, R. M., and E. T. Bradlow. 2006. Relationship between Medicare's Hospital Compare Performance Measures and Mortality Rates. Journal of the American Medical Association 296(22):2694-2702.

Werner, R. M., E. T. Bradlow, and D. A. Asch. 2008. Does Hospital Performance on Process Measures Directly Measure High Quality Care or Is It a Marker of Unmeasured Care? Health Services Research 43(5): 1464-1484.

Williams, R. L. 2000. A Note on Robust Variance Estimation for Cluster-Correlated Data. Biometrics 56(2):645-646.

Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge, Mass.: MIT Press.

--. 2006. Introductory Econometrics: A Modern Approach, 3rd ed. Mason, Ohio: Thomson South-Western.

Notes

Andrew Ryan was supported by a training grant from the Agency for Healthcare Research and Quality (grant no. 05 T32 HS000062-14) and by the Jewish Healthcare Foundation under the grant, "Achieving System-wide Quality Improvements--A collaboration of the Jewish Healthcare Foundation and Schneider Institutes for Health Policy." The authors would like to thank Deborah Garnick and Christopher Baum for helpful comments on this paper.

(1) Doran et al. (2006) note that some of the conditions for which greater exception reporting occurred, such as mental health, may also have more legitimate reasons supporting the exceptions.

(2) The Werner and Bradlow study found that one pneumonia process measure (oxygenation assessment) was not associated with mortality, and that some process measures were not associated with any of the three measures of mortality (in-hospital, 30-day, and one-year mortality) at p < .05; however, the other process measures were all associated with mortality at p < .05 for at least one measure of mortality.

(3) $$\text{Composite 1}_{jkt} = \frac{Z_{jkt} - \bar{Z}_k}{\sigma_k}, \quad \text{where } Z_{jkt} = \sum_{l=1}^{n} \left( \frac{q_{jklt} - \bar{q}_{kl}}{\sigma_{kl}} \right) S_{jklt} \quad \text{and} \quad S_{jklt} = \frac{r_{jklt}}{\sum_{l=1}^{n} r_{jklt}}$$

In this equation, j indexes hospitals, k indexes conditions (AMI, heart failure, or pneumonia), l indexes process measures (different for each condition), t indexes years (2004 to 2006), and n is the number of individual process measures for each diagnosis. Further, q is the measure-specific hospital quality score (ranging from 0 to 100), r is the number of patients included in the calculation of a given process measure, σ is the standard deviation, and S is the weight given to each process measure. Note that Composite 1 is missing only if fewer than 10 patients were included in the indicator quality score for every process indicator associated with the condition.

(4) $$\text{Composite 2}_{jkt} = \frac{Z_{jkt} - \bar{Z}_k}{\sigma_k}, \quad \text{where } Z_{jkt} = \sum_{l=1}^{n} q_{jklt}$$

Note that Composite 2 is missing if a hospital had fewer than 10 patients included in any of the process scores associated with the condition.

(5) $$\text{reporting ratio}_{jkt} = \frac{\left( \frac{1}{n} \sum_{l=1}^{n} r_{jklt} \right) \times \text{percent Medicare}_{jt}}{\text{Medicare caseload}_{jkt}}$$

where n is the number of individual process measures for a given diagnosis. Approximately 1% of hospitals had reporting ratios greater than 1, in which case the reporting ratios were set to 1.

(6) This test was performed without analytic weights.

Andrew M. Ryan, M.A., Ph.D., is an assistant professor in the Department of Public Health at Weill Cornell Medical College. Christopher P. Tompkins, Ph.D., is an associate professor, and Stanley S. Wallack, Ph.D., is a professor, both at the Heller School of Social Policy and Management, Brandeis University. James F. Burgess, Jr., Ph.D., is a senior investigator in the Center for Organization, Leadership and Management Research at the Veterans Administration Boston Healthcare System, and an associate professor at Boston University School of Public Health. Address correspondence to Dr. Ryan at the Department of Public Health, Division of Outcomes and Effectiveness Research, Weill Cornell Medical College, 402 East 67th Street, LA-215, New York, NY 10065. Email: amr2015@med.cornell.edu
Table 1. Hospital characteristics

                                                                Between-   Within-
                                                                hospital   hospital
Hospital characteristics                 2004    2005    2006         SD        SD

n                                       3,317   3,277   3,262        --        --
Ownership (%)
  Government run                         18.0    18.6    18.7      38.3       6.3
  For-profit                             18.2    18.2    18.4      38.1       7.5
  Not-for-profit                         63.8    63.1    62.9      47.6       8.2
Number of beds (%)
  1-99                                     --      --    33.4      47.2        --
  100-399                                  --      --    57.1      49.5        --
  400+                                     --      --     9.5      29.3        --
Teaching (%)                               --      --    32.0      46.7        --
Residents/average daily census (mean)      --      --     .09       .22        --
Percent Medicare admissions (mean)         --      --    50.3      14.2        --
Urban (%)                                  --      --    70.7      45.5        --
Risk-adjusted 30-day mortality (mean)
  AMI                                    16.6    16.3    16.5       7.3       7.1
  Heart failure                          10.1    10.1     9.9       3.4       3.0
  Pneumonia                              11.7    11.1    11.1       3.4       2.8
CMS process performance (mean)
  AMI
    Aspirin at admission                 92.7    93.7    94.1       6.8       3.2
    Aspirin at discharge                 89.5    91.8    92.4      10.1       4.4
    β-blocker at admission               86.6    89.4    90.0      11.0       4.7
    β-blocker at discharge               88.3    91.5    92.0      10.5       4.5
    ACE inhibitor                        78.9    83.0    83.7      11.9       6.6
    Composite 1                           .83    1.07    1.13       .99       .39
    Composite 2                           .73    1.10    1.18       .99       .42
  Heart failure
    Assessment of left ventricular
      function                           81.6    85.5    86.5      15.7       5.3
    ACE inhibitor                        74.9    81.6    82.6      11.5       7.2
    Composite 1                           .81    1.11    1.18       .93       .34
    Composite 2                           .66    1.14    1.22       .92       .46
  Pneumonia
    Oxygenation assessment               98.0    99.1    99.2       3.2       1.9
    Initial antibiotic                   72.0    75.9    77.3      11.9       4.8
    Pneumococcal vaccination             45.5    59.6    64.3      22.1      12.2
    Composite 1                           .64    1.13    1.26       .93       .45
    Composite 2                           .56    1.13    1.32       .90       .46

Notes: Table includes data from hospitals that are included in at
least one of the regression models. Data from individual process
measures are included only if the hospital reported at least 10 cases
in the measure denominator. Composite measures are scaled so that the
mean of each measure across all hospitals and all years is 1.

Table 2. Marginal effects of process performance (Composite 1) on
30-day risk-adjusted mortality

                            25th         50th         75th
Description              percentile   percentile   percentile      n     R²

AMI
  Linear regression       -8.1% ***    -9.0% ***    -9.6% ***    8,696   .10
    with hospital           (.7)        (1.0)        (1.2)
    controls
  Hospital fixed            .1%          .2%          .3%        8,549   .03
    effects                (1.1)        (1.4)        (1.7)
Heart failure
  Linear regression       -1.2% *      -1.5% **     -2.2% **     9,487   .04
    with hospital           (.7)         (.8)        (1.0)
    controls
  Hospital fixed            .8%          .5%         -.1%        9,369   .03
    effects                (1.0)        (1.1)        (1.4)
Pneumonia
  Linear regression       -1.9% ***    -1.9% ***    -1.9% **     8,913   .03
    with hospital           (.6)         (.6)         (.7)
    controls
  Hospital fixed            .4%          .5%          .6%        8,704   .04
    effects                 (.8)         (.9)        (1.0)

Notes: Hospital controls include ownership, number of beds, teaching
status, urbanicity, and ratio of residents to average daily census.

Robust standard errors are in parentheses.

Marginal effects are converted into percentages to facilitate
interpretation.

*** p < .01; ** p < .05; * p < .1.

Table 3. Descriptive statistics for reporting ratios

                                                               Between-   Within-
                                                               hospital   hospital
Process measure                           2004    2005   2006        SD        SD

AMI
  Aspirin at admission                    .592    .599   .641      .181      .090
  Aspirin at discharge                    .473    .475   .526      .234      .076
  β-blocker at admission                  .526    .512   .550      .174      .089
  β-blocker at discharge                  .480    .487   .541      .232      .077
  ACE inhibitor                           .140    .150   .167      .063      .034
  AMI reporting ratio                     .421    .423   .468      .140      .072
Heart failure
  Assessment of left ventricular
    function                              .544    .556   .608      .133      .073
  ACE inhibitor                           .174    .192   .214      .072      .041
  Heart failure reporting ratio           .352    .368   .408      .100      .059
Pneumonia
  Oxygenation assessment                  .694    .647   .725      .170      .089
  Initial antibiotic                      .632    .537   .609      .162      .092
  Pneumococcal vaccination                .394    .396   .455      .133      .063
  Pneumonia reporting ratio               .570    .521   .593      .161      .080

Notes: Table includes data from hospitals that are included in at
least one of the regression models, as well as hospitals that did not
report sampling.

For individual process measures, the table includes data from hospitals
that reported at least 10 cases for that measure.

For the diagnosis-level reporting ratios, the table includes data from
hospitals that reported at least 10 cases for at least one of the
diagnosis's process measures.

Table 4. Marginal effects of reporting ratios on process performance,
fixed-effects model

                                      25th         50th         75th
Marginal effect                    percentile   percentile   percentile      n     R²

AMI
  Aspirin at admission                -.1          -.3          -.4        7,643   .06
                                      (.8)         (.5)         (.5)
  Aspirin at discharge                2.6          1.8           .1        6,545   .09
                                     (1.8)        (1.4)        (1.0)
  β-blocker at admission             -2.0         -1.7 *       -1.5 *      7,535   .13
                                     (1.5)         (.9)         (.8)
  β-blocker at discharge              -.3         -1.1         -2.7 **     6,649   .21
                                     (2.0)        (1.6)        (1.0)
  ACE inhibitor                     -29.0 ***    -27.2 ***    -24.8 ***    4,292   .17
                                     (9.4)        (8.1)        (6.7)
  AMI reporting ratio                 -.1          -.2          -.2 **     7,686   .15
                                      (.2)         (.1)         (.1)
Heart failure
  Assessment of left ventricular      -.3          -.4          -.4        8,200   .23
    function                         (1.2)         (.9)         (.9)
  ACE inhibitor                      -6.8         -6.1         -5.5        7,386   .31
                                     (4.6)        (4.1)        (3.6)
  Heart failure reporting ratio        .2           .2           .2        8,201   .29
Pneumonia
  Oxygenation assessment               .5           .1          -.3        7,993   .11
                                      (.4)         (.3)         (.5)
  Initial antibiotic                 -1.9 *       -2.1 **      -2.3 **     7,940   .28
                                     (1.4)        (1.0)         (.9)
  Pneumococcal vaccination           -2.2         -2.4         -2.5        7,899   .47
                                     (4.5)        (3.6)        (2.9)
  Pneumonia reporting ratio            .1           .0           .0        7,995   .42
                                      (.1)         (.1)         (.1)

Notes: Table includes data from hospitals that are included in at
least one of the regression models, as well as hospitals that did not
report sampling.

For individual process measures, the table includes data from hospitals
that reported at least 10 cases for that measure.

For the diagnosis-level reporting ratios, the table includes data from
hospitals that reported at least 10 cases for at least one of the
diagnosis's process measures.

The dependent variable differs by row: in the first row it is the
process score for aspirin at admission, in the second the process score
for aspirin at discharge, and so on.

The marginal effects for the AMI, heart failure, and pneumonia
reporting ratios are in units of the standard deviation of Composite 1,
and are consequently much smaller than the other marginal effects
reported.

Robust standard errors are in parentheses.

*** p < .01; ** p < .05; * p < .1.

Table 5. Marginal effects of process performance (Composite 1) on
30-day risk-adjusted mortality in models at the 25th and 75th
percentiles of the reporting ratio

                            25th          75th
Description              percentile    percentile      n     R²

AMI
  Linear regression      -11.3%         -6.2% ***    7,827   .09
    with hospital         (1.4)         (1.3)
    controls
  Hospital fixed          -2.2%          1.4%        7,572   .02
    effects               (1.9)         (1.9)
Heart failure
  Linear regression       -1.9% **      -1.2%        8,240   .03
    with hospital         (1.0)          (.9)
    controls
  Hospital fixed            .4%          1.0%        7,915   .02
    effects               (1.4)         (1.4)
Pneumonia
  Linear regression       -2.8% ***     -1.2%        7,513   .04
    with hospital          (.8)          (.8)
    controls
  Hospital fixed            .6%          1.2%        7,070   .02
    effects               (1.1)         (1.2)

Notes: The difference in marginal effects between the 25th and 75th
percentiles of the reporting ratio is significant at p < .01 in the AMI
linear regression model, at p < .10 in the AMI fixed-effects model, and
at p < .05 in the pneumonia linear regression model.

Hospital controls include ownership, number of beds, teaching status,
urbanicity, and ratio of residents to average daily census.

Robust standard errors are in parentheses.

Marginal effects are converted into percentages to facilitate
interpretation.

*** p < .01; ** p < .05; * p < .1 for tests of marginal effects.