
A Longitudinal Study of the Effects of Graduate Medical Education on Hospital Operating Costs

Kathleen Dalton, Edward C. Norton, and Kerry Kilpatrick

Health Services Research, February 2001

Objective. To examine the effect of graduate medical education sponsorship on hospital operating costs over a seven-year period, to test for a longitudinal association between teaching intensity and cost, and to determine whether the indirect medical education (IME) payment adjustments made under Medicare's Prospective Payment System are appropriate.

Data Sources. Medicare cost and payment data from the Hospital Cost Report Information System and other related HCFA files, from FFY 1989 through 1995. The study population consists of all short-stay hospitals (approximately 5,000) participating in Medicare and receiving case payments by diagnosis-related groups.

Study Design. The original cost functions used to develop indirect medical education payment adjustments under PPS are re-estimated with panel data. Specification changes are included based on findings from critiques of the original hospital cost model. Additional variations on the model are explored to test for differences by hospital status, to control for the effect of additional disproportionate share and outlier payments, and to isolate the effects of improved case-mix measurement on model results.

Principal Findings. Fixed effects regression produces no evidence of a significant within-hospital association between increased sponsorship of medical residents and increased cost per case. In models designed to capture a cross-sectional association, operating costs are positively related to teaching activity, but the association shows a decline in strength over time. In all years, the strength of the association is significantly greater among hospitals eligible for disproportionate share adjustments and among major teaching hospitals. Controlling for secular trends of increased teaching intensity results in a pattern of declining cross-sectional teaching coefficients that supports a theory that observed teaching effects are the result of unmeasured case severity.

Conclusions. A significant but declining cost differential is observed between teaching and nonteaching hospitals. The association appears to be related to hospital and patient characteristics that cannot be controlled using currently available case-mix and wage indices. Longitudinal models do not provide evidence to support a payment adjustment formula that allows individual hospitals to recompute their IME adjustment rates as their teaching ratios rise or fall from year to year. Cross-sectional findings suggest that re-estimations of the teaching effect may be appropriate when significant improvements occur in Medicare case-mix measurement.

Key Words. Medicare, Prospective Payment System, indirect medical education, teaching hospitals

Medicare's incremental payments to teaching hospitals are highly controversial. An estimated $4.6 billion was paid in 1997 as indirect medical education (IME) adjustments added on to payments for acute care discharges (MedPAC 1998). Hospitals that receive IME payments have substantially higher Medicare payment-to-cost margins than those that do not (ProPAC 1995).

IME adjustments to the per-case payment rates under the Prospective Payment System (PPS) were originally derived from observed cost differentials in teaching hospital settings (Pettengill and Vertrees 1982). Evidence that the IME formulas overcompensate for cost differentials has been documented repeatedly (ProPAC 1987, 1991, 1992; CBO 1995), but the adjustments have been successfully justified to Congress on the grounds that overall operating margins of teaching facilities are worse than those of their nonteaching counterparts, and serious financial dislocation would result if IME payments were reduced (COGME 1990; AAMC 1997b; ProPAC 1997). Thus the adjustment has undergone a conceptual transformation since its introduction in 1984, from a tool for product differentiation within an administered price system to an explicit Medicare subsidy used by teaching institutions to underwrite other revenue shortfalls as they carry out their social and educational missions.

Much of the current policy debate with respect to Medicare payments to teaching hospitals focuses on the public policy implications of this notion of subsidy (Pew Health Professions Commission 1995; Cohen 1998; Ross 1999). IME adjustments are designed, however, to address a combination of valid cost differentials as well as public policy objectives. An examination of the econometric evidence for incremental teaching payments is a necessary component of the re-evaluation of Medicare support for medical education.

BACKGROUND

Under Medicare's PPS for short-stay hospitals, facilities that sponsor or have formal affiliations with graduate medical education programs receive incremental payments that vary according to the ratio of on-site trainees to hospital beds and that are proportional to the base payment per diagnosis-related group (DRG) (Social Security Act §1886). Slightly over one-fifth of all short-stay hospitals qualify for these IME payments. The mean IME adjustment in 1995 was a 12.6 percent increment to the hospital's payment per DRG, but the distribution is highly skewed. Approximately 10 percent of qualifying hospitals received adjustments of over 36 percent, while 50 percent received adjustments below 7 percent. IME payments comprised 4.8 percent of Medicare Part A payments to short-stay hospitals in 1989 and 6.3 percent in 1995. [1]

The term "indirect medical education cost" refers to observed differences in Medicare operating costs that correlate with measures of the intensity of a hospital's participation in graduate medical education. It does not refer to any of the direct costs attributable to resident stipends, faculty supervision, or other overhead related to educational activities reimbursed by Medicare under a separate payment scheme as "direct medical education costs." The indirect cost differentials were empirically derived by Health Care Financing Administration (HCFA) analysts from the parameter estimates on a teaching intensity variable in regression models of operating costs per Medicare discharge that controlled for case mix, urban location, hospital size, and local wage variation (Pettengill and Vertrees 1982). In pre-PPS hospital payment regulations, HCFA had incorporated similar adjustments to limitations imposed on routine per-diem costs and, later, on target costs per discharge that were part of the Tax Equity and Fi scal Responsibility Act (TEFRA) 1982 payment regulations (CCH 1999).

In analyses conducted for the PPS adjustments, costs from 1981 data were originally found to be 5.79 percent higher for each increase of 0.10 in the teaching variable, which was measured as 1 plus the ratio of interns and residents to staffed beds (IRB). Medicare's IME adjustment is structured to reflect these cost differentials as a similarly increasing function of a resident-to-bed ratio computed by the hospitals each year on their cost reports. While the PPS legislation was under consideration, Congressional Budget Office (CBO) simulations compared projected payments under PPS to reimbursement under the existing cost-based system and generated some political concern by predicting that 71 percent of all teaching hospitals and 70 percent of hospitals with over 300 beds would experience reductions in payment (Lave 1985). Congress responded by allowing a multiplier of 2.0 to be applied when the model's teaching coefficient was incorporated into a formula for teaching hospital payments, doubling the observed effect and creating a total IME adjustment rate of 11.59 percent.

In 1985, the 1981 data were re-analyzed by the CBO to incorporate several technical changes to the cost model, and the coefficient was estimated at 0.405 (ProPAC 1989). COBRA 1985 (PL 99-272) altered the formula for the IME adjustment, both to apply the new coefficient and to allow the payment formula to reflect the diminishing marginal effect of teaching intensity that is implied by the model's functional form. A further adjustment in OBRA 1987 (PL 100-203) reduced the policy-derived multiplier from 2.0 to 1.89, and this formula remained in effect from 1988 to 1997. The Balanced Budget Act of 1997 (PL 105-33) mandated a phased-in reduction of the multiplier down to 1.35 by 2001. The basic form of the payment adjustment, however, has not been altered since the 1985 legislation. Results from the CBO's cross-sectional study of 1981 hospital operating costs still constitute the basis for IME adjustment for the operating component of the DRG payment. Since 1992, IME adjustments have also applied to the DRG payments for capital costs. The capital IME formula is derived from a slightly different model where teaching intensity is measured by the ratio of residents to occupied beds (Phillips 1992). This article addresses only the operating IME adjustment, but our findings may apply equally to the adjustment made to the capital portion of the DRG payment.

The model estimating hospital operating costs is a log-linear transformation of a Cobb-Douglas production function in the form of $Y = \alpha X^{\beta} \epsilon$, where the dependent variable is the natural log of the average Medicare operating cost per case, and incremental units of X each have diminishing marginal effects on Y. Log transformation permits the equation to be estimated using linear modeling techniques. As described by Pettengill and Vertrees (1982), HCFA's original estimating equation regressed the transformed cost variable on the natural log of (1 + IRB); adding the constant of 1.0 to the resident-to-bed ratio allowed the logged teaching variable to equal 0 in the case of nonteaching hospitals. The remaining independent variables were the logs of each hospital's case-mix index, area wage index, and bed capacity, and a set of indicator variables designating urban and large urban location. When the CBO re-estimated the equation it added a variable to proxy for disproportionate share (charity) obligations, eliminated the bed capacity variable, and constrained the coefficients on the case-mix and wage index variables to equal 1.00 and 0.75, respectively (ProPAC 1989; Sheingold 1990). The general formula for the IME adjustment rate is represented as $M \times [(1 + IRB)^{\beta} - 1]$, where $\beta$ is the estimated teaching coefficient and M represents the policy-derived multiplier described above. [2] Figure 1 graphically compares the computed cost adjustments from the different IME formulas in effect since PPS inception for resident-to-bed ratios ranging from 0 to 1.3.
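To make the payment mechanics concrete, the following is a minimal sketch in Python of the general adjustment formula above. The function name and the example ratio are ours; the default coefficient (0.405) and multiplier (1.89) are the values, described earlier, that were in effect from 1988 to 1997.

```python
def ime_adjustment_rate(irb: float, beta: float = 0.405, multiplier: float = 1.89) -> float:
    """Operating IME add-on as a fraction of the base DRG payment.

    Implements M x [(1 + IRB)^beta - 1]; the defaults are the COBRA 1985
    coefficient and the OBRA 1987 multiplier in effect from 1988 to 1997.
    """
    return multiplier * ((1.0 + irb) ** beta - 1.0)

# Example (hypothetical hospital): 0.25 residents per staffed bed.
print(f"{ime_adjustment_rate(0.25):.4f}")  # ~0.1787, roughly an 18 percent add-on per DRG
```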

Teaching cost differentials were attributed initially to inefficiencies inherent to training residents (Pettengill and Vertrees 1982). Other explanations attributed the differentials to unobserved patient and hospital characteristics for which the teaching variable served as a proxy, including greater illness severity and treatment intensity not adequately captured by the case-mix index, and to poor control for wage variation and other location-related factors (Lave 1985). Published critiques of the HCFA cost model have concentrated on identifying bias from omitted variables (Anderson and Lave 1986; Welch 1987; Thorpe 1988) and incorrect functional form (Thorpe 1988; Rogowski and Newhouse 1992). Both the Thorpe and the Rogowski and Newhouse studies found evidence that teaching effects were not log-linear but had some threshold value above which the teaching coefficient was significantly greater. Rogowski and Newhouse re-estimated the HCFA cost function with data from 1984, using alternative log transformations. They concluded that what had been identified as omitted variables bias by earlier research was more likely the result of inappropriate functional form that could be corrected by reformulating the teaching variable as the log of (.0001 + IRB) and relaxing the assumption of constant elasticity with respect to teaching. They also addressed technical shortcomings of the earlier models by adding weights (appropriate to analysis on grouped data) and introducing smearing estimates (appropriate for retransformation of a log model under certain types of heteroscedasticity).

What may be most notable about the literature as a whole is that, regardless of the specification changes, indirect teaching costs continued to be identified as significant and positive for some or all teaching facilities. The distribution of the IME cost differentials (and therefore also the payment adjustments) across hospitals is clearly influenced by the model's choice of control variables and functional form. In total, however, the analyses do not contradict HCFA's original premise of a positive cross-sectional association between teaching intensity and cost that cannot be explained by available measures of patient mix or regional variation in input prices.

Since the implementation of PPS, the margin between Medicare inpatient payments and hospital costs has been consistently higher among teaching than nonteaching hospitals, and highest among major teaching hospitals. Over the seven years covered by our study, the margins have improved for all short-stay facilities, but the gaps by teaching status have widened (see Note 1). One reason for this lies with the disproportionate share payments, which are correlated with teaching activity and which have also grown over this period. Another explanation, which we investigate in this study, may be in the mechanics of the IME payment formula as written into legislation. The coefficient of 0.405 represents an estimate of the association between teaching and cost observed across hospitals in a single year. While the coefficient in the formula is mandated and has not changed since COBRA 1985, until the Balanced Budget Act of 1997 individual hospital IME adjustment rates were still recomputed each year using annually updated resident-to-bed ratios. In a given year, if Hospital A and Hospital B were located in similar labor markets, but Hospital A's ratio was 0.10 and Hospital B's was 0.20, Hospital B's payment per DRG exceeded that of Hospital A by approximately (1.89 x 4.05), or 7.7 percentage points. Similarly, if Hospital A increased its ratio from 0.10 in one year to 0.20 in the next, its payment rate in the subsequent year was raised by 7.7 percentage points. Thus the IME formula implemented the payment differentials as though the research had demonstrated that the effect of teaching intensity on cost was applicable within hospitals over time, as well as across hospitals.
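The 7.7-point figure in the Hospital A/Hospital B example can be reproduced directly. The short sketch below (ours) contrasts the linear approximation used in the text with the exact payment formula; the small gap between the two reflects the diminishing marginal effect built into the functional form.

```python
# Linear approximation from the text: 4.05 percent per 0.10 of IRB, scaled
# by the 1.89 multiplier.
approx_diff = 1.89 * 0.405 * 0.10                 # ~0.0765, i.e., 7.7 points

# Exact difference in adjustment rates between IRB = 0.20 and IRB = 0.10:
exact_diff = (1.89 * (1.20 ** 0.405 - 1.0)
              - 1.89 * (1.10 ** 0.405 - 1.0))     # ~0.0705, i.e., 7.0 points

print(f"approximate: {approx_diff:.4f}  exact: {exact_diff:.4f}")
```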

If the longitudinal association is not present, this annual recomputation of the IME adjustment will increasingly overpay facilities that expand their teaching commitments, reduce their bed complements, or both. Figure 2 places this issue in a Medicare budget perspective. The proportion of IME payments attributable to increases in teaching intensity, as distinct from increases in price, Medicare caseload, case mix, or the number of teaching hospitals, can be derived by analyzing real historical IME payment amounts made to the cohort of teaching hospitals that received IME adjustments from PPS years 6 through 12 (1989 through 1995). In Figure 2, the components of real IME payments are graphed separately. Because the payments are deflated, the absolute numbers on the y-axis are less important than the proportion of the top layer (amounts attributable to increased teaching intensity) relative to the total area under the graph. Over the study period as a whole, this segment represents roughly 8 percent of total real-dollar payments to this cohort; by PPS year 12, it had grown to nearly 14 percent.

The relative improvement of teaching hospital payment margins is also at least partially attributable to stronger cost reductions among teaching institutions. From 1989 to 1995, real case-mix-adjusted operating costs per case dropped by 11.5 percent among major teaching hospitals and 10.8 percent among minor teaching hospitals, compared to a decline of only 5.8 percent among nonteaching hospitals (see Note 1). During this same period, the resident-to-bed ratios used to compute IME payments on the submitted cost reports increased by an average of 21 percent. The downward trends in average cost per discharge, occurring simultaneously with increases in the IRB ratios, suggest that increases in teaching intensity over time may not be associated with increased cost per discharge, or at least may not have as strong an association as has been observed in cross-sectional studies. Because there is also a contemporaneous trend of reduced length of stay resulting in lower costs per discharge across all facilities, conclusions regarding the independent role of teaching activities are difficult to draw from this type of descriptive study across groups of hospitals.

The following analysis of pooled cross-sectional data at the individual hospital level was conducted in order to gain a better understanding of the association between teaching intensity and cost per case over time. To the extent that teaching intensity serves as a proxy measure for other unobserved, but primarily fixed, hospital characteristics associated with increased costs, the longitudinal association within hospitals was expected to be small or nonexistent. To the extent that the Medicare case-mix index has improved since 1981 and become a more sensitive measure of severity, the cross-sectional effects of teaching intensity were expected to decrease over time because less unmeasured patient acuity should be absorbed by the teaching coefficient. However, a small or nonexistent within-hospital effect would also be consistent with a serially decreasing cross-sectional effect even without the influence of improved case-mix measurement, in light of the secular trend of increased resident-to-bed ratios throughout this study period.

DATA

Hospital-level files from the Hospital Cost Report Information System (HCRIS) minimum data sets were used for PPS years 6 through 12, corresponding to federal fiscal years ending September 1989 through September 1995. Cost data were merged with HCFA Case-Mix Index Files, the Provider Specific Files, the Historical Wage Index File, and the PPS Input Price Index. An annual case-mix index value was constructed for each provider based on a weighted average of the published values for each of two periods spanning the provider's fiscal year. Wage index values were applied with a three-year lag (e.g., the index value for PPS year 9 was used as a control variable for observations during PPS year 6) in order to improve the match between the analysis year and the year when the wage data were actually collected. Wage index values were assigned based on the metropolitan statistical area (MSA) or state rural area in which the facility was located. The resident-to-bed ratio used in the model was algebraically derived from the ratio of IME payments to total DRG payments, as reported by the hospitals in their payment settlement data. (Descriptions of this derivation and of alternative sources for the teaching variable are available from the author on request.)
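Because the derivation itself is available only from the authors on request, the following inversion is a sketch under a stated assumption: if the reported ratio is taken to be IME payments over base DRG payments, the payment formula can be solved for IRB.

```python
def irb_from_payment_ratio(a: float, beta: float = 0.405, multiplier: float = 1.89) -> float:
    """Invert a = M * [(1 + IRB)^beta - 1] for IRB.

    `a` is assumed to be the hospital's IME adjustment rate (IME payments
    divided by base DRG payments); the authors' exact derivation may differ.
    """
    return (a / multiplier + 1.0) ** (1.0 / beta) - 1.0

# Round-trip check against the forward formula for IRB = 0.25:
a = 1.89 * (1.25 ** 0.405 - 1.0)
print(round(irb_from_payment_ratio(a), 6))  # 0.25
```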

The combined HCRIS files contained data on 5,720 unique providers paid under the PPS-DRG system. Observations on hospitals located in Puerto Rico (0.9 percent) and those with fewer than 50 Medicare discharges in a given year (2.6 percent) were excluded from the study. Another 1.2 percent were excluded due to out-of-range cost or payment data or missing records from other merged HCFA files. The final study sample consisted of 5,413 unique providers, contributing a total of 35,085 observations over seven years. Descriptive statistics for the full study sample and the subset of teaching hospitals are presented in Table 1.

METHODS

The first analysis is designed to test for the presence of a within-hospital, or longitudinal, association. A fixed effects technique is used that explicitly controls for cross-sectional variation by estimating separate intercepts for each hospital. Since the model controls for all fixed hospital characteristics, it produces slope parameter estimates derived only from the variation of the independent variables within each hospital over time. The teaching coefficient generated from this estimation thus has no possible interpretation as a basis for adjusting payments across hospitals. The coefficient is relevant, however, to the interpretation of HCFA's original single-year models and may be useful in guiding the translation of cross-sectional results into a formula for an equitable payment adjustment applicable over multiple years.

The fixed effects model was restricted to the subgroup of 1,308 hospitals that received IME payments during the study period. The dependent variable is the natural log of real operating cost per Medicare case, defined as it was in previous HCFA models by excluding costs related to capital, direct medical education, and organ acquisition. Costs were deflated using the quarterly moving average of HCFA's PPS Input Price Index that was closest to the provider's fiscal year. The independent variable of interest is teaching intensity, defined as ln(1 + IRB). [3] PPS year indicators were included to control for secular trends not related to inflation. Disproportionate share (DSH) status is included as a policy indicator variable coded as 1 if the hospital received any DSH payments. Wage index was not included in the model, as it is recomputed each year to center on 1.00 and has no interpretation as a time-based cost predictor. Urban status was also excluded because it is a fixed characteristic that becomes exactly collinear with the hospital indicator variable. Interaction terms were computed to test whether the effect of the teaching variable is different among facilities eligible for disproportionate share adjustments or among major teaching facilities (identified as members of the Council of Teaching Hospitals, or COTH). Excluding the effects of discharge-based weights, the fixed effects estimating equation is

$$\ln(\text{oper cost/case})_{jt} = \sum_j \alpha_j + \beta_1 \ln(1 + IRB)_{jt} + \beta_2 \ln(\text{case-mix index})_{jt} + \beta_3 \ln(\#\text{beds})_{jt} + \beta_4 \, DSH_{jt} + \beta_5 \, (DSH \times IRB)_{jt} + \beta_6 \, (\text{major status})_{jt} + \beta_7 \, (\text{major} \times IRB)_{jt} + \sum_t \delta_t \, \text{year}_t + \epsilon_{jt},$$

where $\sum_j \alpha_j$ represents the fixed hospital coefficients estimated by the model, $\text{year}_t$ is the vector of time trend indicators, and $DSH \times IRB$ and $\text{major} \times IRB$ are the two interaction terms between teaching and hospital status. Analytic weights, constructed from the square root of the number of Medicare discharges for each observation, are used to control for the heteroscedasticity inherent in analysis of group means.
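A minimal sketch (ours) of this weighted fixed effects estimation using the `linearmodels` package, with a synthetic panel standing in for the HCRIS extract; all column names are hypothetical stand-ins for the variables described in the text.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
n_h, n_t = 50, 7                                   # hospitals x PPS years 6-12
df = pd.DataFrame({
    "provider": np.repeat(np.arange(n_h), n_t),
    "year": np.tile(np.arange(6, 13), n_h),
    "real_cost_per_case": rng.uniform(3000, 8000, n_h * n_t),
    "irb": rng.uniform(0.0, 0.5, n_h * n_t),
    "cmi": rng.uniform(1.0, 1.6, n_h * n_t),
    "beds": rng.integers(50, 600, n_h * n_t),
    "dsh": rng.integers(0, 2, n_h * n_t),
    "major": np.repeat(rng.integers(0, 2, n_h), n_t),  # fixed within hospital
    "discharges": rng.integers(100, 5000, n_h * n_t),
}).set_index(["provider", "year"])

df["ln_irb"] = np.log(1.0 + df["irb"])
exog = pd.DataFrame({
    "ln_irb": df["ln_irb"],
    "ln_cmi": np.log(df["cmi"]),
    "ln_beds": np.log(df["beds"]),
    "dsh": df["dsh"],
    "dsh_x_irb": df["dsh"] * df["ln_irb"],
    "major": df["major"],
    "major_x_irb": df["major"] * df["ln_irb"],
})

# entity_effects absorbs the hospital intercepts; time_effects stands in for
# the PPS year indicators; drop_absorbed mirrors Table 2, where the fixed
# major teaching status is dropped. Weights follow the square-root
# construction described in the text.
mod = PanelOLS(np.log(df["real_cost_per_case"]), exog,
               entity_effects=True, time_effects=True,
               weights=np.sqrt(df["discharges"]), drop_absorbed=True)
print(mod.fit().summary)
```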

The second set of analyses returns to a model that is similar to those employed by the earlier single-year investigations, but that examines the independent effect of time on the cross-sectional association between teaching intensity and cost. Four separate specifications were investigated. To obtain baseline comparisons, we began with a log-log model similar to that used by HCFA, with the addition of the DSH policy variable and time-trend indicators for PPS years 7 through 12, plus their respective interaction terms on the teaching variable. No constraints were imposed on the coefficients of any control variables. (For this reason, the teaching coefficients we derive are not directly comparable to the CBO's coefficient, although this should not affect findings with respect to time trends.) Since the teaching effects are computed separately for each year, the dependent variable was not adjusted for inflation. Categorical variables for small, medium, and large urban status were tested but produced nearly identical coefficients; consequently, urban status was entered into the model as a single indicator variable defined by the hospital's location in an MSA as of its last year in the study. Weighted least squares regression estimated the following equation:

$$\ln(\text{oper cost/case})_{jt} = \alpha + \beta_1 \ln(1 + IRB)_{jt} + \beta_2 \ln(\text{case-mix index})_{jt} + \beta_3 \ln(\text{wage index})_{jt} + \beta_4 \ln(\#\text{beds})_{jt} + \beta_5 \, (\text{DSH status})_{jt} + \beta_6 \, (\text{urban status})_{j} + \sum_t \delta_t \, \text{year}_t + \sum_t \delta^{*}_t \, (\text{year} \times IRB)_{jt} + v_{jt},$$

where $\delta_t$ and $\delta^{*}_t$ represent the main and interacted effects of each of the time trend variables. Due to the structure of the panel data, the error term $v_{jt}$ is known to contain a component that is correlated within individual hospitals. This problem is addressed using Huber-White standard errors with clustering by hospital. The hypothesis of declining teaching effects over time is tested by obtaining coefficients on the $\delta^{*}_t$ that are smaller in each successive year and that differ significantly from each other.
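A sketch (ours) of this pooled cross-sectional estimation with hospital-clustered Huber-White standard errors, again on a synthetic panel with hypothetical column names; here `provider` and `year` are ordinary columns rather than a panel index.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_h, n_t = 50, 7
df = pd.DataFrame({
    "provider": np.repeat(np.arange(n_h), n_t),
    "year": np.tile(np.arange(6, 13), n_h),
    "cost_per_case": rng.uniform(3000, 9000, n_h * n_t),  # current dollars
    "irb": rng.uniform(0.0, 0.5, n_h * n_t),
    "cmi": rng.uniform(1.0, 1.6, n_h * n_t),
    "wage_index": rng.uniform(0.8, 1.3, n_h * n_t),
    "beds": rng.integers(50, 600, n_h * n_t),
    "dsh": rng.integers(0, 2, n_h * n_t),
    "urban": rng.integers(0, 2, n_h * n_t),
    "discharges": rng.integers(100, 5000, n_h * n_t),
})
df["ln_cost"] = np.log(df["cost_per_case"])
df["ln_irb"] = np.log(1.0 + df["irb"])

# ln_irb * C(year) expands to the teaching main effect, the PPS year
# indicators (PPS 6 as the reference), and the year-by-teaching interactions.
model = smf.wls(
    "ln_cost ~ ln_irb * C(year) + np.log(cmi) + np.log(wage_index)"
    " + np.log(beds) + dsh + urban",
    data=df,
    weights=np.sqrt(df["discharges"]),
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["provider"]})
print(res.summary())
```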

This model was expanded to include another dichotomous variable indicating major teaching status, defined as membership in COTH. Two interaction terms were added to test for differences in the strength of the teaching effect by DSH eligibility and by major teaching status. Because separate PPS payment adjustments are available for exceptionally high-cost cases (outliers) and for DSH providers, a strong argument can be made to account for such payments when estimating indirect medical education costs, before using model coefficients to construct an IME adjustment formula. In a third variation on the model, therefore, the dependent variable was recalculated based on costs from which DSH and outlier payments had been deducted. Finally, a fourth specification was developed that retained the DSH and outlier-adjusted version of the dependent variable, but where the teaching variable for each unique hospital was fixed at the level measured in the first year of data that the hospital contributed to the analysis. This specification was considered reasonable in view of the nonsignificant within-hospital association identified in the fixed effects model. By fixing the teaching variable at its base value we hoped to explore the independent effects of reduced measurement error in the case-mix index variable, in light of the fact that the index is based on resource weights and diagnosis grouping algorithms that have been subject to several incremental improvements over the study period.
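The base-year fixing in the fourth specification is a simple panel transformation; continuing the sketch above with the same hypothetical column names:

```python
# Freeze each hospital's teaching ratio at its first observed year (ours).
df = df.sort_values(["provider", "year"])
df["ln_irb_base"] = np.log(1.0 + df.groupby("provider")["irb"].transform("first"))
```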

RESULTS

The fixed effects model provides no evidence that changes in teaching intensity within a given hospital are associated with changes in real operating costs. Estimates of the teaching coefficients for each category of hospital identified by combinations of the interaction terms are plotted in Figure 3. The standard errors on the estimates are relatively large, suggesting a possibility that negative study results could be due to insufficient variation in the teaching intensity variable. The parameter estimates are very close to 0, however, and within-hospital variation in teaching intensity over the study period is not insubstantial. (Between PPS years 6 and 12 the unweighted mean resident-to-bed ratio increased by approximately 3 percent per year, and a variable computed from each hospital's seven-year change in the value of the ratio has a mean of .04 with a standard deviation of .082.) Complete fixed effects estimation results are presented in Table 2. Coefficients on the PPS year indicators confirm what was noted earlier about trends in real case-mix adjusted costs over this time period. Since the coefficients reflect each year's cost difference compared to the reference year (PPS 6), annual changes in cost per case are measured by the increments to the coefficients across each successive study year.

The cross-sectional models, in contrast, continue to show a significant, positive correlation between teaching intensity and cost per case when examined across hospitals. A clear downward trend over time in the strength of the teaching effect is evident from the coefficients on the time-trend interaction terms. Results from each of the four specifications designed to capture cross-sectional association are summarized in Table 3. The coefficient on the teaching variable in the first specification is 0.573 in the reference year (PPS 6), but it drops by 45 percent over the study period. Predicted effects on retransformed cost from this first estimation, by PPS year, can be seen in Figure 4. (In keeping with Rogowski and Newhouse's [1992] recommendation, smearing adjustments have been estimated and included in the computations of predicted cost. [4]) All time-based interaction terms are significantly different from 0 and are significantly different from each other, with the exception of the increment from PPS year 9 to PPS year 10. The coefficient on the logged case-mix index variable is close to 1.00 in each of these models, as expected. In all models, urban status has a significant positive effect of approximately 6 percent on cost per case. The parameter estimates on the time-trend indicators in all four specifications are expressed in Table 3 as linear combinations of their main and interacted terms. As in the fixed effects model, these estimates represent comparisons to PPS year 6. Unlike the fixed effects model, they reflect inflation effects as well as other cost trends because the dependent variables are measured in current dollars.

In the second cross-sectional specification, disproportionate share status is a strong teaching effect modifier. Major teaching status, however, is not a significant covariate or interaction term when included simultaneously with the DSH variables (a reflection of the large degree of overlap between these groups). In the third specification, replacing the dependent variable with one constructed from costs net of DSH and outlier payments reduces the teaching coefficients by 10 to 15 percent in the reference year, but does not appear to alter the decline in teaching effects over time. The coefficient on the indicator variable for DSH status does, however, reverse sign and increase in magnitude. The negative coefficient is evidence that DSH payments are in excess of observed cost differentials among DSH-eligible hospitals. In this third specification, no DSH interaction term exists, which leaves major teaching status to become a significant and strong effect modifier. Linear combinations of the teaching coefficients with their related interaction terms (not shown) reveal that by the last year of the study, the coefficient on ln(1 + IRB) among minor teaching hospitals is reduced to 0.11 and is only marginally significant (p = .075). The teaching coefficients in the DSH-adjusted specification decline over the study period by over 70 percent for minor teaching and 50 percent for major teaching facilities, but the year-to-year differences are not as evenly spaced as they are in the two unadjusted cost specifications. Coefficients on the interaction terms for PPS years 6 through 8 do not test significantly differently from each other, nor do those from PPS years 9 and 10, nor do those from PPS years 11 and 12. Figure 5 plots the annually estimated teaching elasticities from the DSH-adjusted specification and permits a comparison with those used in the IME payment formula in effect during the same period. This graph demonstrates the extent to which past IME overpayments could have resulted from a combination of Congress's multiplier factor of 1.89 and the decision not to update the coefficient over time. The difference in teaching effects between major and minor teaching hospitals is substantial. Note that the lines plotted for these two groups are parallel; this is explained by a single major teaching interaction effect estimated across all years.

DISCUSSION

The last specification in Table 3 assesses year-to-year changes in the effect of teaching on adjusted operating costs after controlling for the effects of increased teaching intensity over time. When hospital-specific teaching ratios are fixed, the change in teaching coefficients reflects the impact of secular changes in other model covariates (including those that are unobserved but absorbed into the time-trend indicators). The differences between the last two specifications are moderate; by PPS year 12 the time-trend interaction term is approximately 12 percent greater in the fixed-ratio specification. The pattern of decline in teaching coefficients over the study period, however, is very similar. Figure 6 plots the coefficients on the time-based interaction terms, representing estimates of the annual reduction in indirect medical education costs for both models. The uneven secular pattern we noticed in the third cross-sectional estimation, where effect measures tested similarly within clusters of years, appears to be even stronger when the teaching variable is fixed in time.

Results from these panel data studies support earlier conclusions drawn from single-year analyses that identify teaching intensity as a proxy for unobserved hospital and patient characteristics associated with higher costs. Within a given hospital, however, our results provide no evidence that increases in the ratio of residents to beds are associated with increases in unit operating costs. The absence of a significant longitudinal association between intensity and costs suggests that the residents themselves are not causal agents in the teaching cost differentials; thus there is little evidence in these models to support the resident inefficiency, or "learning curve," explanation for indirect medical education costs. In models that do not include unique hospital indicator variables, a distinct pattern of declining teaching effects over time is evident. The declining cross-sectional effect is consistent with the theory that observed indirect medical education cost estimates are actually measuring the effects of fixed hospital characteristics, because the decline reflects a "watering down" effect from the contemporaneous trend of increased teaching intensity within hospitals. However, the increase in teaching ratios over seven years is not sufficient to explain all or even most of the decline in the cross-hospital estimates of teaching effects.

The fact that teaching effects decline over time even when the model fixes the ratios at their earliest level supports the hypothesis that teaching intensity also serves as a proxy for unmeasured patient characteristics (severity, practice patterns, or both) that are subject to change over time. Discontinuities in the downward time trend evident in Figures 5 and 6 suggest the possibility of regulatory or other external influences. Our findings from the estimation that uses teaching ratios computed from 1989 are consistent with earlier hypotheses about the role of the case-mix index in the original HCFA model, particularly its inability to capture systematic differences in within-DRG acuity. HCFA has made improvements to the case-mix system each year, but in the period covered by this study, the most significant changes were made in 1990-91 (roughly PPS year 8), when it expanded the coding for the most complex cases (MedPAC 1998). In theory, as the case-mix index improves in its ability to capture differences in patient acuity, less "unmeasured" severity remains in the cost equation to be absorbed into the teaching coefficient. We computed annual correlation coefficients between the IRB and case-mix index values and found an increase from around 0.37 in PPS years 6 and 7 to 0.41 in PPS year 8, with only very minor increases thereafter. Under PPS rules, coding and resource weight changes are generally effected in October of each year. This makes matching their impact in time to cost models that are dependent on fiscal year-based data difficult, but the drop in estimated teaching effects between PPS years 8 and 9 lends support to the argument that teaching coefficients reflect unmeasured patient severity.
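The annual correlations described here are a one-line computation on the same kind of panel; a sketch (ours), again with hypothetical column names:

```python
# Correlation between the resident-to-bed ratio and the case-mix index by PPS
# year; the text reports ~0.37 in PPS years 6-7, rising to ~0.41 in PPS year 8.
annual_corr = df.groupby("year").apply(lambda g: g["irb"].corr(g["cmi"]))
print(annual_corr)
```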

In both of our adjusted cost specifications a possibility exists that changes in the DSH payments could have caused the drops in the teaching effect. DSH payments show a different growth pattern, however, rising sharply from PPS years 6 to 7 and increasing steadily but more slowly thereafter. We tested a variation on the fourth specification, using the unadjusted cost variable. The results (not shown) are still very similar to those shown in Figure 6.

All of our model results are also consistent with what we know about structural changes in hospital costs. If decreases in length of stay or other changes in practice patterns had been consistent across all groups of hospitals, there would be no reason for such changes to affect the indirect teaching cost estimates. Because case-mix adjusted cost per case has declined more rapidly in teaching than in nonteaching settings, that fact should be reflected in the interaction effects between teaching and the time-trend variables. At least part of the declining teaching effects can be viewed as a straightforward measure of a narrowing of the gap between teaching and nonteaching facilities. This may reflect the impact of market forces, independent of any patterns of participation in graduate medical education.

POLICY SIGNIFICANCE

How are the findings from these studies relevant to the current policy debate over appropriate Medicare payments to teaching hospitals? Like the earlier research on this question, these studies confirm HCFA's initial findings of a significant, independent teaching cost differential. Although our results tend to support original theories that the teaching ratio serves as a proxy for hospital and patient characteristics, our analysis does not attempt to address whether the hospital characteristics constitute legitimate grounds for price differentiation. They may reflect differences in quality or carrying costs associated with access to technology and biomedical advances; or they may reflect an historical accumulation of a management culture less attuned to efficiency or competition.

The findings should, however, provide some insight into appropriate ways to translate results from cost modeling into an equitable PPS adjustment. Shortly after the PPS legislation was enacted, Newhouse (1983) predicted that both the direct and indirect medical education adjustments would increase hospital demand for residents. Although empirical work to date has not demonstrated a strong relationship between Medicare graduate medical education payments and residency program expansion (Dalton 1999), results from our fixed effects model strengthen the argument that IME payments create additional financial incentives to expand training. Our findings imply that a payment adjustment derived from the observed cross-sectional association should not be tied to short-term changes in an individual hospital's teaching intensity. The Balanced Budget Act of 1997 partially accomplished this correction by capping each teaching hospital's IRB ratio at its 1996 levels. Unfortunately, the law continues to link reductions in the IME adjustments to reductions in a hospital's teaching activities. Even though the payment effect of a reduction in training is spread over a three-year period (because the adjustment is now based on a moving average of the capped ratio), any within-hospital link creates a potential disincentive for teaching facilities to downsize their training programs (AAMC 1997a). Our research does not support the notion that costs per discharge should decline as hospitals make incremental reductions to their complements of on-site residents.

Second, our study results suggest that changes in observed teaching effects are related to changes in the case-mix index. This is consistent with the current thinking of the Medicare Payment Advisory Commission; most recently, MedPAC informed Congress that it intends to re-estimate teaching cost differentials using an expanded DRG grouping system to reduce the influence of unmeasured acuity on the teaching cost estimates (MedPAC 1999). Our results imply that IME formulas should be re-estimated whenever significant diagnosis grouping changes are made.

Third, our study confirms that the strength of the teaching association is greater among major teaching programs. Significant interaction terms in our estimations are consistent with earlier research that identified nonlinearities in the effect of teaching on logged cost. An implication of this finding is that a multilevel adjustment formula by selected hospital groups might be worth examining. Before such an approach is considered, further investigation should be conducted to explore the role of research activities in the teaching cost differentials to determine whether the nonlinearities are a function of program size or of size plus other academic characteristics. Furthermore, in light of other secular patterns noted in the declining cost per discharge, trends in the interaction effects over time should be examined in greater detail.

The presence of a separate disproportionate share adjustment under PPS complicates the IME model's interpretation and its application to payment policy. DSH payments are considered to be both "cost" and "policy" adjustments (ProPAC 1995; Ross 1999). As policy adjustments, they are deliberately allowed to overcompensate for differences in the cost of care delivered to Medicare patients in order to ease the financial burden from care delivered to medically indigent populations and protect the special missions of these institutions. If the teaching cost model does not control for DSH payments, IME and DSH adjustments will at least partially duplicate each other. Yet netting DSH payments from the dependent variable in an IME cost model would have the effect of eliminating the policy component from the combined PPS payments of teaching hospitals. Retaining the DSH indicator variable in the cost model partially offsets the problem because some portion of the policy effect can be said to be absorbed into the negative coefficient on the DSH variable (leaving the teaching coefficients higher than they would be in a model without the indicator). Sheingold (1990) described another approach to the problem, using an array of DSH variables with empirically derived coefficient constraints. Appropriate model specification with respect to disproportionate share payments will depend primarily on the policy intentions behind the DSH legislation.

Finally, we note that the federal standardized rate per discharge is computed from a cost base that is statistically adjusted to remove the estimated indirect costs of medical education. A distortion in the teaching coefficient therefore affects the payments of both teaching and nonteaching facilities. Substantive revisions to the IME formulas prior to the Balanced Budget Act of 1997 were accompanied by adjustments to the standardized rate, resulting in payment redistributions rather than reductions. Improving the empirical basis for teaching hospital cost differentials should be viewed as a matter of appropriate product differentiation and as an issue of distributional equity, if only to provide a baseline from which budgetary and policy objectives can be more intelligently considered.

NOTES

(1.) These figures, and others noted later, are computed by the authors from information contained in the Hospital Cost Report Information System files (HCRIS). The files are described in more detail in the Data section.

(2.) The IME formula is derived from the ratio of expected costs among teaching hospitals to expected costs among nonteaching hospitals, holding all other model variables constant. If X is the teaching variable and $\beta$ is its coefficient, and if all other covariates and their coefficients are represented by $Z\gamma$ and the residuals by $\epsilon$, the expected cost differential is calculated by exponentiating the estimation results as follows:

$$\frac{E(Y)\big|_{\text{teach}}}{E(Y)\big|_{\text{nonteach}}} - 1 = \frac{\left(e^{X\beta}\, e^{Z\gamma}\, e^{\epsilon}\right)\big|_{\text{teach}}}{\left(e^{X\beta}\, e^{Z\gamma}\, e^{\epsilon}\right)\big|_{\text{nonteach}}} - 1.$$

Because we assume other covariates are held constant, the exponentiated $Z\gamma$ terms in the numerator and denominator cancel out. Similarly, if no groupwise heteroscedasticity exists that results in systematically different variance according to teaching status, the exponentiated error terms will also cancel out. If the teaching variable is set to 0 for all nonteaching hospitals, the denominator will be 1 and the formula for the cost differential reduces to $e^{X\beta} - 1$. In the case where X is defined as ln(1 + IRB), however, the expression becomes $e^{\beta \ln(1 + IRB)} - 1$, which is equivalent to $(1 + IRB)^{\beta} - 1$.
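A quick numeric check (ours, with an illustrative coefficient and ratio) confirms the reduction:

```python
import math

# With X = ln(1 + IRB), e^(X*beta) - 1 equals (1 + IRB)^beta - 1.
beta, irb = 0.405, 0.25          # illustrative values
lhs = math.exp(beta * math.log(1.0 + irb)) - 1.0
rhs = (1.0 + irb) ** beta - 1.0
assert math.isclose(lhs, rhs)
print(f"{lhs:.4f}")              # ~0.0946
```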

(3.) HCFA's original specification of the teaching variable has been retained in each of the models addressed by this article. In other analyses we have measured teaching effects using a log transformation of (.001 + IRB) similar to that recommended by Rogowski and Newhouse (1992). The size of the constant added prior to log transformation can have a substantial influence on the predicted teaching effect across groups of hospitals (depending on other assumptions made with regard to the log-linear form), but it does not affect the observed patterns of teaching cost differentials over time. Choice of functional form in the teaching variable is a complex issue, which we intend to address in greater detail in a subsequent article.

(4.) Following Duan (1983) and Manning (1998), the appropriate estimate for the expected value of an exponentiated, nonnormally distributed error term is the smearing factor, computed as the mean of the exponentiated residuals. If groupwise heteroscedasticity specific to teaching status is present, smearing factors must be computed separately for the observations in the numerator and denominator of the ratio for the teaching cost differential (see Note 2). Letting s represent the smearing factor, the corrected teaching cost differential is computed as $(1 + IRB)^{\beta} \times (s_{\text{teach}}/s_{\text{nonteach}}) - 1$.
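A sketch (ours) of the smearing computation described in this note; the function names are our own, and the residuals are assumed to come from the log-cost regression.

```python
import numpy as np

def smearing_factor(log_residuals: np.ndarray) -> float:
    """Duan's smearing factor: the mean of the exponentiated log-scale residuals."""
    return float(np.mean(np.exp(log_residuals)))

def corrected_differential(irb: float, beta: float,
                           resid_teach: np.ndarray,
                           resid_nonteach: np.ndarray) -> float:
    """Teaching cost differential corrected for groupwise retransformation
    bias: (1 + IRB)^beta * (s_teach / s_nonteach) - 1."""
    s_ratio = smearing_factor(resid_teach) / smearing_factor(resid_nonteach)
    return (1.0 + irb) ** beta * s_ratio - 1.0

# With the smearing estimates reported in Table 3 (s_teach = 1.057,
# s_nonteach = 1.040), the ratio ~1.016 scales the retransformed
# differential up by about 1.6 percent.
```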

REFERENCES

Association of American Medical Colleges (AAMC). 1997a. Reaching Informed Institutional Decisions About Graduate Medical Education Program Size: Issues for Teaching Institutions. Washington, DC: Association of American Medical Colleges.

_____. 1997b. Statement on Teaching Hospitals and Medicare Disproportionate Share Hospital Payments. Presented to the Committee on Ways and Means, Subcommittee on Health, U.S. House of Representatives. Washington, DC.

Anderson, G. F., and J.R. Lave. 1986. "Financing Graduate Medical Education Using Multiple Regression to Set Payment Rates." Inquiry 23 (Summer): 191-99.

Cohen, J. (President, Association of American Medical Colleges). 1998. Testimony Before the GME Study Group of the National Bipartisan Commission on the Future of Medicare. September 15. Washington, DC.

Commerce Clearing House (CCH). 1999. Medicare and Medicaid Regulatory Guide. Explanations and Annotations, ¶4620, Medicare Prospective Payment System, Hospital DRG Rates, Exceptions and Adjustments. Commerce Clearing House.

Congressional Budget Office (CBO). 1995. Medicare and Graduate Medical Education. Washington, DC: Congressional Budget Office.

Council on Graduate Medical Education (COGME). 1990. Second Report: The Financial Status of Teaching Hospitals. Washington, DC.

Dalton, K. 1999. Assessment of the Influence of Medicare Graduate Medical Education Payments on Hospital Sponsorship of Residency Training. Ph.D. Dissertation, University of North Carolina at Chapel Hill.

Duan, N. 1983. "Smearing Estimate: A Nonparametric Retransformation Method." Journal of the American Statistical Association 78 (383): 605-10.

Lave, J. R. 1985. The Medicare Adjustment for the Indirect Costs of Medical Education: Historical Development and Current Status. Washington, DC: Association of American Medical Colleges.

Manning, W. G. 1998. "The Logged Dependent Variable, Heteroscedasticity, and the Retransformation Problem." Journal of Health Economics 17 (3): 283-96.

Medicare Payment Advisory Commission (MedPAC). 1998. Report to the Congress: Medicare Payment Policy, Volume II (Chapter 3), March. Washington, DC: Medicare Payment Advisory Commission.

_____. 1999. Report to the Congress: Rethinking Medicare's Payment Policies for Graduate Medical Education and Teaching Hospitals, August. Washington, DC: Medicare Payment Advisory Commission.

Newhouse, J. P. 1983. "Two Prospective Difficulties With Prospective Payment for Hospitals, or It's Better to Be a Resident Than a Patient with a Complex Problem." Journal of Health Economics 2 (3): 269-74.

Pettengill, J., and J. Vertrees. 1982. "Reliability and Validity in Hospital Case-Mix Measurement." Health Care Financing Review 4 (2): 101-27.

Pew Health Professions Commission. 1995. Shifting the Supply of Our Health Care Workforce: A Guide to Redirecting Federal Subsidy of Medical Education. San Francisco, CA, October 1995.

Phillips, S. M. 1992. "Measuring Teaching Intensity with the Resident-to-Average Daily Census Ratio." Health Care Financing Review 14 (2): 59-68.

PL 100-203. Omnibus Budget Reconciliation Act of 1987.

PL 105-33. Balanced Budget Act of 1997, Sec. 4621. See also Preamble to the Final Rule, 62 Federal Register 45966, 46003, August 29, 1997.

PL 99-272. Consolidated Omnibus Budget Reconciliation Act of 1985.

Prospective Payment Assessment Commission (ProPAC). 1997. Report and Recommendations to the Congress, March. Washington, DC.

_____. 1987. Report and Recommendations to the Secretary, U. S. Department of Health and Human Services, March. Washington, DC.

_____. 1991. Report and Recommendations to the Secretary, U. S. Department of Health and Human Services, March. Washington, DC.

_____. 1992. Report and Recommendations to the Secretary, U. S. Department of Health and Human Services, March. Washington, DC.

_____. 1995. Report and Recommendations to the Secretary, U. S. Department of Health and Human Services, March. Washington, DC.

_____. 1989. Payment Adjustments--Indirect Teaching and Disproportionate Share Hospitals. Technical Report I-89-04, July. Washington, DC.

Rogowski, J. A., and J. P. Newhouse. 1992. "Estimating the Indirect Costs of Teaching." Journal of Health Economics 11(2): 153-71.

Ross, M. N. (Executive Director, Medicare Payment Advisory Commission). 1999. Medicare's Special Payments and Patient Care Costs: Testimony before the Committee on Finance, U. S. Senate. May 12.

Sheingold, S. H. 1990. "Alternatives for Using Multivariate Regression to Adjust Prospective Payment Rates." Health Care Financing Review 11(3): 31-41.

Social Security Act. §1886(d)(5)(B), as amended by Sec. 4621(b)(3)(B)(A) of the Omnibus Budget Reconciliation Act of 1990 (PL 101-508) and Sec. 4621(a) of the Balanced Budget Act of 1997 (PL 105-33).

Thorpe, K. E. 1988. "The Use of Regression Analysis to Determine Hospital Payment: The Case of Medicare's Indirect Teaching Adjustment." Inquiry 25 (Summer): 219-31.

Welch, W. P. 1987. "Do All Teaching Hospitals Deserve an Add-On Payment Under the Prospective Payment System?" Inquiry 24 (Fall): 221-32.

Table 1: Descriptive Data from 7-Year Study Sample, PPS Years 6-12 (Total Observations = 35,085)

                                        All Years   PPS 6    PPS 9    PPS 12   % Change over Study Period
Full sample
  No. unique hospitals                  5,413       5,125    5,082    4,818    -6.9%
  Average no. staffed beds              152         155      153      147      -5.2%
  Average operating cost/discharge:
    Actual                              $4,617      $4,059   $4,804   $4,763   +17.3%
    Adjusted (net DSH and outlier)      $4,289      $3,829   $4,474   $4,322   +12.9%
  Average case-mix index                            1.194    1.222    1.254    +5.0%
  % DSH providers                       34.1%       25.7%    36.4%    40.2%    +56.4%
  % teaching hospitals                  21.7%       20.8%    21.5%    23.0%    +10.6%

Teaching hospital subsample only
  No. unique hospitals                  1,307       1,040    1,084    1,097    +5.5%
  Average no. staffed beds              311         320      316      292      -8.7%
  Average resident-to-bed ratio         0.166       0.151    0.160    0.183    +21.2%
  Average operating cost/discharge:
    Current dollars                     $6,460      $5,701   $6,711   $6,364   +11.6%
    Constant (1987) dollars             $5,142      $5,047   $5,315   $4,797   -5.0%
  Average case-mix index                            1.361    1.435    1.476    +8.4%
  % DSH providers                       59.1%       46.4%    62.5%    65.3%    +40.7%
  % major teaching (COTH members)       25.5%       25.5%    25.6%    24.4%    -4.3%
Table 2: Results from Longitudinal Model: Estimation on PPS Years 6-12, Using Weighted Fixed Effects Regression

Dependent variable: ln(real operating cost per case) in 1987 dollars.

                                          Coefficient   Standard Error   P-Value
Teaching intensity (reference group is
minor teaching, non-DSH):
  ln(1 + IRB)                             .017          .070             .809
  Interaction, IRB x DSH                  -.0011        .0061            .851
  Interaction, IRB x major teaching       .006          .077             .980
ln(case-mix)                              .505          .032             <.0001
ln(#beds)                                 .047          .011             <.0001
DSH status                                -.0011        .0061            .860
Major teaching status                     (dropped)
Time trends: PPS 7                        .0405         .0039            <.0001
             PPS 8                        .0611         .0040            <.0001
             PPS 9                        .0454         .0043            <.0001
             PPS 10                       .0306         .0044            <.0001
             PPS 11                       -.0190        .0048            <.0001
             PPS 12                       -.0592        .0051            <.0001

Note: Intercept terms are not reported. The R^2 value includes the contribution of the 1,307 fixed hospital variables, whose coefficients are not shown. Number of observations = 7,510; number of hospitals = 1,307. Adjusted R^2 = .888; F(12,6191) = 115.54.
Table 3: Results from Cross-Sectional Models: Estimation on PPS Years 6-12, Using WLS with Huber-White Correction

Specifications (reference year: PPS 6; reference group: minor teaching, non-DSH):
(1) Original specification, with DSH indicator; dependent variable ln(oper costs).
(2) Added interactions by hospital status; dependent variable ln(oper costs).
(3) Added interactions by hospital status; dependent variable ln(adjusted costs), net of DSH and outlier $.
(4) As (3), with the teaching variable fixed at its base-year value.

                                   (1)                (2)                (3)                (4)
ln(1 + IRB)                        .573 (.035)***     .436 (.057)***     .397 (.060)***     --
ln(1 + IRB @ base yr)              --                 --                 --                 .391 (.061)***
Interaction, IRB x DSH             --                 .152 (.051)**      --                 --
Interaction, IRB x major           --                 .073 (.068)        .155 (.076)*       .154 (.083)
Teaching variable x year:
  IRB x PPS 7                      -.053 (.023)*      -.068 (.025)**     -.030 (.026)       -.025 (.024)
  IRB x PPS 8                      -.104 (.031)***    -.124 (.032)***    -.073 (.034)*      -.040 (.033)
  IRB x PPS 9                      -.180 (.026)***    -.207 (.028)***    -.172 (.029)***    -.154 (.026)***
  IRB x PPS 10                     -.189 (.028)***    -.220 (.031)***    -.183 (.033)***    -.148 (.031)***
  IRB x PPS 11                     -.230 (.029)***    -.263 (.031)***    -.252 (.034)***    -.212 (.034)***
  IRB x PPS 12                     -.256 (.030)***    -.289 (.032)***    -.286 (.037)***    -.230 (.037)***
ln(case mix)                       .940 (.021)***     .942 (.021)***     .969 (.025)***     .969 (.061)***
ln(wage index)                     .619 (.016)***     .621 (.016)***     .522 (.020)***     .522 (.020)***
ln(#beds)                          .0674 (.0045)***   .0670 (.0046)***   .0598 (.0051)***   .0595 (.0051)***
Urban status                       .0596 (.0071)***   .0610 (.0072)***   .0574 (.0078)***   .0573 (.0079)***
DSH status                         .0164 (.0046)***   --                 -.0940 (.0050)***  -.0932 (.0051)***
DSH status [+]                     --                 .160 (.049)**      --                 --
Major teach status [+]             --                 .060 (.060)        .130 (.067)        .132 (.072)
Time trends [+]:
  PPS 7                            .032 (.022)        .017 (.023)        .062 (.025)*       .067 (.024)**
  PPS 8                            .032 (.028)        .013 (.029)        .079 (.031)*       .111 (.030)***
  PPS 9                            -.025 (.025)       -.050 (.027)       -.003 (.028)       .015 (.025)
  PPS 10                           -.027 (.027)       -.056 (.030)       -.011 (.031)       .024 (.030)
  PPS 11                           -.095 (.028)**     -.127 (.030)***    -.113 (.033)**     -.080 (.033)*
  PPS 12                           -.141 (.028)***    -.172 (.031)***    -.181 (.035)***    -.124 (.036)***
Constant                           7.753 (.020)***    7.748 (.020)***    7.750 (.021)***    7.751 (.022)***
Smearing estimates:
  Full sample                      1.044              --                 --                 --
  Teaching only                    1.057              --                 --                 --
  Nonteaching                      1.040              --                 --                 --
N                                  35,085             35,085             35,079             34,577
R^2                                .7815              .7821              .7005              .6982

[+] Parameter estimates and standard errors are expressed as linear combinations of the main plus the interacted effect.
Robust standard errors are in parentheses: * p < .05; ** p < .01; *** p < .001.