# Measuring teaching intensity with the resident-to-average daily census ratio

Introduction

Currently, Medicare provides for an add-on payment under the prospective payment system (PPS) to teaching hospitals for their higher costs stemming from graduate medical education. This payment, known as the indirect medical education (IME) adjustment, is calculated using a formula based on the ratio of teaching intensity, where the numerator is the number of residents working at the hospital and the denominator is either beds (for the operating PPS) or the average daily census (ADC) (for the capital PPS). Using this formula permits a comparison of hospitals of unequal size but with similar levels of teaching intensity. When the IME adjustment for capital costs was instituted with cost reporting periods beginning on or after October 1, 1991, ADC was selected as the denominator for the ratio, in part based on some of the analysis reported here. This article reports on the first comprehensive review of the impacts of using the resident-to-ADC ratio instead of the resident-to-bed ratio to measure teaching intensity for the IME adjustment under Medicare.

Both the Department of Health and Human Services (DHHS) and the Prospective Payment Assessment Commission (ProPAC) are on record as supporting a single IME adjustment for both the operating and capital PPS (Federal Register, 1992; Prospective Payment Assessment Commission, 1992b). However, DHHS supports adopting the capital IME adjustment formula, including the resident-to-ADC ratio, and ProPAC supports using the operating IME adjustment formula with the resident-to-bed ratio as the measure of teaching intensity.

Purpose

Interest in a measure that is an alternative to beds stems partly from the view that the IME adjustment could be better targeted by basing it on the numerical relationship between residents and patients, and partly from administrative difficulties associated with using beds in the denominator. Analysis of residents' activities indicates that most of their training time is spent in patient care (Arthur Young and Co., 1986). Therefore, the numerical relationship between residents and patients should more directly reflect teaching intensity than would the relationship between residents and hospital size.

The degree to which the resident-to-bed ratio approximates the resident-to-patient relationship depends on a hospital's occupancy rate. To illustrate via two extreme examples, the resident-to-bed ratio of a teaching hospital with a 99-percent occupancy rate would closely approximate the hospital's resident-to-patient ratio; on the other hand, the resident-to-bed ratio for a teaching hospital with an occupancy rate of 10 percent would understate its resident-to-patient ratio.
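The arithmetic behind these two extremes can be sketched with hypothetical figures (the 10-resident, 100-bed hospital below is an illustration, not source data):

```python
# Hypothetical hospital: 10 residents, 100 available beds.
def resident_to_adc(residents, beds, occupancy_rate):
    """Resident-to-ADC ratio, where ADC = available beds x occupancy rate."""
    adc = beds * occupancy_rate
    return residents / adc

residents, beds = 10, 100
r_to_bed = residents / beds                         # 0.10, independent of occupancy

high_occ = resident_to_adc(residents, beds, 0.99)   # ~0.101: nearly the bed ratio
low_occ = resident_to_adc(residents, beds, 0.10)    # 1.00: ten times the bed ratio
print(r_to_bed, round(high_occ, 3), round(low_occ, 2))
```

At 99-percent occupancy the two ratios are nearly identical; at 10-percent occupancy the bed-based ratio understates the resident-to-patient relationship by a factor of ten.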

Administrative difficulties with determining hospital bed size (number of beds) have also sparked interest in an alternative measure. Questionable situations that would be resolved by adopting the resident-to-ADC ratio are whether beds should be counted when a wing is under construction; whether the days a bed is unavailable for use because it is located in a double room occupied by a patient in isolation should be deducted from the number of available bed days; and whether beds in storage should be counted as available.

When the IME adjustment was initiated in 1980, DHHS selected the resident-to-available bed ratio as the measure of teaching intensity over the resident-to-ADC ratio; there was concern that the latter would be too unstable because of fluctuations in use (Federal Register, 1980). Additionally, in response to DHHS' proposal to change the method used to determine available beds, one commenter suggested that using ADC is preferable because the data are readily available and an additional calculation would not be necessary (Federal Register, 1985). In its response, DHHS pointed out that it would consider this approach and others as more data became available.

For purposes of the ratio, an available bed is one that is available for use and housed in patient rooms or wards (Health Care Financing Administration, 1988). Thus, beds that meet the definition for availability are counted whether or not they are occupied. Over time, however, uncertainties have arisen over when a bed is considered available. In a report on what it describes as "weaknesses in data used to calculate" the IME adjustment, the U.S. General Accounting Office (GAO) found that bed-counting practices varied "widely among hospitals and intermediaries" (Comptroller General of the United States, 1991). The GAO report also supports changing the definition to occupied beds, which it calls a verifiable statistic.

An illustration of the difficulty in determining whether beds are available is a situation where a hospital takes a wing out of service for renovation. As a guide to whether or not the beds could be considered available, HCFA has issued instructions that they should be counted if the wing is included as part of the hospital's depreciable assets during the renovation and could be staffed within 24-48 hours (Blue Cross and Blue Shield Association, 1988). Nevertheless, HCFA's fiscal intermediaries need to determine whether both of these criteria are met.

Replacing available beds with occupied beds in the denominator would provide a conceptually simpler variable for implementation purposes, both for hospitals and for fiscal intermediaries. That is, counting only occupied beds avoids the sometimes difficult question of whether a bed can be made available for occupancy, thereby improving the consistency of the policy. Subject to the same exclusions as available beds (e.g., beds in units of a hospital that are not paid under PPS, such as psychiatric units, are not counted), beds are either occupied and counted, or not occupied and not counted.

Previous analysis

Most of the analysis concerning the IME adjustment has centered around the statistical estimate of teaching's impact on operating costs, how this estimate should be made, and the degree to which the adjustment should reflect this estimate in light of other public policy objectives. The current operating IME adjustment increases approximately 7.65 percent for every 10 percent increase in the resident-to-bed ratio. This level was set by the Omnibus Budget Reconciliation Act of 1987, and is based on U.S. Congressional Budget Office (CBO) estimates of the effect of teaching on Medicare inpatient operating costs at that time. For the capital PPS, the IME adjustment increases at a rate of approximately 2.82 percent for every 10 percent increase in the resident-to-ADC ratio. This adjustment is set forth in the Medicare regulations at 42 CFR 412.322 rather than by statute.

All recent analyses of the relationship between teaching intensity and operating costs have indicated that the actual cost effect is currently less than that reflected by the present level of the operating IME adjustment. Recently, ProPAC estimated the relationship to be 5.7 percent using PPS6 cost data and fiscal year (FY) 1992 payment rules (Prospective Payment Assessment Commission, 1992a).(1) However, ProPAC did not control for the effect on costs of a disproportionate share of low-income patients, which is recognized by PPS through a payment adjustment. Controlling for this effect yields a much smaller estimate. The article by O'Dougherty et al. (1992) in this issue of Health Care Financing Review discusses the results of statistical analysis of the effects of teaching, in conjunction with a single standard rate system. Using the resident-to-ADC ratio, the estimated relationship is 3.06 percent; the corresponding estimate using the resident-to-bed ratio is 4.50 percent.

Despite the evidence that the current level of the IME adjustment overestimates the effect of teaching on costs, the level of the adjustment has not been changed by Congress since 1987. One reason is that, despite the high positive Medicare operating payment-to-cost relationships of major teaching hospitals, their overall payment-to-cost relationships are below average (Prospective Payment Assessment Commission, 1992b). Concern about the poor overall financial situation of large teaching hospitals reflects the fact that these hospitals are most likely to be confronted with health-related social problems associated with their predominant location in inner-city areas, such as treating large numbers of medically uninsured patients. Given the general desire to maintain some level of access to care for the uninsured, the IME adjustment has come to be viewed as a subsidy payment to teaching hospitals.

In fact, it has always been viewed that way to some extent. In passing the PPS legislation, Congress expressed doubts about the new system's ability to account fully for the higher costs of teaching hospitals, and set the adjustment at double the estimated level of the teaching effect. Furthermore, the specification of the estimating model itself involves a decision about the degree to which one wishes to isolate the true effect of teaching separate from other factors generally associated with teaching hospitals. Thorpe (1988) and Sheingold (1990) illustrated the varying results that occur depending on the specification of the model. Thorpe pointed out that by limiting the variables used in the model to PPS payment parameters, the cost effects of factors associated with teaching hospitals but not directly related to teaching are loaded onto the estimate. Basing the adjustment on an estimate that controls for all of the factors affecting variation in hospitals' costs would result in a substantially lower estimated teaching effect. Sheingold described the choice of a preferred model as being dependent on "the goals and objectives set forth by policymakers" (Sheingold, 1990).

This issue of payment equity between teaching and non-teaching hospitals relative to their respective costs has been the focus of most past analyses. Much less attention has been paid to the equity of the distribution of IME payments among teaching hospitals. An exception is Welch's article (1987) which argues that the first 10 or 15 residents per 100 beds provide services that relieve the demand for attending physicians, and therefore these residents should not be included in the resident count used to determine the adjustment. Another exception is Sheingold's research that tested for a threshold level of the resident-to-bed ratio at which the effects on costs become significant (Sheingold, 1990). He examined ratios in increments of 0.1 through 0.5, and lumped together hospitals with ratios greater than 0.5. Sheingold found that "there does not appear to be a threshold level of the [ratio] at which the indirect-teaching effect becomes significant . . . Rather, it appears that statistically significant cost effects exist throughout its range" (Sheingold, 1990). In his article, Welch acknowledged this absence of a threshold effect, hypothesizing that characteristics other than the actual education of residents may account for the higher costs of low-intensity teaching hospitals. Although the issue of a threshold effect is not taken up here, the potential refinements represented by the resident-to-ADC ratio may provide an avenue for further investigation.

Analytical approach and issues

This article explores distributional equity among teaching hospitals by reviewing the potential for improving the measure of teaching intensity. In order to evaluate the impact that switching to the resident-to-ADC ratio may have on IME payments, the analysis uses estimates of the effect of teaching on costs measured using both the resident-to-bed ratio and the resident-to-ADC ratio. These estimates are used to calculate IME adjustments using both ratios. Because Medicare now pays two different IME adjustments, four estimates are made: one for each ratio under both the operating and capital IME adjustment specifications. The formula used to calculate the current operating IME adjustment is:

1.89 x [(1 + resident-to-bed ratio)^0.405 - 1].

This formula results in the current adjustment of approximately a 7.65 percent increase for every 10 percent increase in the resident-to-bed ratio. It is approximate because the adjustment yields a smaller marginal increase as the ratio increases. For example, using this formula, a resident-to-bed ratio of 0.10 yields an operating IME adjustment factor of 0.0744, a ratio of 0.20 yields an adjustment factor of 0.1448, and a ratio of 0.40 yields 0.2759.
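As a check on these figures, the operating formula can be evaluated directly (a minimal sketch; the function name is an invention for illustration):

```python
# Current operating IME adjustment: 1.89 x [(1 + ratio)^0.405 - 1].
def operating_ime(ratio, multiplier=1.89, exponent=0.405):
    return multiplier * ((1 + ratio) ** exponent - 1)

for r in (0.10, 0.20, 0.40):
    print(r, round(operating_ime(r), 4))
# Reproduces the factors quoted in the text: 0.0744, 0.1448, 0.2759.
```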

The current IME adjustment formula under the capital PPS is as follows:

e^(0.2822 x resident-to-ADC ratio) - 1,

where e is the natural antilog of 1 (or 2.71828), and 0.2822 is the estimated impact on hospitals' total costs (operating and capital) of a teaching program. Under the capital IME specification, the rate of increase in the adjustment grows larger as the level of the ratio rises.

Switching to the resident-to-ADC ratio has potentially important implications for the size and distribution of IME payments. Since the current level of the operating adjustment is based on an estimate using beds in the denominator, changing to ADC would necessitate re-estimating the level of the adjustment using the resident-to-ADC ratio. Because all recent estimates of the effect of teaching on costs are below the current 7.65 percent level of the operating IME adjustment, this re-estimation would result in a reduction in the level of the adjustment. Hospitals whose resident-to-ADC ratios are significantly higher than their resident-to-bed ratios may benefit from the switch despite the lower level of the adjustment because of their much higher ratios. In fact, mainly because of the beneficial impact on this group of hospitals, overall IME payments would be slightly higher using the resident-to-ADC ratio. At the other end of the spectrum, however, hospitals with high occupancy rates stand to lose because their resident-to-ADC ratios are not large enough to compensate for a lower adjustment. This is particularly an issue in today's health policy environment, as many of the largest teaching hospitals, which are often on the front lines in addressing such needs as caring for the medically uninsured, fall into this group. The analysis that follows indicates that if a switch to ADC were made for the operating IME adjustment, simultaneously adopting the capital IME adjustment formula specification for the operating adjustment would alleviate the potential adverse impacts on this group of hospitals. This results from the increasing marginal rate of change in the adjustment factors as the ratio rises. Because high-occupancy teaching hospitals also tend to have higher ratios under both measures, they would benefit more from the capital adjustment formula.

Another issue discussed later is the year-to-year stability of ADC compared with beds. It is desirable from the standpoint of hospitals and the Medicare program that the level of IME payments be fairly stable and predictable, because excessive fluctuation in the measure of teaching intensity would hamper budgeting efforts. Concern about the stability of ADC hinges on its susceptibility to random fluctuations. The following analysis indicates that ADC is somewhat more variable over time than beds, although the rate of change in the resident-to-ADC ratios from PPS1 to PPS7 is equal to that for beds.
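The capital formula, e^(0.2822 x ratio) - 1, and its increasing marginal rate of change can be verified numerically (a sketch; the function name is an invention for illustration):

```python
import math

# Current capital IME adjustment: e^(0.2822 x resident-to-ADC ratio) - 1.
def capital_ime(ratio, coefficient=0.2822):
    return math.exp(coefficient * ratio) - 1

# Unlike the operating formula, each step up in the ratio adds more than the last.
step_low = capital_ime(0.20) - capital_ime(0.10)
step_high = capital_ime(0.40) - capital_ime(0.30)
print(round(capital_ime(0.10), 4), step_high > step_low)   # 0.0286 True
```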

Data and methodology

The data used in this analysis were taken from the Medicare Hospital Cost Reports on the Health Care Provider Cost Report Information System and the provider specific files, which are maintained by HCFA. The resident-to-bed ratios used in the regressions are from the provider specific file. The resident-to-ADC ratios used in the regressions were calculated by first multiplying the resident-to-bed ratios from the provider specific file by the number of available beds reported on the cost reports to determine the number of residents. Hospitals' ADCs were determined by dividing total inpatient days, in areas of the hospital paid under PPS as reported on the cost report, by the number of days in the cost reporting period.
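The derivation just described can be sketched as follows (the cost-report figures below are hypothetical, not actual HCFA data):

```python
# Back out residents from the provider specific file's ratio, then compute ADC
# from cost-report inpatient days.
def derive_ratios(r_to_bed, available_beds, pps_inpatient_days, period_days):
    residents = r_to_bed * available_beds      # residents = ratio x beds
    adc = pps_inpatient_days / period_days     # average daily census
    return residents, residents / adc

residents, r_to_adc = derive_ratios(
    r_to_bed=0.20, available_beds=500,
    pps_inpatient_days=146_000, period_days=365,
)
print(residents, round(r_to_adc, 2))   # 100 residents; an ADC of 400 gives 0.25
```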

The cost data used in the regressions come from hospitals' PPS5 and PPS6 (FYs 1988 and 1989) cost reports. A dummy variable is included to control for inflation. The logged values of total Medicare costs (operating and capital) are used as the dependent variable in all of the regressions. This is consistent with the stated positions of DHHS and ProPAC that the operating and capital IME adjustments should ultimately be combined into a single adjustment. Total costs are standardized by the case-mix index corresponding with the year from which the data are taken. Besides the teaching intensity variables, the independent variables included in the regressions are the logged value of the area wage index, the percentage of low-income patients for urban hospitals with 100 or more beds, and dummy variables for location in either large urban or other urban areas. For rural hospitals and urban hospitals with fewer than 100 beds, a value of 0 is assigned to the independent variable representing the percentage of low-income patients. This corresponds with the specification for the disproportionate-share adjustment under the capital PPS.

The estimate of the relationship between teaching and costs is affected by the choice of factors, other than teaching intensity, that are included in the regression model. The regressions examined later are primarily intended to facilitate a comparison between the two denominators rather than to estimate the appropriate level of the adjustment. Therefore, the independent variables besides the teaching variable are specified in the same way across all of the regressions. This eliminates the interactive effects between varying specifications of the IME and disproportionate-share adjustment variables, for instance.

The teaching variables in the regressions are specified two different ways, corresponding with the different specifications used to estimate the operating and capital adjustments. The specification of the teaching variable used in the regression analysis for the operating IME adjustment was in the form:

ln(1 + resident-to-bed ratio).

This is the first specification used in the regressions below, first with beds in the denominator and then with ADC. For simplicity, it is referred to as the operating specification. The constant (the 1 in the previous specification) is added to the ratio to avoid taking the natural log of 0 (which is undefined) for non-teaching hospitals. To remove any impact of this constant on the estimate, the specification used in determining the estimate for the capital IME was simply the unlogged value of the resident-to-ADC ratio. This is the second specification of the teaching variable employed later, and it is referred to as the capital specification.
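The two specifications can be written out directly (a sketch; the function names are inventions for illustration):

```python
import math

def operating_spec(ratio):
    # ln(1 + ratio); the added constant keeps non-teaching hospitals (ratio 0) defined
    return math.log(1 + ratio)

def capital_spec(ratio):
    # the unlogged ratio itself, so no constant is needed
    return ratio

# Both specifications assign 0 to a non-teaching hospital.
print(operating_spec(0.0), capital_spec(0.0))   # 0.0 0.0
print(round(operating_spec(0.25), 4))           # ln(1.25) = 0.2231
```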

The coefficients resulting from these regressions are then used to calculate adjustment factors with the corresponding formulas currently used for the operating and the capital adjustments. To calculate adjustment factors using the operating adjustment formula, the formula is revised to set 1.89 equal to 1, and 0.405 is set equal to the coefficient corresponding to the intensity measure that is used. That is, if the resident-to-ADC ratio is being used to determine the adjustment factor, 0.405 in the equation is replaced by the coefficient resulting from the operating specification using the resident-to-ADC ratio. To calculate IME adjustment factors using the capital adjustment formula, the coefficient corresponding with whichever ratio is employed is substituted for 0.2822. This coefficient is estimated using the capital specification as previously described.

For a comparison of hospitals' available beds, ADCs, and ratios during the period from PPS1 through PPS7, data from the cost report files were used. It is worth noting, however, that the resident-to-bed ratio used for payment purposes is not taken from the cost report but is reported separately by HCFA's fiscal intermediaries on the provider specific file. There is some variation between the ratios determined based on data from the Medicare cost reports and the data that are used to compute a hospital's IME adjustment factor. The cost report data were used for the descriptive comparisons because the provider specific file does not contain historical data or hospitals' ADCs.

Impact by program size

As previously noted, switching to the resident-to-ADC ratio would lead to a larger relative portion of IME payments going to small programs because of the lower occupancy rates of these programs. Because ADC is in effect a measure of occupied beds, multiplying available beds by the occupancy rate yields ADC. The lower the occupancy rate, the lower ADC will be relative to beds and, correspondingly, the higher the resident-to-ADC ratio will be relative to the resident-to-bed ratio.

Table 1 displays the average resident-to-ADC and resident-to-bed ratios for hospitals grouped by their numbers of residents, and the average occupancy rates for each group. The averages are weighted by PPS payments to illustrate the relative budget impacts. Hospitals with the smallest graduate medical education programs would experience the largest percentage increase in their average ratios (57.9 percent) by moving from the resident-to-bed ratio to the resident-to-ADC ratio. This can be attributed to the fact that this group of hospitals has an average occupancy rate of 67.3 percent, well below that of the other hospital groups. The average resident-to-bed ratio for hospitals with fewer than 50 residents is 0.057, and the average resident-to-ADC ratio for this group of hospitals is 0.090. Among hospitals with 301 or more residents, the average resident-to-bed ratio is 0.570 and the average resident-to-ADC ratio is 0.717.

To more fully evaluate the effect of this phenomenon, the payment impacts of switching to ADC were analyzed. Payments for IME are affected by both the level of the adjustment factor and the values of the ratios. Although the resident-to-ADC ratios are higher than the resident-to-bed ratios, the regression coefficients for teaching are lower when using the resident-to-ADC ratio. Analysis of the capabilities of the two measures to predict cost variation indicates that they perform similarly, however, as discussed later. The higher resident-to-ADC ratios and the lower coefficients will offset each other in terms of their payment effects.

In order to simulate the payment impacts of switching to the resident-to-ADC ratio and to compare these impacts with using the resident-to-bed ratio, it was necessary to determine comparable adjustment factors. To do this, four regressions were performed, using both ratios and both the operating and capital specifications. Table 2 shows the resulting coefficients and t-statistics for the various specifications of the teaching variable (the coefficient values of the other variables conformed to expectations and varied little across the alternative specifications). The smaller coefficients using the resident-to-ADC ratio are evident here. The teaching coefficients are: with beds, 0.4383 using the operating specification and 0.3552 using the capital specification; and, with ADC, 0.3674 using the operating specification and 0.2824 using the capital specification. Although the coefficients are lower when ADC is used, the t-statistics are slightly higher. To calculate IME adjustment factors, these coefficients are substituted into the current operating and capital IME adjustment formulas as previously described in the methodology section. To illustrate with these coefficient values: to calculate adjustment factors using the resident-to-ADC ratio and the operating IME formula, the formula would be:

1 x [(1 + resident-to-ADC ratio)^0.3674 - 1].

Alternatively, to calculate adjustment factors using the resident-to-bed ratios and the capital IME formula, the formula would be:

e^(0.3552 x resident-to-bed ratio) - 1.
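Under these re-estimated coefficients, adjustment factors for a given ratio can be computed as follows (a sketch; the 0.30 ratio is an arbitrary illustration, and the function names are inventions):

```python
import math

def operating_formula(ratio, coefficient):
    # operating IME formula with the 1.89 multiplier set to 1
    return (1 + ratio) ** coefficient - 1

def capital_formula(ratio, coefficient):
    return math.exp(coefficient * ratio) - 1

r = 0.30
# ADC with the operating specification (0.3674); beds with the capital (0.3552).
print(round(operating_formula(r, 0.3674), 4),   # 0.1012
      round(capital_formula(r, 0.3552), 4))     # 0.1124
```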

Table 3 compares the effects of using both ratios, with their corresponding re-estimated coefficients, on hospitals' IME adjustment factors. It shows the weighted (by PPS payments) average IME payment adjustment factors under the current 7.65 percent adjustment rate using beds as a denominator, and the weighted average IME adjustment factors using the operating and capital formulas and both ratios, for hospitals grouped by their numbers of residents (Table 1). The averages are weighted by PPS payments to indicate the actual IME payment impacts on the various hospital groups. The current law capital adjustment factors are equal to those under the capital specification using the resident-to-ADC ratio; therefore, they are not shown separately. The IME adjustment factors represent the average per case add-on factor a teaching hospital would receive for IME. For instance, a hospital with an IME adjustment factor of 0.150 would receive a per case payment for IME equal to 15 percent of the diagnosis-related group (DRG) payment for that case.

As anticipated based on the previous discussion, the impact of switching to ADC would vary depending on program size. Examining the adjustment factors for the two ratios under each specification, it is clear that smaller programs would do better using the resident-to-ADC ratio, and the largest programs would do about the same using either ratio. To illustrate, hospitals with 1 to 50 residents currently have operating IME adjustment factors of 0.042. With a formula employing the 4.38 percent adjustment as estimated in the previously noted regression that uses the operating specification and beds, the average adjustment factor is 0.024. With the 3.67 percent adjustment suggested by the regression that uses the operating specification and ADC, the average adjustment factor for this group of hospitals is 0.031. With the adjustment levels suggested by the regressions that use the capital specifications (3.55 percent using beds and 2.82 percent using ADC), the corresponding averages are 0.021 and 0.026. This disparity persists but decreases as the size of the teaching programs increases. For hospitals with more than 300 residents, average operating adjustment factors would fall from 0.377 under current law to 0.218 using either beds or ADC. Under the capital specification, there is only a 0.001 difference in the average adjustments for these hospitals.

The effects of the different rates of change between the two adjustment formulas are also evident in Table 3. As discussed previously, the current operating adjustment formula results in a declining marginal rate of change as the ratio value increases. The current capital formula, meanwhile, results in a rising marginal rate of change as the ratio rises. This effect is evidenced by the greater span between the average adjustment factors for the smallest and largest programs under the capital specification. In fact, the adjustment factors using this specification are lower for the smallest programs and higher for the largest programs.

A final point brought out in Table 3 is that the weighted average adjustment factors for all teaching hospitals are greater using ADC. The implication of this is that total IME payments would be greater using a re-estimated adjustment and ADC, regardless of the formula specification. Under the operating specification, the average adjustment factor for all teaching hospitals is 0.072 using beds and 0.078 using ADC. Under the capital specification, the averages are 0.068 and 0.073, respectively. This is due to the beneficial impact of ADC on smaller programs.

[TABULAR DATA OMITTED]

Stability of average daily census

The following discussion evaluates the relative stability of ADC, both during the 7-year period from PPS1 to PPS7 and from year to year during the interim. As previously noted, concern that ADC would fluctuate excessively from year to year and would be too easily manipulated led to its past rejection as the denominator for the ratio. Excessive instability would hamper budgeting efforts, both at the national level and at the hospital level. It would also make the resident-to-ADC ratio a less reliable predictor of costs.

To examine whether ADC is changing at a different rate over time than bed size, Table 4 shows the percent changes in beds and ADC for a matched set of hospitals, grouped according to teaching program size, based on their average resident-to-bed or resident-to-ADC ratios (the latter for showing the change in ADC, and the former to show the change in beds) during the period from PPS1 through PPS7. The groups are based on levels of teaching intensity in order to evaluate whether changes over time in beds and ADC have varied by program size. The evaluation group was limited to hospitals reporting at least one resident in each of the 7 years under review. It was felt that this would provide a more useful indication of the trends in changes in bed size and ADC in teaching hospitals over time.

The columns display the average percent changes from the previous year for PPS2 through PPS7. The total column shows the overall percent changes from PPS1 to PPS7. The percents are weighted by PPS payments in order to illustrate the budgetary implications of the changes from year to year. Because reductions in bed size or ADC would lead to higher ratios and more IME payments, reductions at a hospital receiving $2 million in PPS payments are considered more significant from the standpoint of the Medicare budget than they would be at a hospital receiving only $10,000 in PPS payments.

Overall, the weighted average number of beds fell by 8.3 percent from PPS1 to PPS7, and the weighted average ADC fell by 9.0 percent. Furthermore, Table 4 shows that the averages for both beds and ADC fell by greater amounts during the first 2 years of PPS than they have during later years. Given the incentives of PPS to reduce Medicare patient days, one would expect the initial declines exhibited here. Similarly, because the potential for reducing the length and corresponding costs of patient stays, or for minimizing bed size to maximize the resident-to-bed ratio, is somewhat limited, the smaller percent changes in later years are not surprising. Both statistics continued on a downward trend through PPS7, however.

The larger decrease in the average ADCs would seem to indicate that the resident-to-ADC ratios would have increased more than the resident-to-bed ratios. Table 5 shows that this is not the case, however, as both ratios increased by 15.2 percent from PPS1 through PPS7 (the averages in Table 5 are also weighted by PPS payments). This occurs because of differential rates of change in the numbers of residents among the hospital groups in the tables. For example, hospitals with resident-to-bed ratios between 0.048 and 0.102 (between the 25th and 50th percentiles of the resident-to-bed ranking) had their average number of beds fall by 10.0 percent, while the average ADCs of hospitals with resident-to-ADC ratios between 0.076 and 0.162 (the comparable group based on the resident-to-ADC ranking) fell by 9.7 percent. Meanwhile, the respective increases in the average resident-to-bed and resident-to-ADC ratios were 8.0 percent and 3.7 percent. The difference is attributable to the rates of change in numbers of residents for these groups (not shown in the tables). The number of residents fell by 3.6 percent for the resident-to-bed group, while it fell by 6.7 percent among the corresponding group based on the resident-to-ADC ranking.

Table 6 shows the distributions of the year-to-year percent changes in beds and ADC for teaching hospitals for PPS1 through PPS7. A percent change was computed for a hospital if it had residents in any 2 consecutive years. The concern here is to evaluate the impact of using ADC on individual hospitals, regardless of their Medicare payments relative to other teaching hospitals. The mean percent change for all included hospitals, and the standard deviation of the distribution around that mean percent change, are shown below the respective columns.

The final column in Table 6, labeled "Overall," displays the distribution, the mean percent change, and the standard deviation after combining all of the year-to-year changes from the previous columns. For example, if a hospital had residents for all 7 years under review, all six of its year-to-year changes are included in this column. The mean year-to-year percent change in beds over the 7-year period was -0.3 percent, and the standard deviation was 28.1. The corresponding values for changes in ADC were -1.4 and 24.7. On average, then, ADC fell about 1 percentage point more from one year to the next than beds.

Most of this differential stems from the higher rates of change in ADC during the first few years of PPS. From PPS1 to PPS2, the mean percent change was -5.1 percent for ADC and -2.0 percent for beds; from PPS2 to PPS3, the corresponding values were -2.8 and -1.7 percent. It is likely that these reductions in ADC were in response to the incentive of PPS to reduce patient lengths of stay. More recently, the mean percent change in ADC among teaching hospitals has been less than that for beds in all but one period (PPS6 to PPS7).

The median percent change in beds is 0.0 for all of the years examined, while the median percent change in ADC ranges from -5.3 from PPS1 to PPS2, to -0.3 from PPS5 to PPS6. Given that ADC is more subject to random fluctuation, this result was expected. Examining the percent changes at the 5th and 95th percentiles, however, reveals that the two measures are very similar in terms of their susceptibility to extreme changes. The 5th-percentile values for all year-to-year changes (the last column) are -18.8 percent for beds and -19.1 percent for ADC. The 95th-percentile values are 12.5 percent for beds and 12.6 percent for ADC.

In conclusion, ADC does not appear to be dramatically more variable than beds, and in light of the fact that the weighted average ratios changed at exactly the same rate in Table 5, the results in Table 6 do not appear to disqualify ADC as a useful denominator for measuring teaching intensity.

[TABULAR DATA OMITTED]

Discussion

This analysis was undertaken to ascertain whether ADC would serve as a suitable replacement for beds, given the administrative complexities inherent in determining bed size. The results indicate that the resident-to-ADC ratio would have been as valid and reliable a measure of teaching intensity as the resident-to-bed ratio during the period from PPS1 to PPS7.

The results also illustrate potential political ramifications of making a switch to ADC, given the redistribution of teaching payments that would arise. This redistribution would occur as a result of the lower occupancy rates of the smallest teaching programs compared with larger teaching programs, leading to a larger proportion of IME payments going to smaller programs. This effect is alleviated by employing the capital specification in the estimating equation combined with the capital IME adjustment formula. The lower coefficient under the capital specification, and the increasing marginal change in the adjustment factor as the ratios rise, lead to smaller adjustment factors for the smallest programs and larger adjustments for the larger programs than occur under the operating formula. Although further analysis is needed to explain the lower coefficients when the capital specification of the teaching variable is used rather than the operating specification, it appears that, if ADC is adopted, it would be preferable to combine it with the current capital IME adjustment formula.

Given the health-related social problems (e.g., maintaining access for the uninsured) that are addressed primarily by large inner-city hospitals, many of which are teaching hospitals, it would not seem appropriate to redistribute IME payments away from these hospitals toward the smallest programs. Concern for the financial viability of large teaching hospitals is a separate issue, however, from whether the resident-to-ADC ratio is a better measure of the relationship between teaching and higher costs. For example, given the role of patient care in residency programs, it may be entirely consistent with the policy objective of compensating hospitals for their indirect teaching costs to realign the relative level of payments to, say, a 100-bed hospital with 5 residents and an ADC of 45. The resident-to-bed ratio of such a hospital would be 0.05; its resident-to-ADC ratio would be 0.11. Absent evidence that the linear relationship between costs and teaching intensity varies by program size, one can plausibly argue that such outcomes are justified.
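The arithmetic behind this hypothetical hospital can be sketched as follows (a minimal illustration; the variable names are ours, not the article's):

```python
# The article's hypothetical hospital: 100 beds, 5 residents, ADC of 45.
residents, beds, adc = 5, 100, 45

resident_to_bed = residents / beds   # 0.05
resident_to_adc = residents / adc    # about 0.111

# Because ADC equals beds times the occupancy rate, the two ratios are
# linked: resident-to-ADC = resident-to-bed / occupancy rate.
occupancy = adc / beds               # 0.45
assert abs(resident_to_adc - resident_to_bed / occupancy) < 1e-12

print(round(resident_to_bed, 2), round(resident_to_adc, 2))  # 0.05 0.11
```

The lower the occupancy rate, the further the resident-to-ADC ratio rises above the resident-to-bed ratio.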

A related issue is that the incentive to minimize patient lengths of stay that would accompany adoption of the resident-to-ADC ratio has given rise to some concern that patient care may be adversely affected. In terms of Medicare patients, however, hospitals have faced this incentive since the inception of PPS. The peer review organizations serve as a check to ensure that Medicare patients continue to receive quality care. While the incentive to discharge patients earlier would be enhanced by adopting ADC as the denominator in the ratio, the impact for Medicare patients should be minimal. Because ADC includes all patient days, however, this incentive would also extend to non-Medicare patients. Given the increasing movement toward PPS-type payment systems and managed care programs by non-Medicare payers, it seems questionable whether any incentive to reduce non-Medicare patient lengths of stay would have a significant impact.

(1) PPS1, PPS2, PPS3, etc., refer to the respective years since the beginning of PPS during FY 1984. For instance, PPS1 corresponds with FY 1984, and PPS7 corresponds with FY 1990, the latest year for which Medicare cost report data are generally available.

References

Arthur Young and Co.: Study of the Financing of Graduate Medical Education. Contract No. HHS-100-80-0155. Prepared for the U.S. Department of Health and Human Services, Oct. 1986.

Blue Cross and Blue Shield Association: Administrative Bulletin #1841, 88-01. Chicago. 1988.

Comptroller General of the United States: Flawed Data Add Millions to Teaching Hospital Payments. Report GAO/IMTEC-91-31. Washington. U.S. General Accounting Office, June 1991.

Federal Register: Medicare program; schedule of limits on hospital inpatient general routine operating costs for cost reporting periods beginning on or after July 1, 1980. Final Notice. Vol. 45, No. 121, 41869. Office of the Federal Register, National Archives and Records Administration. Washington. U.S. Government Printing Office, June 20, 1980.

Federal Register: Medicare program; changes to the inpatient hospital prospective payment systems and fiscal year 1986 rates. Final Rule. Vol. 50, No. 170, 35683. Office of the Federal Register, National Archives and Records Administration. Washington. U.S. Government Printing Office, Sept. 3, 1985.

Federal Register: Medicare program; changes to the inpatient hospital prospective payment systems and fiscal year 1993 rates. Final Rule. Vol. 57, No. 170, 39807. Office of the Federal Register, National Archives and Records Administration. Washington. U.S. Government Printing Office, Sept. 1, 1992.

Health Care Financing Administration: Provider Reimbursement Manual. Pub. No. 15-1. Office of Issuances, Health Care Financing Administration. Washington. U.S. Government Printing Office, Aug. 1988.

O'Dougherty, S.M., Cotterill, P.G., Phillips, S.M., et al.: Medicare prospective payment without separate urban and rural rates. Health Care Financing Review 14(2):31-47. HCFA Pub. No. 03335. Office of Research and Demonstrations, Health Care Financing Administration. Washington. U.S. Government Printing Office, Winter 1992.

Prospective Payment Assessment Commission: Report and Recommendations to the Congress. Washington. U.S. Government Printing Office, Mar. 1, 1992a.

Prospective Payment Assessment Commission: Medicare and The American Health Care System: Report to Congress. Washington. U.S. Government Printing Office, June 1992b.

Sheingold, S.H.: Alternatives for using multivariate regression to adjust prospective payment rates. Health Care Financing Review 11(3):31-41. HCFA Pub. No. 03295. Office of Research and Demonstrations, Health Care Financing Administration. Washington. U.S. Government Printing Office, Spring 1990.

Thorpe, K.E.: The Use of Regression Analysis to Determine Hospital Payment: The Case of Medicare's Indirect Teaching Adjustment. Inquiry 25:219-231, Summer 1988.

U.S. Congressional Budget Office: Setting Medicare's Indirect Teaching Adjustment for Hospitals, Working Paper, May 1989.

Welch, W.P.: Do All Teaching Hospitals Deserve an Add-on Payment Under the Prospective Payment System? Inquiry 24:221-232, Fall 1987.


Purpose


The degree to which the resident-to-bed ratio approximates the resident-to-patient relationship depends on a hospital's occupancy rate. To illustrate via two extreme examples, the resident-to-bed ratio of a teaching hospital with a 99-percent occupancy rate would closely approximate the hospital's resident-to-patient ratio; on the other hand, the resident-to-bed ratio for a teaching hospital with an occupancy rate of 10 percent would understate its resident-to-patient ratio.

Administrative difficulties with determining hospital bed size (number of beds) have also sparked interest in an alternative measure. Questionable situations that would be resolved by adopting the resident-to-ADC ratio are whether beds should be counted when a wing is under construction; whether the days a bed is unavailable for use because it is located in a double room occupied by a patient in isolation should be deducted from the number of available bed days; and whether beds in storage should be counted as available.

When the IME adjustment was initiated in 1980, DHHS selected the resident-to-available bed ratio as the measure of teaching intensity over the resident-to-ADC ratio; there was concern that the latter would be too unstable because of fluctuations in use (Federal Register, 1980). Additionally, in response to DHHS' proposal to change the method used to determine available beds, one commenter suggested that using ADC is preferable because the data are readily available and an additional calculation would not be necessary (Federal Register, 1985). In its response, DHHS pointed out that it would consider this approach and others as more data became available.

For purposes of the ratio, an available bed is one that is available for use and housed in patient rooms or wards (Health Care Financing Administration, 1988). Thus, beds that meet the definition for availability are counted whether or not they are occupied. Over time, however, uncertainties have arisen over when a bed is considered available. In a report on what it describes as "weaknesses in data used to calculate" the IME adjustment, the U.S. General Accounting Office (GAO) found that bed-counting practices varied "widely among hospitals and intermediaries" (Comptroller General of the United States, 1991). The GAO report also supports changing the definition to occupied beds, which it calls a verifiable statistic.

An illustration of the difficulty in determining whether beds are available is a situation where a hospital takes a wing out of service for renovation. As a guide to whether or not the beds could be considered available, HCFA has issued instructions that they should be counted if the wing is included as part of the hospital's depreciable assets during the renovation and could be staffed within 24-48 hours (Blue Cross and Blue Shield Association, 1988). Nevertheless, HCFA's fiscal intermediaries need to determine whether both of these criteria are met.

Replacing available beds with occupied beds in the denominator would provide a conceptually simpler variable for implementation purposes, both for hospitals and for fiscal intermediaries. That is, counting only occupied beds avoids the sometimes difficult question of whether a bed can be made available for occupancy, thereby improving the consistency of the policy. Subject to the same exclusions as available beds (e.g., beds in units of a hospital that are not paid under PPS, such as psychiatric units, are not counted), beds are either occupied and counted, or not occupied and not counted.

Previous analysis

Most of the analysis concerning the IME adjustment has centered around the statistical estimate of teaching's impact on operating costs, how this estimate should be made, and the degree to which the adjustment should reflect this estimate in light of other public policy objectives. The current operating IME adjustment increases approximately 7.65 percent for every 10-percent increase in the resident-to-bed ratio. This level was set by the Omnibus Budget Reconciliation Act of 1987, and is based on U.S. Congressional Budget Office (CBO) estimates of the effect of teaching on Medicare inpatient operating costs at that time. For the capital PPS, the IME adjustment increases at a rate of approximately 2.82 percent for every 10-percent increase in the resident-to-ADC ratio. This adjustment is set forth in the Medicare regulations at 42 CFR 412.322 rather than by statute.

All recent analyses of the relationship between teaching intensity and operating costs have indicated that the actual cost effect is currently less than that reflected by the present level of the operating IME adjustment. Recently, ProPAC estimated the relationship to be 5.7 percent using PPS6 cost data and fiscal year (FY) 1992 payment rules (Prospective Payment Assessment Commission, 1992a).(1) However, ProPAC did not control for the effect on costs of a disproportionate share of low-income patients, which is recognized by PPS through a payment adjustment. Controlling for this effect yields a much smaller estimate. The article by O'Dougherty et al. (1992) in this issue of Health Care Financing Review discusses the results of statistical analysis of the effects of teaching, in conjunction with a single standard rate system. Using the resident-to-ADC ratio, the estimated relationship is 3.06 percent. The corresponding estimate using the resident-to-bed ratio is 4.50 percent.

Despite the evidence that the current level of the IME adjustment overestimates the effect of teaching on costs, the level of the adjustment has not been changed by Congress since 1987. One reason is that, despite the high positive Medicare operating payment-to-cost relationships of major teaching hospitals, their overall payment-to-cost relationships are below average (Prospective Payment Assessment Commission, 1992b). Concern about the poor overall financial situation of large teaching hospitals reflects the fact that these hospitals are most likely to be confronted with health-related social problems associated with their predominant location in inner-city areas, such as treating large numbers of medically uninsured patients. Given the general desire to maintain some level of access to care for the uninsured, the IME adjustment has come to be viewed as a subsidy payment to teaching hospitals.

In fact, it has always been viewed that way to some extent. In passing the PPS legislation, Congress expressed doubts about the new system's ability to account fully for the higher costs of teaching hospitals, and set the adjustment at double the estimated level of the teaching effect. Furthermore, the specification of the estimating model itself involves a decision about the degree to which one wishes to isolate the true effect of teaching separate from other factors generally associated with teaching hospitals. Thorpe (1988) and Sheingold (1990) illustrated the variant results that occur depending on the specification of the model. Thorpe pointed out that by limiting the variables used in the model to PPS payment parameters, the cost effects of factors associated with teaching hospitals but not directly related to teaching are loaded onto the estimate. Basing the adjustment on an estimate that controls for all of the factors affecting variation in hospitals' costs would result in a substantially lower estimated teaching effect. Sheingold described the choice of a preferred model as being dependent on "the goals and objectives set forth by policymakers" (Sheingold, 1990).

This issue of payment equity between teaching and non-teaching hospitals relative to their respective costs has been the focus of most past analyses. Much less attention has been paid to the equity of the distribution of IME payments among teaching hospitals. An exception is Welch's article (1987) which argues that the first 10 or 15 residents per 100 beds provide services that relieve the demand for attending physicians, and therefore these residents should not be included in the resident count used to determine the adjustment. Another exception is Sheingold's research that tested for a threshold level of the resident-to-bed ratio at which the effects on costs become significant (Sheingold, 1990). He examined ratios in increments of 0.1 through 0.5, and lumped together hospitals with ratios greater than 0.5. Sheingold found that "there does not appear to be a threshold level of the [ratio] at which the indirect-teaching effect becomes significant . . . Rather, it appears that statistically significant cost effects exist throughout its range" (Sheingold, 1990). In his article, Welch acknowledged this absence of a threshold effect, hypothesizing that characteristics other than the actual education of residents may account for the higher costs of low-intensity teaching hospitals. Although the issue of a threshold effect is not taken up here, the potential refinements represented by the resident-to-ADC ratio may provide an avenue for further investigation.

Analytical approach and issues

This article explores distributional equity among teaching hospitals by reviewing the potential for improving the measure of teaching intensity. In order to evaluate the impact that switching to the resident-to-ADC ratio may have on IME payments, the analysis uses estimates of the effect of teaching on costs measured using both the resident-to-bed ratio and the resident-to-ADC ratio. These estimates are used to calculate IME adjustments using both ratios. Because Medicare now pays two different IME adjustments, four estimates are made, one for both ratios corresponding to the operating and capital IME adjustments. The formula used to calculate the current operating IME adjustment is:

1.89 x [(1 + resident-to-bed ratio)^0.405 - 1].

This formula results in the current adjustment of approximately a 7.65 percent increase for every 10 percent increase in the resident-to-bed ratio. It is approximate because the adjustment results in a smaller marginal increase as the ratios increase. For example, using this formula, a resident-to-bed ratio of 0.10 yields an operating IME adjustment factor of 0.0744, a ratio of 0.20 yields an adjustment factor of 0.1448, and a ratio of 0.40 yields 0.2759.
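The operating formula and the article's worked examples can be sketched in Python (a minimal illustration; the function name and keyword defaults are ours, not from the article):

```python
# Operating IME adjustment: 1.89 x [(1 + resident-to-bed ratio)^0.405 - 1].
def operating_ime_factor(resident_to_bed_ratio: float,
                         multiplier: float = 1.89,
                         exponent: float = 0.405) -> float:
    """Per-case IME add-on factor under the operating PPS formula."""
    return multiplier * ((1.0 + resident_to_bed_ratio) ** exponent - 1.0)

# Reproducing the examples in the text: ratios of 0.10, 0.20, and 0.40
# yield factors of roughly 0.0744, 0.1448, and 0.2759.
for ratio in (0.10, 0.20, 0.40):
    print(ratio, round(operating_ime_factor(ratio), 4))
```

Note how each successive doubling of the ratio adds less than twice the adjustment, reflecting the declining marginal increase described above.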

The current IME adjustment formula under the capital PPS is as follows:

e^(0.2822 x resident-to-ADC ratio) - 1,

where e is the base of the natural logarithm (approximately 2.71828), and 0.2822 is the estimated impact on hospitals' total costs (operating and capital) of a teaching program. Under the capital IME specification, the rate of increase in the adjustment grows larger as the level of the ratio rises.

Switching to the resident-to-ADC ratio has potentially important implications for the size and distribution of IME payments. Because the current level of the operating adjustment is based on an estimate using beds in the denominator, changing to ADC would necessitate re-estimating the level of the adjustment using the resident-to-ADC ratio. Because all recent estimates of the effect of teaching on costs are below the current 7.65-percent level of the operating IME adjustment, this re-estimation would result in a reduction in the level of the adjustment. Hospitals whose resident-to-ADC ratios are significantly higher than their resident-to-bed ratios may benefit from the switch, despite the lower level of the adjustment, because of their much higher ratios. In fact, mainly because of the beneficial impact on this group of hospitals, overall IME payments would be slightly higher using the resident-to-ADC ratio. At the other end of the spectrum, however, hospitals with high occupancy rates stand to lose because their resident-to-ADC ratios are not large enough to compensate for a lower adjustment. This is particularly an issue in today's health policy environment, as many of the largest teaching hospitals, which are often on the front lines in addressing such needs as caring for the medically uninsured, fall into this group. The analysis that follows indicates that if a switch to ADC were made for the operating IME adjustment, simultaneously adopting the capital IME adjustment formula specification for the operating adjustment would alleviate the potential adverse impacts on this group of hospitals.
This results from the increasing marginal rate of change in the adjustment factors as the ratio rises. Because high-occupancy teaching hospitals also tend to have higher ratios under both measures, they would benefit more from the capital adjustment formula.

Another issue discussed later is the year-to-year stability of ADC compared with beds. It is desirable, from the standpoint of both hospitals and the Medicare program, that the level of IME payments be fairly stable and predictable, because excessive fluctuation in the measure of teaching intensity would hamper budgeting efforts. Concern about the stability of ADC hinges on its susceptibility to random fluctuations. The following analysis indicates that ADC is somewhat more variable over time than beds, although the rate of change in the resident-to-ADC ratios from PPS1 to PPS7 equals that of the resident-to-bed ratios.
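The increasing marginal change under the capital formula, e^(0.2822 x resident-to-ADC ratio) - 1, can be verified directly (a sketch; the function name is ours):

```python
import math

# Capital IME adjustment: e^(coefficient x resident-to-ADC ratio) - 1.
def capital_ime_factor(resident_to_adc_ratio: float,
                       coefficient: float = 0.2822) -> float:
    return math.exp(coefficient * resident_to_adc_ratio) - 1.0

# The adjustment is convex in the ratio: each 0.1 increase in the ratio
# adds more to the factor than the previous 0.1 did -- the opposite of
# the operating formula's declining marginal increase.
low  = capital_ime_factor(0.2) - capital_ime_factor(0.1)
high = capital_ime_factor(0.4) - capital_ime_factor(0.3)
print(low < high)  # True
```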

Data and methodology

The data used in this analysis were taken from the Medicare Hospital Cost Reports on the Health Care Provider Cost Report Information System and the provider specific files, which are maintained by HCFA. The resident-to-bed ratios used in the regressions are from the provider specific file. The resident-to-ADC ratios used in the regressions were calculated by first multiplying the resident-to-bed ratios from the provider specific file by the number of available beds reported on the cost reports to determine the number of residents. Hospitals' ADCs were determined by dividing total inpatient days, in areas of the hospital paid under PPS as reported on the cost report, by the number of days in the cost reporting period.

The cost data used in the regressions come from hospitals' PPS5 and PPS6 (FYs 1988 and 1989) cost reports. A dummy variable is included to control for inflation. The logged values of total Medicare costs (operating and capital) are used as the dependent variable in all of the regressions. This is consistent with the stated positions of DHHS and ProPAC that the operating and capital IME adjustments should ultimately be combined into a single adjustment. Total costs are standardized by the case-mix index corresponding with the year from which the data are taken. Besides the teaching intensity variables, the independent variables that are included in the regressions are the logged value of the area wage index, the percentage of low-income patients for urban hospitals with 100 or more beds, and dummy variables for location in either large urban or other urban areas. For rural hospitals and urban hospitals with fewer than 100 beds, a value of 0 is assigned to the independent variable representing the percentage of low-income patients. This corresponds with the specification for the disproportionate-share adjustment under the capital PPS.

The estimate of the relationship between teaching and costs is affected by which factors, besides teaching intensity, are included in the regression model. The regressions examined later are primarily intended to facilitate a comparison between the two denominators rather than to estimate the appropriate level of the adjustment. Therefore, the independent variables besides the teaching variable are specified in the same way across all of the regressions. This eliminates the interactive effects between varying specifications of the IME and disproportionate-share adjustment variables, for instance.

The teaching variables in the regressions are specified two different ways, corresponding with the different specifications used to estimate the operating and capital adjustments. The specification of the teaching variable used in the regression analysis for the operating IME adjustment was in the form:

the natural log of (1 + resident-to-bed ratio)

This is the first specification used in the regressions below, first with beds in the denominator and then with ADC. For simplicity, it is referred to as the operating specification. To remove any impact on the estimate where the constant (1 in the previous specification) is added to the ratio to avoid taking the natural log of 0 (which is undefined) for non-teaching hospitals, the specification used in determining the estimate for the capital IME was simply the unlogged value of the resident-to-ADC ratio. This is the second specification of the teaching variable employed later, and it is referred to as the capital specification.
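The two specifications of the teaching variable amount to simple transforms (a sketch; the function names are ours, not the article's). The constant 1 in the operating specification keeps the transform defined for non-teaching hospitals, whose ratio is 0:

```python
import math

def operating_spec(ratio: float) -> float:
    # ln(1 + ratio): adding the constant 1 avoids taking the natural
    # log of 0 (which is undefined) for non-teaching hospitals.
    return math.log(1.0 + ratio)

def capital_spec(ratio: float) -> float:
    # The unlogged ratio itself, so no constant is needed.
    return ratio

print(operating_spec(0.0), capital_spec(0.0))  # 0.0 0.0
```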

The coefficients resulting from these regressions are then used to calculate adjustment factors with the corresponding formulas currently used for the operating and the capital adjustments. To calculate adjustment factors using the operating adjustment formula, the formula is revised to set 1.89 equal to 1, and 0.405 is set equal to the coefficient corresponding to the intensity measure that is used. That is, if the resident-to-ADC ratio is being used to determine the adjustment factor, 0.405 in the equation is replaced by the coefficient resulting from the operating specification using the resident-to-ADC ratio. To calculate IME adjustment factors using the capital adjustment formula, the coefficient corresponding with whichever ratio is employed is substituted for 0.2822. This coefficient is estimated using the capital specification as previously described.

For a comparison of hospitals' available beds, ADCs, and ratios during the period from PPS1 through PPS7, data from the cost report files were used. It is worth noting, however, that the resident-to-bed ratio used for payment purposes is not taken from the cost report but is reported separately by HCFA's fiscal intermediaries on the provider specific file. There is some variation between the ratios determined from Medicare cost report data and the data used to compute a hospital's IME adjustment factor. The cost report data were used for the descriptive comparisons because the provider specific file does not contain historical data or hospitals' ADCs.

Impact by program size

As previously noted, switching to the resident-to-ADC ratio would lead to a larger relative portion of IME payments going to small programs because of the lower occupancy rates of these programs. Because ADC is actually a measure of occupied beds, multiplying available beds by the occupancy rate results in ADC. The lower the occupancy rate, the lower ADC will be relative to beds and, conversely, the higher the resident-to-ADC ratio will be relative to the resident-to-bed ratio.

Table 1 displays the average resident-to-ADC and resident-to-bed ratios for hospitals grouped by their numbers of residents, and the average occupancy rates for each group. The averages are weighted by PPS payments to illustrate the relative budget impacts. Hospitals with the smallest graduate medical education programs would experience the largest percentage increase in their average ratios (57.9 percent) by moving from the resident-to-bed ratio to the resident-to-ADC ratio. This can be attributed to the fact that this group of hospitals has an average occupancy rate of 67.3 percent, well below that of the other hospital groups. The average resident-to-bed ratio for hospitals with fewer than 50 residents is 0.057, and the average resident-to-ADC ratio for this group of hospitals is 0.090. Among hospitals with 301 or more residents, the average resident-to-bed ratio is 0.570 and the average resident-to-ADC ratio is 0.717.
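As a quick arithmetic check, the 57.9-percent figure quoted above follows directly from the two group averages for the smallest programs:

```python
# Table 1 averages for hospitals with fewer than 50 residents.
r_bed, r_adc = 0.057, 0.090

# Percentage increase from switching the denominator to ADC.
pct_increase = (r_adc / r_bed - 1.0) * 100.0
print(round(pct_increase, 1))  # 57.9
```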

To more fully evaluate the effect of this phenomenon, the payment impacts of switching to ADC were analyzed. Payments for IME are affected by both the level of the adjustment factor and the values of the ratios. Although the resident-to-ADC ratios are higher than the resident-to-bed ratios, the regression coefficients for teaching are lower when using the resident-to-ADC ratio. Analysis of the capabilities of the two measures to predict cost variation indicates that they perform similarly, however, as discussed later. The higher resident-to-ADC ratios and the lower coefficients will offset each other in terms of their payment effects.

In order to simulate the payment impacts of switching to the resident-to-ADC ratio and to compare these impacts with using the resident-to-bed ratio, it was necessary to determine comparable adjustment factors. To do this, four regressions were performed, using both ratios and the operating and capital specifications. Table 2 shows the resulting coefficients and t-statistics for the various specifications of the teaching variable (the coefficient values of the other variables conformed to expectations and varied little across the alternative specifications). The smaller coefficients using the resident-to-ADC ratio are illustrated here. The teaching coefficients are: with beds, 0.4383 using the operating specification and 0.3552 using the capital specification; and, with ADC, 0.3674 using the operating specification and 0.2824 using the capital specification. Although the coefficients are lower when ADC is used, the t-statistics are slightly higher. To calculate IME adjustment factors, these coefficients are substituted into the current operating and capital IME adjustment formulas as previously described in the methodology section. To illustrate, using these coefficient values with the resident-to-ADC ratios and the operating IME formula, the formula would be:

1 x [(1 + resident-to-ADC ratio)^0.3674 - 1]

Alternatively, to calculate adjustment factors using the resident-to-bed ratios and the capital IME formula, the formula would be:

e^(0.3552 x resident-to-bed ratio) - 1
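The arithmetic of the two formulas just shown can be sketched in code. This is purely an illustration of the calculation described in the text; the function names are ours, not part of the payment regulations, and the coefficients shown are the re-estimated values from Table 2.

```python
import math

def operating_ime_factor(ratio, coeff):
    """Operating-style IME adjustment: multiplier x [(1 + ratio)^coeff - 1]."""
    return 1 * ((1 + ratio) ** coeff - 1)

def capital_ime_factor(ratio, coeff):
    """Capital-style IME adjustment: e^(coeff x ratio) - 1."""
    return math.exp(coeff * ratio) - 1

# Example: a hospital with a teaching-intensity ratio of 0.10
op = operating_ime_factor(0.10, 0.3674)  # operating formula, ADC coefficient
cap = capital_ime_factor(0.10, 0.3552)   # capital formula, bed coefficient
```

Note that for small ratios the two functional forms give similar factors; they diverge as the ratio grows, because the exponential form has a rising marginal rate of change while the power form has a declining one.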

Table 3 compares the effects of using each ratio, with its corresponding re-estimated coefficient, on hospitals' IME adjustment factors. It shows the weighted (by PPS payments) average IME adjustment factors under the current 7.65 percent adjustment rate using beds as the denominator, and the weighted average IME adjustment factors using the operating and capital formulas with both ratios, for hospitals grouped by their numbers of residents as in Table 1. The averages are weighted by PPS payments to indicate the actual IME payment impacts on the various hospital groups. The current law capital adjustment factors are equal to those under the capital specification using the resident-to-ADC ratio and therefore are not shown separately. The IME adjustment factor represents the average per case add-on a teaching hospital would receive for IME. For instance, a hospital with an IME adjustment factor of 0.150 would receive a per case payment for IME equal to 15 percent of the diagnosis-related group (DRG) payment for that case.

As anticipated from the previous discussion, the impact of switching to ADC would vary with program size. Examining the adjustment factors for the two ratios under each specification, it is clear that smaller programs would do better using the resident-to-ADC ratio, and the largest programs would do about the same using either ratio. To illustrate, hospitals with 1 to 50 residents currently have operating IME adjustment factors of 0.042. With a formula employing the 4.38 percent adjustment estimated in the regression using the operating specification and beds, the average adjustment factor is 0.024. With the 3.67 percent adjustment suggested by the regression using the operating specification and ADC, the average adjustment factor for this group is 0.031. With the adjustment levels suggested by the regressions using the capital specifications (3.55 percent with beds and 2.82 percent with ADC), the corresponding averages are 0.021 and 0.026. This disparity persists but decreases as the size of the teaching programs increases. For hospitals with more than 300 residents, average operating adjustment factors would fall from 0.377 under current law to 0.218 using either beds or ADC. Under the capital specification, there is only a 0.001 difference in the average adjustments for these hospitals.
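The small-program comparison can be approximated directly from the group-average ratios in Table 1 and the coefficients in Table 2. (Because the figures in Table 3 are payment-weighted averages across hospitals, applying the formulas to the group-average ratios, as this illustrative sketch does, reproduces them only approximately.)

```python
import math

# Group-average ratios for hospitals with 1 to 50 residents (Table 1)
r_bed, r_adc = 0.057, 0.090

# Adjustment factors under each specification, using the Table 2 coefficients
factors = {
    "operating, beds": (1 + r_bed) ** 0.4383 - 1,
    "operating, ADC":  (1 + r_adc) ** 0.3674 - 1,
    "capital, beds":   math.exp(0.3552 * r_bed) - 1,
    "capital, ADC":    math.exp(0.2824 * r_adc) - 1,
}

# Under both specifications, the smallest programs fare better with ADC,
# because their low occupancy rates inflate the resident-to-ADC ratio
assert factors["operating, ADC"] > factors["operating, beds"]
assert factors["capital, ADC"] > factors["capital, beds"]
```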

The effects of the different rates of change between the two adjustment formulas are also evident in Table 3. As discussed previously, the current operating adjustment formula yields a declining marginal rate of change as the ratio value increases, while the current capital formula yields a rising marginal rate of change as the ratio rises. This effect is evidenced by the greater span between the average adjustment factors for the smallest and largest programs under the capital specification. In fact, the adjustment factors using this specification are lower for the smallest programs and higher for the largest programs.

A final point brought out in Table 3 is that the weighted average adjustment factors for all teaching hospitals are greater using ADC. The implication of this is that total IME payments would be greater using a re-estimated adjustment and ADC, regardless of the formula specification. Under the operating specification, the average adjustment factor for all teaching hospitals is 0.072 using beds and 0.078 using ADC. Under the capital specification, the averages are 0.068 and 0.073, respectively. This is due to the beneficial impact of ADC on smaller programs.

[TABULAR DATA OMITTED]

Stability of average daily census

The following discussion evaluates the relative stability of ADC, both over the 7-year period from PPS1 to PPS7 and from year to year within that period. As previously noted, concern that ADC would fluctuate excessively from year to year and would be too easily manipulated led to its past rejection as the denominator for the ratio. Excessive instability would hamper budgeting efforts at both the national and hospital levels, and would also make the resident-to-ADC ratio a less reliable predictor of costs.

To examine whether ADC has changed over time at a different rate than bed size, Table 4 shows the percent changes in beds and ADC from PPS1 through PPS7 for a matched set of hospitals, grouped by teaching program size. The groups are based on the hospitals' average resident-to-bed ratios (for showing the changes in beds) or average resident-to-ADC ratios (for showing the changes in ADC); grouping by level of teaching intensity makes it possible to evaluate whether changes over time in beds and ADC have varied by program size. The evaluation group was limited to hospitals reporting at least one resident in each of the 7 years under review, in order to provide a more useful indication of the trends in bed size and ADC in teaching hospitals over time.

The columns display the average percent changes from the previous year for PPS2 through PPS7. The total column shows the overall percent change from PPS1 to PPS7. The percent changes are weighted by PPS payments in order to illustrate the budgetary implications of the changes from year to year. Because reductions in bed size or ADC would lead to higher ratios and more IME payments, reductions at a hospital receiving $2 million in PPS payments are more significant from the standpoint of the Medicare budget than they would be at a hospital receiving only $10,000 in PPS payments.
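The payment weighting described above can be illustrated with a small, hypothetical example (the hospitals and percent changes below are invented for illustration only):

```python
# Hypothetical hospitals: (PPS payments in dollars, percent change in ADC)
hospitals = [(2_000_000, -10.0), (10_000, 50.0)]

total_payments = sum(pay for pay, _ in hospitals)
weighted_change = sum(pay * chg for pay, chg in hospitals) / total_payments
# The $2 million hospital dominates: the weighted average is about -9.7,
# even though the simple (unweighted) average of the two changes is +20.0
```

This is why the weighted figures in Table 4 better reflect budget exposure than unweighted averages would.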

Overall, the weighted average number of beds fell by 8.3 percent from PPS1 to PPS7, and the weighted average ADC fell by 9.0 percent. Furthermore, Table 4 shows that the averages for both beds and ADC fell by greater amounts during the first 2 years of PPS than during later years. Given the incentives under PPS to reduce Medicare patient days, the initial declines exhibited here are to be expected. Similarly, because the potential for reducing the length and corresponding costs of patient stays, or for minimizing bed size to maximize the resident-to-bed ratio, is somewhat limited, the smaller percent changes in later years are not surprising. Both statistics continued on a downward trend through PPS7, however.

The larger decrease in the average ADCs would seem to indicate that the resident-to-ADC ratios increased more than the resident-to-bed ratios. Table 5 shows that this is not the case, however: both ratios increased by 15.2 percent from PPS1 through PPS7 (the averages in Table 5 are also weighted by PPS payments). This occurs because of differential rates of change in the numbers of residents among the hospital groups in the tables. For example, hospitals with resident-to-bed ratios between 0.048 and 0.102 (between the 25th and 50th percentiles of the resident-to-bed ranking) had their average number of beds fall by 10.0 percent, while the average ADCs of hospitals with resident-to-ADC ratios between 0.076 and 0.162 (the comparable group based on the resident-to-ADC ranking) fell by 9.7 percent. Meanwhile, the respective increases in the average resident-to-bed and resident-to-ADC ratios were 8.0 percent and 3.7 percent. The difference is attributable to the rates of change in the numbers of residents for these groups (not shown in the tables): the number of residents fell by 3.6 percent for the resident-to-bed group, while it fell by 6.7 percent for the corresponding resident-to-ADC group.

Table 6 shows the distributions of the year-to-year percent changes in beds and ADC for teaching hospitals for PPS1 through PPS7. A percent change was computed for a hospital if it had residents in any 2 consecutive years. The concern here is to evaluate the impact of using ADC on individual hospitals, regardless of their Medicare payments relative to other teaching hospitals. The mean percent change for all included hospitals, and the standard deviation of the distribution around that mean percent change, are shown below the respective columns.

The final column in Table 6, labeled "Overall," displays the distribution, the mean percent change, and the standard deviation after combining all of the year-to-year changes from the previous columns. For example, if a hospital had residents for all 7 years under review, all six of its year-to-year changes are included in this column. The mean year-to-year percent change in beds over the 7-year period was -0.3 percent, with a standard deviation of 28.1; the corresponding values for changes in ADC were -1.4 and 24.7. On average, then, ADC fell about 1 percentage point more from one year to the next than beds.
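The year-to-year statistics described here can be sketched for a single hospital as follows (the bed counts are hypothetical, purely to show the calculation):

```python
import statistics

# Hypothetical bed counts for one teaching hospital, PPS1 through PPS7
beds_by_year = [300, 290, 290, 275, 280, 280, 270]

# A percent change is computed for each pair of consecutive years,
# mirroring how Table 6 pools changes across hospitals and years
pct_changes = [
    100 * (curr - prev) / prev
    for prev, curr in zip(beds_by_year, beds_by_year[1:])
]
mean_change = statistics.mean(pct_changes)
sd_change = statistics.pstdev(pct_changes)  # population SD; stdev() for a sample
```

A 7-year series yields six year-to-year changes, exactly as in the "Overall" column.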

Most of this differential stems from the higher rates of change in ADC during the first few years of PPS. From PPS1 to PPS2, the mean percent change was -5.1 percent for ADC and -2.0 percent for beds; from PPS2 to PPS3, the corresponding values were -2.8 for ADC and -1.7 for beds. It is likely that these reductions in ADC were a response to the incentive under PPS to reduce patient lengths of stay. More recently, the mean percent change in ADC among teaching hospitals has been smaller than that for beds in all but one period (PPS6 to PPS7).

The median percent change in beds is 0.0 for all of the years examined, while the median percent change in ADC ranges from -5.3 from PPS1 to PPS2 to -0.3 from PPS5 to PPS6. Given that ADC is more subject to random fluctuation, this result was expected. Examining the percent changes at the 5th and 95th percentiles, however, reveals that the two measures are very similar in their susceptibility to extreme changes. The 5th percentile values for all year-to-year changes (the last column) are -18.8 percent for beds and -19.1 percent for ADC; the 95th percentile values are 12.5 percent for beds and 12.6 percent for ADC.

In conclusion, ADC does not appear to be dramatically more variable than beds, and given that the weighted average ratios in Table 5 changed at exactly the same rate, the results in Table 6 do not appear to disqualify ADC as a useful denominator for measuring teaching intensity.

[TABULAR DATA OMITTED]

Discussion

This analysis was undertaken to ascertain whether ADC would serve as a suitable replacement for beds, given the administrative complexities inherent in determining bed size. The results indicate that the resident-to-ADC ratio would have been as valid and reliable a measure of teaching intensity as the resident-to-bed ratio during the period from PPS1 to PPS7.

The results also illustrate the potential political ramifications of a switch to ADC, given the redistribution of teaching payments that would arise. This redistribution would occur because the smallest teaching programs have lower occupancy rates than larger programs, so a larger proportion of IME payments would go to smaller programs. The effect is alleviated by employing the capital specification in the estimating equation combined with the capital IME adjustment formula: the lower coefficient under the capital specification, and the increasing marginal change in the adjustment factor as the ratio rises, lead to smaller adjustment factors for the smallest programs and larger adjustments for the larger programs than occur under the operating formula. Although further analysis is needed to explain why the coefficients are lower under the capital specification of the teaching variable than under the operating specification, it appears that, if ADC is adopted, it would be preferable to combine it with the current capital IME adjustment formula.

Given the health-related social problems (e.g., maintaining access for the uninsured) that are addressed primarily by large inner-city hospitals, many of which are teaching hospitals, it would not seem appropriate to redistribute IME payments away from these hospitals toward the smallest programs. Concern for the financial viability of large teaching hospitals is a separate issue, however, from whether the resident-to-ADC ratio is a better measure of the relationship between teaching and higher costs. For example, given the role of patient care in residency programs, it may be entirely consistent with the policy objective of compensating hospitals for their indirect teaching costs to realign the relative level of payments to, say, a 100-bed hospital with 5 residents and an ADC of 45. The resident-to-bed ratio of such a hospital would be 0.05; its resident-to-ADC ratio would be 0.11. Absent evidence that the linear relationships between costs and teaching intensity vary by program size, one can plausibly argue that such outcomes are justified.
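The arithmetic behind the example hospital above is simply:

```python
# The hypothetical hospital from the text: 100 beds, 5 residents, ADC of 45
residents, beds, adc = 5, 100, 45

ratio_bed = residents / beds  # 0.05
ratio_adc = residents / adc   # about 0.11, as reported in the text
```

With a 45 percent occupancy rate, the same 5 residents look more than twice as teaching-intensive under the ADC measure, which is precisely the realignment at issue.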

A related issue is that the incentive to minimize patient lengths of stay that would accompany adoption of the resident-to-ADC ratio has given rise to some concern that patient care may be adversely affected. For Medicare patients, however, hospitals have faced this incentive since the inception of PPS, and the peer review organizations serve as a check to ensure that Medicare patients continue to receive quality care. While the incentive to discharge patients earlier would be enhanced by adopting ADC as the denominator in the ratio, the impact for Medicare patients should be minimal. Because ADC includes all patient days, however, this incentive would also extend to non-Medicare patients. Given the increasing movement toward PPS-type payment systems and managed care programs by non-Medicare payers, it seems questionable whether any incentive to reduce non-Medicare patient lengths of stay would have a significant impact.

(1) PPS1, PPS2, PPS3, etc., refer to the respective fiscal years since the beginning of PPS in FY 1984. For instance, PPS1 corresponds with FY 1984, and PPS7 corresponds with FY 1990, the latest year for which Medicare cost report data are generally available.

References

Arthur Young and Co.: Study of the Financing of Graduate Medical Education. Contract No. HHS-100-80-0155. Prepared for the U.S. Department of Health and Human Services, Oct. 1986.

Blue Cross and Blue Shield Association: Administrative Bulletin #1841, 88-01. Chicago. 1988.

Comptroller General of the United States: Flawed Data Add Millions to Teaching Hospital Payments. Report GAO/IMTEC-91-31. Washington. U.S. General Accounting Office, June 1991.

Federal Register: Medicare program; schedule of limits on hospital inpatient general routine operating costs for cost reporting periods beginning on or after July 1, 1980. Final Notice. Vol. 45, No. 121, 41869. Office of the Federal Register, National Archives and Records Administration. Washington. U.S. Government Printing Office, June 20, 1980.

Federal Register: Medicare program; changes to the inpatient hospital prospective payment systems and fiscal year 1986 rates. Final Rule. Vol. 50, No. 170, 35683. Office of the Federal Register, National Archives and Records Administration. Washington. U.S. Government Printing Office, Sept. 3, 1985.

Federal Register: Medicare program; changes to the inpatient hospital prospective payment systems and fiscal year 1993 rates. Final Rule. Vol. 57, No. 170, 39807. Office of the Federal Register, National Archives and Records Administration. Washington. U.S. Government Printing Office, Sept. 1, 1992.

Health Care Financing Administration: Provider Reimbursement Manual. Pub. No. 15-1. Office of Issuances, Health Care Financing Administration. Washington. U.S. Government Printing Office, Aug. 1988.

O'Dougherty, S.M., Cotterill, P.G., Phillips, S.M., et al.: Medicare prospective payment without separate urban and rural rates. Health Care Financing Review 14(2):31-47. HCFA Pub. No. 03335. Office of Research and Demonstrations, Health Care Financing Administration. Washington. U.S. Government Printing Office, Winter 1992.

Prospective Payment Assessment Commission: Report and Recommendations to the Congress. Washington. U.S. Government Printing Office, Mar. 1, 1992a.

Prospective Payment Assessment Commission: Medicare and The American Health Care System: Report to Congress. Washington. U.S. Government Printing Office, June 1992b.

Sheingold, S.H.: Alternatives for using multivariate regression to adjust prospective payment rates. Health Care Financing Review 11(3):31-41. HCFA Pub. No. 03295. Office of Research and Demonstrations, Health Care Financing Administration. Washington. U.S. Government Printing Office, Spring 1990.

Thorpe, K.E.: The Use of Regression Analysis to Determine Hospital Payment: The Case of Medicare's Indirect Teaching Adjustment. Inquiry 25:219-231, Summer 1988.

U.S. Congressional Budget Office: Setting Medicare's Indirect Teaching Adjustment for Hospitals, Working Paper, May 1989.

Welch, W.P.: Do All Teaching Hospitals Deserve an Add-on Payment Under the Prospective Payment System? Inquiry 24:221-232, Fall 1987.


Author: Stephen M. Phillips

Publication: Health Care Financing Review, Dec. 22, 1992
