
Does a single case mix index fit all hospitals? Empirical evidence from Washington state.


The case mix index (CMI) is one of the most commonly used indicators of the intensity of hospital resource utilization in health services research. While many empirical studies in healthcare utilize some sort of CMI, very few have examined whether a single, overall measure is appropriate, or whether multiple measures are necessary to accurately reflect patient illness severity. Using a panel of Washington state hospitals, we find that the appropriateness of a single CMI depends crucially on the characteristics of the hospitals being compared. A single measure is appropriate if the hospital is relatively large, but may not be appropriate for small or mid-sized hospitals. Larger hospitals not only treat a wide array of different patient groups, but also treat a wide variety of conditions within each group, so outliers have a smaller impact. The average intensity of resource utilization is therefore similar across patient groups, making the use of a single CMI appropriate. This is not usually the case for small or mid-sized hospitals, which may treat a disproportionate number of patients within a particular diagnosis group, or may not treat a variety of different illnesses within each group.


Healthcare is one of the most complex industries. Hospitals and other healthcare providers use personnel, capital and supplies to treat sick and injured patients, and to improve patients' health status (Folland et al. 1997). The wide array of consumers in the healthcare market, each with varying health needs and socioeconomic backgrounds, requires that hospitals and other healthcare providers treat a multitude of different conditions. This produces a variety of different products and services. Producing each of these outputs requires a different mix of inputs, depending on the type and severity of each patient's medical condition, and the expected health outcomes. Not only do healthcare providers produce a multitude of products, but each of these products may also be fundamentally different in nature.

For health services researchers, administrators, regulators and policy makers, identifying the number of different outputs and the quantity of each output that a hospital produces is of paramount concern. Without a reliable, comprehensive method of measuring the breadth and depth of hospital output, managers are unable to make efficient operating decisions. Similarly, policy makers cannot enact policies (which regulators subsequently enforce) that contribute to optimizing society's welfare. Economic analysis of hospital behavior often anticipates a significant level of output homogeneity.

In the US (as well as in an increasing number of other countries), hospitals and other healthcare providers measure outputs using Diagnosis Related Groups (DRGs) (Fetter et al. 1980; Forgione et al. 2004; Park & Shin 2004). There are several hundred DRGs, each of which identifies a small group of similar illnesses. Each DRG is also assigned a weight, which compares the resource consumption of that group of illnesses (ostensibly through the mix of input costs necessary to treat an individual whose condition falls within that DRG) relative to all other DRGs. A CMI is calculated by taking an average of the DRG weights, weighted by the number of patient discharges in each category (or some variation on the proportion of patients, such as average length of stay). The CMI provides a parsimonious method of comparing the mix of different outputs across hospitals. It is used extensively by researchers, administrators, regulators and policy makers (Folland et al. 1997).
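As a concrete illustration, the discharge-weighted average can be sketched in a few lines. The DRG labels, weights and discharge counts below are hypothetical stand-ins, not actual Medicare figures:

```python
# Minimal sketch of the standard CMI calculation: an average of DRG weights,
# weighted by patient discharges. All labels, weights and counts are hypothetical.
drg_weights = {"DRG-A": 2.05, "DRG-B": 0.74, "DRG-C": 0.97}
discharges  = {"DRG-A": 120,  "DRG-B": 300,  "DRG-C": 180}

total_discharges = sum(discharges.values())
cmi = sum(drg_weights[d] * n for d, n in discharges.items()) / total_discharges
print(round(cmi, 3))  # → 1.071
```

A payer-specific CMI, of the kind constructed by the Washington DOH below, would simply restrict the discharge counts to patients in a given payer group.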

New, severity-adjusted DRGs have only recently been introduced in the US. Our primary objective in this paper is to determine whether a single CMI, taken in the aggregate for a hospital, is an appropriate measure of overall illness severity, using as little a priori information as possible. If it is not, then we also consider some initial information about the number and types of CMIs that could be calculated. We also discuss why this issue is important to hospital administrators and policy makers.


In health services research, the CMI is generally used in one of four contexts. First, a single CMI may be used as a control variable for differences across hospitals' patient mixes. In these papers, the focus of the research is not on the CMI itself, but is instead on other policies and operating decisions that cannot be analyzed unless some adjustment for the mix of services is made. For example, studies have examined the impact that changes in government policies have on the prices providers charge to other patients (Zwanziger et al. 2000; Friesner & Rosenman 2004) and the quality of care or service intensity provided to patients (Hodgkin & McGuire 1994; Maniadakis et al. 1999; Gilman 2000; Forgione et al., 2004). Others have controlled for case mix when measuring cost or technical efficiency (Fleming 1991; Bradley & Kominski 1992; Koop et al. 1997; Ferraz-Nunes 2001; Li & Rosenman 2001) or when measuring a provider's market power (Simpson & Shin 1998). This discussion is not intended to present an exhaustive list of studies utilizing a single CMI. Instead, it is merely intended to highlight some of the areas of research that utilize the CMI as a control variable.

A second line of research assumes that a single CMI is appropriate, and looks for more accurate and precise ways to construct the index. Farley (1989), for example, documents the magnitude of the bias due to calculating indices from discrete frequency counts and proposes an analytical information theory index to address this bias. Other studies, including Evans & Walker (1972), Horn & Schumacher (1979), Farley & Hogan (1990) and Park & Shin (2004) have also used information theory-based indices to more accurately control for hospital differences in patient illness severity, or focused on more appropriate case mix adjustments for specific diseases (Polanczyk et al. 1998).

A third area of research assumes that a single CMI is not appropriate. Multiple CMIs are necessary to control for patient illness severity across providers. For example, Graham & Cowing (1997) use two separate CMIs to account for differences in both the range and complexity of hospital care in their study of hospital reserve margins. The first measure is defined in terms of the number of case mix procedures performed by the hospital, while the second measure reflects the number of hospital wards. Zuckerman et al. (1994) measure hospital efficiency with frontier cost functions. They use a number of variables to measure case mix, including expected mortality rates for Medicare beneficiaries, the proportion of Medicare cases at various states of disease, and expected in-hospital complication rates for Medicare beneficiaries as reflected by discharge diagnosis and procedure codes. Dor & Farley (1996) construct separate CMIs for Medicare, Medicaid, and privately-insured patients. They use these indices as control variables to determine the impact of lower prospective payment on hospital quality.

A final area assumes that the traditional (non-information theory based) method of calculating CMIs is appropriate, and attempts to determine the appropriate number of CMIs that are necessary to adequately control for patient illness severity. For example, one study ("Geographic Variation" 1995) found significant case mix differences across general, short-stay hospital services based on geographic variation in the incidence of disease or illness. Similarly, O'Dougherty & Cotterill (1992) examined the elimination of urban-rural differences in the Medicare prospective payment system (PPS) standard rates in 1983. They concluded that changes in the PPS system are needed to balance relative Medicare payment costs among groups of urban and rural hospitals, suggesting that there may be fundamental differences between rural and urban hospitals that should be reflected in a CMI. The implication of these studies is that if health services researchers do not utilize the appropriate number and type (in this case, urban vs. rural) of CMIs, then the results obtained from such studies will be spurious. Operating decisions and government policies based on this research will be inefficient and possibly even detrimental to hospitals, and by extension society as a whole.

A simple example illustrates why a single case mix measure could be detrimental. An important report to Congress (MedPAC 2003, p. 50) concluded that Medicare reimbursement favors DRGs with higher weights. Higher DRG weights produce higher reimbursements, hence higher margins (for a given cost structure) when compared to DRGs with lower weights. Since the DRG weights determine the average CMI, hospitals with the same aggregate CMI and similar cost structures could have different margins. Exhibit 1 illustrates this fact--it shows three hypothetical DRG weights with three hypothetical margins (expressed as payment-to-cost ratios, i.e., 0 < margin < 1 if payments are below cost, and margin ≥ 1 if payments are at or above cost) for three hospitals with different patient distributions over the DRGs. Each hospital has an average overall DRG weight, and hence a CMI of 1.10. If the margins are proportional to the DRG weights, as shown in Panel A, each hospital would have the same overall margin whether the hospital is reimbursed based on the average DRG or on separate DRGs (i.e., separate CMIs), despite having different patient mixes. Alternatively, if margins are not proportional to the DRG weights and favor higher weighted DRGs (Panel B), the hospital with more patients in the more highly weighted DRGs will have a higher overall margin, despite all three hospitals having the same case mix measure.
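The arithmetic behind this example can be sketched as follows. The DRG weights, patient shares and payment-to-cost ratios are hypothetical stand-ins chosen to reproduce the logic, not the actual figures from Exhibit 1:

```python
# Three hospitals with identical CMIs but different patient mixes. Under a
# Panel B scenario (payment-to-cost ratios rising with DRG weight), their
# overall margins diverge. All numbers are hypothetical.
drg_weights = [0.6, 1.1, 1.6]                 # hypothetical DRG weights
shares = {"A": [1/3, 1/3, 1/3],               # hypothetical patient shares
          "B": [0.50, 0.00, 0.50],
          "C": [0.25, 0.50, 0.25]}
drg_margins = [0.95, 1.00, 1.05]              # ratios favoring high-weight DRGs

cmis, overall_margin = {}, {}
for h, s in shares.items():
    cmis[h] = sum(w * p for w, p in zip(drg_weights, s))
    # Assume each patient's cost is proportional to the DRG weight, so the
    # hospital's expected cost per patient equals its CMI (in cost units).
    payment = sum(m * w * p for m, w, p in zip(drg_margins, drg_weights, s))
    overall_margin[h] = payment / cmis[h]

for h in shares:
    print(h, round(cmis[h], 2), round(overall_margin[h], 4))
```

Under these assumptions, hospital B (half of its patients in the highest-weight DRG) earns the highest overall margin even though all three hospitals report the same CMI of 1.10.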

Our simple example illustrates two important points. First, multiple CMIs will better reflect the operating margin of a hospital, especially when payment-to-cost ratios are not evenly distributed across DRGs. Second, if different types of hospitals (for example, rural vs. urban) treat fundamentally different groups of patient diagnoses, then reimbursement policies may favor hospitals that treat patients diagnosed in higher weight DRGs and hurt margins for hospitals treating patients diagnosed in lower weight DRGs. If rural hospitals treat low DRG weight patients, reimbursement policies may actually place these facilities in an adverse financial situation, thus potentially jeopardizing access to care for individuals living in their catchment areas.

Our study expands this last issue, by questioning the appropriateness of a single CMI to measure patient resource utilization intensity. Like the previous studies, we look for differences in CMIs across urban and rural hospitals. Our study also examines whether other factors, such as ownership status and hospital size, influence case mix differences.

As far back as 1988 there has been evidence that patient illness severity in for-profit hospitals is different from that in nonprofit hospitals (Office of Inspector General 1988). A later study suggests that this phenomenon has continued to exist (MedPAC 2003, p. 54). A third study of hospital conversions among large teaching hospitals in Philadelphia found that when hospitals converted to for-profit status, there was a significant decline in DRG weights, which necessarily changed their CMIs (Crawford et al. 2002).

In many areas of the US, rural hospitals tend to be much smaller than their urban counterparts, and are also more likely to be owned and operated by members of the communities they serve (Ricketts & Heaphy 2000; Avery 2002; Younis 2003). This implies that hospital characteristics beside location may also influence the appropriateness of a single CMI of patient illness. Bond (1999, Table 1), for example, provides a comparison of CMIs across several characteristics, including size, ownership and location. A second study ("Drop in Severity" 2005, Figure 7) shows how CMIs differ by payer mix. Smaller hospitals may be able to exploit case mix differences by transferring sicker patients (whose insurance plans do not reimburse generously) to larger hospitals on the grounds that they are not capable of treating such severe illnesses. Hospitals can also be partitioned into one of three general types of organizations: private nonprofit (community-based or religious-affiliated), for-profit or governmental (federal, state or municipal). Because each type of organization is responsible to a different group of constituents, they have different operating objectives, and thus make use of case mix differences across groups in different ways in order to meet those objectives.

Another contribution of our paper is that we provide some insight into the means by which an appropriate number of CMIs is derived. Our study is similar to Dor and Farley (1996) and Zuckerman et al. (1994) in that we allow hospitals to have a large number of potential CMIs (as many as seven), each of which is constructed based on insurance-related patient groups. If hospitals of different sizes, locations or ownership status require a different number of CMIs, then different groups of patients will necessarily fall into different illness categories as hospital characteristics change. This implies that different types of hospitals may treat patients in different ways, depending on the type of insurance they carry. By determining how the distribution of patient groups in each CMI changes as hospital characteristics change, we provide additional information about the scope of the treatment differences associated with use of CMIs for administrative and policy decisions.

In the next section we describe the data used in this study and discuss our research methodology. Next, we present and discuss our empirical results. We conclude the paper by discussing the implications of our findings for policy and present suggestions for future research.


The data for this study come from the Washington State Department of Health (DOH). Every year, all general hospitals in Washington state are required to report to the DOH data on patient-level discharges, admissions, DRG-related information, the type of insurance each patient carries, and charges levied per patient. The DOH uses this information to construct payer-specific CMIs for each of nine payer categories: self-pay, Medicare, Medicaid, Workmen's Compensation, HMO, healthcare service contractors (such as Blue Cross/Blue Shield), commercial insurance, other government insurance (including Indian Health) and charity care. If a hospital has nonacute care activities, such as psychiatric or swing bed activity, separate CMIs are constructed for these areas of production. The DOH also collects and disseminates data on hospitals' location (rural vs. urban, as defined by the DOH), ownership status (for-profit, private nonprofit or government) and size (as measured by DOH-designated peer groups). The data in our study apply solely to hospital-specific (acute care) activity for the years 1998-2002.

Several considerations guided the data and time frame chosen for our analysis. First, our initial data set contains a relatively even mix of small, mid-sized and large hospitals, as well as rural and urban hospitals. Second, several publicly administered health insurance plans adopted prospective payment-based reimbursement either during or immediately prior to the time frame of our analysis, including Medicaid and Workmen's Compensation (Ambulatory Surgery 2001; Friesner & Rosenman 2004). Because prospective payment reimbursement incentivizes hospitals to reduce costs (and because reimbursement is partially determined by a patient illness category adjustment), managers have an increased motivation to more closely monitor (and possibly manage) case mix to simultaneously lower costs and increase revenues. In the health economics literature, this phenomenon has become known as "cost-adjusting" (Dor & Farley 1996). Finally, our time frame begins in 1998, which corresponds to the first year the DOH released case mix-related information for all of the payer categories.

The complete panel contains 96 hospitals and 465 observations. Because relatively few hospitals reported charity care (only 48 observations over the five years), we ignored this classification. We also eliminated hospitals that did not report information, that reported unreliable or missing information, or that did not treat at least five patients in each of the first eight payer categories. The latter criterion was applied to ensure that the CMIs were based on a minimally acceptable number of hospital discharges. We kept the minimum number of discharges at five to ensure that our sample is not overly biased against small hospitals, which may not treat a substantial number of patients from all eight payer categories (nine, less charity care). For-profit hospitals were also eliminated because of the small number in this category. We eliminated all specialty hospitals (including purely rehabilitation and psychiatric hospitals) on the grounds that they do not provide the same set of services as general hospitals. After applying these criteria, our final sample is an unbalanced panel of 48 hospitals and 199 observations (see Exhibit 2).
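The screening rules above can be expressed as a short filter. The record fields and payer-group labels below are hypothetical illustrations, not the DOH's actual schema:

```python
# Hedged sketch of the sample-screening rules described above; all field
# names and payer-group labels are hypothetical, not the DOH's schema.
PAYER_GROUPS = ["self_pay", "medicare", "medicaid", "workers_comp",
                "hmo", "contractual", "commercial", "other_govt"]
MIN_DISCHARGES = 5

def keep_hospital(record):
    """Return True if a hospital-year record survives the screening criteria."""
    if record["ownership"] == "for_profit":      # too few for-profits to analyze
        return False
    if record["specialty"]:                      # rehab/psychiatric excluded
        return False
    counts = record["discharges_by_payer"]
    # Require at least five discharges in every one of the eight payer groups.
    return all(counts.get(g, 0) >= MIN_DISCHARGES for g in PAYER_GROUPS)

example = {"ownership": "nonprofit", "specialty": False,
           "discharges_by_payer": {g: 10 for g in PAYER_GROUPS}}
print(keep_hospital(example))  # → True
```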

Exhibit 3 compares the initial sample (all non-specialty, non-profit hospitals) to our final sample. As shown in Exhibit 3, the final sample contains a disproportionate number of urban, private-nonprofit, mid-sized and large hospitals, due primarily to the fact that we required all hospitals to treat at least five patients in each of the eight payer categories. Since smaller hospitals (which also tend to be rural and government-owned, such as county hospitals) are less likely to meet this criterion, this should not be surprising. If we define our scope of interest as all general hospitals in the US, we are likely to see a similar tendency within the entire population. Since we define our population as all general hospitals in the US that treat at least five patients in each of the eight categories, our final sample appears to be a reasonable representation of the defined population. An alternative would be to simply define our data set as the population of general hospitals in Washington state that treat at least five patients in each payer group. Since we perform a factor analysis of the data, rather than classical hypothesis tests, the distinction is not important statistically.

Exhibit 4 provides some basic information about the characteristics of the hospitals in our final sample. Our private nonprofit hospitals tend to be large or mid-sized, while the governmental hospitals tend to be small or mid-sized. Most of the smaller hospitals (which are primarily government-owned) are located in rural areas, while the larger (mostly private nonprofit) hospitals are located in urban areas. The cross-tabulations in Exhibit 4, Panels D, E and F present ownership status, hospital size and location by year. In all cases the chi-square tests of independence indicate no significant differences in hospital classifications over time at the p ≤ 0.05 level. As such, the use of an unbalanced panel is appropriate, since no significant changes in hospital location, ownership status or size occurred over our five-year time horizon.


Our primary objective is to determine whether a single CMI is an appropriate measure of aggregate illness severity for hospitals, using as little a priori information as possible. If it is not, then we also intend to provide some initial information about the number and types of CMIs that should be calculated.

Given the depth and scope of available data, our approach is one of exploratory factor analysis (Hair et al. 1998; Johnson & Wichern 2002). We chose this approach for several reasons. First, factor analysis is a commonly used technique in health services research to determine a reduced set of common factors underlying a large number of correlated variables (Zaslavsky et al. 2002; Cervero & Duncan 2003). We assume that the number of factors extracted from the eight payer-specific CMIs provides an indication about the "true" but unobservable number of CMIs underlying the patient population for the hospitals in our sample. A single underlying factor would indicate that all eight payer-specific CMIs are proxies for a single, overall CMI. If there are multiple factors, then a single CMI would not be an appropriate representation of the patient population for our sample hospitals, since the payer-specific CMIs would be serving as proxies for different patient sub-groups with statistically different illness attributes.

Second, an exploratory factor analysis places virtually no a priori assumptions on the relationship(s) between the variables being analyzed. Common factors or relationships are identified using correlations within the data itself. Issues of endogeneity or specification error that are common in more parametric, empirical approaches (such as regression analysis) are not of significant concern.

Finally, factor analysis does not generally rely on random sampling and statistical inference. As such, issues of non-normality or sampling error (i.e., whether the sample accurately or precisely matches the underlying population) impact the results only if one wishes to conduct additional statistical analyses, or if one wishes to apply the implications of factor analysis results beyond the scope of the data being analyzed.

Conducting a factor analysis entails several steps. First, one must organize the data for the analysis. Given that we have data on eight distinct CMIs, we focus our analysis on the number of factors underlying these groups. Data aggregation is a crucial issue. As shown in Exhibit 3, there are fewer small, rural and government hospitals in the sample than large, urban and private nonprofit hospitals. If one conducts the analysis using the final sample of hospitals, the results may thus be skewed in favor of the larger, urban, private nonprofit hospitals.

To check for this type of aggregation bias, we conduct several different analyses. First, we analyze the full sample. We then disaggregate our data based on ownership status, hospital size and geographic location (urban vs. rural) and conduct separate factor analyses for each of these sub-samples. Differences in the number of extracted factors (as well as in the payer specific groups that load onto each factor) imply that aggregation bias is a significant concern. In these cases, we give more weight to the disaggregated results.

Next we determine whether the data are appropriate for the analysis. Factor analysis is a data reduction technique that assumes the variables of interest are empirical proxies for one or more common processes. Under this assumption, the variables should be highly correlated, and these correlations can be used to help identify the underlying processes of interest. If the variables are not highly correlated, then there is little or no statistical relationship between them, and no common latent process relating these variables will be identified. Statisticians have developed several heuristic measures to determine whether the data are appropriate for factor analysis. We employ four of the most commonly used measures (Sharma 1996; Hair et al. 1998; Johnson & Wichern 2002). We emphasize that these measures are primarily heuristic. Consequently, the data analyst must apply judgment in relying upon these measures to make research decisions.

The first heuristic measure examines the size and significance of the correlation coefficients between each of the variables. Large and statistically significant correlations indicate factor analysis is appropriate for the data. Weak correlations indicate that factor analysis is not an appropriate technique. A second measure is the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. This is a statistic that is bounded between zero and one. In general, this measure must be greater than 0.5 (greater than 0.8 is preferred) for factor analysis to be appropriate for the data. A third measure is the Bartlett test of sphericity. This is a chi-square test intended to determine whether significant correlations jointly exist among the variables. Since this test follows the chi-square distribution, it takes values between zero and infinity, with larger values indicating rejection of the null hypothesis of no joint correlation among the variables. One drawback to the Bartlett test is that large sample sizes may inflate the test statistic, thereby increasing the likelihood of Type I error. Since our data set is relatively small (N = 199), this is not a significant concern.

A fourth heuristic measure is the size of the data set (number of observations) relative to the number of variables used in the analysis. As the number of variables increases, the number of correlation coefficients grows quadratically. A large number of variables therefore requires a disproportionately larger sample size to ensure that the correlation coefficients are estimated accurately and precisely. A common textbook rule of thumb is that factor analysis can be applied when the sample size is at least 50 and exceeds the number of variables by a margin of at least five to one (Hair et al. 1998).
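For concreteness, the KMO measure and the Bartlett test of sphericity can both be computed directly from a correlation matrix. The numpy sketch below runs on synthetic data driven by a single common severity factor, an assumption made purely for illustration (this is not the Washington sample):

```python
import numpy as np

def bartlett_sphericity(X):
    """Bartlett's chi-square statistic for H0: the correlation matrix is identity."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, df

def kmo(X):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy (0 to 1)."""
    R = np.corrcoef(X, rowvar=False)
    Rinv = np.linalg.inv(R)
    # Anti-image (partial) correlations come from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    A = -Rinv / d
    off = ~np.eye(R.shape[0], dtype=bool)
    r2, a2 = (R[off] ** 2).sum(), (A[off] ** 2).sum()
    return r2 / (r2 + a2)

# Synthetic data: eight "payer CMIs" driven by one common severity factor.
rng = np.random.default_rng(0)
common = rng.normal(size=(199, 1))
X = common + 0.5 * rng.normal(size=(199, 8))
stat, df = bartlett_sphericity(X)
print(round(kmo(X), 2), round(stat, 1), int(df))
```

Because the synthetic variables share one strong common factor, the KMO statistic comes out well above the 0.8 threshold and the Bartlett statistic is large, exactly the pattern these heuristics are meant to flag as "appropriate for factoring."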

All of our data sets, including the sub-samples based on location, ownership status and size, meet the sample size requirements (see Exhibits 5-9). In all cases the Bartlett test of sphericity indicates that our data are appropriate for factor analysis (Hair et al. 1998). For the large and mid-sized hospitals, the KMO measure is also quite high (approximately 0.83 and higher), indicating appropriateness for factor analysis. Thus we conclude that our data are generally appropriate for factor analysis.

Two statistics of potential concern are the KMO measures for the small and rural hospitals, which are both only slightly above the minimal acceptability threshold of 0.5. Thus, we also examined Pearson correlation matrices for the eight CMIs as a supplemental test (Hair et al. 1998). Exhibit 5, Panels A & B, presents these correlations. Of the 28 unique correlations in the small hospital sample (Panel A), 9 are statistically significant at the p ≤ 0.05 level, 10 are significant at the p ≤ 0.10 level, and 14 are marginally significant at the p ≤ 0.15 level. Similar correlations exist for rural hospitals (Panel B). Given that these two sub-samples largely overlap and differ by only four observations, this result is not surprising. Additionally, since most of these variables are highly correlated, we conclude that the data are appropriate for factor analysis.

Our final task is to choose the appropriate method of factoring the data. In our analysis, we used basic principal components analysis with Varimax rotation. Other factoring and rotational methods might give different results, depending on the parameters applied. However, we expect that our approach should yield substantially similar research conclusions (Sharma 1996). All "significant" factors were isolated using the eigenvalue-greater-than-one cutoff rule (Hair et al. 1998).
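The extraction procedure described above (principal components, the eigenvalue-greater-than-one rule, and varimax rotation) can be sketched with numpy. The synthetic data below, with two groups of variables each driven by its own factor, is an illustrative assumption, not the study's data:

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    """Varimax rotation of a loading matrix (standard Kaiser algorithm)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var_old * (1 + tol):   # variance criterion stopped improving
            break
        var_old = s.sum()
    return L @ R

def pca_factors(X):
    """Principal-components loadings, retaining eigenvalues > 1, varimax-rotated."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)           # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    k = int((vals > 1).sum())                # eigenvalue-greater-than-one rule
    loadings = vecs[:, :k] * np.sqrt(vals[:k])
    return varimax(loadings) if k > 1 else loadings

# Synthetic data: two groups of four variables, each driven by its own factor.
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(199, 1)), rng.normal(size=(199, 1))
X = np.hstack([f1 + 0.5 * rng.normal(size=(199, 4)),
               f2 + 0.5 * rng.normal(size=(199, 4))])
L = pca_factors(X)
print(L.shape)  # variables x retained factors
```

On this two-factor synthetic data, the rule retains exactly two components, and after rotation each variable loads predominantly on the factor that generated it, which is how we interpret the loading patterns in Exhibits 6-9.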


Exhibits 6-9 present the results of our factor analyses. Exhibit 6 presents the results of the factor analysis for our final sample. Our analysis extracted one significant factor and all eight payer categories load highly on that factor. This indicates on a prima facie basis that all of the payer group CMIs are representative proxy measures of a single underlying patient case mix factor, which we named the hospital's "aggregate illness severity." The Contractual and Commercial Insurance plan groups exhibit the highest factor loadings at more than 0.9, while the Workmen's Compensation and Other Government Insurance patients have the lowest factor weights at approximately 0.6 to 0.7. The overall fit of the model is also quite high, with the single factor explaining 75% of the variation in the eight payer group CMI variables.

Given that there are significant differences across hospitals based on ownership status and size, it may be that patterns in CMIs (as well as the underlying factors) differ significantly across these two sub-groups. If so, the results of the previous factor analysis may be masking these effects through aggregation. Exhibit 7 presents a series of factor analyses based on ownership status. While there are some slight differences in the magnitudes of the communalities and factor loadings, the patterns mimic the results for the aggregated data quite closely. Other Government Insurance and Workmen's Compensation patients have the lowest factor weights: 0.67 and 0.69 for private nonprofit hospitals; and in reverse order 0.61 and 0.80 for government hospitals. The commercial and contractual groups exhibit the highest factor loadings. The overall fit of the model is quite high, with the one extracted factor explaining approximately 70.3% and 78.7% of the variation in the eight indicator variables for private nonprofit and government hospitals, respectively. Thus, by profit status, we find that a single CMI is sufficient for our sample hospitals.

Exhibit 8 presents results for partitioning the data based on hospital size. The large hospital results closely mimic those of the final sample--there is a single factor that explains approximately 72% of the variance in the CMI indicator variables. As in our previous analyses, Workmen's Compensation and Other Government Insurance exhibit the lowest factor loadings, while the Commercial Insurance and Contractual Plan loadings exhibit the highest weights.

For the mid-sized hospitals, our analysis extracted two underlying factors, with those factors jointly explaining 61.8% of the variance in the CMI data. Of particular interest are the specific factor loadings. The Commercial Insurance, Contractual Plan and Workmen's Compensation CMIs load highest on the first factor. The Medicare and Other Government Insurance CMIs load relatively evenly on both factors, while the Self-Pay, Medicaid and HMO CMIs load highly on the second factor.

For small hospitals, our analysis extracted three underlying factors, explaining 64.8% of the variance in the CMI data. For these hospitals, the Contractual Plans, Medicaid, and Commercial Insurance CMIs load highly on the first factor. The Self-Pay and HMO CMIs load highest on the second factor, while the Medicare, Other Government Insurance, and Workmen's Compensation CMIs load highest on the third factor.

Exhibit 9 presents the results of our factor analyses with the data partitioned by hospital location. As would be expected considering the high correlation between rural location and small hospital size, the results for the rural hospitals are virtually identical to those of the small hospitals. The results for urban hospitals are also similar to those of the large hospitals. The role that the Medicaid CMIs play in our factor analyses is of particular interest. For small and rural hospitals, Medicaid loads highly on Factor 1 along with the Commercial Insurance and Contractual Plan CMIs. For mid-sized hospitals, however, the Medicaid CMIs load highly on a different factor (Factor 2) along with Self-Pay and HMO payer CMIs. This result implies that, compared to large and urban hospitals, the patients of small, rural hospitals either (1) are treated differently, (2) have differences in their illness attributes, or (3) some combination of the two. Thus, combining the findings of Exhibits 8 and 9 provides a perspective on health policies that advocate differences in CMI-based government reimbursements for urban and rural hospitals. Our results suggest that this approach to reimbursement may lead to inefficient resource allocation, as it overlooks mid-sized hospitals that are primarily urban, yet have different patient illness characteristics than their larger, urban counterparts. Reimbursing these hospitals in a manner similar to larger hospitals may both influence their operating decisions and reduce their financial viability.


For a variety of reasons, administrators, regulators, policy analysts and researchers often use a single, aggregate CMI to measure a hospital's patient illness severity. Our results indicate that, while using a single CMI may be appropriate for large hospitals, it may not be appropriate for mid-sized and small hospitals, particularly in rural locations. To some extent this finding is not surprising: larger hospitals treat many more patients from each insurance category, which may essentially "even out" differences in case mix across payer groups. Small and mid-sized hospitals may not treat a large number of patients in each payer group, and thus cannot "even out" these differences across groups. The fact that mid-sized hospitals have two underlying factors, compared to three for small hospitals, also implies that this averaging process strengthens with hospital size.
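The outlier intuition behind this averaging argument can be made concrete with a minimal sketch (the DRG weights and discharge counts below are hypothetical, not drawn from our sample): a single resource-intensive case moves a small hospital's aggregate CMI far more than a large hospital's.

```python
# Hypothetical illustration: one resource-intensive case (DRG weight 5.0)
# shifts the aggregate CMI of a small hospital far more than a large one.
def cmi(weights):
    """Aggregate CMI: mean DRG weight across all discharges."""
    return sum(weights) / len(weights)

small = [1.0] * 12    # 12 routine discharges, weight 1.0 each
large = [1.0] * 1200  # 1,200 routine discharges

outlier = 5.0
print(round(cmi(small + [outlier]), 3))  # 1.308
print(round(cmi(large + [outlier]), 3))  # 1.003
```

With identical underlying patient mixes, the outlier raises the small hospital's CMI by roughly 31 percent but the large hospital's by only 0.3 percent, which is the averaging effect described above.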

This finding also presents an additional policy implication. Much of the literature on the appropriateness of a single CMI (O'Dougherty & Cotterill 1992) has looked at differences across rural and urban hospitals. Our results suggest that this distinction is moderated by hospital size. We find that mid-sized hospitals, most of which are located in urban areas, also have multiple, distinct case mix patterns across patient groups. Policies designed to "even out" differences in case mix (and reimbursements that are based on case mix adjustments) may disproportionately impact a substantial group of mid-sized hospitals.

Our findings also provide some less intuitive, but intriguing results. First, we find no evidence that case mix differs systematically by governmental or private nonprofit ownership status: governmental and private nonprofit hospitals both have a single underlying factor with similar loadings explaining case mix. Second, Medicaid patient case mix in particular is associated with different underlying illness factors as facility size changes. This implies that Medicaid patients play an important intervening role in the process of averaging illness severity across patient groups. Thus, changes in Medicaid policy may be most effective when hospital size and location are considered in addressing issues, such as prospective payment reimbursement, that depend on measures of patient case mix.


While our findings provide information useful for evidence-based policy making with respect to CMI-based reimbursements, our analysis is subject to certain limitations. Our factor analysis is exploratory and examines only associations within the data. Thus, future work is needed to investigate the causal mechanisms underlying these associations. Because our sample contains a disproportionate number of urban, private nonprofit, mid-sized and large hospitals, our findings are applicable only to our sample population. Future research that examines these attributes within other, possibly broader, samples would provide an important replication and extension of our findings. In addition, we partition CMIs by type of primary payer, hospital size, location and ownership status. Studies that partition CMIs in different ways, perhaps by using different patient categories, different healthcare providers, or by including for-profit hospitals and charity care patients in the analysis, would provide additional useful information for policy makers and administrators about patterns in patient illness and the mix of outputs in healthcare.


Anonymous. (1995). Geographic variation in Medicare short-stay hospital services. Health Care Financing Review, 16(2): 50.

Anonymous. (2001). Ambulatory surgery center project: May 2001 project summary. Department of Labor and Industries, Health Services Analysis Report, Accessed Dec. 24, 2005, at:

Anonymous. (2005). Drop in severity of illness further strains hospital finances. Hospital Watch, 16(1): 1-6.

Avery, S. (2002). Rural health care at risk: California small & rural hospitals. Rural Health Care Center, California Healthcare Association, Accessed Dec. 24, 2005, at:

Bond, C.A., Raehl, C.L., & Pitterle, M.E. (1999). Staffing and the cost of clinical and hospital pharmacy services in United States hospitals. Pharmacotherapy, 19(6): 767-81.

Bradley, T., & Kominski, G. (1992). Contributions of case mix and intensity change to hospital cost increases. Health Care Financing Review, 14(2): 151-163.

Cervero, R., & Duncan, M. (2003). Walking, bicycling, and urban landscapes: Evidence from the San Francisco Bay area. American Journal of Public Health, 93(9): 1478-1483.

Crawford, A.G., Goldfarb, N., Mays, R., Moyer, K., Jones, J., & Nash, D.B. (2002). Hospital organizational change and financial status: Costs and outcomes of care in Philadelphia. American Journal of Medical Quality, 17(6): 236-241.

Dor, A., & Farley, D. (1996). Payment source and the cost of hospital care: Evidence from a multi-product cost function with multiple payers. Journal of Health Economics, 15: 1-21.

Evans, R., & Walker, H. (1972). Information theory and the analysis of hospital cost structure. Canadian Journal of Economics, 5(3): 398-418.

Farley, D. (1989). Measuring casemix specialization and the concentration of diagnoses in hospitals using information theory. Journal of Health Economics, 8(2): 185-207.

Farley, D., & Hogan, C. (1990). Case-mix specialization in the market for hospital services. Health Services Research, 25(5): 757-783.

Ferraz-Nunes, J. (2001). Do we need DRGs to improve efficiency in health care? Research in Healthcare Financial Management, 6(1): 37-49.

Fetter, R., Shin, Y., & Freedman, J. (1980). Casemix definition by diagnosis related groups. Medical Care, 18(Supp): 2.

Fleming, S. (1991). The relationship between quality and cost: Pure and simple? Inquiry, 28: 29-38.

Forgione, D., Vermeer, T., Surysekar, J., Wrieden, J., & Plante, C. (2004). The impact of DRG-based payment systems on the quality of health care in OECD countries. Journal of Health Care Finance, 31(1): 41-54.

Folland, S., Goodman, A., & Stano, M. (1997). The economics of health and health care (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Friesner, D., & Rosenman, R. (2004). Inpatient-outpatient cost shifting in Washington hospitals. Health Care Management Science, 7(1): 17-26.

Gilman, B. (2000). Hospital response to DRG refinements: the impact of multiple reimbursement incentives on inpatient length of stay. Health Economics, 9(4): 277-294.

Graham, G., & Cowing, T. (1997). Hospital reserve margins: Structural determinants and policy implications using cross-sectional data. Southern Economic Journal, 63(3): 692-709.

Hair, J., Anderson, R., Tatham, R., & Black, W. (1998). Multivariate data analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Hodgkin, D., & McGuire, T. (1994). Payment levels and hospital response to prospective payment. Journal of Health Economics, 13: 1-29.

Horn, S., & Schumacher, D. (1979). An analysis of case mix complexity using information theory and diagnostic related grouping. Medical Care, 17: 382-389.

Johnson, R., & Wichern, D. (2002). Applied multivariate statistical analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Koop, G., Osiewalski, J., & Steel, M. (1997). Bayesian efficiency analysis through individual effects: Hospital cost frontiers. Journal of Econometrics, 76: 77-105.

Li, T., & Rosenman, R. (2001). Estimating hospital costs with a generalized Leontief function. Health Economics, 10: 523-538.

Maniadakis, N., Hollingsworth, B., & Thanassoulis, E. (1999). The impact of the internal market on hospital efficiency, productivity and service quality. Health Care Management Science, 2: 75-85.

MedPAC. (2003, Jun.). Report to Congress: Variations and innovations to Medicare. Medicare Payment and Advisory Commission, Accessed Dec. 24, 2005, at June03_Entire_Report.pdf.

O'Dougherty, S., & Cotterill, P. (1992). Medicare prospective payment without separate urban and rural rates. Health Care Financing Review, 14(2): 31-47.

Office of the Inspector General. (1988, Aug.). National drug validation study: Unnecessary admissions to hospitals. U.S. Department of Health and Human Services Report OAI-09-8800880, Accessed Dec. 24, 2005, at:

Park, H., & Shin, Y. (2004). Measuring case-mix complexity of tertiary care hospitals using DRGs. Health Care Management Science, 7(1): 51-61.

Polanczyk, C., Rohde, L., Philbin, E., & Di Salvo, T. (1998). A new casemix adjustment index for hospital mortality among patients with congestive heart failure. Medical Care, 36(10): 1489-1499.

Ricketts, T.C., & Heaphy, P.E. (2002). Hospitals in rural America. Western Journal of Medicine, 173(6): 418-422.

Sharma, S. (1996). Applied multivariate techniques. New York, NY: John Wiley & Sons.

Simpson, J., & Shin, R. (1998). Do nonprofit hospitals exercise market power? International Journal of the Economics of Business, 5(2): 141-157.

Younis, M. (2003). A comparison study of urban and small rural hospitals' financial and economic performance. Online Journal of Rural Nursing and Healthcare, 3(1), Accessed Dec. 24, 2005, at:

Zaslavsky, A., Shaul, J., Zaborski, L., Cioffi, M., & Cleary, P. (2002). Combining health plan performance indicators into simpler composite measures. Health Care Financing Review, 23(4): 101-115.

Zuckerman, S., Hadley, J., & Iezzoni, L. (1994). Measuring hospital efficiency with frontier cost functions. Journal of Health Economics, 13(3): 255-280.

Zwanziger, J., Melnick, G., & Bamezai, A. (2000). Can cost shifting continue in a price competitive environment? Health Economics, 9(3): 211-225.

Daniel L. Friesner

Gonzaga University

Robert Rosenman

Washington State University

Matthew Q. McPherson

Gonzaga University

Address for correspondence: Daniel L. Friesner, School of Business Administration, Gonzaga University, 502 East Boone Avenue, Spokane, WA 99258-0001,

PANEL A: Margins Proportional to DRG Weights

                  DRG     Margin   Hospital  Hospital  Hospital
                  Weight  By DRG   1         2         3

DRG-A             1.0     1.00     30        50        0
DRG-B             1.1     1.05     40        0         100
DRG-C             1.2     1.10     30        50        0
Average CMI                        1.10      1.10      1.10
Average Margin                     1.05      1.05      1.05
Wt. Ave. Margin                    1.05      1.05      1.05

PANEL B: Margins Not Proportional to DRG Weights
and Favor Higher DRG Weights

                  DRG     Margin   Hospital  Hospital  Hospital
                  Weight  By DRG   1         2         3

DRG-A             1.0     1.00     30        50        0
DRG-B             1.1     1.05     40        0         100
DRG-C             1.2     1.30     30        50        0
Ave. CMI                           1.10      1.10      1.10
Wt. Ave. Margin                    1.11      1.15      1.05

* Margin is expressed as payment-to-cost ratio:
0 [less than or equal to] margin < 1 if payments
are below cost, and margin [greater than or
equal to] 1 if payments are at or above cost.
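The averages in these panels can be reproduced directly from the raw counts: the average CMI is the discharge-weighted mean DRG weight, and the weighted-average margin is the discharge-weighted mean of the per-DRG margins. A minimal sketch for Hospital 1 of Panel B:

```python
# Reproduce Hospital 1 of Panel B above.
# Each tuple: (DRG weight, margin by DRG, Hospital 1 discharges).
drgs = [(1.0, 1.00, 30), (1.1, 1.05, 40), (1.2, 1.30, 30)]

total = sum(n for _, _, n in drgs)
avg_cmi = sum(w * n for w, _, n in drgs) / total        # discharge-weighted CMI
wt_avg_margin = sum(m * n for _, m, n in drgs) / total  # discharge-weighted margin

print(round(avg_cmi, 2), round(wt_avg_margin, 2))  # 1.1 1.11
```

This matches the exhibit's 1.10 average CMI and 1.11 weighted-average margin, illustrating how two hospitals with identical CMIs can earn different margins when margins are not proportional to DRG weights.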

(Initial & Final Sample)

Initial Sample:
Number of Observations 465 (from 96 hospitals)

Reduction in Sample:
All specialty hospital obs 43 (from 10 hospitals)
All obs with reported discharges < 5 108 (from 44 hospitals)
All obs with missing discharges 108 (from 44 hospitals)
All for-profit hospitals 2 (from 1 hospital)
All hospitals without size
 or ownership status info 5 (from 1 hospital)
Total Sample Reduction: 266
Final Sample: 199 (from 48 * hospitals)

* Note: This is an unbalanced panel. Data from hospitals may
be deleted in a particular year and not in others. In addition,
any single observation may have more than one reason for being
deleted from the sample. As a result, the order in which the
sample is reduced is important for the number of deletions in
each category, but has no effect on the size of the Final Sample.


PANEL A: By Hospital Size

                          Small  Mid-Sized  Large  Special  Total

Deleted from Sample       125    67         31     43       266
Included in Final Sample  47     84         68     0        199
Total                     172    151        99     43       465

PANEL B: By Ownership Status

                          For-Profit  Private  Governmental  Total

Deleted from Sample       32          100      134           266
Included in Final Sample  0           116      83            199
Total                     32          216      217           465

PANEL C: By Location

                          Rural  Urban  Total

Deleted from Sample       124    142    266
Included in Final Sample  43     156    199
Total                     167    298    465

[chi square] = 30.93 **, df = 1

PANEL D: By Year

                          1998  1999  2000  2001  2002  Total

Deleted from Sample       53    51    54    53    55    266
Included in Final Sample  38    41    38    42    40    199
Total                     91    92    92    95    95    465

[chi square] = 0.337, df = 4

** Indicates statistical significance at the p [less
than or equal to] 0.05 level.
The [chi square] statistics are included only for Panels C
and D. The [chi square] statistics for Panels A and B
are biased because we deleted all specialty and for-profit
hospitals from our final sample.

Size is as measured by Washington State Department of Health
peer groups.
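The chi-square statistic reported for Panel C can be reproduced from the raw counts with the standard Pearson formula, summing (observed - expected)^2 / expected over the four cells:

```python
# Pearson chi-square test of independence for Panel C
# (location vs. inclusion in the final sample).
table = [[124, 142],   # deleted from sample: rural, urban
         [43, 156]]    # included in final sample: rural, urban

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)
print(round(chi2, 2))  # 30.93
```

With df = 1, this reproduces the reported value of 30.93, significant at p <= 0.05.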


PANEL A: By Hospital Size and Ownership Status

 Non-Profit Governmental Total

Small 11 36 47
Mid-Sized 52 32 84
Large 53 15 68
Total 116 83 199

[chi square] 34.78 **, df = 2

PANEL B: By Ownership Status and Location

 Rural Urban Total

Private Nonprofit 7 109 116
Governmental 36 47 83
Total 43 156 199

[chi square] = 39.82 **, df = 1

PANEL C: By Size and Location

 Rural Urban Total

Small 43 4 47
Mid-Sized 0 84 84
Large 0 68 68
Total 43 156 199

[chi square] = 177.40 **, df = 2

PANEL D: By Ownership Status and Year

 1998 1999 2000 2001 2002 Total

 Nonprofit 21 24 22 26 23 116
Governmental 17 17 16 16 17 83
Total 38 41 38 42 40 199

[chi square] = 0.383, df = 4

PANEL E: By Size and Year

 1998 1999 2000 2001 2002 Total

Small 10 11 7 10 9 47
Mid-Sized 15 16 17 19 17 84
Large 13 14 14 13 14 68
Total 38 41 38 42 40 199

[chi square] = 1.283, df = 8

PANEL F: By Location and Year

 1998 1999 2000 2001 2002 Total

Rural 9 10 7 9 8 43
Urban 29 31 31 33 32 156
Total 38 41 38 42 40 199

[chi square] = 0.574, df = 4

** Significant at p [less than or equal to] 0.05.

Size is as measured by Washington State
Department of Health peer groups.


PANEL A: Small Hospitals

                         Self-     Other Gov't
Payer Group CMI          Pay       Insured      Medicare   Medicaid

Self-Pay                  1
Other Gov't Insured       0.048     1
                         (0.375)
Medicare                  0.180     0.148        1
                         (0.113)   (0.16)
Medicaid                  0.083    -0.022       -0.107      1
                         (0.291)   (0.443)     (0.237)
Workmen's Comp.          -0.307     0.075       0.388     -0.085
                         (0.018)   (0.309)     (0.004)    (0.285)
HMO                       0.374     0.053       0.012      0.128
                         (0.005)   (0.362)     (0.469)    (0.195)
Contractual Plan          0.318     0.169       0.176      0.561
                         (0.015)   (0.128)     (0.118)    (0.000)
Commercial Ins. Plan      0.177     0.223       0.134      0.307
                         (0.118)   (0.066)     (0.185)    (0.018)

                         Workmen's             Contractual  Commercial
Payer Group CMI          Comp.      HMO        Plan         Ins. Plan

Workmen's Comp.           1
HMO                      -0.024     1
                         (0.437)
Contractual Plan         -0.062     0.453       1
                         (0.339)   (0.001)
Commercial Ins. Plan     -0.002     0.402       0.667        1
                         (0.494)   (0.003)     (0.000)

PANEL B: Rural Hospitals

                         Self-     Other Gov't
Payer Group CMI          Pay       Insured      Medicare   Medicaid

Self-Pay                  1
Other Gov't Insured       0.058     1
                         (0.356)
Medicare                  0.175     0.183        1
                         (0.131)   (0.120)
Medicaid                  0.137     0.169       -0.096      1
                         (0.190)   (0.139)     (0.270)
Workmen's Comp.          -0.405     0.165       0.395     -0.02
                         (0.004)   (0.145)     (0.004)    (0.449)
HMO                       0.367     0.171       0.005      0.157
                         (0.008)   (0.137)     (0.488)    (0.158)
Contractual Plan          0.334     0.292       0.178      0.580
                         (0.014)   (0.029)     (0.126)    (0.000)
Commercial Ins. Plan      0.176     0.363       0.133      0.331
                         (0.130)   (0.008)     (0.198)    (0.015)

                         Workmen's             Contractual  Commercial
Payer Group CMI          Comp.      HMO        Plan         Ins. Plan

Workmen's Comp.           1
HMO                      -0.083     1
                         (0.299)
Contractual Plan         -0.057     0.460       1
                         (0.359)   (0.001)
Commercial Ins. Plan     -0.019     0.398       0.669        1
                         (0.451)   (0.004)     (0.000)

* Note: All significance levels are for one-tailed tests.
CMI is case mix index.

Size is as measured by Washington State Department of
Health peer groups.


 Factor 1
Payer Group CMI Mean SD Loadings Communality

Contractual Plan 0.8833 0.3112 0.968 0.936
Commercial Ins. Plan 0.9041 0.3472 0.957 0.917
Medicaid 0.6814 0.241 0.915 0.837
Self-Pay 0.8776 0.2058 0.882 0.778
Medicare 1.2227 0.2507 0.873 0.762
HMO 0.8942 0.3302 0.861 0.742
Other Gov't Ins. Plan 0.9392 0.3785 0.749 0.561
Workmen's Comp. 1.3129 0.2826 0.664 0.441

CMI is case mix index. Eigenvalue = 5.975

Explained Var. = 74.7%; Cumulative Explained Var. = 74.7%

KMO = 0.924, Bartlett's [chi square] = 1,761.903 (sig. at
p [less than or equal to] 0.05), df = 28, N = 199.
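The explained-variance figure in this exhibit follows directly from the eigenvalue: with standardized variables, a factor's share of total variance is its eigenvalue divided by the number of variables (here, eight payer-group CMIs):

```python
# Share of variance explained by the single retained factor:
# eigenvalue / number of variables (8 payer-group CMIs).
eigenvalue = 5.975
n_vars = 8
print(f"{eigenvalue / n_vars:.1%}")  # 74.7%
```

The same arithmetic reproduces the panels that follow, e.g. the mid-sized eigenvalues of 2.83 and 2.11 yield 35.4% and 26.4%.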


PANEL A: Private Non-Profit Hospitals

Payer Group CMI Mean SD Factor 1 Communality

Contractual Plan 0.9253 0.2393 0.953 0.909
Commercial Ins. Plan 0.9500 0.2841 0.937 0.879
Medicaid 0.7100 0.2174 0.890 0.793
Medicare 1.2925 0.2353 0.883 0.780
Self-Pay 0.9069 0.2058 0.841 0.708
HMO 0.8948 0.2445 0.793 0.628
Workmen's Comp. 1.3672 0.2419 0.692 0.479
Other Gov't Ins. 1.0042 0.3515 0.669 0.448

CMI is case mix index. Eigenvalue = 5.624

Explained Var. = 70.3%; Cumulative Explained Var. = 70.3%

KMO = 0.890; Bartlett's [chi square] = 883.83 (sig. at
p [less than or equal to] 0.05), df = 28, N = 116.

PANEL B: Governmental Hospitals

Payer Group CMI Mean SD Factor 1 Communality

Contractual Plan 0.8247 0.3841 0.979 0.958
Commercial Ins. Plan 0.8399 0.4133 0.969 0.940
Medicaid 0.6414 0.2668 0.938 0.880
HMO 0.8933 0.4235 0.936 0.876
Self-Pay 0.8366 0.2317 0.907 0.822
Medicare 1.1253 0.2401 0.891 0.794
Other Gov't Ins. 0.8484 0.3978 0.803 0.645
Workmen's Comp. 1.237 0.3175 0.614 0.377

CMI is case mix index. Eigenvalue = 6.293

Explained Var. = 78.7%; Cumulative Explained Var. = 78.7%

KMO = 0.926; Bartlett's [chi square] = 883.161 (sig. at
p [less than or equal to] 0.05), df = 28, N = 83.


PANEL A: Large Hospitals

 Factor 1
Payer Group CMI Mean SD Loadings Ex. Com.

Contractual Plan 1.163 0.371 0.973 0.946
Commercial Ins. Plan 1.183 0.450 0.968 0.937
Self-Pay 1.023 0.236 0.902 0.813
Medicaid 0.879 0.291 0.897 0.804
HMO 1.112 0.425 0.872 0.760
Medicare 1.456 0.250 0.780 0.609
Other Gov't Ins. 1.174 0.472 0.722 0.522
Workmen's Comp. 1.504 0.224 0.617 0.380

CMI is case mix index. Eigenvalue = 5.772

Explained Var. = 72.1%; Cumulative Explained Var. = 72.1%

KMO = 0.870, Bartlett's [chi square] = 550.26 (sig. at p
[less than or equal to] 0.05), df = 28, N = 68.

PANEL B: Mid-Sized Hospitals

                                     Factor 1  Factor 2  Ex.
Payer Group CMI        Mean   SD     Loadings  Loadings  Com.

Contractual Plan       0.791  0.100   0.869     0.238    0.812
Commercial Ins. Plan   0.807  0.126   0.854     0.273    0.803
Workmen's Comp.        1.285  0.256   0.694    -0.055    0.485
Medicare               1.162  0.126   0.618     0.618    0.646
Other Gov't Ins.       0.856  0.256   0.459     0.316    0.311
Self-Pay               0.853  0.124   0.367     0.602    0.497
Medicaid               0.618  0.115   0.345     0.738    0.671
HMO                    0.854  0.178  -0.106     0.839    0.715

Size is as measured by Washington State Department
of Health peer groups.

CMI is case mix index. Eigenvalues = 2.83, 2.11

Explained Var. = 35.4%, 26.4%; Cumulative Explained
Var. = 35.4%, 61.8%

KMO = 0.828; Bartlett's [chi square] = 262.547 (sig.
at p [less than or equal to] 0.05), df = 28, N = 84.

PANEL C: Small Hospitals

                                     Factor 1  Rotated   Rotated
                                     Loadings  Factor 2  Factor 3  Ex.
Payer Group CMI        Mean   SD               Loadings  Loadings  Com.

Contractual Plan       0.642  0.085   0.847     0.311     0.172    0.843
Medicaid               0.507  0.075   0.809    -0.119    -0.231    0.721
Commercial Ins. Plan   0.672  0.084   0.736     0.236     0.261    0.665
HMO                    0.649  0.118   0.390     0.579     0.125    0.502
Other Gov't Ins.       0.746  0.200   0.139     0.094     0.448    0.228
Medicare               0.992  0.082  -0.061     0.140     0.822    0.699
Self-Pay               0.711  0.110   0.049     0.888     0.036    0.791
Workmen's Comp.        1.087  0.209  -0.005    -0.490     0.702    0.733

Size is as measured by Washington State Department
of Health peer groups.

CMI is case mix index. Eigenvalues = 2.089,
1.558, 1.537

Explained Var. = 26.1%,19.5%,19.2%; Cumulative Explained
Var. = 26.1%, 45.6%, 64.8%

KMO = 0.551; Bartlett's [chi square] = 87.65 (sig. at p
[less than or equal to] 0.05), df = 28; N = 47.
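The extracted communalities in these panels are, up to rounding, the sum of squared loadings across the retained factors. For example, for the Contractual Plan CMI in the mid-sized panel:

```python
# Communality = sum of squared loadings across the retained factors.
# Contractual Plan CMI, mid-sized panel: loadings 0.869 and 0.238.
loadings = [0.869, 0.238]
communality = sum(l ** 2 for l in loadings)
print(round(communality, 3))  # 0.812
```

This matches the tabulated value of 0.812 for that row.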


PANEL A: Urban Hospitals
 Factor 1
Payer Group CMI Mean SD Loadings Ex. Com.

Contractual Plan 0.949 0.318 0.966 0.933
Commercial Ins. Plan 0.968 0.365 0.961 0.933
Medicaid 0.727 0.251 0.912 0.831
Self-Pay 0.924 0.200 0.881 0.776
Medicare 1.286 0.243 0.849 0.721
HMO 0.962 0.337 0.841 0.707
Other Gov't Ins. 0.996 0.401 0.727 0.529
Workmen's Comp. 1.377 0.267 0.617 0.381

CMI is case mix index. Eigenvalue = 5.801

Explained Var. = 72.5%; Cumulative Explained Var.= 72.5%

KMO = 0.910, Bartlett's [chi square] = 1310.63 (sig. at
p [less than or equal to] 0.05), df = 28; N = 156.

PANEL B: Rural Hospitals

                                     Factor 1  Rotated   Rotated
                                     Loadings  Factor 2  Factor 3  Ex.
Payer Group CMI        Mean   SD               Loadings  Loadings  Com.

Contractual Plan       0.642  0.085   0.847     0.311     0.172    0.843
Commercial Ins. Plan   0.644  0.088   0.834     0.322     0.088    0.807
Medicaid               0.515  0.073   0.764    -0.068    -0.259    0.656
Other Gov't Ins.       0.732  0.151   0.507    -0.053     0.388    0.410
HMO                    0.647  0.122   0.432     0.541     0.065    0.484
Self-Pay               0.707  0.114   0.086     0.894     0.042    0.808
Workmen's Comp.        1.078  0.200   0.084    -0.580     0.657    0.775
Medicare               0.991  0.085  -0.014     0.174     0.887    0.817

CMI is case mix index. Eigenvalues = 2.322, 1.613, 1.473
Explained Var. = 29.0%, 20.2%, 18.4%; Cumulative

Explained Var. = 29.0%, 49.2%, 67.6%

KMO = 0.556, Bartlett's [chi square] = 90.51 (sig. at
p [less than or equal to] 0.05), df = 28, N = 43.
COPYRIGHT 2007 isRHFM Ltd. Towson, MD. All rights reserved.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Author: Friesner, Daniel L.; Rosenman, Robert; McPherson, Matthew Q.
Publication: Research in Healthcare Financial Management
Article Type: Report
Date: Jan 1, 2007

