
A national study of the efficiency of hospitals in urban markets.

Significant increases in health care costs in the past decade have stimulated demands from employers, insurance companies, consumer groups, and others that such costs be brought under control. These demands have sparked continuing debate about the roles individual hospital characteristics might play in producing meaningful hospital efficiencies. However, despite concerted efforts to identify relative efficiencies across individual hospital characteristics, considerable uncertainty remains over the effects of even the most commonly examined factors (e.g., for the debate over the role of for-profit versus not-for-profit ownership, see Arrington and Haddock 1990; Becker and Sloan 1985; Ginzberg 1988; Gray 1986; Herzlinger and Krasker 1987; Rundall and Lambert 1984; Schlesinger, Marmor, and Smithey 1987; Valdmanis 1990; and Wheeler, Zuckerman, and Aderholdt 1982; for discussions on the role of system membership, see Ermann and Gabel 1984; Shortell 1988; Watt, Renn, Hahn, et al. 1986; and Zuckerman 1979).

Uncertainties over the role of individual hospital characteristics in determining variations in costs and efficiency, however, are not unique to the recent decade but span many years of research. In the 1960s and early 1970s, for example, the role of hospital size was hotly debated only to remain unresolved, primarily because of an inability to control effectively for variations in hospital outputs (Carr and Feldstein 1967; Feldstein 1967; Klarman 1970; Mann and Yett 1968; and Zaretsky 1977). The importance of this problem spurred much research and development effort, culminating in the creation of various case-mix indexes (Hornbrook 1982a, 1982b; Jeffers and Siebert 1974), the most prominent of which are the diagnosis-related groups (DRGs) measures used by Medicare. But even with improvements in methods and requisite data bases for measuring case mix, the problem of controlling for the diversity of hospital outputs continues to complicate the study of hospital costs and efficiency.

The purpose of this study is to examine the effects of selected hospital characteristics on variations in technical efficiency, while accounting for multiple hospital outputs and inputs across all urban, acute care general hospitals in the United States. Specifically, four key hospital characteristics of obvious importance, both conceptually and in terms of public policy concerns over the control of hospital costs, are examined: hospital size, membership in a multihospital system, ownership, and payer mix. The literature dealing with the effects of these characteristics on the behavior and performance of hospitals is extensive. While not reviewed here, it should be noted that considerable variation in findings remains in that literature regarding the effects each of the characteristics has on hospital performance (for some overview discussions, see Gray 1986; Flood and Scott 1987).

In designing the study, a variety of approaches for handling output and input diversity were considered, including use of indicators or ratios, or examination of aggregations of individual indicators using various weighting schemes (Hadley, Mullner, and Feder 1982; Feldstein 1967; Grannemann, Brown, and Pauly 1986; McGuire 1987; Ruchlin 1977). Ultimately, an innovative technique that simultaneously takes into account multiple outputs and inputs in the computation of overall levels of efficiency was adopted for use in this study. That technique, called data envelopment analysis or DEA, is discussed further in the next section (for a description of the DEA software used in this study, see Ali 1991).

DATA ENVELOPMENT ANALYSIS

Data envelopment analysis is a tool in which linear programming is used to search for optimal combinations of inputs and outputs, based on the actual performances of, in this case, hospitals. The program evaluates the technical efficiency of each hospital relative to "optimal" patterns of production, patterns computed from the performance of hospitals whose input/output combinations are not bested by those of any other comparison or peer hospital. The way in which the DEA program computes efficiency scores can be explained briefly in mathematical notation (adapted from Charnes and Cooper 1980). The efficiency scores $E_j$ for a group of peer hospitals $(j = 1, \dots, n)$ are computed for the selected outputs $(y_{rj},\ r = 1, \dots, s)$ and inputs $(x_{ij},\ i = 1, \dots, m)$ using the following linear programming formulation:

Maximize:

$$E_0 = \frac{\sum_{r=1}^{s} u_r\, y_{r0}}{\sum_{i=1}^{m} v_i\, x_{i0}}$$

Subject to:

$$\frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \leq 1, \qquad j = 1, \dots, n; \qquad u_r > 0,\ v_i > 0.$$

In this formulation, the weights for the outputs and inputs, respectively, are $u_r$ and $v_i$, and the subscript "0" denotes a focal hospital (each hospital, in turn, becomes a focal hospital when its efficiency score is being computed). Note that input and output values as well as all weights are assumed by the formulation to be greater than zero. The weights $u_r$ and $v_i$ for each hospital are determined entirely from the output and input data of all hospitals in the peer group. Therefore, the weights used for each hospital are those that maximize its -- the focal hospital's -- efficiency score. The program also identifies a group of optimally performing hospitals that are defined as efficient and assigns them a score of one. These efficient hospitals are then used to create an "efficiency frontier" or "data envelope" against which all other hospitals are compared. In sum, hospitals that require relatively more weighted inputs to produce their weighted outputs or, alternatively, produce less weighted output per weighted input than do hospitals defined by the program to be on the efficiency frontier are considered technically inefficient. They are given efficiency scores of less than one, but greater than zero (also, see Charnes and Cooper 1978; Charnes, Cooper, and Golany 1985; Morey, Fine, and Loree 1990; Rosko 1990; Sexton 1986; and Schinner et al. 1990).
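
To make the computation concrete: the fractional program above is typically linearized via the Charnes-Cooper transformation, normalizing the focal hospital's weighted input to one and maximizing its weighted output. The following sketch in Python with scipy is a minimal illustrative implementation of that linearized (CCR) model; it is not the IDEAS program used in the study, and the small lower bound eps stands in for the requirement that all weights be strictly positive:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_scores(X, Y, eps=1e-6):
    """CCR efficiency scores for one peer group.

    X: (n, m) array of inputs; Y: (n, s) array of outputs;
    one row per hospital. Returns one score in (0, 1] per hospital.
    """
    n, m = X.shape
    _, s = Y.shape
    scores = np.empty(n)
    for f in range(n):  # each hospital, in turn, is the focal hospital "0"
        # decision vector z = [u_1..u_s, v_1..v_m]
        c = np.concatenate([-Y[f], np.zeros(m)])             # maximize u.y_0
        A_eq = np.concatenate([np.zeros(s), X[f]])[None, :]  # v.x_0 = 1
        A_ub = np.hstack([Y, -X])   # u.y_j - v.x_j <= 0 for every peer j
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(eps, None)] * (s + m), method="highs")
        scores[f] = -res.fun
    return scores
```

Hospitals scoring one lie on the efficiency frontier; all others fall strictly between zero and one.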

In this article, a hospital's peers include hospitals located in its own local market or, in cases where too few peers are available for computation of reliable efficiency measures, hospitals located in markets that have similar area characteristics. By comparing each hospital's efficiency to such peer hospitals, local environmental variations are controlled implicitly in the computation of efficiency scores.

DEA is a recent addition to the collection of quantitative techniques available for the analysis of organizational performance. Developed by Charnes, Cooper, and Rhodes (1978), DEA is an extension and generalization of Farrell's (1957) efforts to measure the efficiency of economic entities. The method can have some very practical value for planners and managers. (For a good examination of DEA's potential as a management tool, see Epstein and Henderson 1989, and for an extensive bibliography of publications on DEA and its applications see Seiford 1990.) However, our purpose in using DEA is not to provide managerial input but to compute the relative efficiencies with which hospitals combine major categories of inputs to generate general categories of outputs typically produced by hospitals.

Numerous examples now exist in which DEA has been successfully applied to the study of health care organizations and professionals. Papers by Sherman (1984, 1986) and Nunamaker (1983) were among the first to apply DEA measures to the study of hospitals, having examined hospitals in Massachusetts and Wisconsin, respectively. Grosskopf and Valdmanis (1987) applied DEA to the study of urban California hospitals, and Borden (1988) applied it to the study of New Jersey hospitals. Ozcan, Luke, and Haksever (1992) evaluated ownership and performance across hospital types using DEA. A particularly interesting application of DEA was provided by Morey, Capettini, and Dittman (1985), and again by Capettini, Dittman, and Morey (1985), to the analysis of rate setting for Medicaid drug reimbursement. In these studies, DEA was used to measure the efficiency of pharmacies and to investigate the use of DEA in establishing alternative policies for reimbursement. More recently, Huang and McLaughlin (1989) applied DEA to rural primary health care programs; Sexton, Leiken, Nolan, et al. (1989) to Veterans Administration medical centers (VAMCs); Sexton, Leiken, Sleeper, et al. (1989) to nursing homes; Chilingerian and Sherman (1990) to physicians; and Schinner et al. (1990) to mental health programs.

Collectively, such studies demonstrate that DEA is an effective technique for evaluating the efficiency of health care providers, given varying input mixes and types and numbers of outputs. It is important to note that none of these studies examines hospital efficiency on a national basis as is done in this study. (For applications of the DEA technique: to education, see Charnes, Cooper, and Rhodes 1978; Charnes and Cooper 1980; Bessent and Bessent 1980; Bessent et al. 1982; to governmental or military organizations, see Bowlin 1986, 1987; Charnes et al. 1985; to legal systems, see Lewin, Morey, and Cook 1982; to production, service, or transportation industries, see Byrnes, Fare, and Grosskopf 1984; Sherman and Gold 1985; Adolphson, Cornia, and Walter 1989.)

METHODS

Data. Data for this study were drawn from the 1987 American Hospital Association annual survey file and were organized, as appropriate, by hospital and local metropolitan market. All metropolitan statistical areas (MSAs) in the United States are included in this analysis, resulting in a total of 317 such areas. Within the MSAs, only acute care general hospitals are incorporated in the analysis, producing a total of 3,000 hospital observations. It should be noted that only urban hospitals are included in this study. This is done to facilitate peer groupings within area or type of area and to eliminate possibly significant variations attributable to urban/rural differences.

Controls. Even if good methods are found to capture the diversity of hospital outputs, there remains the challenge of properly specifying or controlling for key explanatory factors; failure to do so can bias the analysis of hospital efficiencies. Use of multivariate techniques and selected research design features, of course, makes it possible to control for multiple explanatory variables. But no amount of statistical control will overcome the omission of relevant variables. One possibly important source of systematic variation in hospital performance is rarely controlled: the local environments within which hospitals operate. Wennberg and Gittelsohn (1973) first demonstrated the importance of local environmental conditions in studying small-area variations in hospital utilization (for a more recent study, see Tedeschi, Wolfe, and Griffith 1990). Others have focused on the role of local factors in explaining market behaviors, service provision, and hospital costs (Luft, Robinson, Garnick, et al. 1986a, 1986b; Luke 1991; Luke and Begun 1988; Luke, Ozcan, and Begun 1990; Robinson and Luft 1987, 1985; and Zwanziger and Melnick 1988). But the control of local variations in the study of hospital efficiencies is in its infancy.

Hospitals operate in highly variable local contexts. Consider, for example, the tremendous differences between cities with populations over 2 million and those with populations in the 100,000 to 500,000 range. Chicago, a metropolitan area with a population in excess of 8 million, has over 90 hospitals that range in size from under 100 to over 1,000 beds and exhibit wide differences in service mix and populations served. Roanoke, Virginia, on the other hand, is a city with a population in the 200,000 range that has six hospitals, only three of which are in the general acute care business; of the three, two have merged and the third is owned by a major national for-profit chain. Roanoke is highly affected by a large and important rural environment, while Chicago is a major national, even international, center for finance, distribution, and manufacturing. Many other differences across local environments can affect relative levels and patterns of hospital performance, including local market structures, degrees of HMO and PPO penetration, and local patterns of hospital/physician relationships. The point is that local factors are likely to play an important role in determining variations in hospital behaviors, including patterns of input use and output generation.

While not the focus of this study, local environmental factors were controlled by computing efficiency scores for each hospital relative to peers located in the same metropolitan area. Since individual efficiency scores are unreliable when computed using small numbers of peer hospitals (Charnes, Cooper, Lewin, et al. 1985), the need to control for local market factors had to be balanced against the biasing effects of small numbers. Thus, for the purposes of computing efficiency scores, hospitals in MSAs with small numbers of peers were pooled with hospitals in other areas that had similar environmental characteristics and similarly small numbers of hospitals. This was done in two steps, as sketched below. First, a minimum of 13 hospitals was set for comparing hospitals exclusively against peers located in their own metropolitan areas (13 provides sufficient degrees of freedom, given the seven variables used by the DEA program to compute the efficiency scores). Second, for MSAs with fewer than 13 hospitals, a pooling strategy was adopted in which areas located within the same region and falling within the same metropolitan size category were combined into common peer groups. Specifically, areas were combined by region, using the nine 1987 AHA regions -- New England, Mid-Atlantic, South Atlantic, East North Central, East South Central, West North Central, West South Central, Mountain, and Pacific -- and by general metropolitan size category, which involved, in this case, only the three smallest categories -- under 250,000, 250,000 to 500,000, and 500,000 to 1,000,000.
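
A minimal sketch of this two-step peer-group assignment, assuming a pandas DataFrame with hypothetical column names ('msa', 'region', 'size_cat') standing in for the corresponding AHA survey fields:

```python
import pandas as pd

MIN_PEERS = 13  # minimum hospitals for a stand-alone (own-MSA) peer group

def assign_peer_groups(hospitals: pd.DataFrame) -> pd.Series:
    """Step 1: MSAs with at least MIN_PEERS hospitals form their own group.
    Step 2: remaining hospitals pool by AHA region x metropolitan size."""
    counts = hospitals["msa"].value_counts()

    def group(row):
        if counts[row["msa"]] >= MIN_PEERS:
            return f"msa:{row['msa']}"
        return f"pool:{row['region']}|{row['size_cat']}"

    return hospitals.apply(group, axis=1)
```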

To check the possibility that combined areas might still be dissimilar from one another, one-way analyses of variance were conducted across metropolitan areas within each of the pooled groups to determine whether mean values for selected sociodemographic variables differed significantly across areas. (Three such variables were examined: average income, doctors per capita, and low education.) While not reported here, a comparison of the means for the three variables within 25 pooled groups of areas (producing 75 analyses of variance) found only six (9 percent) to be significant at the p < .05 level. This provided assurance that the pooling combined areas that were not significantly dissimilar from one another.
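
A sketch of that homogeneity check, again with hypothetical column names; it runs a one-way ANOVA for each sociodemographic variable across the MSAs within each pooled group and flags significant differences:

```python
from scipy.stats import f_oneway

def check_pool_homogeneity(df, variables=("avg_income", "docs_per_capita",
                                          "low_education")):
    """Flag (pooled group, variable) pairs whose area means differ at p < .05."""
    flagged = []
    for pool, grp in df.groupby("peer_group"):
        for var in variables:
            samples = [g[var].dropna() for _, g in grp.groupby("msa")]
            if len(samples) < 2:
                continue  # a pool with a single MSA has nothing to compare
            _, p = f_oneway(*samples)
            if p < 0.05:
                flagged.append((pool, var, p))
    return flagged
```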

All of the metropolitan areas in the smallest size category, all but one in the second-smallest size category, and 70 percent in the third-smallest size category had to be pooled into region/size combinations. None of the areas in the two largest size categories -- 1,000,000 to 3,000,000, and 3,000,000 and over -- fell below the 13-hospital minimum. Thus, having begun with a total of 317 MSAs, the pooling of MSAs reduced the number of local peer groups to 82 (the total of pooled and nonpooled areas).

Also, because of degrees of freedom limitations, the larger the peer group, the smaller the proportion of peer group hospitals that would be needed for efficiency frontiers to be formed (Charnes et al. 1985). Thus, area pooling had a positive side benefit: by increasing the size of the pools, greater proportions of hospitals fell outside the efficiency frontiers than would be the case had the areas not been combined; thus, the variance in the data was enriched. Overall, 45 percent of all study hospitals were used by the DEA program to form the efficiency frontiers (which were made up only of those hospitals defined to be efficient -- that received scores of one). The percentages by MSA size category were:
MSA Size Category           Percent Efficient
Under 250,000                     37.5
250,000 to 500,000                42.5
500,000 to 1,000,000              57.8
1,000,000 to 3,000,000            54.7
3,000,000 and over                28.6

While employment of local peer groups makes it possible to control for local environmental variations, their use means that efficiency scores are not absolute, but relative to the hospitals' own peer hospitals (Sherman 1984; Sexton et al. 1989). In other words, each hospital receives an efficiency score, the value of which is computed relative to its peer hospitals only. To minimize a possibly biasing effect stemming from cross-peer group comparisons, analyses reported in this study were conducted within each of five MSA population size categories as well as for all of the data combined into a nationwide analysis. In addition, to ensure that the results were not driven by some residual systematic between-area variation attributable to comparisons across peer groups, the statistical analyses were recalculated for data on which Z-score transformations of the efficiency scores were computed for each hospital relative to its particular peer group. By converting the peer group means to zero and standard deviations to one, between-area differences attributable to the local distributions were removed, while the relative differences across hospitals within groups were maintained. Analyses based upon Z-scores are not reported in this article. However, it is noted that this adjustment altered the significance levels only slightly across the estimated coefficients and, more importantly, that none of the signs of the coefficients were themselves changed. In sum, the results remained essentially the same even after Z-score adjustments removed possible between-area variations in scores across peer groups.
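
The Z-score adjustment described above amounts to standardizing each hospital's score against its own peer group; a minimal sketch, with the same hypothetical column names:

```python
def zscore_within_peers(df, score_col="efficiency", group_col="peer_group"):
    """Center each peer group at 0 and scale to unit standard deviation,
    preserving relative differences among hospitals within groups."""
    g = df.groupby(group_col)[score_col]
    return (df[score_col] - g.transform("mean")) / g.transform("std")
```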

One additional methodological consideration relating to the calculation of efficiency scores needs to be addressed: the sensitivity of the scores to the selections of inputs and outputs and the ways in which they are measured. In the authors' ongoing study of DEA-generated hospital efficiencies, the scores have been found to be very stable across a wide variety of input and output combinations and alternative approaches to measurement. Many alternative combinations of scores have been found to be highly correlated to one another (the vast majority of Pearson correlation coefficients range from .8 to .98). This suggests that the DEA calculations, at least for the urban hospitals included in this study, may be relatively insensitive to measurement variation (Ozcan 1993).
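
That sensitivity check reduces to correlating the score vectors produced under alternative specifications; a sketch reusing dea_ccr_scores from the earlier block (the X_*/Y_* matrices stand for hypothetical alternative measurement choices):

```python
import numpy as np

scores_base = dea_ccr_scores(X_base, Y_base)  # baseline inputs/outputs
scores_alt = dea_ccr_scores(X_alt, Y_alt)     # alternative measurement
print(np.corrcoef(scores_base, scores_alt)[0, 1])  # Pearson r
```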

Measures -- Hospital Inputs and Outputs. In this study, hospitals were assumed primarily to produce three types of output: treated cases, outpatient visits, and teaching FTEs (full-time equivalents). Certainly, hospitals produce other outputs, including research, community service, and the goods and services of other health care and/or non-health care activities. Good measures of the latter types of output, however, were not available. Nevertheless, it is reasonable to assume that the three primary types captured the major outputs produced by most urban hospitals. Four inputs were also included in the computation of efficiency scores: plant size, plant complexity, labor, and supplies. Again, hospitals consume other inputs, but these four were assumed to represent the major factors used in producing the above outputs. (The assembly of all seven measures is sketched after the lists below.)

The hospital output measures included were:

* Treated cases. Hospital inpatient discharges in 1987, adjusted using the Medicare case-mix index for each hospital for that year;

* Outpatient visits. All visits to hospital emergency and outpatient facilities that occurred during 1987; and

* Teaching FTEs. Weighted sum of medical and dental trainees and other professional trainees (e.g., nurses and physical therapists) trained during 1987. Full-time trainees were assigned a weight of 1 and part-time trainees, a weight of .5.

The input measures included were:

* Capital. Two indicators were used:

* Plant size. Number of operational hospital beds during 1987;

* Plant complexity. Number of diagnostic and special services provided exclusively by the hospital in 1987;

* Labor. Number of nonphysician FTEs employed in 1987 plus the weighted (using a weight of .5) number of part-time personnel employed during 1987; and

* Supplies. Amount of operational expenses, not including payroll, capital, or depreciation expenses.
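
A sketch of how the seven measures above might be assembled into DEA input and output matrices; the column names are hypothetical stand-ins for the 1987 AHA annual survey fields:

```python
import pandas as pd

def build_dea_matrices(df: pd.DataFrame):
    """Return (inputs, outputs) arrays for one peer group of hospitals."""
    outputs = pd.DataFrame({
        # discharges weighted by each hospital's Medicare case-mix index
        "treated_cases": df["discharges"] * df["case_mix_index"],
        "outpatient_visits": df["er_visits"] + df["clinic_visits"],
        # full-time trainees weighted 1, part-time trainees weighted .5
        "teaching_fte": df["ft_trainees"] + 0.5 * df["pt_trainees"],
    })
    inputs = pd.DataFrame({
        "plant_size": df["beds"],
        "plant_complexity": df["n_services"],
        # full-time personnel plus part-time personnel weighted .5
        "labor": df["ft_personnel"] + 0.5 * df["pt_personnel"],
        # nonpayroll operational expenses (no capital or depreciation)
        "supplies": df["nonpayroll_expense"],
    })
    return inputs.to_numpy(), outputs.to_numpy()
```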

The plant size and complexity measures were used as proxies for the more general input, capital assets, a measure that was not available for this study. To test the validity of these proxies, asset measures for Virginia hospitals, for which such data were available, were regressed on plant size and plant complexity. Across the 47 Virginia hospitals, the regression model accounted for 63 percent of the variation in assets (F = 36.8, p < .0001). The strong association between the two proxies and plant assets provides an indicator of their validity.
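
That validation regression is simple to reproduce; a sketch with statsmodels, assuming a hypothetical file of Virginia hospitals with 'assets', 'beds', and 'n_services' columns:

```python
import pandas as pd
import statsmodels.formula.api as smf

virginia = pd.read_csv("virginia_hospitals.csv")  # hypothetical data file
model = smf.ols("assets ~ beds + n_services", data=virginia).fit()
print(model.rsquared)                 # share of asset variation explained
print(model.fvalue, model.f_pvalue)   # overall F test of the regression
```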

It is recognized also that a count of outpatient visits does not capture the diversity and complexity of ambulatory output. Perhaps a measure of ambulatory expenditures might have helped in this regard. However, use of an expenditure measure as a proxy for output would have introduced price variation into the data. Since the study focuses on technical, rather than allocative efficiency (efficiency which takes into consideration economic valuations based upon prices), non-price weighted values were appropriate (Morey, Fine, and Loree 1990). Nevertheless, use of unweighted raw counts of visits represents a limitation of this study.

Measures -- Explanatory Variables. As suggested at the beginning of this article, four general explanatory variables were used: hospital size, membership in a multihospital system, ownership, and payer mix. Use of the size measure both in computing the efficiency scores and as an explanatory variable did not produce a tautology, since its value is submerged in the computation of the DEA scores, which are based upon interrelationships among the seven factors and not on their absolute values per se.

The specific explanatory measures included were:

* Hospital size. Number of operational hospital beds during 1987;

* System structure. A nominally measured variable reflecting three types of affiliation: nonsystem, contract managed, and multihospital system;

* Ownership. A nominally measured variable representing four ownership types: government, church, for-profit, and not-for-profit;

* Payer mix. Three indicators were used to capture this variable:

* Managed care. Dummy variable indicating whether a hospital had PPO contracts, HMO contracts, or both;

* Percent Medicare. The percentage of Medicare patients;

* Percent Medicaid. The percentage of Medicaid patients.

In all, six explanatory variables were incorporated into the analyses (one indicator each for the first three variables and three for the fourth).

Analysis Technique. Cross-sectional analyses were conducted using a covariance analysis technique -- Multiple Classification Analysis (MCA) -- with individual hospitals as the unit of analysis. Since the independent variables include continuous, ordinal, and nominal measurement properties, a covariance analysis technique is preferred. MCA enables simultaneous consideration of multiple predictor variables that have any of these measurement properties. MCA's major advantage over conventional dummy variable regression techniques is that it enables one to examine the relative contributions of nominal and/or ordinal predictors while assessing the direction and levels of effect of all variables (Andrews et al. 1973).
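
MCA is closely related to regression with effects-coded (deviation-coded) categorical predictors, in which each category's coefficient is a departure from the grand mean, much as MCA reports adjusted deviations. A sketch of an analogous model in statsmodels, with hypothetical variable names; this approximates, rather than reproduces, the MCA program of Andrews et al.:

```python
import pandas as pd
import statsmodels.formula.api as smf

hospitals = pd.read_csv("hospitals_1987.csv")  # hypothetical AHA-derived file
mca_like = smf.ols(
    "efficiency ~ beds"
    " + C(system, Sum) + C(ownership, Sum)"      # effects-coded nominals
    " + managed_care + pct_medicare + pct_medicaid",
    data=hospitals,
).fit()
print(mca_like.summary())
```

The within-MSA-size analyses described next would repeat this fit on each size-category subset of the data.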

Since the local environment was controlled by calculating efficiency scores relative to peer hospitals located in actual or pooled local markets, no environmental characteristics are included in the statistical analyses, with one exception: MSA size. Six covariance analyses were computed, one each for five MSA size categories and one for all hospitals pooled into a single analysis. By first examining the covariance analyses within MSA size category, it should be possible to detect any residual MSA size effects that remain in the data.

FINDINGS

Table 1 presents summary statistics for selected hospital and local market characteristics by the five MSA size categories and for all hospitals and markets combined. These numbers reveal some obvious differences across the MSA size categories (e.g., the average size of hospitals increases with increases in MSA size, as do the percentages of hospitals that have some form of managed care contracts). There are also some remarkable consistencies across the MSA size categories (e.g., the percentages of Medicare and Medicaid populations seen in the hospitals).

Table 2 presents the results of the covariance analyses. Several interesting patterns are immediately apparent. First, two variables are most consistently associated with variations in relative hospital efficiency: hospital ownership and percent Medicare payment. In four of the five MSA size categories and in the overall analysis, the ownership variable is significant at less than the .01 level; in the remaining size category (MSA size range of 250,000-499,999), it is significant at less than the .10 level. It is interesting that in three of the five MSA size categories and in the overall analysis, government hospitals received the highest efficiency scores and, in all analyses, for-profit hospitals received the lowest scores. While this may at first appear counterintuitive, such findings are reasonable. The relatively high efficiency scores of the government hospitals could reflect a variety of factors. It is possible, for example, that government hospitals, relative to other hospital types, produce their outputs with a minimum of input support. Alternatively, their relatively higher scores could reflect the tendency of government hospitals to produce high levels of outpatient and teaching outputs in addition to acute care discharges. By contrast, for-profit hospitals, because they are less likely to be engaged in the production of teaching outputs, could have lower efficiency scores. As reported elsewhere, however, the teaching output was observed to play a minor role in determining efficiency scores of for-profit hospitals, relative to the contribution it made to the efficiency scores of hospitals in the other ownership categories (see Ozcan 1993).

A second general finding reported in Table 2 is that in all but the third MSA size category, the percent Medicare variable is significantly and negatively associated with hospital efficiency. By contrast, in none of the five analyses is the percent Medicaid variable significant. Hospitals' relatively lesser dependency on Medicaid versus Medicare funding could account for the difference. The results for the managed care contract variable provide another interesting contrast to the Medicare finding. While this variable is significant in only two of the five MSA size categories and in the overall analysis, hospitals not reporting managed care contracts have, with one exception -- those in the third MSA size category -- the lowest average efficiency scores. The lack of association in the two smaller MSA size categories could reflect the relatively lesser involvement in managed care among hospitals in smaller urban areas. The differences between the Medicare and managed care findings could be due either to something unique about managed care contracts or, more likely, to the effects of limitations in Medicare versus managed care financing.

The system structure variable shows some interesting results. While significant in only two of the five MSA size categories and in the overall analysis, nonsystem membership is consistently associated with low efficiency scores. Interestingly, the contract management hospitals scored the highest in four of the five MSA size analyses and overall. Since contract management hospitals constitute only a small percentage of the total number of hospitals in the sample (6 percent), these results must be viewed with some caution. Finally, hospital size is positive and significant in two of the MSA size categories and in the overall analysis, and is positive in all analyses. This result is consistent with probable expectations for the relationship between size and hospital efficiency.

The general consistency in findings across the five MSA size categories suggests that it may be valid to group the data for the purposes of conducting an overall covariance analysis, as is done in Table 2. A total of 3,000 observations are included in the overall analysis and all but one of the variables -- percent Medicaid -- emerge as significant. As would be expected, the signs and patterns in the relationships are consistent with those found in the within-MSA size analyses.

DISCUSSION

The analyses produced some interesting findings regarding the relationships between the selected hospital characteristics and relative technical efficiencies. First, it was observed that government hospitals have relatively higher efficiency scores and for-profit hospitals score relatively lower. These findings reveal a possible advantage of accounting for the multiple products of hospitals when comparing hospital efficiencies. It is conceivable, for example, that had the efficiencies been computed using only adjusted discharges, a far different result would have been produced. These findings also suggest that the cost and productivity effects of joint production, especially of inpatient and teaching outputs, should be carefully assessed in the current debate about the use of indirect teaching adjustments and direct teaching pass-through payments within Medicare's prospective payment system (PPS). Further investigation of relative levels of productivity when multiple outputs are considered may be needed before such payment mechanisms are eliminated or significantly modified.

One could argue that the differences in relative efficiencies are achieved at the expense of quality. This argument, however, may run counter to the findings reported in this study. While at least anecdotal evidence suggests that government hospitals may provide a relatively lower level of quality, there is no evidence that for-profit hospitals provide a uniquely high or low level of quality (e.g., see Longo, Chase, Ahlgren, et al. 1986; Shortell and Hughes 1988). For the for-profit hospitals, the multiproduct/input explanation may be the most persuasive: for-profit hospitals may achieve lower levels of technical efficiency because they do not produce teaching outputs to the degree that hospitals in the other ownership categories do. Or they may utilize inputs in differing ways relative to hospitals in the other ownership categories.

The findings for percent Medicare, especially when contrasted with the finding for the government ownership and managed care contracting variables, raise related policy questions. First, it would be important to know whether the negative relationship between percent Medicare and efficiency is due to the financing constraints of Medicare or, alternatively, to the types of hospitals that become Medicare dependent. If the former is the case, financing policies would need to be assessed to determine the appropriateness of funding levels. If the second explanation is true, however, it would be important to determine what, if anything, might be unique about hospitals that become Medicare dependent, and then to direct attention to ways in which such hospitals might be given incentives to become more efficient. Consistent with this point is Altman's (1990) finding that Medicare-dependent hospitals tended to have relatively lower operating margins than did non-Medicare-dependent hospitals. The Medicare finding could also be attributable to the output patterns of Medicare-dependent hospitals. Altman found, for example, that Medicare-dependent hospitals tended to receive relatively lower levels of indirect teaching payments from Medicare than did other, less Medicare-dependent hospitals, indicating that they are relatively less involved in producing teaching outputs.

The finding that size was consistently and positively related to efficiency conforms with the generally expected positive effects of scale. It also is consistent with the finding that for-profit hospitals, which tend to be smaller than hospitals in the other ownership categories, receive lower efficiency scores.

Finally, some comment is needed on the finding that a positive association existed between managed care contracting and the efficiency scores in two of the five MSA categories and in the overall analysis. To the extent that this finding is valid, it suggests the reverse of what was found for the Medicare payment variable. Just why managed care contracting is positively associated with technical efficiencies is not clear. Some possible explanations have already been provided. One additional interpretation might be that this relationship reflects the positive effects of competition. Where there is more insurance industry involvement in a local market, hospitals may be stimulated to engage more aggressively in efficiency-producing activities. Alternatively, the managed care finding may simply be due to the incentives associated with managed care contracting -- to minimize expenditures and maximize output.

This study revisited the role played by some key hospital characteristics in generating efficiencies in hospital production. An innovative technique, data envelopment analysis, was used, making possible the analysis of technical efficiencies in hospitals by taking into consideration multiple outputs and inputs. Since the DEA program computes efficiency scores by comparing each hospital to its "peers," the technique also made it possible to control for some important variables that otherwise could have affected observed hospital efficiencies. It is argued that local variations represent one possibly important source of variation that has too infrequently been controlled in studies of hospital costs and productivities. In this study, efficiency scores for individual hospitals were computed relative to those for other hospitals in their local areas or, where degrees of freedom requirements prevented such comparisons, relative to local competitors in similarly sized areas located in the same regions.

By applying the DEA methodology to 3,000 urban hospitals, this study has attempted to deal with the serious methodological problem of controlling for diversity in hospital output. The literature needs to focus much more attention on this and alternative approaches to accounting for hospital outputs in the study of hospital costs and efficiency. To do otherwise risks giving too much attention to results that stem from effectively flawed analyses. We are students of a very complex and rapidly changing field. The search for easy answers, without adequate attention to the devilment of methodological complexities, will inevitably lead to inaccurate findings and, consequently, to inappropriate policy responses.

ACKNOWLEDGMENT

Appreciation is expressed to Professor Agha Iqbal Ali of the University of Massachusetts at Amherst for making available an updated version of the IDEAS (Integrated Data Envelopment Analysis System) program for use in this study.

REFERENCES

Adolphson, D. L., G. C. Cornia, and L. C. Walter. "Railroad Property Valuation Using Data Envelopment Analysis." Interfaces 19, no. 3 (1989): 18-26.

Ali, A. I. IDEAS Version 3.0.5, Integrated Data Envelopment Analysis System. Amherst: The University of Massachusetts at Amherst, 1991.

Altman, S. H. Medicare Prospective Payment and the American Health Care System: Report to the Congress. Washington, DC: Prospective Payment Assessment Commission, 1990.

Andrews, F. M., J. N. Morgan, J. A. Sonquist, and L. Klem. Multiple Classification Analysis: A Report on a Computer Program for Multiple Regression Using Categorical Predictors. 2d ed. Ann Arbor, MI: Institute for Social Research, 1973.

Arrington, B., and C. C. Haddock. "Who Really Profits from Not-for-Profits?" Health Services Research 25, no. 2 (1990): 291-304.

Becker, E. R., and F. A. Sloan. "Hospital Ownership and Performance." Inquiry 23, no. 1 (1985): 21-36.

Bessent, A., and W. Bessent. "Determining the Comparative Efficiency of Schools through Data Envelopment Analysis." Educational Administration Quarterly 16, no. 2 (1980): 57-75.

Bessent, A., W. Bessent, J. Kennington, and B. Regan. "An Application of Mathematical Programming to Assess Productivity in the Houston Independent School District." Management Science 28, no. 12 (1982): 1355-67.

Borden, J. P. "An Assessment of the Impact of Diagnosis Related Group (DRG)-Based Reimbursement on the Technical Efficiency of New Jersey Hospitals Using Data Envelopment Analysis." Journal of Accounting and Public Policy 7, no. 2 (1988): 77-96.

Bowlin, W. F. "Evaluating the Efficiency of U.S. Air Force Real-Property Maintenance Activities." Journal of Operational Research Society 38, no. 2 (1987): 127-35.

-----. "Evaluating Performance in Governmental Organizations." The Government Accountants Journal 35, no. 2 (1986): 50-57.

Byrnes, P., R. Fare, and S. Grosskopf. "Measuring Productive Efficiency: An Application to Illinois Strip Mines." Management Science 30, no. 6 (1984): 671-81.

Capettini, R., D. A. Dittman, and R. C. Morey. "Reimbursement Rate Setting for Medical Prescription Drugs Based on Relative Efficiencies." Journal of Accounting and Public Policy 4, no. 2 (1985): 83-110.

Carr, W. J., and P. J. Feldstein. "The Relationship of Cost to Hospital Size." Inquiry 4, no. 2 (1967): 45-65.

Charnes, A., T. Clark, W. W. Cooper, and B. Golany. "A Developmental Study of Data Envelopment Analysis in Measuring the Efficiency of Maintenance Units in the U.S. Air Forces." Annals of Operations Research 2 (1985): 95-112.

Charnes, A., and W. W. Cooper. "Management Science Relations for Evaluation and Management Accountability." Journal of Enterprise Management 2, no. 2 (1980): 143-62.

-----. "Managerial Economics -- Past, Present, and Future." Journal of Enterprise Management 1, no. 1 (1978): 5-23.

Charnes, A., W. W. Cooper, A. Y. Lewin, R. C. Morey, and J. Rousseau. "Sensitivity and Stability Analysis in DEA." Annals of Operations Research 2 (1985): 139-56.

Charnes, A., W. W. Cooper, and E. Rhodes. "Measuring the Efficiency of Decision Making Units." European Journal of Operational Research 2, no. 6 (1978): 429-44.

Chilingerian, J. A., and H. D. Sherman. "Managing Physician Efficiency and Effectiveness in Providing Hospital Services." Health Services Management Research 3, no. 1 (1990): 3-15.

Epstein, M. K., and J. C. Henderson. "Data Envelopment Analysis for Managerial Control and Diagnosis." Decision Sciences 20, no. 1 (1989): 90-119.

Ermann, D., and J. Gabel. "Multi-Hospital Systems: Issues and Empirical Findings." Health Affairs 3, no. 1 (1984): 50-64.

Farrell, M. J. "The Measurement of Productive Efficiency." Journal of the Royal Statistical Society, Series A 120, Part III (1957): 253-90.

Feldstein, M. S. Economic Analysis for Health Services Efficiency. Amsterdam: North Holland Publishing Company, 1967.

Flood, A. B., and W. R. Scott. Hospital Structure and Performance. Baltimore, MD: Johns Hopkins University Press, 1987.

Ginzberg, E. "For-Profit Medicine: A Reassessment." New England Journal of Medicine 319, no. 12 (1988): 757-61.

Gray, B. H., ed. For-Profit Enterprise in Health Care. Washington, DC: National Academy Press, 1986.

Grannemann, T. W., R. S. Brown, and M. V. Pauly. "Estimating Hospital Costs: A Multiple Output Analysis." Journal of Health Economics 5, no. 2 (1986): 107-27.

Grosskopf, S., and V. Valdmanis. "Measuring Hospital Performance: A Nonparametric Approach." Journal of Health Economics 6, no. 2 (1987): 89-107.

Hadley, J., R. Mullner, and J. Feder. "Special Report: The Financially Distressed Hospital." New England Journal of Medicine 307, no. 20 (1982): 1283-87.

Herzlinger, R. E., and W. S. Krasker. "Who Profits from Nonprofits?" Harvard Business Review 65, no. 1 (1987): 93-106.

Hornbrook, M. C. "Hospital Case Mix: Its Definition, Measurement and Use. Part I. The Conceptual Framework." Medical Care Review 39, no. 1 (1982a): 1-43.

-----. "Hospital Case Mix: Its Definition, Measurement and Use. Part II. Review of Alternative Measures." Medical Care Review 39, no. 2 (1982b): 73-123.

Huang, Y. L., and C. P. McLaughlin. "Relative Efficiency in Rural Primary Health Care: An Application of Data Envelopment Analysis." Health Services Research 24, no. 2 (1989): 143-58.

Jeffers, J. R., and C. D. Siebert. "Measurement of Hospital Cost Variation: Case Mix, Service Intensity, and Input Productivity Factors." Health Services Research 9, no. 4 (1974): 293-307.

Klarman, H. E., ed. (with the assistance of H. H. Jaszi). Empirical Studies in Health Economics: Proceedings of the Second Conference on the Economics of Health. Baltimore, MD: Johns Hopkins University Press, 1970.

Lewin, A. Y., R. C. Morey, and T. J. Cook. "Evaluating the Administrative Efficiency of Courts." Omega 10, no. 4 (1982): 401-11.

Longo, D. R., G. A. Chase, L. A. Ahlgren, J. S. Roberts, and C. S. Weisman. "Compliance of Multi-hospital Systems with Standards of the Joint Commission on Accreditation of Hospitals." In For-Profit Enterprise in Health Care. Edited by B. H. Gray. Washington, DC: National Academy Press, 1986.

Luft, H. S., J. C. Robinson, D. W. Garnick, S. C. Maerki, and S. J. McPhee. "The Role of Specialized Clinical Services in Competition among Hospitals." Inquiry 23, no. 1 (1986a): 83-94.

Luft, H. S., J. C. Robinson, D. W. Garnick, R. G. Hughes, S. J. McPhee, S. S. Hunt, and J. Showstack. "Hospital Behavior in a Local Market Context." Medical Care Review 43, no. 2 (1986b): 217-52.

Luke, R. D. "Spatial Competition and Cooperation in Local Hospital Markets." Medical Care Review 48, no. 2 (1991): 207-37.

Luke, R. D., and J. W. Begun. "Strategic Orientations of Small Multihospital Systems." Health Services Research 23, no. 5 (1988): 597-618.

Luke, R. D., Y. A. Ozcan, and J. W. Begun. "Birth Order in Small Multihospital Systems." Health Services Research 25, no. 2 (1990): 305-25.

Mann, J. K., and D. E. Yett. "The Analysis of Hospital Costs: A Review Article." Journal of Business 41, no. 2 (1968): 191-202.

McGuire, A. "The Measurement of Hospital Efficiency." Social Science Medicine 24, no. 9 (1987): 719-24.

Morey, R. C., R. Capettini, and D. A. Dittman. "Pareto Rate Setting Strategies: An Application to Medicaid Drug Reimbursement." Policy Sciences 18, no. 2 (1985): 169-200.

Morey, R. C., D. J. Fine, and S. W. Loree. "Comparing the Allocative Efficiencies of Hospitals." International Journal of Management Science 18, no. 1 (1990): 71-83.

Nunamaker, T. "Measuring Routine Nursing Service Efficiency: A Comparison of Cost per Patient Day and Data Envelopment Analysis Models." Health Services Research 18, no. 2, Part 1 (1983): 183-205.

Ozcan, Y. A. "Sensitivity Analysis of Hospital Efficiency under Alternative Output/Input and Peer Groups: A Review." International Journal of Knowledge and Policy (1993): in press.

Ozcan, Y. A., R. D. Luke, and C. Haksever. "Ownership and Organizational Performance: A Comparison of Technical Efficiency across Hospital Types." Medical Care 30, no. 9 (1992): 781-84.

Robinson, J. C., and H. S. Luft. "Competition and the Cost of Hospital Care, 1972 to 1982." Journal of the American Medical Association 257, no. 23 (1987): 3241-45.

Rosko, M. D. "Measuring Technical Efficiency in Health Care Organizations." Journal of Medical Systems 14, no. 5 (1990): 307-22.

Ruchlin, H. R. "Problems in Measuring Institutional Productivity." Topics in Health Care Financing 4, no. 2 (1977): 13-27.

Rundall, T., and W. Lambert. "The Private Management of Public Hospitals." Health Services Research 19, no. 4 (1984): 519-44.

Schinner, A. P., I. Kamis-Gould, N. Delucia, and A. B. Rothboard. "Organizational Determinants of Efficiency and Effectiveness in Mental Health Partial Care Programs." Health Services Research 25, no. 2 (1990): 377-420.

Schlesinger, M., T. R. Marmor, and R. Smithey. "Nonprofit and For-profit Medical Care: Shifting Roles and Implications for Health Policy." Journal of Health Politics, Policy and Law 12, no. 3 (1987): 427-57.

Seiford, L. M. A Bibliography of Data Envelopment Analysis. Version 5.0. Amherst: The University of Massachusetts, Department of Industrial Engineering and Operations Research, 1990.

Sexton, T. R. "The Methodology of Data Envelopment Analysis." In Measuring Efficiency: An Analysis of Data Envelopment Analysis. Edited by R. H. Silkman. San Francisco: Jossey-Bass Inc., 1986.

Sexton, T. R., A. M. Leiken, A. H. Nolan, S. Liss, A. Hogan, and R. H. Silkman. "Evaluating Managerial Efficiency of Veterans Administration Medical Centers Using Data Envelopment Analysis." Medical Care 27, no. 12 (1989): 1175-88.

Sexton, T. R., A. M. Leiken, S. Sleeper, and A. F. Coburn. "The Impact of Prospective Reimbursement on Nursing Home Efficiency." Medical Care 27, no. 2 (1989): 154-63.

Sherman, H. D. "Hospital Efficiency Measurement and Evaluation: Empirical Test of a New Technique." Medical Care 22, no. 10 (1984): 922-38.

-----. "Measuring Productivity of Health Care Organizations." In Measuring Efficiency: An Assessment of Data Envelopment Analysis. Edited by R. H. Silkman. Publication no. 32 in the New Directions for Program Evaluation series. San Francisco: American Evaluation Association, Jossey-Bass Inc., 1986.

Sherman, H. D., and F. Gold. "Bank Branch Operating Efficiency: Evaluation with Data Envelopment Analysis." Journal of Banking and Finance 9, no. 2 (1985): 297-315.

Shortell, S. M. "The Evolution of Hospital Systems: Unfulfilled Promises and Self-Fulfilling Prophesies." Medical Care Review 45, no. 2 (1988): 177-214.

Shortell, S. M., and E. F. X. Hughes. "The Effects of Regulation, Competition, and Ownership on Mortality Rates among Hospital Inpatients." New England Journal of Medicine 318, no. 17 (1988): 1100-1107.

Tedeschi, P., R. A. Wolfe, and J. R. Griffith. "Micro-Area Variation in Hospital Use." Health Services Research 24, no. 6 (1990): 729-40.

Valdmanis, V. G. "Ownership and Technical Efficiency of Hospitals." Medical Care 28, no. 6 (1990): 552-60.

Watt, J. M., S. C. Renn, J. S. Hahn, R. A. Derzon, and C. J. Schramm. "The Effects of Ownership and Multihospital System Membership on Hospital Functional Strategies and Economic Performance." In For-Profit Enterprise in Health Care. Edited by B. H. Gray. Washington, DC: National Academy Press, 1986.

Wennberg, J., and A. Gittelsohn. "Small Area Variations in Health Care Delivery." Science 182, no. 4117 (1973): 1102-1108.

Wheeler, J., H. Zuckerman, and J. Aderholdt. "How Management Contracts Can Affect Hospital Finances." Inquiry 19, no. 2 (1982): 160-66.

Zaretsky, H. W. "The Effects of Patient Mix and Service Mix on Hospital Costs and Productivity." Topics in Health Care Financing 4, no. 2 (1977): 63-82.

Zuckerman, H. S. "Multi-institutional Systems: Promise and Performance." Inquiry 16, no. 4 (Winter 1979): 291-314.

Zwanziger, J., and G. A. Melnick. "The Effects of Hospital Competition and the Medicare PPS Program on Hospital Cost Behavior in California." Journal of Health Economics 7, no. 2 (1988): 301-20.

Address correspondence and requests for reprints to Yasar A. Ozcan, Ph.D., Assistant Professor, Department of Health Administration, Medical College of Virginia Campus, Virginia Commonwealth University, VCU Station, Box 203, Richmond, VA 23298-0203. Roice D. Luke, Ph.D. is Professor in the Department of Health Administration, Medical College of Virginia Campus, Virginia Commonwealth University. This article, submitted to Health Services Research on November 16, 1990, was revised and accepted for publication on August 5, 1992.