Quality improvement implementation and hospital performance on quality indicators.
Many believe that quality improvement (QI) represents a promising strategy for improving hospital quality of care. QI is a systematic approach to planning and implementing continuous improvement in performance. QI emphasizes continuous examination and improvement of work processes by teams of organizational members trained in basic statistical techniques and problem-solving tools and empowered to make decisions based on their analysis of the data. The systemic focus of QI complements a growing recognition in the field that the quality of the care delivered by clinicians depends substantially on the performance capability of the organizational systems in which they work. While individual clinician competence remains important, many increasingly see the capability of organizational systems to prevent errors, coordinate care among settings and practitioners, and ensure that relevant, accurate information is available when needed as critical elements in providing high-quality care (Institute of Medicine 2000). Reflecting the growing emphasis on organizational systems of care, the Joint Commission on Accreditation of Healthcare Organizations, the National Committee for Quality Assurance, and the Peer Review Organizations of the Centers for Medicare and Medicaid Services have all encouraged hospitals to use QI methods.
Although QI holds promise for improving quality of care, hospitals that adopt QI often struggle with its implementation (Shortell, Bennett, and Byck 1998). By implementation, we refer to the transition period, following a decision to adopt a new idea or practice, when intended users put that new idea or practice into use--for example, when clinical and nonclinical staff begin applying QI principles and practices to improve clinical care processes (Klein and Sorra 1996; Rogers 2003). Successful implementation is critical to the effectiveness of a QI initiative (Blumenthal and Kilo 1998; Shortell, Bennett, and Byck 1998). However, QI implementation is demanding on individuals and organizations. It requires sustained leadership, extensive training and support, robust measurement and data systems, realigned incentives and human resources practices, and cultural receptivity to change (Shortell, Bennett, and Byck 1998; Ferlie and Shortell 2001; Institute of Medicine 2001; Meyer et al. 2004). In addition, the systemic nature of many quality problems implies that the effectiveness of a QI initiative may depend on its implementation across many conditions, disciplines, and departments. This, too, often proves challenging (Gustafson et al. 1997; Blumenthal and Kilo 1998; Meyer et al. 2004). If successful, though, implementing QI in this manner may create a durable infrastructure for enhancing quality organization-wide.
In the present study, we examine the association between several dimensions of QI implementation in hospitals and hospital performance on selected indicators of clinical quality. To do so, we combine data from a national survey of hospital QI practices with a group of carefully screened and validated measures indicative of patient safety in hospital settings. In taking this approach, we seek to address several problems associated with existing research on hospital QI and quality of care. First, previous studies do not adequately account for differences in how hospitals implement QI. Consequently, the relative advantage of different implementation strategies remains unknown. Second, previous studies of hospital QI typically make use of small samples. This restricts the generalization of study findings to larger populations of hospitals and limits the extent to which study findings can be used to develop managerial or policy recommendations. Finally, data availability has precluded previous studies from examining a broad range of hospital quality indicators. This, in turn, has limited the opportunity to link specific QI structures and practices with a set of quality indicators that broadly reflect quality at the institutional level.
Study results will provide policy makers, accrediting bodies, and consumers with more precise information about how different approaches to QI implementation in hospital settings relate to a range of hospital-level quality indicators. Such information would facilitate the development of QI standards and benchmarks that make use of hospital-level quality indicators that are not only widely available, but also potentially amenable to change through the systematic application of QI practices. For instance, information on QI practices could be useful in the design of financial incentive programs to "make quality pay," such as the recent CMS program to reward hospitals scoring in the top decile on various measures. Finally, such information would help hospital managers and clinicians target those approaches to QI implementation that provide the greatest value for resources expended.
QI embraces a philosophy of meeting or exceeding customer expectations through the continuous improvement of the processes of producing a good or service. QI posits that the quality of goods and services depends foremost on the processes by which they are designed and delivered. Hence, QI focuses on understanding, controlling, and improving work processes rather than on correcting individuals' mistakes after the fact. QI also assumes that uncontrolled variance in work processes is the primary cause of quality problems. Hence, QI focuses on analyzing the root causes of variability, taking appropriate steps to make work processes predictable, and then continuously improving process performance.
Operationally, QI combines three elements: use of cross-functional teams to identify and solve quality problems, use of scientific methods and statistical tools by these teams to monitor and analyze work processes, and use of process-management tools (e.g., flow charts that graphically depict steps in a clinical process) to help team members use collective knowledge effectively. Cross-functional teams play an integral role in QI because most vital work processes span individuals, disciplines, and departments. Cross-functional teams bring together the many clinical professionals and nonclinical hospital staff members who perform a process to document the process in its entirety, diagnose the causes of quality problems, and develop and test possible solutions to them.
Although many hospitals have implemented QI, the effectiveness of these efforts has not been systematically examined, especially at the organizational level of analysis (Shortell, Bennett, and Byck 1998). Several studies have examined the structures, processes, and relationships common to designing, organizing, and implementing hospital QI efforts (Barsness et al. 1993a, b; Blumenthal and Edwards 1995; Gilman and Lammers 1995; Shortell 1995; Shortell et al. 1995b; Weiner, Alexander, and Shortell 1996; Weiner, Shortell, and Alexander 1997; Westphal, Gulati, and Shortell 1997; Berlowitz et al. 2003). These studies indicate that hospitals vary widely in terms of: (1) their approach to implementing QI; (2) the extent to which QI has "penetrated" core clinical processes; and (3) the degree to which QI practices have been diffused across clinical areas. Few of these studies, however, examined the effectiveness of hospital QI practices. With few exceptions (e.g., Westphal, Gulati, and Shortell 1997; Shortell et al. 2000), most have used perceptual measures of impact or self-reported estimates of cost or clinical impact rather than objectively derived measures of clinical quality (Gilman and Lammers 1995; Shortell et al. 1995b). Other multisite, comparative studies have explicitly examined the impact of hospital QI on clinical practice (Carlin, Carlson, and Nordin 1996; O'Connor et al. 1996; Gordian and Ballard 1997; Goldberg et al. 1998; Ferguson et al. 2003). However, nearly all of these studies focused on a single clinical quality indicator (e.g., risk-adjusted mortality for coronary artery bypass surgery [CABG], adverse drug event) or single clinical practice (e.g., immunization, guideline use) rather than a broad range of measures indicative of quality at an institutional level.
Theory and Hypotheses
Our thesis is that higher values on multiple hospital-level quality indicators will be associated with the implementation of QI structures and practices that provide a durable infrastructure for continuous improvement. Specifically, we propose that the effectiveness of QI at the organizational level depends in part on the scope of QI implementation--that is, the extent or range of application of QI philosophy and methods. Our thesis is premised on the notion that QI achieves its full potential when it pervasively penetrates organizational routines and becomes a "way of doing business" throughout the organization. Such penetration is critical for sustainable success in enhancing quality across clinical conditions, organizational units, and time. We offer three arguments in support of the proposition that broad implementation scope enhances a hospital's capability to systematically improve work processes and thereby enhance quality organization-wide.
First, as noted earlier, most vital work processes in organizations span individuals, disciplines, and departments (Ishikawa 1985; Deming 1986; Juran 1988; James 1989; Walton 1990). Improving clinical care processes generally requires that clinical professionals and hospital staff from different specialties, functions, or units work together in order to document how the process works in its entirety and identify the key process factors that play a causal role in process performance. Implementing systemic changes also typically requires collaboration across disciplinary, functional, and unit boundaries. For example, improving cardiac surgery outcomes may entail multiple, simultaneous changes in physician offices, inpatient units, and home-health units (Gustafson et al. 1997). Even when implementing "local" changes (i.e., those within a single unit), cross-unit collaboration is often necessary in order to prevent undesirable, unintended consequences from arising in other units because of task interdependencies.
Second, enhancing quality on an organization-wide basis requires mobilizing large numbers of hospital staff, equipping them with technical expertise in QI methods and tools, and empowering them to diagnose and solve patient safety problems. A small number of people working together on a cross-functional team could, with the right support, make systems improvements that address a specific quality problem (e.g., stroke mortality). An organization-wide effort focusing on 5, 10, or even 15 quality problems, however, would require harnessing the knowledge, the energy, and the creativity of many hospital staff members.
Finally, extensive involvement of hospital staff across multiple units may also strengthen the effectiveness of QI efforts by promoting a "quality" culture. That is, pervasive participation in QI promotes shared values about the importance of continuous improvement, using data and scientific methods to identify problems, communicating openly, and collaborating to implement solutions. These shared values, in turn, support the implementation of systemic changes that cross disciplinary, departmental, and organizational boundaries (by reducing turf battles) and increase the likelihood of "holding the gain" (O'Brien et al. 1995).
H1: Hospitals that report more extensive involvement of hospital units in QI efforts will exhibit better values on multiple hospital-level indicators of clinical quality.
H2: Hospitals that report higher participation of hospital staff in QI teams will exhibit better values on multiple hospital-level indicators of clinical quality.
In addition to these general arguments, we offer specific arguments in support of pervasive participation of two targeted groups: senior managers and physicians. Direct senior management participation in cross-functional QI teams signals to other organizational members that senior management views QI as a top priority. This, in turn, may strengthen the effectiveness of QI efforts by increasing the commitment and contributions of front-line workers. Moreover, senior managers who participate in QI teams may develop a deeper understanding of the root causes of quality problems and feel greater ownership of recommended solutions that such teams generate. As a result, senior managers may be more willing to commit the resources and make the policy changes necessary to ameliorate systemic causes of quality problems.
H3: Hospitals that report higher participation of senior managers in QI teams will exhibit better values on multiple hospital-level indicators of clinical quality.
Widespread physician participation in QI teams may also be critical to QI effectiveness because physicians play a critical role in clinical resource allocation decisions and possess the clinical expertise needed to differentiate appropriate from inappropriate variation in care processes. Pervasive physician participation may not only enhance the quality of analysis and problem solving in QI teams, but also support the implementation of changes recommended by such teams. Research indicates that peer influence can be a powerful lever for provider behavior change. Hence, widespread physician participation in QI teams may facilitate those changes in physician behavior needed to address quality problems.
H4: Hospitals that report higher participation of active staff physicians in QI teams will exhibit better values on multiple hospital-level indicators of clinical quality.
Data on hospital QI practices were derived from a 1997 national, mailed survey sent to the CEOs of all 6,150 U.S. hospitals. The CEO was asked to complete the survey and seek the assistance of the person responsible for the hospital's QI effort in order to ensure the most accurate data or assessment about the hospital's QI activities. The 26-page survey requested information about all hospital efforts to improve quality and did not assume (nor encourage respondents to make assumptions about) the superiority of any specific approach. The survey provided definitions of terms like "QI," "quality assurance," "continuous QI," "total quality management," and "QI project" in order to increase the consistency and comparability of respondents' answers. Of the 6,150 hospitals in the sampling frame, 2,300 (or 38 percent) responded to the survey. Multivariate logistic regression analysis showed that survey respondents were less likely than nonrespondents to be investor-owned, small, or unaffiliated with a healthcare system (see Appendix A, Tables 1A and 1B).
The Medicare Inpatient Database was the source of data for the risk-adjusted quality indicators used in the study. The Medicare database contains the universe of the inpatient discharge abstracts for Medicare patients, translated into a uniform format to facilitate multistate and multihospital comparisons. The Medicare data contain more than 100 clinical and nonclinical variables included in a hospital discharge abstract such as principal and secondary diagnoses, principal and secondary procedures, admission and discharge status, patient demographics (e.g., gender, age, and, for some states, race), expected payment source (e.g., Medicare, Medicaid, private insurance, self-pay; for some states, additional discrete payer categories, such as managed care), total charges, and length of stay.
In addition to these two principal data sources, the study used data from the 1997 and 1998 American Hospital Association's (AHA) Annual Survey of Hospitals, the 1998 Bureau of Health Professions' Area Resource File, and two proprietary datasets compiled by Solucient Inc. The AHA Annual Survey is administered in the fourth quarter of every year to all AHA registered and nonregistered facilities. The Area Resource File supplies county-specific data on an annual basis for numerous market and demographic factors, making it a rich source of information about the local operating environments of hospitals. Solucient Inc. supplied financial performance ratios for each hospital derived from the 1997 and 1998 Medicare Cost Reports. Solucient Inc. also provided information on county-level insurance coverage for six types of insurance, making possible the construction of managed care penetration measures. Solucient uses multiple sources of insurance coverage data and various models to construct these local estimates. These sources include: Population and Income Demographic Estimates, Claritas Inc.; CMS Medicaid State summaries; County Surveyor Data, Interstudy; health maintenance organization (HMO) county files, Interstudy; State Medicaid agency data, Interstudy; Oregon Department of Human Services; CMS Medicare TEFRA county files, Interstudy; and Current Population Survey (CPS).
Our study focuses on community hospitals located in the U.S. Therefore, from the 2,300 hospitals that responded to the 1997 QI survey, we eliminated federal hospitals, specialty hospitals, and hospitals located in U.S. territories. We also eliminated hospitals that responded to the QI survey but lacked a Medicare (HCFA) identification number or any MEDPAR discharges. These screening procedures generated a useable sample of 1,784 hospitals. Multivariate logistic regression analysis showed that sample hospitals were more likely than nonsample hospitals to be larger and system-affiliated, but less likely to be investor-owned or located in the East South Central and East North Central census regions (see Appendix A, Tables 1C and 1D).
Independent Variables. Scope of a hospital's QI effort was measured by four variables. The extent of organizational deployment was defined as the average level of hospital unit involvement in the QI efforts, measured on a five-point scale that ranged from (1) "not at all" to (5) "very great extent". The seven possible units included acute inpatient care, outpatient clinics, major physician offices or group practices, home health agencies, owned or affiliated nursing homes, owned or affiliated ambulatory surgery centers, and owned or affiliated hospices. Factor analysis supported the construction of a single scale, which showed reasonably good reliability (α = 0.73). The remaining three variables indicated the percentage of hospital senior managers participating in QI teams, the percentage of hospital staff (total full time equivalents) participating in QI teams, and the percentage of physicians participating in QI teams.
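The reliability coefficient reported above (α = 0.73) is Cronbach's alpha, which compares the sum of the item variances with the variance of the summed scale. The following sketch shows the computation on synthetic seven-item ratings; the simulated data are illustrative and are not the survey's responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 unit-involvement ratings for 7 units (not the survey data):
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(200, 1))          # shared "deployment" signal
ratings = np.clip(np.round(latent + rng.normal(0, 1, size=(200, 7))), 1, 5)
print(round(cronbach_alpha(ratings), 2))
```

When items share a common signal, as simulated here, alpha approaches 1; uncorrelated items drive it toward 0.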
Dependent Variables. Analysis focused on six hospital-level Quality Indicators developed for the Agency for Healthcare Research and Quality (AHRQ) by the University of California San Francisco-Stanford Evidence-Based Practice Center (UCSF-Stanford EPC). Quality Indicators are measures that use discharge abstracts to identify or signal potential quality problems. AHRQ Quality Indicator development followed the Institute of Medicine's definition of quality, which is "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge" (Institute of Medicine 2001). AHRQ Quality Indicators include measures of inpatient mortality for medical conditions and procedures; measures of utilization for procedures for which there are questions of overuse, underuse, or misuse; and measures of volume for procedures for which evidence exists that a higher volume is associated with lower mortality.
AHRQ developed the Quality Indicators to address several limitations of the Health Care Utilization Project (HCUP) Quality Indicators, including the lack of severity or risk adjustment and the absence of population-based denominators (Agency for Healthcare Research and Quality 2004d). The UCSF-Stanford EPC used an extensive process for identifying, refining, risk-adjusting, and evaluating the AHRQ Quality Indicators (Agency for Healthcare Research and Quality 2004a). Appendix B provides a brief description of this process. Although AHRQ developed the Quality Indicators for the purposes of national tracking and QI, some public and private purchasers and data organizations have used them for public reporting and pay-for-performance initiatives (Agency for Healthcare Research and Quality 2004b).
We selected six AHRQ Quality Indicators based on their favorable performance on four empirical tests of precision and five empirical tests of minimum bias (see Appendix B). Five indicators focused on inpatient procedures or conditions for which mortality rates have been shown to vary substantially across institutions and for which evidence suggests that high mortality, at least in part, may be associated with deficiencies in quality of care. These indicators were inpatient hospital mortality for CABG, acute myocardial infarction, congestive heart failure, stroke, and pneumonia. In addition, one indicator (bilateral catheterization) focused on a procedure whose use varies significantly across hospitals, and for which evidence suggests that a high rate of use represents inappropriate care. Right-sided catheterization coincidental with left-sided catheterization yields little clinically useful information and, hence, is contraindicated in most patients without specific indications (Agency for Healthcare Research and Quality 2004c). We constructed each Quality Indicator as a 2-year average (1997 and 1998) to smooth short-term fluctuations.
Control Variables. We included four categories of covariates expected to relate to QI scope, hospital quality indicators, or both: market characteristics, hospital characteristics, accreditation and quality standards, and length of QI experience. In terms of market characteristics, we included three competition variables: a variable-radius Herfindahl index of market concentration, defined as the sum of the squared market shares of the hospitals within a radius capturing 75 percent of a hospital's patient admissions in 1997 (Gresenz, Rogowski, and Escarce 2004); the self-reported number of other hospitals with which the hospital directly competes for patients on either an inpatient or outpatient basis; and the self-reported intensity of competition for patients among hospitals in the market, measured on a seven-point scale that ranged from (1) "not at all intense" to (7) "highly intense". We also included two managed care variables: managed care penetration, defined as the percentage of the total insured population in a county enrolled in a private risk, Medicare risk, or Medicaid risk insurance product in 1997; and the self-reported percentage of patients for which the hospital is paid on a capitated, negotiated per case rate, or discounted basis (excluding Medicare and Medicaid).
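The Herfindahl index described above is simply the sum of squared market shares across the hospitals in the market radius. A minimal sketch, with admission counts that are illustrative rather than drawn from the study data:

```python
def herfindahl(admissions):
    """Herfindahl index from hospital admission counts within a market radius.

    Shares are each hospital's fraction of total admissions; the index is the
    sum of squared shares (1.0 = monopoly, approaching 0 = fragmented market).
    """
    total = sum(admissions)
    shares = [a / total for a in admissions]
    return sum(s * s for s in shares)

# Four hospitals within the radius capturing 75% of admissions (hypothetical):
print(round(herfindahl([4000, 3000, 2000, 1000]), 2))
```

Two equally sized hospitals yield 0.5; a single hospital yields 1.0, the maximum concentration.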
In terms of hospital characteristics, we included seven variables indicating the hospital's structural complexity. These included a binary variable indicating whether the hospital belonged to the Council of Teaching Hospitals (COTH); a binary variable indicating whether the hospital was owned, leased, or sponsored by a health care system or health network; a binary variable indicating whether the hospital had its own governing board; three binary variables indicating whether a hospital had developed--on its own or through a health system, health network, or joint venture with an insurer--an HMO, preferred provider organization, or indemnity product (FFS); and a variable indicating the number of physician arrangements in which the hospital participates, either on its own or through a health system or health network. Eight physician arrangements were examined (e.g., independent practice association, integrated salary model).
Other hospital characteristics included (1) the number of registered nurse FTEs per inpatient day, divided by 1,000; (2) a measure of hospital volume: the number of inpatient surgeries performed in the last 12 months, divided by 1,000; (3) two binary variables indicating whether the hospital was public (nonfederal) or investor-owned; (4) a measure of hospital financial health: the 2-year average cash flow margin, defined as the net available for debt service (i.e., net patient revenue + total other income - total operating expense - total other expense + depreciation and amortization + interest expense) divided by the sum of net patient revenue and total other income, multiplied by 100; and (5) two hospital service-mix variables: total outpatient visits, including emergency room visits and outpatient surgeries, adjusted by hospital bed size and divided by 1,000; and the ratio of the number of outpatient services offered by the hospital to the number of inpatient services offered by the hospital. For the latter, 25 services listed in the AHA Annual Survey were designated outpatient services; 47 were designated inpatient services.
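The cash flow margin definition above can be made concrete with a short sketch; the dollar figures are illustrative, not drawn from the study's Medicare Cost Report data:

```python
def cash_flow_margin(net_patient_rev, other_income, operating_exp,
                     other_exp, depreciation_amortization, interest_exp):
    """Cash flow margin (%): net available for debt service over total revenue.

    Inputs are assumed to be 2-year averages, matching the study's definition.
    """
    available_for_debt_service = (net_patient_rev + other_income
                                  - operating_exp - other_exp
                                  + depreciation_amortization + interest_exp)
    return 100 * available_for_debt_service / (net_patient_rev + other_income)

# Hypothetical figures in $ millions:
print(cash_flow_margin(90.0, 10.0, 85.0, 5.0, 4.0, 2.0))
```

Adding back depreciation, amortization, and interest converts the accounting margin into a cash-flow measure of the funds available to service debt.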
With respect to accreditation and quality standards, we included three variables that indicated the self-reported influence of the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO), the Foundation for Accountability, and the National Committee for Quality Assurance on the hospital's QI effort. These variables were measured on a five-point scale that ranged from (1) "no influence" to (5) "very great influence".
Finally, we controlled for number of years since hospital first became involved in QI. We defined "involved" as the first training of organizational members in QI principles and methods or the substantive investment of top management's time in organizing QI. We used a square-root transformation to correct for positive skew.
Table 1 summarizes the descriptive statistics for all study variables.
We address our hypotheses by estimating regression models that relate the hospital-level quality indicators to the QI implementation scope measures, controlling for hospital organizational and financial characteristics and hospital market attributes. While results from such a model may be informative, a concern with this type of modeling is that an endogenous relationship might exist between hospital-level quality indicators and QI implementation scope. For example, poor performing hospitals may be motivated to correct performance problems by adopting initiatives aimed at improving quality of care. Or, some hospitals may have unobserved attributes that predispose them to higher quality of care, and which also increase the likelihood that they will invest in QI activities. If this happens, results would not reflect the true causal effect of QI implementation scope on quality indicators. To examine this possibility, we performed the Hausman test for endogeneity for each of our dependent variables. The test strongly rejected the hypothesis of no endogeneity (p<.0001) in five of six models (CABG mortality proved the exception). Based on these results, we took an instrumental variables (IV) estimation approach.
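One common way to carry out the endogeneity test described above is the control-function (Durbin-Wu-Hausman) form: regress the suspect variable on the instruments, then add its first-stage residuals to the outcome regression and test whether their coefficient differs from zero. The sketch below illustrates the logic on simulated data; the variable names and the single instrument are hypothetical stand-ins, not the study's actual measures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                         # instrument (e.g., QI leadership)
u = rng.normal(size=n)                         # unobserved hospital attribute
qi_scope = 0.8 * z + u + rng.normal(size=n)    # endogenous: shares u with outcome
quality = 1.0 * qi_scope + 2.0 * u + rng.normal(size=n)

def ols(y, cols):
    """OLS with an intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Stage 1: regress the endogenous regressor on the instrument.
_, v_hat = ols(qi_scope, [z])
# Control function: add the first-stage residuals to the outcome regression.
beta, resid = ols(quality, [qi_scope, v_hat])
# t-statistic on the residual term; a large value signals endogeneity.
Xcf = np.column_stack([np.ones(n), qi_scope, v_hat])
sigma2 = (resid @ resid) / (n - 3)
se = np.sqrt(sigma2 * np.linalg.inv(Xcf.T @ Xcf).diagonal())
t_resid = beta[2] / se[2]
print(round(t_resid, 1))
```

Because the simulation builds the confounder u into both equations, the residual term is strongly significant, mirroring the rejection of no endogeneity reported above.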
Estimation using IV provides a means of determining the extent to which any such bias exists, and, if necessary, correcting for it. To implement the IV technique, we first identified a set of "instruments," that is, variables that are hypothesized to influence the level of a hospital's QI activity, but not directly influence quality of care. We selected potential instruments by reviewing research on the organizational and environmental factors associated with QI adoption and implementation. Prior studies show that QI implementation varies as a function of organizational receptivity to QI, appropriate and sustained leadership, measurement and information systems, and appropriate levels of funding (Dean and Bowen 1994; Hackman and Wageman 1995; Powell 1995; Weiner, Alexander, and Shortell 1996; Weiner, Shortell, and Alexander 1997). Based on these findings, and the data available to us, we identified 10 potential instruments that we hypothesized would influence the level of a hospital's QI activity, but not directly influence quality of care. These instruments included three measures of leadership for QI, four measures of hospital infrastructure for QI, and three measures of resources for QI (see Appendix C). In order to evaluate whether these instruments presented possible alternative pathways for "management concern for quality" to affect hospital quality of care, we conducted several MEDLINE searches to identify organizational predictors of hospital performance on quality indicators. Based on this review, we included additional variables in our models to control for potential unobserved determinants. For example, we added as a control variable the number of registered nurses per 1,000 patient days in order to statistically control for the possibility that "management concern for quality" could affect hospital quality of care through nursing staff quality.
We tested the predictive power of the instruments by regressing QI implementation scope variables first on the control variables and then on the controls and instruments (Bowden and Turkington 1990). We used these results to evaluate the IV set on three criteria: incremental contribution to variance explained, consistency of statistical significance across models, and consistency of direction of effects. We found that adding the set of IVs resulted in statistically significant increases in the variance explained, with changes in R² ranging from a low of 0.04 for physician participation in QI teams to a high of 0.06 for hospital staff participation in QI teams. Moreover, the F-statistics for the joint statistical significance tests ranged from 9.71 for physician participation in QI teams (p<.001) to 14.28 for hospital staff participation in QI teams (p<.001). Finally, as reported below under Results, both the second and third criteria were substantially met. These results provide at least partial evidence that our proposed IV set meets the criteria for estimation using IVs.
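The instrument-relevance check above (incremental R² and a joint F-test on the instrument set) can be sketched as follows, with simulated controls and instruments standing in for the survey measures:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 1500, 3                                 # q candidate instruments
controls = rng.normal(size=(n, 4))
instruments = rng.normal(size=(n, q))
# Hypothetical QI scope measure driven by both controls and instruments:
scope = (controls @ np.array([0.2, -0.1, 0.3, 0.0])
         + instruments @ np.array([0.25, 0.2, 0.15])
         + rng.normal(size=n))

def r_squared(y, X):
    """R-squared of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_restricted = r_squared(scope, controls)
r2_full = r_squared(scope, np.column_stack([controls, instruments]))
k_full = 1 + 4 + q                             # parameters in the full model
f_joint = ((r2_full - r2_restricted) / q) / ((1 - r2_full) / (n - k_full))
print(round(r2_full - r2_restricted, 3), round(f_joint, 1))
```

The incremental R² measures how much the instruments add beyond the controls, and the F-statistic tests their joint significance, the same two quantities reported for the study's first-stage models.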
We then estimated the models using a two-stage IV approach. The first stage instrumented the QI scope variables as a function of leadership for QI, hospital infrastructure for QI, and resources for QI. The second-stage model included both the predicted values (instruments) of the QI scope variables and the control (exogenous) variables in predicting each of the six hospital-level quality indicators. Results of both the first- and second-stage models are presented below.
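The two-stage procedure itself can be sketched on simulated data: stage one predicts the endogenous scope variable from the instrument and the exogenous control, and stage two regresses the quality indicator on those predicted values plus the control. This is an illustrative sketch under assumed data-generating values, not the study's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x_ctrl = rng.normal(size=n)                    # exogenous control variable
z = rng.normal(size=n)                         # instrument
u = rng.normal(size=n)                         # confounder
scope = 0.7 * z + 0.3 * x_ctrl + u + rng.normal(size=n)
quality = 1.5 * scope + 0.5 * x_ctrl + 2.0 * u + rng.normal(size=n)

def fit(y, X):
    """OLS with an intercept; returns coefficients and fitted values."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta

# Naive OLS is biased upward because u drives both scope and quality.
beta_ols, _ = fit(quality, np.column_stack([scope, x_ctrl]))
# Stage 1: predict scope from the instrument and the exogenous control.
_, scope_hat = fit(scope, np.column_stack([z, x_ctrl]))
# Stage 2: replace scope with its predicted values.
beta_iv, _ = fit(quality, np.column_stack([scope_hat, x_ctrl]))
print(round(beta_ols[1], 2), round(beta_iv[1], 2))
```

Because the instrument shifts scope but is unrelated to the confounder, the second-stage coefficient recovers the true effect (1.5 here) while naive OLS overstates it.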
Table 2 shows the results of the first-stage model in which we regressed the QI implementation scope variables on the control variables and instruments. Seven of the 10 proposed instruments exhibited statistically significant relationships with at least two measures of QI implementation scope. The three proposed instruments that did not meet this criterion were board activity in QI, integrated database, and total expenses for QI. With the exception of total expenses for QI activities, these variables were, however, significantly associated with at least one of the QI scope measures. Those proposed instruments that achieved statistical significance generally displayed effects in the predicted direction. Only hospital size and its squared term failed to demonstrate such consistency across models. With respect to the control variables, we see some evidence that market competition is negatively associated with QI implementation scope. Hospitals in more concentrated markets exhibited greater proportional participation of physicians in QI teams and greater involvement of hospital units in QI, although the latter effect achieves only marginal significance (p<.10). Conversely, hospitals in markets with higher managed care penetration exhibited less involvement of hospital units in QI and less proportional physician participation in QI teams. Further, length of hospital involvement in QI was positively related to hospital staff, senior manager, and physician participation in QI teams.
Table 3 presents the results of the two-stage IV models. Hypothesis 1 predicted that greater hospital unit involvement in QI would be positively associated with higher values on quality of care indicators (e.g., lower adjusted mortality). Our results do not support this hypothesis. For five of the six quality indicator models, the parameter estimates for the instrumented version of hospital unit involvement in QI are statistically significant, but in the direction opposite to that predicted. That is, those hospitals that reported a higher average level of involvement of seven hospital units in hospital QI efforts exhibited poorer values on hospital-level quality indicators. Hospital unit involvement in QI did not exhibit a statistically significant association with CABG mortality, possibly because this model possessed less statistical power than the others.
Hypotheses 2-4 predicted, respectively, that the greater the percentage of hospital staff, senior managers, and physicians participating in QI teams, the better the scores on hospital quality indicators. Results of the two-stage IV models are generally supportive of Hypotheses 2 and 3. The percentage of hospital staff participating in QI teams showed statistically significant, positive associations with four of the six hospital-level quality indicators. The percentage of senior managers participating in QI teams showed statistically significant associations with three of the six hospital quality indicators. However, the percentage of physicians participating in QI teams showed no statistically significant positive association with any of the six hospital-level quality indicators. Thus, Hypothesis 4 did not receive support.
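The logic of the two-stage IV estimation behind these results can be illustrated with a minimal two-stage least squares (2SLS) sketch on simulated data. All names and parameter values here are hypothetical; the point is only to show why instrumenting matters when an unobserved factor (here labeled "need") drives both QI implementation scope and outcomes, which is the endogeneity concern this design addresses.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

def ols(X, y):
    # ordinary least squares coefficients via least-squares solve
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical setup: an unobserved factor ("need") raises both QI scope and
# mortality, so OLS on qi_scope is confounded; z1/z2 shift scope only.
need = rng.normal(size=n)
control = rng.normal(size=n)
z1, z2 = rng.normal(size=n), rng.normal(size=n)
qi_scope = 0.7 * z1 + 0.4 * z2 + 0.3 * control + need + rng.normal(size=n)
mortality = -0.5 * qi_scope + 0.2 * control + 2.0 * need + rng.normal(size=n)

# Stage 1: project the endogenous regressor on instruments + controls.
Z = np.column_stack([np.ones(n), control, z1, z2])
scope_hat = Z @ ols(Z, qi_scope)

# Stage 2: replace qi_scope with its stage-1 fitted values.
b_iv = ols(np.column_stack([np.ones(n), control, scope_hat]), mortality)

# Naive OLS for comparison: biased upward (here even sign-flipped) by "need".
b_ols = ols(np.column_stack([np.ones(n), control, qi_scope]), mortality)
print(round(b_iv[2], 1), round(b_ols[2], 1))
```

In this simulation the 2SLS estimate recovers the true (negative) effect of scope on mortality, while naive OLS is pushed in the opposite direction by the confounder; note that published 2SLS software also corrects the second-stage standard errors, which this sketch omits.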
This study assessed the association between one aspect of hospital QI activity, namely the scope of QI implementation, and several hospital-level indicators of clinical quality. To date, studies making use of hospital-level quality indicators have focused on identifying and describing differences among hospitals that might be indicative of potential quality problems. Relatively little research has focused on the possible determinants of such differences. This study represents one of the first attempts to systematically link management and clinical efforts to improve quality of care in the hospital setting with hospital-level quality indicators.
Study results generally support the proposition that the scope of QI implementation in hospitals is significantly associated with hospital-level quality indicators. However, the direction of the association varies across different measures of QI implementation scope. Consistent with expectations, hospitals that reported a higher percentage of hospital staff and senior managers participating in QI teams also exhibited higher values on hospital-level quality indicators. Contrary to expectations, hospitals that reported more extensive involvement of multiple hospital units in hospital QI efforts exhibited poorer values on hospital-level quality indicators.
Three interrelated interpretations might account for the negative association between hospital unit involvement in QI and hospital-level quality indicators. First, the effectiveness of a hospital's QI effort could suffer from a dilution of focus if extensive involvement by disparate hospital units (e.g., acute inpatient care units, outpatient clinics, and home health agencies) leads the hospital to engage in a diverse array of potentially unrelated QI projects. Under such conditions, senior leaders and front-line organizational members might find it difficult to give each QI project the attention it deserves. Second, extensive involvement by multiple hospital units might spread a hospital's QI resources too thinly. QI efforts often require staffing expertise and financial resources to support the systematic study and continuous improvement of work processes. Hospitals might exhibit little or no improvement in hospital quality indicators if QI projects do not receive adequate technical and financial support. Finally, extensive involvement by multiple units could yield little or no improvement in hospital-level quality indicators if QI projects work at cross purposes because of poor coordination or inappropriate sequencing.
The divergent pattern of findings for our QI implementation scope measures has implications for QI research. Specifically, our findings suggest the need for further theoretical refinement of the construct of implementation scope. QI implementation appears to be a multidimensional construct, and future research would profit from conceptually and empirically distinguishing the diffusion of QI across organizational units from the mobilization of organizational members for QI. Differentiating the construct in this way could stimulate theory development and, by encouraging consistency in measurement, promote greater cumulativeness of empirical research.
Our divergent pattern of findings also has implications for QI practice. As noted earlier, quality experts emphasize that QI requires organization-wide commitment and involvement because most, if not all, vital work processes span many individuals, disciplines, and departments. Debate exists, however, about the best approach for encouraging organization-wide participation in QI. Some favor building a "critical mass" by providing training in QI philosophy and methods to many organizational members at the outset and then initiating a broad array of QI projects across the organization. Others favor providing "just-in-time" training for organizational members and focusing selected groups of organizational members on a few strategically important clinical quality issues. Since studies to date have not adequately accounted for differences in QI implementation, existing research offers little guidance on this issue.
Our study results suggest, however, that a blended approach might prove most useful. That is, hospitals might exhibit higher values on hospital-level quality indicators by encouraging many organizational members to participate in QI activities, yet limiting hospital deployment of QI to a few organizational units. Intriguingly, results indicate that greater participation of hospital staff and senior managers in QI teams is positively associated with higher values on several hospital-level quality indicators, not just one or two. Perhaps intensive mobilization of organizational personnel within organizational units (e.g., acute inpatient care) creates the "critical mass" necessary to overcome the structural, cultural, and technical barriers that often obstruct organization-wide application of QI or otherwise restrict the gains from QI activity to a few clinical outcomes.
The finding of no statistically significant associations between physician participation in QI teams and hospital-level quality indicators speaks to another practical issue: the role of physicians in clinical QI efforts. Many believe that lack of physician involvement represents the single most important obstacle to the success of clinical QI (Berwick, Godfrey, and Roessner 1990; Health Care Advisory Board 1992; McLaughlin and Kaluzny 1994; Blumenthal and Edwards 1995; Shortell 1995). Physicians play a central role in clinical resource allocation decisions and possess the clinical expertise needed to differentiate appropriate from inappropriate variation in care processes. Yet, reports indicate that physicians are reluctant to participate in QI projects because of distrust of hospital motives, lack of time, and fear that reducing variation in clinical processes will compromise their ability to vary care to meet individual needs (Blumenthal and Edwards 1995; Shortell 1995; Shortell et al. 1995a). Study results suggest that widespread physician participation in QI teams, while perhaps desirable, might not be necessary. Widespread participation of hospital staff and senior managers, it seems, is more important, at least for the hospital-level quality indicators examined here. Rather than attempting to mobilize much of the medical staff, hospital leaders could perhaps secure needed physician input by involving selected physicians on an as-needed basis.
Three important study limitations should be noted. First, because our analysis required merging several existing databases, our sample is not statistically representative of the population of U.S. community hospitals. Hence, while our sample appears broadly representative of the population with respect to several organizational and environmental characteristics, we cannot discount the possibility of sampling bias, and caution should be exercised in generalizing our findings to any specific hospital population.
Second, dealing with the potential for endogeneity in cross-sectional studies like this one can be difficult. We used the IV approach to address this issue, but IV estimation is not without difficulty. In particular, in this study, as in many studies of this type, it is difficult to find perfect instruments. We believe the instruments we used were likely to produce useful results, and our statistical tests were consistent with their validity; nevertheless, if our instruments were imperfect, the IV estimates may also be inaccurate. Further work, either with alternative instruments or with study designs that address endogeneity in other ways (e.g., analysis of changes over time as hospitals adopt QI), could help clarify these issues.
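One of the statistical checks alluded to above, a test of overidentifying restrictions, can be sketched as follows. This is a generic Sargan-style test on simulated data, not the specific diagnostics used in this study; with more instruments than endogenous regressors, n times the R-squared from regressing the 2SLS residuals on the instruments is asymptotically chi-square (here with 1 degree of freedom) under the null that the instruments are valid.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

def ols(X, y):
    # ordinary least squares coefficients via least-squares solve
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Simulated example with two VALID instruments for one endogenous regressor.
u = rng.normal(size=n)                               # unobserved confounder
z1, z2 = rng.normal(size=n), rng.normal(size=n)
x = 0.8 * z1 + 0.5 * z2 + u + rng.normal(size=n)     # endogenous regressor
y = 1.0 * x + 2.0 * u + rng.normal(size=n)

# 2SLS (overidentified: two instruments, one endogenous regressor).
Z = np.column_stack([np.ones(n), z1, z2])
x_hat = Z @ ols(Z, x)
b_iv = ols(np.column_stack([np.ones(n), x_hat]), y)

# Sargan test: regress the 2SLS residuals (built with the *actual* x) on the
# instruments; n * R^2 is asymptotically chi-square(1) if the instruments are valid.
resid = y - np.column_stack([np.ones(n), x]) @ b_iv
fitted = Z @ ols(Z, resid)
r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
sargan = n * r2
print(round(sargan, 2))  # compare with chi-square(1) critical values (3.84 at p=.05)
```

A failure to reject in such a test is consistent with instrument validity but, as noted above, cannot prove it, which is why alternative designs remain valuable.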
Finally, we explicitly acknowledge limits of our ability to measure actual quality of care. The literature on quality of care clearly underscores the difficulty of measuring quality in a valid, reliable fashion. Each approach to assessing quality is subject to criticism, regardless of whether measures derive from discharge abstract databases or from patient chart abstracts. We have been careful to not suggest that quality indicators derived from discharge data abstracts are the only measures, or even the best measures, of quality. Moreover, we scrupulously avoided using the term "quality of care" in describing our dependent variables, instead emphasizing the term "quality indicators." By considering our measures as indicative of possible problems of overuse, underuse or misuse of health services, rather than quality of care per se, we sought to take a conservative position on these issues.
Despite these limitations, we believe that the breadth and depth of our hospital QI data, coupled with reliable, validated quality indicators, represents an advance over previous small-sample studies of hospital QI and provides a solid basis for subsequent research. Multiple stakeholders--from community members and patients to employers and purchasers--are demanding data and evidence from providers regarding the effectiveness of care. Despite a shift in attention toward clinical outcomes, there has not been a commensurate shift in efforts to examine why variations in clinical outcomes exist, and perhaps more importantly, what organizational practices and procedures are associated with improved quality indicators at an institutional level. The present study provides multiple stakeholders with information about the relationship of one aspect of hospital QI activity to hospital-level quality indicators. Given recent changes in JCAHO measurement requirements and Medicare reporting systems, a contemporary replication of this study would be valuable. In addition, more research is needed in order to investigate the association between other aspects of hospital QI activity (e.g., extensive use of outcomes data and statistical tools) and hospital-level quality indicators, and also to identify the organizational and market conditions under which specific QI practices affect quality indicators.
This research was supported by grant no. 5 R01 HS11317-02 by the Agency for Healthcare Research and Quality (AHRQ). Reviewed and approved by Institutional Review Board, School of Public Health, University of North Carolina at Chapel Hill (IRB 01-1257; approved 1/29/01). There are no disclosures or disclaimers.
The following supplementary material for this article is available online:
APPENDIX A. Description of Survey Respondents and Usable Sample Relative to Nonrespondents, Nonsample Hospitals, and Population of U.S. Hospitals.
APPENDIX B. AHRQ Quality Indicator Development, Risk-Adjustment, and Evaluation.
APPENDIX C. Instrumental Variables Selection.
Agency for Healthcare Research and Quality. 2004a. "AHRQ Quality Indicators" [accessed on October 20, 2004]. Available at http://www.qualityindicators.ahrq.gov/documentation.htm
--. 2004b. Guidance for Using the AHRQ Quality Indicators for Hospital-Level Public Reporting or Payment. Rockville, MD: Agency for Healthcare Research and Quality.
--. 2004c. Guide to Inpatient Quality Indicators, AHRQ Publication Number 02-R0204. Rockville, MD: Agency for Healthcare Research and Quality.
--. 2004d. "HCUP Quality Indicators Archive" [accessed on October 20, 2004]. Available at http://www.qualityindicators.ahrq.gov/hcup_archive.htm
Barsness, Z. I., S. M. Shortell, R. R. Gillies, E. F. X. Hughes, J. L. O'Brien, D. Bohr, C. Izui, and P. Kralovec. 1993a. "The Quality March: National Survey of Hospital Quality Improvement Activities." Hospital & Health Networks 67 (24): 40-2.
--. 1993b. "The Quality March: National Survey Profiles Quality Improvement Activities." Hospital & Health Networks 67 (23): 52-5.
Berlowitz, D. R., G. J. Young, E. C. Hickey, D. Saliba, B. S. Mittman, E. Czarnowski, B. Simon, J. J. Anderson, A. S. Ash, L. V. Rubenstein, and M. A. Moskowitz. 2003. "Quality Improvement Implementation in the Nursing Home." Health Services Research 38 (1, part 1): 65-83.
Berwick, D. M., A. B. Godfrey, and J. Roessner. 1990. Curing Health Care: New Strategies for Quality Improvement. San Francisco: Jossey-Bass.
Blumenthal, D., and J. N. Edwards. 1995. "Involving Physicians in Total Quality Management: Results of a Study." In Improving Clinical Practice: Total Quality Management and the Physician, edited by D. Blumenthal and A. C. Scheck, pp. 229-66. San Francisco: Jossey-Bass.
Blumenthal, D., and C. M. Kilo. 1998. "A Report Card on Continuous Quality Improvement." Milbank Quarterly 76 (4): 625-48, 511.
Bowden, R.J., and D. A. Turkington. 1990. Instrumental Variables. New York: Cambridge University Press.
Bradley, E. H., J. Herrin, J. A. Mattera, E. S. Holmboe, Y. Wang, P. Frederick, S. A. Roumanis, M. J. Radford, and H. M. Krumholz. 2005. "Quality Improvement Efforts and Hospital Performance: Rates of Beta-Blocker Prescription after Acute Myocardial Infarction." Medical Care 43 (3): 282-92.
Bradley, E. H., E. S. Holmboe, J. A. Mattera, S. A. Roumanis, M. J. Radford, and H. M. Krumholz. 2001. "A Qualitative Study of Increasing Beta-Blocker Use after Myocardial Infarction: Why Do Some Hospitals Succeed?" Journal of the American Medical Association 285 (20): 2604-11.
Brook, R. H., C. J. Kamberg, A. Mayer-Oakes, M. H. Beers, K. Raube, and A. Steiner. 1990. "Appropriateness of Acute Medical Care for the Elderly: An Analysis of the Literature." Health Policy 14 (3): 225-42.
Caputo, R. P., K. K. Ho, R. C. Stoler, C. A. Sukin, J. J. Lopez, D. J. Cohen, R. E. Kuntz, A. Berman, J. P. Carrozza, and D. S. Baim. 1997. "Effect of Continuous Quality Improvement Analysis on the Delivery of Primary Percutaneous Transluminal Coronary Angioplasty for Acute Myocardial Infarction." American Journal of Cardiology 79 (9): 1159-64.
Carlin, E., R. Carlson, and J. Nordin. 1996. "Using Continuous Quality Improvement Tools to Improve Pediatric Immunization Rates." Journal on Quality Improvement 22 (4): 277-88.
Carman, J. M., S. M. Shortell, R. W. Foster, E. F. X. Hughes, H. Boerstler, J. L. O'Brien, and E. J. O'Conner. 1996. "Keys for Successful Implementation of Total Quality Management in Hospitals." Health Care Management Review 21 (1): 48-60.
Dean, J. W., and D. E. Bowen. 1994. "Management Theory and Total Quality Management: Improving Research and Practice through Theory Development." Academy of Management Review 19 (3): 459-80.
Deming, W. E. 1986. Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology Center for Advanced Engineering Study.
Donaldson, M. S., and J. J. Mohr. 2000. Exploring Innovation and Quality Improvement in Health Care Microsystems. Washington, DC: Institute of Medicine.
Dubois, R. W., and R. H. Brook. 1988. "Preventable Deaths: Who, How Often, and Why?" Annals of Internal Medicine 109 (7): 582-9.
Ellerbeck, E. F., T. F. Kresowik, R. A. Hemann, P. Mason, R. T. Wiblin, and T. A. Marciniak. 2000. "Impact of Quality Improvement Activities on Care for Acute Myocardial Infarction." International Journal of Quality and Health Care 12 (4): 305-10.
Ferguson, T. B. Jr., E. D. Peterson, L. P. Coombs, M. C. Eiken, M. L. Carey, F. L. Grover, and E. R. DeLong. 2003. "Use of Continuous Quality Improvement to Increase Use of Process Measures in Patients Undergoing Coronary Artery Bypass Graft Surgery: A Randomized Controlled Trial." Journal of the American Medical Association 290 (1): 49-56.
Ferlie, E. B., and S. M. Shortell. 2001. "Improving the Quality of Health Care in the United Kingdom and the United States: A Framework for Change." Milbank Quarterly 79 (2): 281-315.
Gallivan, M. J. 2001. "Information Technology Diffusion: A Review of Empirical Research." Database for Advances in Information Systems 32:51-85.
Gilman, S. C., and J. C. Lammers. 1995. "Tool Use and Team Success in CQI: Are All Tools Created Equal?" Quality Management in Health Care 4 (1): 56-61.
Gilutz, H., A. Battler, I. Rabinowitz, Y. Snir, A. Porath, and G. Rabinowitz. 1998. "The 'Door-to-Needle Blitz' in Acute Myocardial Infarction: The Impact of a CQI Project." Joint Commission Journal on Quality Improvement 24 (6): 323-33.
Goldberg, H. I., E. H. Wagner, S. D. Fihn, D. P. Martin, C. R. Horowitz, D. B. Christensen, A. D. Cheadle, A. D. Diehr, and G. Simon. 1998. "A Randomized Controlled Trial of CQI Teams and Academic Detailing: Can They Alter Compliance with Guidelines?" Joint Commission Journal on Quality Improvement 24 (3): 130-42.
Gordian, M. E., and J. E. Ballard. 1997. "Expanding the Use of Thrombolytic Therapy in the Treatment of Acute Myocardial Infarction in Rural Alaskan Hospitals." Alaska Medicine 39 (2): 43-6.
Gresenz, C. R., J. Rogowski, and J. J. Escarce. 2004. "Updated Variable-Radius Measures of Hospital Competition." Health Services Research 39 (2): 417-30.
Gustofson, D., L. Risburg, D. Gering, A. S. Hundt, S. Escamilla, and M. Wenneker. 1997. Case Studies from the Quality Improvement Support System. Rockville, MD: Agency for Health Care Policy and Research.
Hackman, J. R., and R. Wageman. 1995. "Total Quality Management: Empirical, Conceptual, and Practical Issues." Administrative Science Quarterly 40 (2): 309-42.
Halm, E. A., C. Horowitz, A. Silver, A. Fein, Y. D. Dlugacz, B. Hirsch, and M. R. Chassin. 2004. "Limited Impact of a Multicenter Intervention to Improve the Quality and Efficiency of Pneumonia Care." Chest 126 (1): 100-7.
Health Care Advisory Board. 1992. TQM: Directory of Hospital Projects. Washington, DC: The Advisory Board Company.
Helfrich, C. D. 2004. Exploring a Model of Innovation Implementation: Cancer Prevention and Control Trials in Community Clinical Oncology Program Research Bases. Chapel Hill, NC: Department of Health Policy and Administration, University of North Carolina at Chapel Hill.
Holahan, P. J., Z. H. Aronson, M. P. Jurkat, and F. D. Schoorman. 2004. "Implementing Computer Technology: A Multi-Organizational Test of Klein and Sorra's Model." Journal of Engineering and Technology Management 21: 31-50.
Houston, S., L. O. Gentry, V. Pruitt, T. Dao, F. Zabaneh, and J. Sabo. 2003. "Reducing the Incidence of Nosocomial Pneumonia in Cardiovascular Surgery Patients." Quality Management in Health Care 12 (1): 28-41.
Institute of Medicine. 2000. To Err Is Human. Washington, DC: National Academy Press.
--. 2001. Crossing the Quality Chasm. Washington, DC: National Academy Press.
Ishikawa, K. 1985. What Is Total Quality Control? The Japanese Way. Englewood Cliffs, NJ: Prentice-Hall, Inc.
James, B. C. 1989. Quality Management for Health Care Delivery. Chicago: Hospital Research and Educational Trust.
Juran, J. M. 1988. Juran on Planning for Quality. New York: Free Press.
Kinsman, L., E. James, and J. Ham. 2004. "An Interdisciplinary, Evidence-Based Process of Clinical Pathway Implementation Increases Pathway Usage." Lippincotts Case Management 9 (4): 184-96.
Krishnan, R., A. B. Shani, R. M. Grant, and R. Baer. 1993. "In Search of Quality Improvement: Problems of Design and Implementation." Academy of Management Executive 7 (4): 7-20.
Klein, K. J., A. B. Conn, and J. S. Sorra. 2001. "Implementing Computerized Technology: An Organizational Analysis." Journal of Applied Psychology 86 (5): 811-24.
Klein, K. J., and R. S. Ralls. 1995. "The Organizational Dynamics of Computerized Technology Implementation: A Review of the Empirical Literature." In Implementation Management in High Technology, edited by L. R. Gomez-Mejia and M. W. Lawless, pp. 31-79. Greenwich, CT: JAI Press.
Klein, K. J., and J. S. Sorra. 1996. "The Challenge of Implementation." Academy of Management Review 21 (4): 1055-80.
Leape, L. L. 1994. "Error in Medicine." Journal of the American Medical Association 272 (23): 1851-7.
Lurie, J. D., E. J. Merrens, J. Lee, and M. E. Splaine. 2002. "An Approach to Hospital Quality Improvement." Medical Clinics of North America 86 (4): 825-45.
McLaughlin, C. P., and A. D. Kaluzny. 1994. Continuous Quality Improvement in Health Care: Theory, Implementation, and Applications. Gaithersburg, MD: Aspen Publishers, Inc.
Meehan, T. P., M. J. Fine, H. M. Krumholz, J. D. Scinto, D. H. Galusha, J. T. Mockalis, G. F. Weber, M. K. Petrillo, P. M. Houck, and J. M. Fine. 1997. "Quality of Care, Process, and Outcomes in Elderly Patients with Pneumonia." Journal of the American Medical Association 278 (23): 2080-4.
Mehta, R. H., S. Das, T. T. Tsai, E. Nolan, G. Kearly, and K. A. Eagle. 2000. "Quality Improvement Initiative and Its Impact on the Management of Patients with Acute Myocardial Infarction." Archives of Internal Medicine 160 (20): 3057-62.
Meyer, J. A., S. Silow-Carroll, T. Kutyla, L. S. Stepnick, and L. S. Rybowski. 2004. Hospital Quality: Ingredients for Success--Overview and Lessons Learned. New York: Commonwealth Fund.
Mitchell, P. H., and S. M. Shortell. 1997. "Adverse Outcomes and Variations in Organization of Care Delivery." Medical Care 35 (11 suppl): N19-32.
O'Brien, J. L., S. M. Shortell, E. F. Hughes, R. W. Foster, J. M. Carman, H. Boerstler, and E. J. O'Conner. 1995. "An Integrative Model for Organization-Wide Quality Improvement: Lessons from the Field." Quality Management in Health Care 3 (4): 19-30.
O'Connor, G. T., S. K. Plume, E. M. Olmstead, J. R. Morton, C. T. Maloney, W. C. Nugent, F. Hernandez Jr., R. Clough, B. J. Leavitt, L. H. Coffin, C. A. Martin, D. Wennberg, J. D. Birkmeyer, D. C. Charlesworth, D. J. Malenka, H. B. Quinton, and J. F. Kasper. 1996. "A Regional Intervention to Improve the Hospital Mortality Associated with Coronary Artery Bypass Graft Surgery." Journal of the American Medical Association 275 (11): 841-6.
Powell, T. C. 1995. "Total Quality Management as Competitive Advantage: A Review and Empirical Study." Strategic Management Journal 16 (1): 15-37.
Rogers, E. M. 2003. The Diffusion of Innovations. New York: Free Press.
Rollins, D., C. Thomasson, and B. Sperry. 1994. "Improving Antibiotic Delivery Time to Pneumonia Patients: Continuous Quality Improvement in Action." Journal of Nursing Care Quality 8 (2): 22-31.
Scott, I. A., M. D. Coory, and C. M. Harper. 2001. "The Effects of Quality Improvement Interventions on Inhospital Mortality after Acute Myocardial Infarction." Medical Journal of Australia 175 (9): 465-70.
Shortell, S. M. 1995. "Physician Involvement in Quality Improvement: Issues, Challenges, and Recommendations." In Improving Clinical Practice: Total Quality Management and the Physician, edited by D. Blumenthal and A. C. Scheck, pp. 207-17. San Francisco: Jossey-Bass.
Shortell, S. M., C. L. Bennett, and G. R. Byck. 1998. "Assessing the Impact of Continuous Quality Improvement on Clinical Practice: What It Will Take to Accelerate Progress." Milbank Quarterly 76 (4): 593-624, 510.
Shortell, S. M., R. H. Jones, A. W. Rademaker, R. R. Gillies, D. S. Dranove, E. F. Hughes, P. P. Budetti, K. S. Reynolds, and C. F. Huang. 2000. "Assessing the Impact of Total Quality Management and Organizational Culture on Multiple Outcomes of Care for Coronary Artery Bypass Graft Surgery Patients." Medical Care 38 (2): 207-17.
Shortell, S. M., D. Z. Levin, J. L. O'Brien, and E. F. Hughes. 1995a. "Assessing the Evidence on CQI: Is the Glass Half Empty or Half Full?" Hospital & Health Services Administration 40 (1): 4-24.
Shortell, S. M., J. L. O'Brien, J. M. Carman, R. W. Foster, E. F. Hughes, H. Boerstler, and E. J. O'Connor. 1995b. "Assessing the Impact of Continuous Quality Improvement/Total Quality Management: Concept versus Implementation." Health Services Research 30 (2): 377-401.
Tu, G. S., T. P. Meehan, J. M. Fine, Y. Wang, E. S. Holmboe, Z. Mohsenifar, and S. R. Weingarten. 2004. "Which Strategies Facilitate Improvement in Quality of Care for Elderly Hospitalized Pneumonia Patients?" Joint Commission Journal on Quality and Safety 30 (1): 25-35.
Wakefield, B. J., M. A. Blegen, T. Uden-Holman, T. Vaughn, E. Chrischilles, and D. Wakefield. 2001. "Organizational Culture, Continuous Quality Improvement, and Medication Administration Error Reporting." American Journal of Medical Quality 16 (4): 128-34.
Walton, M. 1990. Deming Management at Work. New York: GP Putnam.
Weiner, B. J., J. A. Alexander, and S. M. Shortell. 1996. "Leadership for Quality Improvement in Health Care: Empirical Evidence on Hospital Boards, Managers, and Physicians." Medical Care Research and Review 53 (4): 397-416.
Weiner, B. J., S. M. Shortell, and J. A. Alexander. 1997. "Promoting Clinical Involvement in Hospital Quality Improvement Efforts: The Effects of Top Management, Board, and Physician Leadership." Health Services Research 32 (4): 491-510.
Westphal, J. D., R. Gulati, and S. M. Shortell. 1997. "Customization or Conformity?" Administrative Science Quarterly 42 (2): 366-94.
Zarich, S. W., R. Sachdeva, R. Fishman, M. J. Werdmann, M. Parniawski, L. Bernstein, and M. Dilella. 2004. "Effectiveness of a Multidisciplinary Quality Improvement Initiative in Reducing Door-to-Balloon Times in Primary Angioplasty." Journal of Interventional Cardiology 17 (4): 191-5.
Address correspondence to Bryan J. Weiner, Ph.D., Department of Health Policy and Administration, School of Public Health, 1102-C McGavran-Greenberg Hall, CB 7411, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7411. Jeffrey A. Alexander, Ph.D., is with the Department of Health Management and Policy, University of Michigan School of Public Health, Ann Arbor, MI. Stephen M. Shortell, Ph.D., is with the Division of Health Policy and Management, University of California at Berkeley, Berkeley, CA. Laurence C. Baker, Ph.D., is with the Department of Health Research and Policy, Stanford University, Stanford, CA. Mark Beaker, Ph.D., is with the University of Minnesota, Minneapolis, MN. Jeffrey J. Geppert, Ph.D., is with the National Bureau of Economic Research, Stanford, CA.
Table 1: Descriptive Statistics for Study Variables (Untransformed)

Variable                                     N         Mean         STD
QI scope variables
  Involvement of hospital units in QI      1,751       3.80         1.72
  Percentage of FTEs on QI teams           1,749       0.23         0.22
  Percentage of managers on QI teams       1,749       0.72         0.33
  Percentage of physicians on QI teams     1,749       0.23         0.27
Hospital-level quality indicators
  Inhospital mortality--CABG                 414       0.05         0.03
  Inhospital mortality--AMI                1,762       0.18         0.12
  Inhospital mortality--CHF                1,781       0.05         0.03
  Inhospital mortality--stroke             1,776       0.12         0.06
  Inhospital mortality--pneumonia          1,784       0.09         0.04
  Bilateral catheterization                  843       0.15         0.17
Control variables
  Market concentration                     1,784       0.60         0.35
  Number of hospital competitors           1,749       4.55         4.43
  Hospital competition intensity           1,749       5.00         1.52
  Managed care penetration                 1,783       0.22         0.20
  Pct of patients in managed care          1,749       0.32         0.30
  Teaching hospital status                 1,780       0.25         0.43
  System or network affiliated             1,784       0.61         0.49
  No governing board                       1,784       0.03         0.16
  HMO ownership                            1,784       0.27         0.44
  PPO ownership                            1,784       0.36         0.48
  Indemnity ownership                      1,784       0.12         0.32
  Number of physician arrangements         1,784       1.22         1.29
  Registered nurses per inpatient day      1,784       0.50         2.89
  Number of inpatient surgeries            1,784   2,251.31     3,536.99
  Public, nonfederal ownership             1,784       0.27         0.44
  For-profit ownership                     1,784       0.09         0.29
  Profitability                            1,773      10.61        14.62
  Outpatient/inpatient ratio               1,784       0.49         0.22
  Outpatient visits (adjusted by beds)     1,784      60.50       514.22
  Perceived influence of FAACT             1,749       1.19         0.54
  Perceived influence of JCAHO             1,749       4.01         1.26
  Perceived influence of NCQA              1,749       2.00         1.11
  Years of formal involvement in QI        1,568       4.15         2.36
Instrumental variables
  CEO participation in QI activities       1,749       3.66         1.17
  Board monitoring of QI                   1,784      10.40         3.17
  Board activity in QI                     1,784       1.95         1.61
  Perceived barriers to QI                 1,751       3.23         0.96
  Clinical IS capabilities                 1,751       2.37         0.88
  Total expenses on QI                     1,749 246,637.82   392,170.41
  Integrated database                      1,733       0.21         0.41
  Hospital size (beds)                     1,784     185.60       185.26
  Hospital size (beds-squared)             1,784  68,746.16   161,588.04

QI, quality improvement; CABG, coronary artery bypass graft; AMI, acute myocardial infarction; CHF, congestive heart failure; HMO, health maintenance organization; PPO, preferred provider organization; FAACT, Foundation for Accountability; JCAHO, Joint Commission on Accreditation of Healthcare Organizations; NCQA, National Committee for Quality Assurance.

Table 2: First-Stage Regression of QI Implementation Scope on Controls and Instruments, beta (SE)

                                  Hospital Unit        % of FTEs            % of Managers        % of Physicians
Variable                          Involvement in QI    on QI Teams          on QI Teams          on QI Teams
Intercept                          2.059 (0.166)***     0.162 (0.049)**      0.428 (0.076)***     0.269 (0.058)***
Market concentration               0.115 (0.067)(M)     0.031 (0.020)        0.023 (0.031)        0.071 (0.024)**
Number of hospital competitors    -0.004 (0.004)        0.001 (0.001)       -0.002 (0.002)        0.001 (0.001)
Hospital competition intensity     0.003 (0.013)        0.000 (0.004)       -0.001 (0.006)       -0.010 (0.004)*
Managed care penetration          -0.246 (0.101)*      -0.005 (0.030)       -0.018 (0.047)       -0.096 (0.036)**
Pct of patients in managed care    0.047 (0.058)        0.001 (0.017)        0.004 (0.027)       -0.005 (0.021)
Teaching hospital status           0.107 (0.054)*       0.004 (0.016)       -0.025 (0.025)       -0.002 (0.019)
System/network affiliated         -0.026 (0.041)       -0.006 (0.012)        0.033 (0.019)(M)     0.020 (0.014)
No hospital board                  0.486 (0.218)*       0.051 (0.065)        0.091 (0.100)        0.119 (0.077)
HMO ownership                      0.008 (0.047)       -0.007 (0.014)       -0.003 (0.022)       -0.013 (0.017)
PPO ownership                     -0.099 (0.044)*       0.001 (0.013)        0.000 (0.020)        0.024 (0.01)
Indemnity ownership               -0.021 (0.059)       -0.031 (0.018)(M)    -0.011 (0.027)       -0.024 (0.021)
Number of physician arrangements  -0.008 (0.016)       -0.001 (0.005)       -0.001 (0.007)       -0.007 (0.006)
RNs per inpatient day             -0.005 (0.007)        0.001 (0.002)        0.004 (0.003)       -0.002 (0.002)
Number of inpatient surgeries     -0.012 (0.007)(M)     0.000 (0.002)       -0.002 (0.003)        0.001 (0.002)
Public, nonfederal ownership      -0.028 (0.044)        0.001 (0.013)        0.039 (0.020)*       0.010 (0.015)
For-profit ownership              -0.220 (0.064)***    -0.022 (0.019)        0.014 (0.030)       -0.038 (0.023)(M)
Hospital profitability             0.001 (0.001)        0.000 (0.000)        0.000 (0.001)        0.000 (0.000)
Outpatient/inpatient ratio         0.106 (0.087)        0.025 (0.026)       -0.064 (0.040)        0.039 (0.030)
Outpatient visits per bed          0.023 (0.036)       -0.020 (0.011)(M)     0.002 (0.017)       -0.033 (0.013)*
Perceived influence of FAACT       0.037 (0.034)       -0.010 (0.010)       -0.006 (0.016)        0.024 (0.012)(M)
Perceived influence of JCAHO      -0.016 (0.015)       -0.011 (0.004)*      -0.001 (0.007)       -0.015 (0.005)**
Perceived influence of NCQA        0.074 (0.018)***     0.008 (0.005)        0.006 (0.008)        0.002 (0.006)
Years of QI                        0.007 (0.008)        0.008 (0.002)***     0.007 (0.003)*       0.005 (0.003)(M)
CEO participation in QI            0.025 (0.015)        0.011 (0.005)*       0.045 (0.007)***     0.020 (0.005)***
Board monitoring of QI             0.046 (0.007)***     0.007 (0.002)**      0.007 (0.003)*       0.001 (0.003)
Board activity in QI               0.021 (0.011)(M)    -0.004 (0.003)        0.008 (0.005)
Perceived barriers to QI          -0.090 (0.019)***    -0.025 (0.006)***    -0.005 (0.009)
Clinical IS capabilities           0.070 (0.021)**      0.015 (0.006)*       0.004 (0.010)
Clinical integration               0.152 (0.038)***     0.009 (0.011)        0.036 (0.017)*
Total expenses on QI               0.000 (0.000)(M)     0.000 (0.000)        0.000 (0.000)
Integrated database                0.083 (0.043)(M)    -0.035 (0.013)**      0.023 (0.020)
Hospital size                      6.4E-4 (2.4E-4)**   -2.8E-4 (7.2E-5)***  -9.6E-5 (1.1E-4)
Hospital size squared             -4.0E-7 (2.8E-7)     -1.4E-7 (8.4E-8)(M)  -1.6E-7 (1.3E-7)
N                                  1,726                1,726                1,726
Adjusted R-squared                 0.16                 0.08                 0.06

(M) p<.10; * p<.05; ** p<.01; *** p<.001. Remaining entries for the percentage-of-physicians model are not available in the source.
activity in QI 0.001 0.004 Perceived barriers to QI -0.028 0.007 *** Clinical IS capabilities -0.004 0.007 Clinical integration -0.005 0.013 Total expenses on Q1 0.000 0.000 Integrated database 0.005 0.015 Hospital size -4.1E-4 8.5E-5 *** Hospital size squared 4.1E-7 1.0E-7 *** N 1,733 Adjusted [R.sup.2] 0.12 (M) p < .10 * p < .05 ** p < .01 *** p < .001. QI, Quality Improvement; HMO, health maintenance organization; PPO, preferred provider organization; FAACT, Foundation for Accountability; JCAHO, Joint Commission on the Accreditation of Healthcare Organizations; NCQA, National Committee of Quality Assurance; SE, standard error. Table 3: Two-Stage Instrumental Variables Regression of Hospital Quality Indicators on QI Implementation Scope CABG Mortality [beta] SE Intercept 0.77 0.42 Market concentration -0.12 0.12 Number of hospital competitors 0.00 0.00 Hospital competition intensity 0.01 0.02 Managed care penetration -0.08 0.12 Pct patients in managed care -0.14 0.09 Teaching hospital status -0.02 0.05 System/network affiliated 0.07 0.06 No hospital board -0.01 0.21 HMO ownership 0.01 0.05 PPO ownership 0.04 0.07 Indemnity ownership 0.00 0.07 Number of physician 0.00 0.02 arrangements RNs per inpatient day 0.02 0.01 Number of inpatient surgeries 0.02 0.01 * Public, nonfederal ownership -0.05 0.08 For-profit ownership -0.02 0.09 Hospital profitability 0.00 0.00 Outpatient/inpatient ratio -0.35 0.22 Outpatient visits per bed 0.01 0.05 Perceived influence of FAACT 0.00 0.04 Perceived influence of JCAHO -0.04 0.03 Perceived influence of NCQA -0.01 0.02 Years of QI 0.01 0.01 Involvement of hospital 0.09 0.12 units in QI# Percentage of FTEs on -1.05 0.77 QI teams# Percentage of managers -0.44 0.26 on QI teams# Percentage of physicians 0.59 1.30 on QI teams# N 406 AMI Mortality [beta] SE Intercept -0.20 0.36 Market concentration 0.05 0.12 Number of hospital competitors 0.00 0.01 Hospital competition intensity -0.02 0.02 Managed care penetration 0.13 0.19 Pct patients in 
managed care -0.11 0.09 Teaching hospital status -0.18 0.09 * System/network affiliated 0.13 0.07 (M) No hospital board -0.01 0.33 HMO ownership -0.03 0.07 PPO ownership 0.25 0.07 *** Indemnity ownership -0.13 0.09 Number of physician 0.01 0.03 arrangements RNs per inpatient day 0.01 0.01 Number of inpatient surgeries 0.05 0.01 *** Public, nonfederal ownership 0.00 0.07 For-profit ownership 0.00 0.11 Hospital profitability 0.00 0.00 Outpatient/inpatient ratio -0.05 0.15 Outpatient visits per bed -0.07 0.06 Perceived influence of FAACT -0.07 0.06 Perceived influence of JCAHO 0.00 0.03 Perceived influence of NCQA -0.08 0.03 ** Years of QI 0.04 0.01 ** Involvement of hospital 1.01 0.16 *** units in QI# Percentage of FTEs on -2.30 0.99 * QI teams# Percentage of managers -0.95 0.56 (M) on QI teams# Percentage of physicians -1.37 0.86 on QI teams# N 1,710 CHF Mortality [beta] SE Intercept -0.08 0.22 Market concentration 0.05 0.07 Number of hospital competitors 0.00 0.00 Hospital competition intensity -0.02 0.01 Managed care penetration 0.15 0.12 Pct patients in managed care -0.05 0.06 Teaching hospital status -0.10 0.06 (M) System/network affiliated 0.07 0.04 (M) No hospital board -0.17 0.21 HMO ownership -0.04 0.05 PPO ownership 0.15 0.05 ** Indemnity ownership -0.07 0.06 Number of physician 0.02 0.02 arrangements RNs per inpatient day 0.00 0.01 Number of inpatient surgeries 0.03 0.01 *** Public, nonfederal ownership -0.04 0.05 For-profit ownership -0.02 0.07 Hospital profitability 0.00 0.00 Outpatient/inpatient ratio -0.08 0.09 Outpatient visits per bed -0.06 0.04 (M) Perceived influence of FAACT -0.05 0.04 Perceived influence of JCAHO -0.01 0.02 Perceived influence of NCQA -0.05 0.02 * Years of QI 0.02 0.01 * Involvement of hospital 0.64 0.10 *** units in QI# Percentage of FTEs on -1.41 0.62 * QI teams# Percentage of managers -0.70 0.35 * on QI teams# Percentage of physicians -0.67 0.55 on QI teams# N 1,726 Stroke Mortality [beta] SE Intercept -0.06 0.34 Market 
concentration 0.11 0.11 Number of hospital competitors 0.00 0.01 Hospital competition intensity -0.03 0.02 Managed care penetration 0.08 0.18 Pct patients in managed care -0.12 0.09 Teaching hospital status -0.01 0.09 System/network affiliated 0.09 0.06 No hospital board 0.01 0.32 HMO ownership -0.03 0.07 PPO ownership 0.20 0.07 ** Indemnity ownership -0.10 0.09 Number of physician 0.00 0.02 arrangements RNs per inpatient day 0.01 0.01 Number of inpatient surgeries 0.05 0.01 *** Public, nonfederal ownership 0.01 0.07 For-profit ownership -0.03 0.10 Hospital profitability 0.00 0.00 Outpatient/inpatient ratio -0.10 0.14 Outpatient visits per bed -0.11 0.06 * Perceived influence of FAACT -0.07 0.06 Perceived influence of JCAHO -0.01 0.02 Perceived influence of NCQA -0.06 0.03 * Years of QI 0.03 0.01 * Involvement of hospital 0.92 0.15 *** units in QI# Percentage of FTEs on -2.09 0.93 * QI teams# Percentage of managers -0.93 0.53 (M) on QI teams# Percentage of physicians -1.39 0.82 (M) on QI teams# N 1,722 Pneumonia Mortality [beta] SE Intercept -0.29 0.47 Market concentration -0.08 0.16 Number of hospital competitors 0.01 0.01 Hospital competition intensity 0.00 0.03 Managed care penetration 0.50 0.25 * Pct patients in managed care -0.15 0.12 Teaching hospital status -0.16 0.12 System/network affiliated 0.09 0.09 No hospital board -0.36 0.44 HMO ownership -0.02 0.10 PPO ownership 0.19 0.10 (M) Indemnity ownership -0.18 0.12 Number of physician 0.00 0.03 arrangements RNs per inpatient day 0.01 0.01 Number of inpatient surgeries 0.04 0.01 ** Public, nonfederal ownership 0.02 0.10 For-profit ownership 0.16 0.15 Hospital profitability 0.00 0.00 Outpatient/inpatient ratio -0.27 0.20 Outpatient visits per bed -0.08 0.08 Perceived influence of FAACT -0.15 0.08 (M) Perceived influence of JCAHO 0.03 0.03 Perceived influence of NCQA -0.10 0.04 * Years of QI 0.04 0.02 * Involvement of hospital 1.32 0.20 *** units in QI# Percentage of FTEs on -4.02 1.31 ** QI teams# Percentage of 
managers -1.94 0.75 * on QI teams# Percentage of physicians 0.45 1.16 on QI teams# N 1,727 Bilateral Cath. [beta] SE Intercept 1.64 1.91 Market concentration 0.80 0.65 Number of hospital competitors 0.01 0.03 Hospital competition intensity -0.06 0.11 Managed care penetration 0.59 0.81 Pct patients in managed care -0.27 0.47 Teaching hospital status -0.02 0.36 System/network affiliated -0.58 0.40 No hospital board 1.41 1.69 HMO ownership -0.18 0.34 PPO ownership 0.40 0.37 Indemnity ownership 0.55 0.43 Number of physician 0.19 0.12 (M) arrangements RNs per inpatient day 0.05 0.08 Number of inpatient surgeries 0.10 0.04 ** Public, nonfederal ownership 0.17 0.49 For-profit ownership 1.03 0.55 (M) Hospital profitability 0.00 0.01 Outpatient/inpatient ratio 0.71 1.02 Outpatient visits per bed -0.20 0.32 Perceived influence of FAACT -0.27 0.27 Perceived influence of JCAHO -0.44 0.17 * Perceived influence of NCQA -0.08 0.17 Years of QI 0.06 0.07 Involvement of hospital 2.49 0.68 *** units in QI# Percentage of FTEs on -3.39 4.49 QI teams# Percentage of managers -5.69 2.48 * on QI teams# Percentage of physicians -8.00 4.88 (M) on QI teams# N 819 Note: Bolded terms represent independent variables. Two-stage, least squares regression does not produce adjusted [R.sup.2]-statistics. Symbols for statistical significance in this table match those in Table 2; (M) p < .10; * p < .05; ** p < .01; *** p < .001. QI, Quality Improvement; CABG, coronary artery bypass surgery; AMI, acute myocardial infarction; CHF, congestive heart failure; HMO, health maintenance organization; PPO, preferred provider organization; FAACT, Foundation for Accountability; JCAHO, Joint Commission on the Accreditation of Healthcare Organizations; NCQA, National Committee of Quality Assurance; RN, registered nurse; SE, standard error. Note: Independent variables indicated with #.
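The estimates in Table 3 come from two-stage least squares: the QI implementation measures are first predicted from the instruments and controls (the Table 2 regressions), and the quality indicators are then regressed on those predicted values rather than the raw, potentially endogenous measures. A minimal sketch of the estimator on simulated data (all variable names, coefficients, and sample sizes here are illustrative, not from the study):

```python
import numpy as np

def two_stage_least_squares(y, x_endog, x_exog, z):
    """Manual 2SLS. Stage 1 projects the endogenous regressor onto the
    instrument(s) plus exogenous controls; stage 2 regresses the outcome
    on the stage-1 fitted values plus the same controls."""
    n = len(y)
    const = np.ones((n, 1))
    # Stage 1: fitted values of the endogenous regressor
    w = np.hstack([const, z, x_exog])
    gamma, *_ = np.linalg.lstsq(w, x_endog, rcond=None)
    x_hat = w @ gamma
    # Stage 2: outcome on fitted values plus controls
    x2 = np.hstack([const, x_hat, x_exog])
    beta, *_ = np.linalg.lstsq(x2, y, rcond=None)
    return beta.ravel()  # [intercept, endogenous coef, control coef]

# Simulated example: one endogenous regressor, one instrument, one control.
rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=(n, 1))   # instrument (affects x, not y directly)
c = rng.normal(size=(n, 1))   # exogenous control
u = rng.normal(size=(n, 1))   # unobserved confounder
x = 0.8 * z + 0.5 * c + u + 0.5 * rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x + 0.3 * c - u + rng.normal(size=(n, 1))  # true effect of x: 2.0

beta = two_stage_least_squares(y, x, c, z)
print(beta)  # endogenous coefficient (beta[1]) should be close to 2.0
```

Because the fitted values depend only on the instruments and controls, the confounder `u` is purged from the second stage; naive OLS of `y` on `x` would instead be biased by the correlation between `x` and `u`.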
Title Annotation: Quality of Care
Author: Weiner, Bryan J.; Alexander, Jeffrey A.; Shortell, Stephen M.; Baker, Laurence C.; Becker, Mark; Gep
Publication: Health Services Research
Date: Apr 1, 2006