The Relationship of Hospital Quality and Cost per Case in Hawaii

One of the leading questions of our time is whether high-quality care leads to lower health care costs. Using data from Hawaii hospitals, this paper examines the relationship of overall cost per case to a composite measure of the quality of inpatient care and to a 30-day readmission rate. We found that low-cost hospitals tend to have the highest quality but the worst readmission performance. Change in quality and change in cost were also negatively correlated, although that relationship was not statistically significant. We conclude that high-quality hospital care does not have to cost more, but that the dynamics of the readmission rate differ substantially from those of other quality dimensions.
The assumption that high-quality care will help control costs undergirds many of today's leading policy initiatives. In particular, pay-for-performance systems for physicians and hospitals generally have the dual objectives of improving the quality of care and reducing costs. Bundled payment and accountable care organizations also envision bending the cost curve while improving quality. And yet the evidence linking quality and costs is fairly limited, particularly for overall hospital quality and cost per case.
The Hawaii Medical Service Association (HMSA), the Blue Cross and Blue Shield plan for Hawaii, has been a leader in quality incentive payments. HMSA implemented one of the nation's first preferred provider organization (PPO) pay-for-performance systems covering all physician specialties in 1998, and followed with a hospital incentive program in 2000. The hospital program, known as the Hospital Quality and Service Recognition (HQSR) system, began with structural measures and has gradually extended into a variety of process and outcome measures. The system applies primarily to 13 acute care hospitals in Hawaii that contract with HMSA, and, on average, provides hospital rewards equal to about 2% of HMSA's inpatient payments. (1)
This paper aims to determine whether there is a link between quality of care, broadly measured, and inpatient cost per case, using evidence from Hawaii's acute care hospitals. We analyze this relationship in both static and dynamic terms.
Historically, studies addressing the relationship between cost and quality have focused on measures of per capita costs. The best known example is based on the pioneering work of Stephen Jencks in measuring hospital quality at the state level using Quality Improvement Organization measures (Jencks et al. 2000; Jencks, Huff, and Cuerdon 2003). A follow-up study using the Jencks quality data was the first to establish a negative correlation between costs and quality, specifically that the states with the lowest overall Medicare spending per beneficiary had among the highest overall quality rankings on 22 process measures (Baicker and Chandra 2004).
However, lower per capita costs for hospital care are likely driven primarily by lower utilization rather than lower unit costs. Hawaii provides an excellent example of this premise. The Baicker and Chandra study established that Hawaii's Medicare per capita costs are by far the lowest in the country, and another study from the same era showed that Hawaii's overall use of services for Medicare patients was the lowest in the country while its costs per unit of service were well above average (Ashby et al. 1996).
A pay-for-performance demonstration funded by the Centers for Medicare and Medicaid Services (CMS) and conducted by the Premier Alliance from 2003 through 2006 showed strong evidence of a link between quality improvement and reduced hospital costs (Premier Inc. 2008). The 250 hospitals participating in this project, known as the Hospital Quality Incentive Demonstration, achieved increases ranging from 37% to more than 300% in the median percentage of Medicare patients with perfect process scores across five clinical areas. At the same time, their average cost per case in the five clinical areas actually declined, with the decreases ranging from 4% to 12%. However, these impressive quality and cost change results only apply to the patients in the specific clinical areas covered by the quality metrics. Premier did not attempt to generalize its results to all Medicare patients.
Several other studies have assessed the quality/cost relationship for a specific condition in a single period of time with mixed results. One study found that among community hospitals, outcomes for congestive heart failure (CHF) patients in the lowest cost quartile of facilities were the same or better than those in the higher-cost quartiles on each of six adverse event measures. The results, however, were mixed for teaching hospitals (Siegrist and Kane 2003). A more recent study found that high quality as measured by CMS's core process indicators was associated with low per case costs for pneumonia patients, but not for CHF patients (Chen et al. 2010). Other studies documented that specific interventions can have simultaneous effects of improving quality and reducing costs (for example, Fleming and Ballard 2009).
Yet another recent study found that for CHF patients, better quality was achieved by hospitals treating a large volume of patients, but with higher costs per case (Joynt, Orav, and Jha 2011). However, the study's quality benefit of additional volume leveled off at about 400 CHF patients per year, and more than 60% of all CHF patients are treated in facilities with larger volumes. The study was not designed to address the relationship between cost and quality among these higher-volume facilities.
The study that is most similar in basic design to ours attempted to measure the relationship between hospital quality and overall cost per case (Jha et al. 2009). This study found no significant relationship between quality and cost; in fact, the quality scores of hospitals in the bottom quartile of cost per case were marginally below average. However, the study used a narrow measure of hospital quality--the 10-indicator starter set of CMS core measures.
In summary, the literature presents a mixed picture of the relationship of cost and quality for hospital services when cost is measured in per case terms. When the cost measure covers the same types of cases as the quality metrics used, some studies have found the hoped-for negative relationship while others have not, and some studies have found the desired relationship for some groups of patients and not for others. The one study attempting to relate quality to average cost per case for all types of inpatient care found no discernible relationship; however, that study used a quality measure that was narrow in terms of both dimension (clinical process only) and types of patients covered (acute myocardial infarction, CHF, and pneumonia patients only). Our study is the first to use broad-based measures of both cost per case and quality of care in assessing the hospital cost/quality relationship.
Methods and Data
We assessed the relationship of hospital quality and cost per case using a broad composite measure of inpatient quality and a standardized measure of hospital costs so as to apply across a diverse set of hospitals and types of inpatient care. Our goal was to measure both costs and quality across all payers, rather than for Medicare patients only as some previous research efforts have done.
Our quality measure is a composite derived from the 47 indicators of HMSA's 2009 HQSR program. As shown in Table 1, these include two outcome and three process dimensions, with the dimensions weighted to match the weighting used in the HQSR program (HMSA 2011).
The first outcome dimension is clinical complications, which uses a composite measure developed by IMS Health that captures the incidence of adverse events for a set of 10 high-volume surgical and maternity procedures. Each procedure has a customized set of adverse events, such as deep vein thrombosis (DVT), pulmonary embolism, and nosocomial infections (sepsis or wound) for a number of surgical procedures and third-degree and fourth-degree lacerations for maternity cases. The composite measure is risk-adjusted for differences in severity of illness and procedure type, and captures complications both during the hospital stay and within 30 days post discharge. While clinically robust, this measure unfortunately is only available for HMSA's commercial patients, who comprise 23% of the study hospitals' inpatient volume. (2) All other components of our composite quality measure cover all patients.
The second outcome dimension is patient satisfaction as measured by CMS's Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey and reported on its Hospital Compare website. (3) This measure combines on a nonweighted basis the 10 scores that CMS reports covering various aspects of patients' experience. These include two broad questions regarding patients' overall satisfaction with the care they received and how likely they are to recommend the hospital, along with questions on specific aspects of care ranging from communication with nurses to cleanliness of their room.
Our clinical effectiveness, or process measures, begin with CMS core measures, encompassing 20 indicators defining appropriate care for acute myocardial infarction (AMI), CHF, and pneumonia patients. Within each of these diagnostic groups, the applicable indicators were weighted equally. Where a specific indicator was inapplicable to a hospital or the hospital did not have a sample size of at least 15 cases, the indicator was dropped from both the numerator and denominator of the calculation. Examples of core process measures are whether a thrombolytic agent was received within 30 minutes of hospital arrival for AMI patients and whether a venous thromboembolism (VTE) prophylaxis was ordered for surgery patients.
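The equal-weighting and small-sample exclusion rules described above can be sketched as follows. The indicator names, rates, and case counts below are hypothetical illustrations, not the study's data:

```python
# Sketch of the composite rule for process indicators within one
# diagnostic group: indicators with fewer than 15 eligible cases are
# dropped from both the numerator and the denominator.

def composite_rate(indicators, min_cases=15):
    """Equal-weighted composite of process-indicator pass rates,
    skipping indicators that fail the minimum sample-size rule."""
    eligible = [ind for ind in indicators if ind["cases"] >= min_cases]
    if not eligible:
        return None  # no reportable indicators for this group
    return sum(ind["rate"] for ind in eligible) / len(eligible)

# Hypothetical AMI indicators (names and values are illustrative)
ami_indicators = [
    {"name": "aspirin_at_arrival",      "rate": 0.98, "cases": 120},
    {"name": "thrombolytic_within_30m", "rate": 0.85, "cases": 40},
    {"name": "pci_within_90m",          "rate": 0.90, "cases": 9},  # dropped: < 15 cases
]
print(round(composite_rate(ami_indicators), 3))  # 0.915
```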
The Get With The Guidelines program, administered by the American Heart Association, offers clinical process measures similar to the CMS core measures (AHA 2011). We used the 12 indicators of the system's coronary artery disease and stroke measure sets, and once again, the applicable indicators were weighted equally within each of these diagnostic groups.
The last component of our quality measure was internal quality initiatives (IQIs). These are quality improvement projects where the hospitals design, conduct, and document results themselves. Each hospital could address up to two quality issues of its own choosing plus one that addresses clinical issues prescribed by HMSA. The initiatives were graded on a 1 to 5 scale by independent clinical experts affiliated with IMS Health. In the handful of cases where a hospital elected not to participate in one or more of the three possible initiatives, it was given a score equal to the lowest score achieved by all participating facilities. The most common IQI during the study years was an initiative to reduce hospital-acquired MRSA infections (involving procedure changes, added surveillance, staff education, and so on). Others sought to improve hyperglycemic management for diabetics, reduce medication errors and adverse drug events related to anticoagulants, and reduce catheter-associated urinary tract infections.
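Taken together, these five dimensions form the weighted composite. A minimal sketch follows, with weights taken from the HQSR point values in Table 1 (106 points in total, normalized here); the assumption that each component score has first been scaled to a common 0-100 range is ours, as the paper does not specify the scaling:

```python
# Hedged sketch of the composite quality score. Weights mirror the
# HQSR point values in Table 1; component scaling is assumed.

WEIGHTS = {
    "complications":        25,   # outcome (inverted so higher = better)
    "patient_satisfaction": 25,   # HCAHPS
    "cms_ami":               8,
    "cms_chf":               9,
    "cms_pneumonia":         8,
    "gwtg_cad":             10,
    "gwtg_stroke":           5,
    "iqi_choice":           10,
    "iqi_mrsa":              6,
}

def composite_quality(scores):
    """Weighted average of component scores, using only the
    components a hospital actually reports."""
    total_w = sum(WEIGHTS[k] for k in scores)
    return sum(scores[k] * WEIGHTS[k] for k in scores) / total_w

# Illustrative hospital with all nine components on a 0-100 scale
hospital = {"complications": 80.0, "patient_satisfaction": 70.0,
            "cms_ami": 95.0, "cms_chf": 90.0, "cms_pneumonia": 92.0,
            "gwtg_cad": 88.0, "gwtg_stroke": 85.0,
            "iqi_choice": 60.0, "iqi_mrsa": 75.0}
print(round(composite_quality(hospital), 1))
```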
We developed a standardized cost per discharge measure to create a level playing field for comparing hospitals' costliness. We started with a measure of the hospitals' operating costs per case, which was based on all-payers data from their Medicare Cost Reports. Then we adjusted the cost values for factors thought to be beyond the hospitals' control.
First, we subtracted out all interest expense. The costs incurred to finance plant and equipment purchases add to hospital costs and must ultimately be paid for by public and private payers alike. However, the difference between a hospital that must borrow extensively and one that can fund most of its capital spending with donations and investment income does not affect the efficiency with which the respective facilities deliver patient care.
The next adjustment was to neutralize differences in case mix and severity of illness, using all-payer case-mix index (CMI) values available from Hawaii's state data consortium, the Hawaii Health Information Corporation. Since the CMIs were based on Medicare-severity diagnosis-related groups (MS-DRGs), a single adjustment for both diagnostic mix and severity level could be accomplished by dividing each hospital's cost per case by its CMI value for the same period of time.
The last two adjustments--for teaching intensity and children's specialty--were based on the results of a multivariate regression model. Using data from 340 acute care hospitals in California, Washington, and Hawaii, this model regressed cost per case against these two covariates along with case-mix index and input price variables based on CMS's hospital wage index and nonlabor cost-of-living adjustment (COLA) (applicable only in Hawaii). Teaching intensity was measured by the ratio of residents to beds and the children's specialty variable was dichotomous. The regression coefficient of the teaching variable was then used to adjust a hospital's costs up or down according to its residents-to-beds ratio relative to the state average, and a similar approach was taken for children's specialty.
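The standardization sequence can be sketched as follows. The order of operations (strip interest expense, divide by the all-payer CMI, then apply the regression-based teaching adjustment) follows the description above, but the coefficient and all hospital figures are placeholders, not values from the study's regression model:

```python
# Sketch of standardized cost per case. All numbers are illustrative;
# teaching_coef stands in for the regression coefficient described
# in the text, and the children's-specialty adjustment is omitted.

def standardized_cost_per_case(total_cost, interest_expense, discharges,
                               cmi, resident_bed_ratio,
                               state_avg_ratio=0.05, teaching_coef=0.40):
    # 1. Remove financing costs considered beyond the hospital's control
    operating_cost = total_cost - interest_expense
    # 2. Neutralize case mix and severity via the MS-DRG based CMI
    cost_per_case = operating_cost / discharges / cmi
    # 3. Adjust for teaching intensity relative to the state average
    teaching_factor = 1 + teaching_coef * (resident_bed_ratio - state_avg_ratio)
    return cost_per_case / teaching_factor

cost = standardized_cost_per_case(
    total_cost=120_000_000, interest_expense=2_000_000,
    discharges=10_000, cmi=1.25, resident_bed_ratio=0.15)
print(round(cost, 2))
```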
We also tested the need for two other adjustments--for the shares of low-income patients and Medicare discharges--that we considered outside the hospitals' control. However, neither was found to have a
statistically significant relationship with cost per case in our database, and so they were excluded from the model. Low-income share is recognized as a cost-influencing variable in Medicare's DRG payment system, but its effect on costs nationally is actually quite small (MedPAC 2007). While aged patients dominating the Medicare population are known to be more costly on average than nonaged patients, this difference is apparently captured adequately by a case-mix system that recognizes differences in severity of illness. (4)
Readmission rate was not a component of HMSA's pay-for-performance system in 2009, but we nonetheless included an all-payer readmission measure in our study because of its important cost and quality implications. However, we considered readmission rate separately from the other quality dimensions that are focused primarily on care processes during the hospital stay.
Hawaii began collecting readmissions data in 2009 using 3M's potentially preventable readmissions software. (5) This system captures clinically related readmissions occurring within 30 days to any Hawaii hospital, and all hospitals in the state participate. The readmission rates used in this study were risk-adjusted.
The sample for this study--13 hospitals--is quite small. But the hospitals studied provide the full spectrum of acute inpatient care for the self-contained health system of Hawaii, thus providing a microcosm of the hospital system nationally. As shown in Table 2, Hawaii's one large major teaching hospital comprises 8% of the sample, which is about the same share as similar facilities nationally. (6) The only significant difference is that small and nonteaching hospitals (which tend to be located in rural areas) are somewhat underrepresented in Hawaii relative to mid-sized and teaching facilities. Larger facilities--in Hawaii and elsewhere--are the most likely to be able to respond to the types of quality incentives offered by HMSA's pay-for-performance system.
Results

The results of our analysis are presented in three parts: the 2008 relationship of standardized cost per case and quality, the 2008 relationship of cost per case and readmission rate, and the relationship of change in cost per case to change in quality between 2008 and 2009.
Cost and Quality--Static Model
Using our first year data (2008), we ran correlations between cost per case and each of the five dimensions of quality in our composite measure as well as with the overall quality measure. We found a negative correlation in each case, suggesting a wide scope to the pattern of high quality being associated with low cost per case (see Table 3). However, the relationship was not statistically significant for any of the individual components of our quality measure.
Patient satisfaction (HCAHPS) had the weakest relationship. This is consistent with the aforementioned study using a per capita measure of cost, which found a strong negative correlation of costs to process measures of quality but no relationship between costs and several measures of patient satisfaction (Baicker and Chandra 2004).
The strongest relationship was found for the internal quality initiatives, where the correlation coefficient was -.535, close to statistical significance (p = .06). This may indicate that the direct attention and involvement of hospital staff in these activities, from identifying the most compelling areas for concentrating their efforts to planning and carrying out the initiatives (often with physician education efforts), make a difference in impact.
The most compelling finding, however, was that our composite measure of quality was highly correlated with cost per case (coefficient of -.655 and p = .015), despite the fact that none of the individual components of the measure had a significant relationship (see Table 3 and Figure 1). As long as we use a broad measure of quality, the analysis provides strong evidence that in Hawaii, the hospitals with the lowest per case costs tend to have the best quality of care.
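As a rough sketch of this static analysis, the following computes the Pearson correlation across hospital-level (cost, quality) pairs, along with the t statistic from which significance is judged at n - 2 degrees of freedom. The data are invented for illustration; the paper's actual result was r = -.655 (p = .015) across 13 hospitals:

```python
# Pure-Python Pearson correlation for a small hospital sample.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative hospital-level values: standardized cost per case and
# composite quality score (not the study's data)
cost    = [9500, 8700, 11200, 10100, 9900, 12400, 8200, 10800]
quality = [78,   85,   70,    74,    76,   65,    88,   71]

r = pearson_r(cost, quality)
n = len(cost)
# Significance follows from t = r * sqrt(n-2) / sqrt(1 - r^2),
# referred to a t distribution with n - 2 degrees of freedom
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(round(r, 3), round(t, 2))
```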
One factor behind the stronger relationship to costs for our composite quality measure than for its individual components is that the dimensions cover different groups of patients and capture both processes and outcomes. CMS process measures relate directly to AMI, CHF, and pneumonia patients. The Get With The Guidelines program extends these to stroke patients, as well as to a broader set of cardiac patients. The complications measure broadens the quality focus from process to outcomes and extends the scope of care to include a variety of surgical and maternity cases. And finally, internal quality initiatives often address quality concerns not directly covered by any of the other measures (such as MRSA infections). Several of the hospitals were above average in most dimensions of quality without being the best in any particular one, which may build to a greater overall effect on cost; it also suggests that systems and management commitment were in place, with potential benefits extending beyond the types of care being measured.
Cost and Readmission Rate
When we tested the relationship of cost per case and potentially preventable readmission rate, the correlation was again strong (coefficient of -.676 and p = .011). In this case, the data (see Table 3 and Figure 2) suggest that low-cost hospitals tend to have the worst readmission rates.
This unexpected finding could reflect that leading interventions--such as scheduling patients' first follow-up visit before they leave the hospital and calling patients shortly after discharge to review medication status--cost money (Kanaan 2009). But it also raises the prospect that low-cost hospitals might be discharging patients "quicker and sicker," only to have those patients return quickly for follow-up inpatient care.
[FIGURE 1 OMITTED]
[FIGURE 2 OMITTED]
If this were occurring, we would expect low readmission rates to be associated with longer lengths of stay for the initial admissions. We used a risk-adjusted length of stay measure to test this premise, but found no discernible relationship between readmission rate and length of stay. Thus, the timing of discharge may be a factor in certain cases, but differences in initial length of stay do not consistently explain why low-cost hospitals tend to have the highest readmission rates among our study hospitals. Because readmissions have only recently become a major focus of policy attention both nationally and in Hawaii, this issue will warrant further research.
Change in Cost and Change in Quality
We began our dynamic analysis of the relationship between quality and cost by measuring the change in quality between 2008 and 2009. For the hospitals as a group, the improvement in quality ranged from a low of 3.3% for the complications rate to a high of 6.4% for internal quality initiatives, with a weighted average across the five quality components of 4.8%. Across individual hospitals, the weighted average quality change ranged from 12% to -2.8%. The range of changes in cost per case was even wider, from 13.2% to -7.9%, with a weighted average of 3.4%.
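The year-over-year changes underlying this analysis are simple percent changes in each hospital's composite quality score and standardized cost per case. A minimal sketch with illustrative figures (chosen so the quality change matches the group's 4.8% weighted average):

```python
# Year-over-year percent change in quality and cost; values are
# illustrative, not the study's hospital data.

def pct_change(v2008, v2009):
    return (v2009 - v2008) / v2008 * 100

quality = {"2008": 75.0, "2009": 78.6}   # composite quality score
cost    = {"2008": 9800.0, "2009": 9500.0}  # standardized cost per case

dq = pct_change(quality["2008"], quality["2009"])
dc = pct_change(cost["2008"], cost["2009"])
print(f"quality change: {dq:+.1f}%, cost change: {dc:+.1f}%")
```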
The correlation of change in quality and change in cost was negative as expected (coefficient -.351), but the relationship was not statistically significant.
Discussion

This study provides solid evidence that at least in Hawaii, high-quality hospital inpatient care does not have to cost more. The negative relationship between our composite quality measure and standardized cost per case is highly statistically significant, despite a sample size of only 13 hospitals. The ultimate question, however, is whether investing in quality improvement will pay off with lower cost growth. Our study provides a tantalizing suggestion--but not solid evidence--that quality improvement is associated with a reduced rate of growth in costs. One hospital showed a 12% improvement in quality coupled with an 8% decline in costs per case.
The strong negative correlation between quality and cost per case in our static analysis is not consistent with a recent study by Jha et al. (2009), which found no relationship between cost per case and quality. The hospital sample in that study was national in scope, and it is possible that the dynamics of cost and quality differ in Hawaii from other parts of the country. The small number of hospitals in Hawaii may also have influenced our results. In the Jha et al. study, however, the measure of quality was limited to a single dimension (clinical process) and a single set of measures (CMS's starter set of clinical process measures). We too tested the relationship between cost per case and clinical effectiveness according to the CMS core measures, and similarly found no significant relationship. The evidence of high-quality care being associated with low per case costs emerged only with our 47-indicator composite measure. This speaks to the importance of using a broad-based, multidimensional measure to gauge hospitals' overall level of quality.
For readmissions, our study suggests that interventions designed to reduce readmissions will not bring about lower per case costs. In fact, the costs incurred to improve care transitions toward the goal of minimizing readmissions may well increase per case costs. The additional costs could include an extra day or two of care before discharge to facilitate optimal care coordination; that investment may reduce the probability of patients deteriorating to the point of requiring readmission, and thereby lower overall episode costs even as per case costs rise. This underscores that the dynamics involved with readmission rates are fundamentally different from those of other quality measures. For other dimensions of inpatient quality, the interests of hospitals and payers may often be aligned: the hospital and the payer, as well as their patients and policyholders, will likely benefit from quality improvement efforts that also hold down per case costs. For readmissions, the payer and its policyholders enjoy savings as well as better care when unnecessary readmissions are avoided; under current payment policies, however, the hospital faces an adverse financial incentive. It must bear the intervention costs needed to prevent readmissions while losing the revenue those cases would otherwise have produced. Of course, this loss of volume could be offset if the freed-up beds are filled by other patients, including more severely ill patients who require longer lengths of stay.
Hospitals should be devoted to minimizing readmissions solely to provide the best possible care for their patients. But alignment of financial incentives is still important. Giving readmission rates substantial weight in a pay-for-performance system, as HMSA is now doing, can help accomplish this realignment. (7) What is ultimately called for, however, is tying financial incentives to bundles of care that include the inpatient admission and all post-acute care, including readmissions.
References

American Heart Association. 2011. Get With The Guidelines--Stroke. http://www.heart.org/HEARTORG/HealthcareProfessional/Get-With-The-Guidelines-Stroke-Home-Page_UCM_306098_SubHomePage.jsp
Ashby, J., K. Fisher, A. Lynch, S. Guterman, J. Pettengill, B. Gage, and D. Kelley. 1996. State Variation in the Resource Costs of Treating Aged Medicare Beneficiaries. Washington, D.C.: Medicare Payment Advisory Commission (MedPAC).
Baicker, K., and A. Chandra. 2004. Medicare Spending, the Physician Workforce, and Beneficiaries' Quality of Care. Health Affairs Web Exclusive April 7, W4:184-197.
Chen, L. M., A. Jha, S. Guterman, A. Ridgway, J. Orav, and A. Epstein. 2010. Hospital Cost of Care, Quality of Care, and Readmission Rates. Archives of Internal Medicine 170(4):340-346.
Fleming, N. S., and O. Ballard. 2009. Implementing a Standardized Order Set for Community-Acquired Pneumonia: Impact on Mortality and Cost. Joint Commission Journal on Quality and Patient Safety 35(8):414-421.
Hawaii Medical Service Association. 2011. HQSR Program Guide--2009. http://www.hmsa.com/portal/provider/zav_IN.HQSR_INDEX.htm.
James, B., and L. Savitz. 2011. How Intermountain Trimmed Health Care Costs Through Robust Quality Improvement Efforts. Health Affairs 30(6):1185-1191.
Jencks, S. F., T. Cuerdon, D. Burwen, P. Houck, A. Kussmaul, D. Nilasena, D. Ordin, and D. Arday. 2000. Quality of Medical Care Delivered to Medicare Beneficiaries: A Profile at State and National Levels. Journal of the American Medical Association 284(13):1670-1676.
Jha, A. K., E. Orav, A. Dobson, R. A. Book, and A. M. Epstein. 2009. Measuring Efficiency: The Association of Hospital Costs and Quality of Care. Health Affairs 28(3):897-906.
Joynt, K. E., E. J. Orav, and A. K. Jha. 2011. The Association between Hospital Volume and Processes, Outcomes, and Costs of Care for Congestive Heart Failure. Annals of Internal Medicine 154(2):94-102.
Kanaan, S. B. 2009. Homeward Bound: Nine Patient-Centered Programs Cut Readmissions. Oakland: California Healthcare Foundation.
Mechanic, R., K. Coleman, and A. Dobson. 1998. Teaching Hospital Costs: Implications for Academic Missions in a Competitive Market. Journal of the American Medical Association 280(11):1015-1019.
Medicare Payment Advisory Commission (MedPAC). 2010. Report to the Congress: Medicare Payment Policy (Efficient Hospitals Defined in Cost and Quality Terms). Washington, D.C.: MedPAC.
--. 2007. Report to the Congress: Medicare Payment Policy (Effect of Teaching and Low-Income Share on Hospital Costs Per Case). Washington, D.C.: MedPAC.
Premier, Inc. 2008. Hospital Quality Improvement Demonstration (HQID) Performance Update and Analysis of Quality, Cost and Mortality Trends. Charlotte, N.C.: Premier.
Siegrist, R., and N. Kane. 2003. Exploring the Relationship Between Inpatient Hospital Costs and Quality of Care. American Journal of Managed Care 9(Spec No 1):SP43-9.
Skinner, J., A. Chandra, D. Goodman, and E. Fisher. 2009. The Elusive Connection between Health Care Spending and Quality. Health Affairs 28(1):W119-123.
Yasaitis, L., E. S. Fisher, J. S. Skinner, and A. Chandra. 2009. Hospital Quality and Intensity of Spending: Is There an Association? Health Affairs 28(4):566-572.
The authors would like to thank Paul Young of the Healthcare Association of Hawaii and Cathy Yamauchi, Lianne Higashida, and Ed Frankel of HMSA for their assistance in developing the quality and cost data upon which this study was based. We also acknowledge the contribution of Judy Chen, M.D., and Karen Hsu of IMS Health for their work in developing the complications measures applied in the study.
(1) Critical access hospitals are excluded from HQSR due to low inpatient volume and limited capacity for quality measurement. Two facilities--Kaiser Moanalua Medical Center and Tripler Army Medical Center--are excluded because they do not contract with HMSA; Kapiolani Medical Center for Women and Children is excluded because many of the quality measures used in the system are not applicable to pediatric patients.
(2) Payer share based on 2008 data from Hawaii's state data consortium, Hawaii Health Information Corporation.
(3) Found at http://hospitalcompare.hhs.gov.
(4) Among the 13 sample hospitals in this study, average cost per case was 67% higher for Medicare patients than for private insurers, per data published by Hawaii Health Information Corporation.
(5) The readmissions data system is operated by the Hawaii Health Information Corporation, the state's data consortium. HMSA added a Medicare readmission rate measure to its HQSR system for 2010; an all-payer readmissions rate based on the 3M potentially preventable readmission system will be part of the 2011 program.
(6) This hospital falls slightly short on Medicare's criterion for major teaching, which is a ratio of full-time equivalent (FTE) residents to beds greater than .25. However, because it is the primary teaching hospital of the University of Hawaii School of Medicine and has about 88 FTE residents, it is generally seen as comparable to other major teaching facilities.
(7) HMSA is now in the first year of a three-year transition that will raise the proportion of payments for hospital inpatient care tied to performance from about 2% to 15%. In this new system, the readmission rate is generally given a 20% weight in the overall performance score that determines a hospital's incentive payment.

Jencks, S. F., E. Huff, and T. Cuerdon. 2003. Change in the Quality of Care Delivered to Medicare Beneficiaries, 1998-1999 to 2000-2001. Journal of the American Medical Association 289(3):305-312.
Jack Ashby, M.H.A., is the hospital research director; John Berthiaume, M.D., is vice president and medical director for care management; and Paul Sibley, M.B.A., is the business project planning manager, all at the Hawaii Medical Service Association. Deborah Taira Juarez, Sc.D., is an associate professor of pharmacy at the University of Hawaii. Richard S. Chung, M.D., is chief clinical officer at APS Healthcare, Inc. Address correspondence to Mr. Ashby at Hawaii Medical Service Association, Blue Cross Blue Shield of Hawaii, P.O. Box 860, Honolulu, HI 96808-0860. Email: email@example.com
Table 1. Components of hospital quality in HMSA's Hospital Quality and Service Recognition (HQSR) program, 2009

                                          Points in       Weight in composite
Component                                 HQSR program    measure (%)
Outcome measures
  Clinical complications                       25              23.6
  Patient satisfaction                         25              23.6
Process measures
  CMS core measures
    Acute myocardial infarction                 8               7.5
    Heart failure                               9               8.5
    Pneumonia                                   8               7.5
  Get With The Guidelines
    Coronary artery disease                    10               9.4
    Stroke                                      5               4.7
  Internal quality initiatives
    Initiatives of hospital's choice           10               9.4
    MRSA or 5 Million Lives (a)                 6               5.7
Total                                         106              99.9

Source: HMSA's Hospital Quality and Service Recognition (HQSR) program.
(a) The 5 Million Lives program, sponsored by the Institute for Healthcare Improvement, involved specific goals and interventions for hospitals to reduce levels of morbidity and mortality.

Table 2. Distribution of sample Hawaii hospitals and the nation, 2009

Characteristic           Hawaii number    Hawaii share (%)    National share (%)
Bed size
  400+                         1                 8                  10
  200-399                      4                31                  20
  100-199                      5                38                  23
  25-99                        3                23                  47
Teaching status
  Major teaching               1                 8                   8
  Other teaching               4                31                  22
  Non-teaching                 8                62                  70

Source: Medicare cost reports (for Hawaii hospitals), AHA Hospital Statistics 2011, and A Data Book: Healthcare Spending and the Medicare Program, June 2011 (MedPAC).
Note: Critical access hospitals (CAHs) were excluded from the study because they are not capable of reporting the necessary quality data. The national data from MedPAC also exclude CAHs, and the AHA data exclude hospitals with fewer than 25 beds, almost all of which are CAHs.

Table 3. Cost and quality correlations in sample of Hawaii hospitals, 2008-2009

Correlation                                    Coefficient    P value if significant
Cost and HQSR quality
  Patient satisfaction                           -0.225             --
  CMS core measures                              -0.318             --
  Complications                                  -0.346             --
  Get With The Guidelines                        -0.417             --
  Internal quality initiatives                   -0.535             --
  Overall                                        -0.655            0.015
Cost and readmission rate                        -0.676            0.011
Change in cost and change in HQSR quality        -0.351             --

Source: Analysis of data from Medicare cost reports, Hawaii Health Information Corporation, Hospital Compare, IMS Health, American Heart Association, and HMSA's HQSR program.
Note: In two of the six quality measures in this analysis--complication rate and readmission rate--hospitals strive for the lowest score possible. These scores were inverted so that favorable performance would be consistently defined by the highest value possible.