
Assessing health care report cards.

Report cards have become a prominent feature of the health policy landscape (F. Thompson 1994; Epstein 1995; Corrigan and Rogers 1995; Brennan and Berwick 1996). Although the federal government no longer distributes annual hospital report cards, many state governments do. Some state governments also have begun to distribute report cards on health maintenance organizations (HMOs). In addition, one state government releases periodic reports on nursing homes, based on state inspections.

Report cards are potential solutions to the problem of information asymmetries, which afflict many markets. Producers of services possess much better information about the quality of their services than consumers do. Lacking good information, consumers may overestimate or underestimate service quality, making inappropriate choices. At the same time, producers are tempted to behave opportunistically, taking advantage of consumer ignorance. Market failure is a likely result (Weisbrod 1988, 6; Vining and Weimer 1988), but it can, in theory, be alleviated through report cards.

Governments are not the only suppliers of health-related report cards. Indeed, the principal purveyors of information on HMO quality are nonprofit and for-profit organizations, sometimes working together. National magazines, such as U.S. News and World Report and Consumer Reports, also have begun to publish assessments of HMOs, hospitals, and nursing homes on an occasional basis. In contrast to elementary and secondary education, where state and local governments dominate the production of report cards, a diverse assortment of public and private organizations is involved in producing health care report cards.

Whether such diversity is good or bad is not clear. On the one hand, it permits experimentation in the choice of variables, in the construction of measures, and in the design of formats for releasing information. In these respects, it may promote innovation. On the other hand, if economies of scale exist, they are not being fully realized. Health care firms are besieged by partially overlapping requests for information; consolidated report cards might require less time and money. And consumers are exposed to report cards whose utility is limited because certain providers and certain relevant indicators are excluded. A glut of report cards, not all of them trustworthy, also could be confusing. Thus both efficiency and consumer choice may suffer.

Although report cards have received increased attention in the scholarly literature (Hannan et al. 1994; Corrigan 1995; Gold and Wooldridge 1995; Zimmerman et al. 1995; Chassin, Hannan, and DeBuono 1996), most studies of report cards have focused exclusively on a particular report card (e.g., New York's coronary artery bypass graft [CABG] surgery report card) or on a particular type of report card (e.g., hospital report cards). To date, no one has offered a systematic assessment of report cards conducted under different auspices, in different health policy sectors, and in different states.

In this article, I assess report cards that appraise three types of health care providers: hospitals, nursing homes, and HMOs. I begin by articulating several criteria for evaluation. Next, I assess the quantity and quality of state government report cards and magazine report cards for all three types of health care providers in the nation's largest and smallest states. Finally, I take a broader view by characterizing the information systems we have created in each of the three domains.


Report cards may be viewed from three perspectives: that of the scientist, including experts in health care and statistics; that of the consumer, including public and private purchasers; and that of the provider, including hospitals, HMOs, and nursing homes. While each of these perspectives points in a somewhat different direction, each is also legitimate and important.(1)

In looking at report cards, scientists have a right to demand validity and comprehensiveness. A valid report card is one that comes close to measuring quality. As the U.S. General Accounting Office (GAO) has warned (1994, 39), structural and process variables are not necessarily satisfactory proxies for quality of care. If a report card focuses on inputs or processes as surrogates for outcomes, then the literature should support the use of such measures as a reasonably good approximation of desired results. If the report card focuses on outcomes, and those outcomes depend on clientele characteristics, then a risk-adjusted model is essential. The raw data that undergird the report card should be standardized so that accurate comparisons can be made. The raw data also should be gathered by objective observers or, barring that, should be audited by such observers.

Although validity is of primary importance, comprehensiveness is also highly desirable. A hospital report card that looks at CABG surgery, C-sections, and prostatectomies is superior to one that looks at CABG surgery alone. An HMO report card that looks at preventive medicine and care of the sick is superior to one that looks at preventive medicine alone. A nursing home report card that looks at pressure sores and recreational activities is superior to one that looks at pressure sores alone. If summary appraisals are offered, they should be based on a wide range of indicators. Health care quality is multidimensional and should be treated as such. A sample that approximates the universe is also preferable to increase the chances of helping consumers who seek information about a particular facility or set of facilities.

Consumers have a right to insist that report cards be comprehensible and relevant. Obviously, comprehensibility is a relative concept. What is comprehensible to a health benefits manager at Xerox may be incomprehensible to a machinist with a high school education. In a marketplace dominated by managed care insurance, the corporation and its well-trained staff currently make most of the critical early decisions, with individual consumers playing a modest residual role. Nevertheless, even employees whose company presents them with a choice between only one HMO and one fee-for-service plan should be able to gauge the relative merits of that HMO, with an eye toward challenging the company's menu of health care options. A report card comprehensible only to experts in the field is not adequately comprehensible.

It is also essential that report cards address the issues and concerns that consumers care about. Here the concerns of all consumers count--individual consumers, corporate consumers, and governmental consumers. Although conclusive evidence is not yet available on consumers' information priorities (U.S. GAO 1995; Hibbard and Jewett 1996; Isaacs 1996), it does appear that individual consumers want to know more about "desirable events" (such as preventive medicine) and customer satisfaction (Hibbard and Jewett 1996). Studies also suggest that consumers care about cost (Edgman-Levitan and Cleary 1996, 42; Zelman 1996, 227). A report card that addresses both cost and quality is more relevant than one with a narrower focus.

Finally, providers have a right to expect that report cards are reasonable in their demands and functional in their impact. The production and preparation of (quality-relevant) information is not cost free, and those costs are ultimately borne by investors, consumers, and taxpayers. In their zeal to promote comprehensiveness, report card managers should not ask for everything but the kitchen sink. The difficulty of gathering certain data also should be factored into the equation. Specifically, the marginal benefits of consuming additional information should exceed the marginal costs of producing it. Deadlines also should be realistic; without sufficient lead time, mistakes are more easily made.

It is equally important that report cards be designed to encourage appropriate (or functional) responses. Like other policy instruments, report cards create incentives, some good, some bad. The trick is to know which are which and to institutionalize the good incentives. For example, report cards should encourage providers to make real improvements in the quality of care, not bogus improvements that create the mere illusion of success. Report cards that promote cream skimming, teaching to the test, and other forms of data manipulation are dysfunctional. They encourage providers to sustain or develop a positive image not by reducing their deficiencies but in spite of their deficiencies.

Fortunately, some of these criteria for choice point in the same direction. For example, valid report cards are less likely to be dysfunctional than others, because they accurately measure outcomes (or variables closely related to outcomes) and reward providers whose outcomes are superior. However, trade-offs are often involved, which is why several criteria must be considered. Comprehensiveness and comprehensibility often tug report card designers in different directions. As Hibbard and Jewett (1997) have pointed out, a report card that covers too much ground runs the risk of being perceived as cluttered and perplexing. Validity and comprehensibility also present dilemmas--a report card complex enough to satisfy the canons of science may be bewildering to someone who doesn't know the difference between a beta weight and a beta blocker.


To assess the merits of a diverse set of report cards, I have relied on three techniques: a review of the relevant literature; a content analysis of report cards; and interviews with health care officials in ten states. I selected the five largest and five smallest states in the hope of capturing variability in both the quantity and quality of report cards. As expected, report cards were much more likely to be found in the largest states. Indeed, in the smallest states, I found no true health care report cards.

I conducted telephone interviews with at least two state officials in each of the smallest states and at least three state officials in each of the largest states. The primary purpose of these interviews was to learn more about the content and methodology of particular report cards. I obtained state government report cards where these were available, as well as magazine report cards (usually national in scope).

In addition to discussing individual report cards, I offer a convenient tabular summary of my assessments, based on the six normative criteria I have outlined. I use a star system similar to that found in certain report cards, with five stars being the best and one star the worst. Although these appraisals are explained and defended in the text, they do, of course, represent my personal judgment.


Hospital report cards, though flawed in certain respects, are generally more valid than other health care report cards. Unlike others, hospital report cards typically have focused on a clear, important outcome variable--death--and have utilized multivariate statistical models to control for different patient risks (risk adjustment). In some states, these techniques are now highly advanced.

Of the nation's five largest states, only Texas has no statewide hospital report card. In the other four states, a state agency already gathers, analyzes, and distributes hospital mortality data.(2) All four states study deaths following acute myocardial infarction (AMI), coronary artery bypass graft (CABG) surgery, or both. In addition, California has published data on diskectomies (though not recently) and New York has published data on angioplasties. California, Florida, and Pennsylvania have published cesarean section rate data. Florida also has published mortality data for general surgery, gastroenterology, neurology, neurosurgery, pulmonary medicine, and a broad category called other medicine. In all four states, data are provided for all or almost all of the acute care hospitals in the state.(3)

The technical sophistication of the models developed in California, New York, and Pennsylvania is rather impressive. In all three states, risk factors are culled from the literature and from past experience and are tested for statistical significance. The technical merits of Florida's models are difficult to judge, because the methodological appendix (only seven pages long) is quite sketchy (Florida Agency for Health Care Administration, Guide to Hospitals in Florida 1996a). However, one limitation of Florida's models is that the same predictor variables are used, regardless of the dependent variable.

The c statistic, which measures the area under a receiver operating characteristic (ROC) curve,(4) suggests that three states have developed models that are reasonably good at discriminating between patients who die in the hospital and patients who survive to discharge. The c statistic for California's AMI study was .766 for one model, .844 for another (California OSHPD, Technical Appendix 1996b, 14-29). The c statistic for Pennsylvania's CABG study was .829 (Pennsylvania Health Care Cost Containment Council 1995, 18). The c statistic for New York's CABG study has hovered around .80 (Chassin et al. 1996, 397). Florida has not reported a c statistic for its statistical models because it was unable to achieve convergence when using logistic regression.
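For readers unfamiliar with the measure, the c statistic is the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen patient who survived: a value of .5 indicates no discriminating power, while 1.0 indicates perfect discrimination. The short Python sketch below illustrates the calculation; the risks and outcomes are hypothetical and are not drawn from any state's study.

```python
# Illustration of the c statistic (area under the ROC curve): the
# probability that a randomly chosen patient who died received a
# higher predicted risk than a randomly chosen survivor.
# All data below are hypothetical, not from any state's report card.

def c_statistic(predicted_risk, died):
    """Compare every (death, survivor) pair; ties count as one-half."""
    deaths = [p for p, d in zip(predicted_risk, died) if d]
    survivors = [p for p, d in zip(predicted_risk, died) if not d]
    concordant = sum(1.0 if pd > ps else 0.5 if pd == ps else 0.0
                     for pd in deaths for ps in survivors)
    return concordant / (len(deaths) * len(survivors))

# Hypothetical predicted risks for six patients; 1 = died in hospital.
risks = [0.02, 0.10, 0.45, 0.05, 0.30, 0.70]
outcomes = [0, 0, 1, 0, 1, 0]
print(c_statistic(risks, outcomes))  # prints 0.75
```

A model with a c statistic near .75, like this toy example, ranks a decedent above a survivor three times out of four; the values of roughly .77 to .84 reported by California, Pennsylvania, and New York indicate somewhat stronger discrimination.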

The c statistic is not the only appropriate measure of a model's validity. Another technique for assessing goodness of fit is to measure calibration, or the ability to predict the probability of death for different levels of risk. A study of New York's CABG model found it to be well calibrated--the model adjusted correctly for patients in ten risk categories (Chassin et al. 1996, 396). California's models did not fare so well. The Hosmer-Lemeshow statistic, which compares observed with predicted outcomes across several strata of risk, was statistically significant for both Model A and Model B (California OSHPD, Technical Appendix 1996b, 10-4), which raises questions about calibration. In particular, Model B could be better calibrated. Neither Florida nor Pennsylvania has published statistics on calibration.
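The Hosmer-Lemeshow test groups patients into risk strata, compares observed deaths with the deaths the model predicts in each stratum, and sums the squared discrepancies into a chi-square statistic; a statistically significant value, as in California's case, signals poor calibration. The minimal sketch below uses invented strata, not California's published figures.

```python
# Sketch of the Hosmer-Lemeshow comparison: observed versus expected
# deaths across risk strata. A large chi-square (significant p-value)
# indicates poor calibration. The strata counts here are hypothetical.

def hosmer_lemeshow(strata):
    """strata: list of (n_patients, mean_predicted_risk, observed_deaths)."""
    chi_square = 0.0
    for n, risk, observed in strata:
        expected = n * risk
        # Standard form: (O - E)^2 / (n * p * (1 - p)), with E = n * p.
        chi_square += (observed - expected) ** 2 / (expected * (1 - risk))
    return chi_square  # compared against a chi-square distribution

# Three hypothetical risk strata: (patients, mean predicted risk, deaths).
strata = [(500, 0.02, 12), (300, 0.10, 33), (200, 0.30, 55)]
print(round(hosmer_lemeshow(strata), 2))
```

In this example each stratum's observed deaths sit close to the model's expectations, so the statistic is small and the model would be judged well calibrated; California's significant result implies the opposite pattern.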

In order to avoid human error, the New York Department of Health audits its data (Chassin et al. 1996). Although this is expensive, it helps to ensure data accuracy. California also has opted for extensive editing of data and has issued appropriate warnings and caveats based on validity checks. In this area, the New York and California report cards are superior.

In terms of comprehensibility, hospital report cards suffer from a basic problem noted by Hibbard and Jewett (1997), which is that consumers have trouble grasping so-called undesirable event indicators such as risk-adjusted mortality rates. Within these constraints, however, some hospital report cards are more user friendly than others. Each report card uses some set of symbols to signify hospitals that are better or worse than average. To facilitate understanding, California and Pennsylvania reproduce their respective coding schemes on every page on which hospitals are compared. In pursuit of the same goal, New York has managed to squeeze information about all of its hospitals that perform angioplasty onto a single page. In contrast, Florida uses a confusing notation system (a colored triangle is good, a colored square is bad, and a minus sign signifies no statistically significant difference) and reproduces the coding scheme only occasionally in a very bulky book.

In terms of relevance, consumers say that they care less about death than about other quality indicators but appear to weigh death more heavily when they actually make choices (Hibbard and Jewett 1997). Thus the relevance of mortality per se is not quite clear. However, it is clear that consumers care about cost (Edgman-Levitan and Cleary 1996). Both Florida and Pennsylvania provide data on cost and quality in the same publication--indeed, on the same page. In this respect, Florida's and Pennsylvania's report cards would appear to be more relevant to consumers' information needs.

In contrast to New York and Pennsylvania, which use clinical data gathered expressly for the purpose of performance measurement, California and Florida use administrative data (or discharge abstracts). Studies show that risk models that use clinical data are superior to those that use administrative data. For example, Hannan et al. (1992 and 1994) obtained a c statistic of .787 with clinical data, .74 with administrative data, for the same set of patients. Similarly, Iezzoni et al. (1992) found that models that utilized clinical data did a better job of predicting mortality than did models that utilized administrative data. In this respect, the Pennsylvania and New York report cards are superior.

However, Pennsylvania's approach imposes greater administrative and financial costs on participating hospitals than does Florida's approach. The Pennsylvania system, which uses the Atlas (MedisGroups) statistical package to do risk adjustment,(5) requires trained personnel at each hospital to abstract information from the medical records.(6) This is time-consuming, as it takes approximately forty-five minutes to abstract a single case. The MedisGroups package is also expensive, at approximately $200,000 per hospital. New York has developed its own system for collecting clinical data for cardiac surgery and for angioplasty but has not extended these requirements to acute myocardial infarction (AMI) cases (Hannan 1997). Most of the affected hospitals have been able to meet New York's data requirements by allocating one-half to one full-time-equivalent staff member for that purpose.

In contrast, the administrative data used by Florida do not require any additional work by hospitals.(7) California, which uses its own administrative data base, also imposes fewer additional costs on hospitals.(8) Thus Pennsylvania has purchased greater validity at the expense of reasonableness; Florida has purchased greater reasonableness at the expense of validity. New York and California fall somewhere in between, with New York stressing validity a bit more and California stressing reasonableness a bit more.

Another measure of reasonableness is whether the state provides opportunities for hospitals to be heard, either ex ante (as contributors to report card design) or ex post (as critics of the final results). Though each of the largest states confers with hospital officials on issues of content and design, the states differ in their willingness to share hospitals' reservations with consumers. California is especially generous in this respect. The same publication that compares California hospitals, based on heart attack deaths, also includes letters from individual hospitals that wish to register objections (California OSHPD, Study Overview and Results Summary 1996a). Florida's concession to reasonableness is more mechanical and repetitive--on every other page Florida reprints a statement that warns readers about data limitations and urges them to confer with health care professionals to place the data in perspective.

Are hospital report cards functional? Do they encourage hospitals to improve their performance? Some evidence suggests that the answer is yes. For example, in New York State, risk-adjusted mortality associated with CABG surgery declined by 41 percent from the beginning of 1989 through the end of 1992, the period when the CABG report card was first introduced (Chassin et al. 1996, 397). These results are encouraging and consistent with some case study evidence from St. Peter's Hospital in Albany, where unfavorable report card results triggered significant improvements in surgical procedures and outcomes (Montague 1996). However, additional research needs to be done in this important area, particularly in view of studies that find substantial reductions in risk-adjusted mortality in states that lack report cards (Ghali et al. 1997).

Since 1990 U.S. News & World Report has published an annual report card on hospital quality, focusing on "America's Best Hospitals" in various medical specialties. Scholars who believe that death is not the only outcome of interest and who question the use of administrative data for risk-adjustment purposes have criticized the comprehensiveness and the validity of the magazine's research methodology, which essentially mimics the approach previously taken by HCFA (Green et al. 1997). But death's importance cannot be denied, and the breadth of medical specialties covered is impressive. One drawback is that the magazine presents a series of numbers for each hospital rather than user-friendly symbols. A more serious drawback is the absence of information on bad or mediocre hospitals. In the final analysis, the U.S. News report card is more helpful to consumers who live in large metropolitan areas with some very good hospitals than to consumers who do not enjoy access to such hospitals.

In contrast to the largest states, none of the five smallest states (in terms of population) has a hospital report card. Health experts cite several reasons for this. One is the relatively small number of hospitals, which reduces consumer demand for systematic comparisons. For example, only one Alaska city (Anchorage) has more than one hospital (Whistler 1997). Where competition is limited or nonexistent, a report card's utility to consumers is more limited, though patients do occasionally travel substantial distances to seek out an accomplished surgeon (Pauly 1992, 177). A second factor is political opposition from hospital officials, many of whom regard report cards as a threat to their vital interests. As a North Dakota official (Garland 1997) puts it, "The political environment doesn't lend itself to making accusations about the quality of care in the medical community." A Vermont official (Davis 1997) expresses the same sentiment: "There have been power struggles over what the state should and shouldn't do. Providers are nervous that report card information may be abused or misinterpreted." A third factor is that many state officials have reservations about the state of the art and/or about their capacity to develop a valid statistical model, with appropriate risk adjustment, despite limited resources. North Dakota has reservations about data quality (Garland 1997) and Delaware is seeking to resolve some software problems before proceeding with a hospital report card (Berry 1997). As expected, the nation's smallest states lag far behind the nation's largest states in developing hospital report cards.


In contrast to hospital report cards, whose production is determined largely by state health agencies and whose content has become more uniform as a result of the diffusion of scientific practices, HMO report cards are much more diverse in sponsorship and in content. Although the National Committee for Quality Assurance (NCQA) is striving for greater comprehensiveness and greater standardization in this field (Corrigan 1995; Anders 1996), the HMO report card industry is at the moment fragmented and diffuse.

A nonprofit organization funded by employers, HMOs, and the federal government, the NCQA has drafted voluntary quality standards for HMOs, known as the Health Plan Employer Data and Information Set (or HEDIS). Those standards were criticized by the Foundation for Accountability (FACCT) and others for their voluntary status and for focusing too much on process and not enough on outcomes (Belkin 1996; Freudenheim 1996). In 1996 the NCQA unveiled HEDIS 3.0, which includes several new performance measures, such as whether patients hospitalized with a heart attack subsequently receive beta-blockers and whether health plans actively advise members who smoke to quit. In addition, the NCQA introduced a new functional status measure (the SF-36) for senior citizens who belong to Medicare HMOs. Beginning in 1997, the Health Care Financing Administration (HCFA) requires all Medicare HMOs to submit audited HEDIS data, including SF-36 data and a customer satisfaction survey.

These developments are extremely important and they could help to improve HMO report cards in the future. Nevertheless, several limitations should be kept in mind. First, except for Medicare HMOs, participation remains strictly voluntary. Of approximately 630 health plans in the United States, only 215 have submitted HEDIS data to the NCQA that may be released to the general public (J. Thompson 1997). Furthermore, if they wish, participants are free to submit data on some indicators but not on others. Second, the NCQA continues to focus more on process than on outcomes. Changes incorporated into HEDIS 3.0, though noteworthy, were incremental. Of 826 new measures proposed for HEDIS 3.0 by interested parties, the NCQA agreed to test thirty-three but adopted only six. The NCQA has made limited progress in dealing with risk adjustment, thus precluding the use of asthma indicators and other outcome measures that are known to be linked to demographic characteristics. Third, the NCQA really does not publish a regular report card, unless one thinks of its accreditation data base (available in print and on the Internet) as a report card. Instead the NCQA serves as a data repository and a data supplier. For the time being, HMO report cards are largely in the hands of state governments, magazines, and public-private partnerships.

The leading state in this field is probably California, where an alliance of health plans, providers, and purchasers (including the California Public Employees' Retirement System) has produced two HMO report cards since 1994. The alliance, spearheaded by the Pacific Business Group on Health, is officially known as the California Cooperative HEDIS Reporting Initiative (CCHRI), because its indicators are derived from HEDIS.

In its 1996 report, the CCHRI published data on six indicators: childhood immunizations, cholesterol screening, breast cancer screening, cervical cancer screening, prenatal care in the first trimester, and diabetic retinal examination. Although outcome measures would be welcome, these process measures represent a very good beginning. The use of standardized questions and responses permits meaningful comparisons across health plans. Also, the data have been validated by an independent third party auditor (MEDSTAT), which significantly boosts the trustworthiness of the results. For each indicator, the report uses circular icons (dark, light, half-and-half) to show whether each of twenty-four health plans is above average, below average, or average in comparison to other plans (California Cooperative HEDIS Reporting Initiative 1996). This facilitates understanding, as tests show that consumers prefer icons to raw numbers (Castles 1997). As for relevance, focus groups show that consumers demonstrate keener interest in preventive care (desirable event indicators) than in undesirable event indicators or disciplinary actions. As Hibbard and Jewett (1996, 45) put it, "Consumers have a definite preference for desirable-event indicators and for patient ratings."

To finance the CCHRI project, each participating health plan contributed at least $100,000 (Kertesz 1996). Despite some grumbling over cost, the overwhelming majority of California HMOs, representing over 95 percent of the state's commercial HMO membership, have chosen to participate (Hopkins 1997). Whether firms have improved their services as a result of the report card is not yet known. However, one study shows that report cards now rival friends and family as an information source (Smart 1996). Under such circumstances, HMOs have strong incentives to try to improve their performance, as measured by the preventive medicine indicators and a customer satisfaction survey scheduled for inclusion in the 1997 report.

The CCHRI report card is not the only publication of potential interest to California's HMO consumers. As required by law, the California Department of Corporations (1996) publishes annual data on the number of complaints filed against all licensed health plans in the state. The report also lists the number of complaints that fall within certain categories (accessibility, benefits/coverage/claims, and quality of care) and within certain subcategories as well (e.g., inadequate facilities, inappropriate physician care, inappropriate hospital care). Because consumers like to know what other consumers think (Hibbard and Jewett 1996), a complaint-based report card is potentially relevant.

The utility of this particular publication, however, is severely limited for two reasons. First, the report is not user friendly. The complaint data are organized alphabetically rather than by complaint/enrollee ratios, and it is somewhat difficult to determine whether a higher score is better or worse than a lower score (the key heading says quality of care/10,000 enrollees rather than complaints/10,000 enrollees). Second, the report draws no distinction between substantiated and unsubstantiated complaints. Thus a relatively good health plan with a large number of unsubstantiated complaints may be linked unfairly in the public's mind to a relatively bad health plan with a large number of substantiated complaints. Until these problems are rectified, the use of complaint data leaves much to be desired.

A better approach, adopted by the New York State Insurance Department, is to distinguish between "upheld" complaints and complaints that have been withdrawn or not upheld (New York State Insurance Department 1996). New York's data are organized conveniently by complaint ratio (the number of complaints upheld divided by the dollar value of the company's premiums), and the complaint ratio is clearly labeled, leaving little doubt that a higher number is worse. However, the New York State Insurance Department's report is unidimensional, with no breakdown by type of complaint.

More impressive is a recent report on managed care performance by the New York State Department of Health (1997). That report summarizes quality data for both commercial and Medicaid HMOs. The data cover a good range of indicators for commercial HMOs and an even wider range for Medicaid HMOs (though they do not yet feature customer satisfaction measures). Of special importance, the indicators include a risk-adjusted low birthweight measure for both commercial and Medicaid HMOs. The inclusion of an authentic outcome measure, with appropriate controls for demographic characteristics, is both unusual and commendable. The New York data are audited (by the NCQA), which enhances validity. Although six of forty-one plans flunked the audit, that still leaves consumers with valid information on a substantial number of plans. Each commercial health plan must spend approximately $100,000 to meet the state's quality data requirements, and each Medicaid plan must spend about $30,000 more (Roohan 1998). However, some of these costs would have been incurred anyway, as most of New York's plans voluntarily submit HEDIS data to the NCQA. The New York State Department of Health, unlike California, has not created a user-friendly web site to facilitate consumer access. Also, its report is quite bulky, which could intimidate many consumers.

The Florida Agency for Health Care Administration has released some quality-relevant data on HMOs. In March 1995 and again in September 1996 Florida published a report on how well Medicaid prepaid health plans were complying with state standards. For each health plan with a state Medicaid contract, the report revealed the total number of deficiencies and the percentage of standards met. In recognition of the special importance of violations of quality of care standards, the report included a separate table on such violations.

Florida's report included no icons to signify unusually good or bad performance, only numbers. A HEDIS-based report featuring preventive medicine indicators would probably be more relevant to consumers. Despite these limitations, however, the Florida report appears to have had a positive impact. Following the release of the first report, the state terminated the contracts of three plans and issued a moratorium on further enrollment by plans with substantial noncompliance (Florida Agency for Health Care Administration 1996c, 2).

Thus far, Florida has not gathered or released performance data based on HEDIS indicators. In Texas, an HMO report card based on HEDIS indicators is being developed, but HMOs have argued against the public release of unaudited data while at the same time they have refused to pay for such audits (Genco 1997). Until this impasse is resolved, an HMO report card remains an unrealized goal.

In Pennsylvania, the state has thus far not developed an HMO report card. However, in western Pennsylvania, the Pittsburgh Business Group on Health teamed up with Health Pages, a New York-based firm, to produce an HMO report card in 1994 and 1995. The result was a highly comprehensible and user-friendly publication, featuring a readable summary of the scorecard's content and methodology and a helpful glossary. Despite a promising beginning, the Pittsburgh Business Group on Health encountered problems when it attempted to upgrade the validity of its preventive medicine indicators. During the first year, when unaudited data were used, five (of five) HMOs participated. Subsequently, when the Pittsburgh Business Group on Health insisted on audited data, only three of five HMOs participated. Whether the nonparticipants objected primarily to auditing costs ($30,000 per HMO) or to potentially embarrassing revelations is unclear (Whipple 1996). Regardless, the following year the Pittsburgh Business Group on Health abandoned auditing. Clearly, efforts to enhance validity through auditing often encounter formidable industry opposition.

Even without auditing, many HMOs balk at participating in report card projects. When Newsweek prepared the cover story "America's Best HMOs," it surveyed seventy-five large health plans across the country. Of these plans, only forty-three cooperated (Spragins 1996, 56), for a response rate of 57 percent. Newsweek's sample of plans is skewed, with a disproportionate number of weaker plans declining to participate.(9) The modest response rate makes it difficult for consumers to compare their health plans with others in the same market areas. However, Newsweek's report card is quite relevant to consumers, as it includes customer satisfaction and preventive medicine measures, among others. In terms of presentation, the Newsweek report card is impressive. In addition to several pages of sprightly prose and appealing photographs, Newsweek managed to squeeze valuable information on several quality indicators onto a highly readable two-page chart. A one-dot to four-dot ranking system for each indicator and an overall summary score permit consumers to focus on the bottom line or on an indicator of special interest, as they wish.

An HMO report card produced by U.S. News & World Report features less information on a larger number of HMOs (Rubin and Bettingfield 1996). Instead of conducting its own survey of HMOs, U.S. News relied on data already submitted by many HMOs to the NCQA, which is certainly less costly for all concerned. At the time, 174 HMOs provided some data to the NCQA and 132 provided data on all five prevention categories of particular interest to U.S. News. More than three hundred HMOs submitted no data to the NCQA. Thus U.S. News, like Newsweek, suffered from significant nonparticipation. Because it was dependent on the NCQA's data base, U.S. News was unable to provide information on customer satisfaction or complaint ratios, as Newsweek had. However, U.S. News, like Newsweek, did a good job of presenting its data. Information was organized by state and, within each state, by performance, with the best HMOs getting four stars and the worst getting a single star. The impact of a single U.S. News edition would be extremely difficult to determine. However, if U.S. News continues to rate HMOs on a regular basis, the impact could be considerable. With an average circulation of approximately 2.3 million, U.S. News has the potential to encourage HMOs to improve their performance.

None of the five smallest states has prepared an HMO report card. In Alaska, with no HMOs, the question is not even relevant. In North Dakota, HMOs constitute a small fraction of the health care market. In the other small states, political and technical factors have combined to discourage the production of an HMO report card.


The gap between large and small states is smaller for nursing home report cards than for other types of report cards. That is because even large states have made little progress toward developing systematic report cards. Nursing home report cards are both less common and less sophisticated than hospital or HMO report cards.

Of the five largest states, only one state publishes an authentic nursing home report card. Every year Florida publishes a report that rates each nursing home based on its most recent licensing inspection record (Florida Agency for Health Care Administration 1996c). Each nursing home is rated superior (if it exceeded minimum licensing standards), standard (if it met the minimum requirements), or conditional (if it failed to meet the minimum requirements).

The simplicity of the Florida rating system helps to ensure that its report card will be easily understood. However, Florida's report card suffers from two interrelated problems. The first is that it presupposes that nursing home quality is unidimensional. By failing to specify separate dimensions of performance (e.g., physical care vs. recreational opportunities vs. privacy rights), Florida deprives consumers of valuable information that could result in better choices. Second, the Florida report card never explains which factors determine a facility's rating. If a facility has more than ten deficiencies, or more than twenty, does that guarantee a standard or a conditional rating? Does it depend on which deficiencies? Does it depend on whether the same deficiencies were cited in the previous inspection? Answers to these questions are not divulged in Florida's consumer guide.

A more subtle problem, not unique to Florida's nursing home guide, is that the implicit rating scheme, though invisible to consumers, may be transparent to the nursing home industry. If nursing homes know that certain transgressions will deflate their rating while others will not, they have incentives not just to reduce their transgressions (functional behavior) but to reallocate their transgressions (dysfunctional behavior) to create the illusion of progress. More broadly, nursing homes have incentives to pressure state regulators to relax their standards.

It is difficult to know whether this has happened, in Florida or elsewhere. However, it does appear that some "grade inflation" has occurred over time. In 1989 only 43 percent of Duval County's nursing homes received a superior rating; in 1996 that figure climbed to 74 percent. In 1989 only 22 percent of Dade County's nursing homes received a superior rating; in 1996 that figure climbed to 57 percent. These numbers suggest either that Florida's nursing homes have gotten better or that they have simply gotten better at playing the ratings game.

Unlike Florida, the state of California does not publish a nursing home report card. However, a nonprofit advocacy group, California Advocates for Nursing Home Reform, does publish one. Derived from state data, the California report card lists the fifty nursing homes with the most violations and the fifty with the fewest (California Advocates for Nursing Home Reform 1996). The report card also records the number of complaints and the dollar amount of levied fines.

Although it is instructive, the California report card has some serious flaws. First, it does not present a full list of nursing homes (there are 1,460 in California), only the best and the worst. Second, it does not control for nursing home size in ranking homes. Thus while the number of beds per home is listed, nursing homes are rank ordered by the raw number of deficiencies rather than by the ratio of deficiencies to beds. Third, the report card does not distinguish between substantiated and unsubstantiated complaints. A nursing home that faced numerous false accusations is lumped together in the complaint category with nursing homes that were justifiably accused of similar offenses. Fourth, the report card's provocative cover, on which California's nursing homes are given three Fs, one D, and one C, is more inflammatory than informative and provides little basis for a constructive dialogue with the regulated industry.(10) To sum up, California's effort falls short on several dimensions, including validity, comprehensiveness, relevance, and reasonableness.
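The size-adjustment flaw is easy to see with a small sketch (the facilities and counts below are hypothetical illustrations, not drawn from the CANHR data):

```python
# Hypothetical nursing homes: (name, deficiencies, beds).
homes = [
    ("Oak Manor", 30, 300),   # large home: 0.10 deficiencies per bed
    ("Elm Cottage", 12, 40),  # small home: 0.30 deficiencies per bed
]

# Ranking by raw deficiency counts (the approach CANHR used) makes the
# large home look worse, even though it is cleaner on a per-bed basis.
by_raw = sorted(homes, key=lambda h: h[1])

# Ranking by deficiencies per bed controls for facility size and
# reverses the order.
by_rate = sorted(homes, key=lambda h: h[1] / h[2])

print([name for name, *_ in by_raw])   # ['Elm Cottage', 'Oak Manor']
print([name for name, *_ in by_rate])  # ['Oak Manor', 'Elm Cottage']
```

The reversal shows why a raw-count ranking can mislead consumers comparing facilities of very different sizes.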

In the remaining large states (New York, Pennsylvania, and Texas), consumers do not have access to a nursing home report card. State officials in each state will send consumers some information on a particular nursing home, with a list of its deficiencies. In Pennsylvania, that information is quite specific (e.g., offensive odors were detected in rooms 304 and 305 on July 19, 1996). In New York and Texas, violations are grouped into categories, such as quality of life and quality of care, and subcategories, such as pressure sores and unnecessary drugs. A Texas health official concedes that this information is problematic: "In effect, it's like saying, 'I got a speeding ticket.' But you don't know whether it was for going 90 miles per hour in a school zone or 70 miles per hour on the freeway" (Stowers 1996). Also, the information available to consumers does not include comparisons with other facilities. Thus consumers can only guess whether the number and severity of violations are below or above average.

An inquisitive and persistent consumer can obtain additional information from the federal government. Each regional office of the HCFA will, upon request, send out information on a particular nursing home's deficiencies in each of several broad categories and how that compares with other nursing homes in the state, the region, and the nation.(11) However, the information furnished by HCFA is incomplete and potentially misleading. For example, if the Pleasant Dreams Nursing Home has eight deficiencies, HCFA tells you what percentage of other nursing homes in the state/region/nation has each of the eight deficiencies, but HCFA does not tell you what percentage of other nursing homes has deficiencies other than the eight at Pleasant Dreams. Nor does HCFA tell you the average number of deficiencies in the state/region/nation. Without such information, it is impossible to appraise the relative merits of Pleasant Dreams.

Since 1990 HCFA has required all fifty states to obtain standardized information on the condition of all nursing home patients in facilities serving Medicare or Medicaid clients. That information assists states in determining appropriate subsidies for patients with different levels of risk. Known as the Minimum Data Set (MDS) system, these data were developed to assess patients rather than to assess nursing homes. In theory, MDS data could serve as the basis for a nursing home report card. Thus far, however, none of the five largest states has used MDS data for that purpose.

Considering the dearth of information on nursing home quality that is available in the largest states, it is not surprising to discover that scant information is available in the smallest states. In Alaska, with competing nursing homes (two) in only one location (Anchorage), a nursing home report card for consumers may not be needed. In the other smallest states, a stronger case for a report card can be made, but none of these states has developed one so far.

To help compensate for the shortage of good nursing home report cards, Consumer Reports published a special report on forty-three nursing home systems (for-profit chains and religious groups with multiple nursing homes) in August 1995. To rate each system, the magazine focused on sixty-nine quality standards and counted the number of code violations in the last four inspection reports for each nursing home that was examined. The system then received an overall score, based on the mean number of code violations. The magazine also reported the percentage of homes within each system that were much better or much worse than average ("When a Loved One Needs Care," August 1995, 518-27). Like other magazine report cards, Consumer Reports' review is comprehensible and user friendly. However, it fails to reveal crucial facts about its methodology--e.g., how many nursing homes per system were examined. Also, it treats nursing home quality as if it were a unidimensional variable. Instead of offering separate quantitative scores for broad quality categories such as food, patients' rights, and grooming, the magazine provides only a summary measure of quality. Consumers who are particularly interested in a specific quality dimension such as patients' rights cannot learn about it from this report.


The incidence and content of health care report cards vary sharply across states. Smaller states have fewer and less sophisticated report cards than larger states do, at least at the tail ends of the size spectrum. Indeed, the nation's smallest states have yet to adopt their first health care report cards. Differences between the public and private sectors, though less dramatic, are also noticeable. Private-sector report cards are very comprehensible and usually feature user-friendly tables and charts. However, magazine report cards often suffer from sample size problems, and at least one advocacy group report card suffers from serious validity problems.

In some respects, the most interesting differences of all are across health policy domains. Without intending to, we have developed three distinctive prototypes of information systems for hospitals, HMOs, and nursing homes. Each prototype represents a different path to the future.

For hospitals, we have created a risk-adjusted performance system. The hallmark of hospital report cards is a striking emphasis on technical sophistication and risk adjustment to control for differences in patient characteristics. While some academic researchers have sharply criticized some of these models, others have collaborated in producing them. As a result of the criticisms and the collaborations, hospital report cards have improved significantly. Although flaws remain, they often are handled through appropriate disclaimers or, in some instances, through the simultaneous presentation of alternative models.

Another striking feature of hospital report cards is their emphasis on outcomes. While the producers of other report cards drool at the prospect of reporting on actual outcomes, hospital report card producers have been doing precisely that for years. Admittedly, death is not the only medical outcome of possible interest--a factor that may limit report card utilization by referring physicians (Schneider and Epstein 1996). The quality of life after surgery and other medical treatments also needs to be reported. Nevertheless, hospital report cards have done a very good job of modeling an outcome of considerable interest.

Although the federal government once dominated the hospital report card field by virtue of its annual hospital mortality reports, the federal government abruptly abandoned the field in 1993, passing the baton to state governments (Vladeck 1994). Since then state governments, especially in larger states, have demonstrated considerable ingenuity in solving technical problems, in achieving universal or nearly universal participation among hospitals, and in establishing quality control or auditing mechanisms. These successes are, of course, interrelated. As risk adjustment has improved and data quality has improved, state administrators have been able to assure hospital administrators that they are being treated fairly. They have also been able to help hospitals identify practices likely to improve their performance.

For HMOs, we have strikingly different arrangements, which can be described as a loosely coupled indicator system. The hub of the system is the National Committee for Quality Assurance, which accredits HMOs and which serves as a repository for quality-relevant data submitted by HMOs. The NCQA works cooperatively with providers and consumers (including both business and government consumers) in an effort to serve the needs of the latter without upsetting the former. As a private, nonprofit organization, the NCQA cannot compel health plans to submit any data whatever, although the NCQA can (and does) specify conditions for accreditation. Because the NCQA's bargaining power is quite limited, and because HCFA has been reluctant to exercise its own more formidable bargaining power, connections among HMO stakeholders remain loose and disjointed. Health plans differ dramatically in their choice of quality indicators, in their willingness to have their data audited, and, most fundamentally, in whether they submit any data at all.

Although many interesting indicators are used by those health plans that do participate, few, if any, of them measure actual health outcomes (Anders 1996; Belkin 1996). Instead structural and process indicators predominate--such as the number of board-certified physicians and the number of cholesterol screenings. The accent is on what is done and who does it rather than on whether it makes a difference. Thus we have an indicator system, not a performance system. The absence of risk adjustment discourages the use of authentic outcome measures, such as low birthweight and asthma measures, which are known to be linked to socioeconomic characteristics. The absence of risk adjustment may also taint customer satisfaction surveys (Gold and Wooldridge 1995), although that issue is still being debated (U.S. PPRC 1997, 153). With or without risk adjustment, customer satisfaction surveys are subject to misinterpretation because they focus primarily on the experiences of healthy persons. As a result of these limitations, health plans can legitimately complain that they are not being fairly assessed. The NCQA is acutely aware of these problems and would like to correct them, but it lacks the authority to do so.

HCFA, a more promising candidate for vigorous intervention, recently has taken some encouraging initiatives. For example, HCFA has funded the Foundation for Accountability (FACCT) and the Rand Corporation, in an effort to develop viable outcome measures for diabetes, depression, and breast cancer (Freudenheim 1996). Also, in December 1996 HCFA announced that all Medicare HMOs will have to submit a common set of HEDIS 3.0 indicators and will have to consent to auditing by designated peer review organizations (U.S. HCFA 1996). This is an important step forward that should permit Medicare consumers to make better managed care choices. Unfortunately, however, HCFA has declined to take similar steps with respect to Medicaid HMOs, arguing that it lacks the authority to do so. This is a puzzling position, because the federal government continues to spend more money on Medicaid than the states spend. Until HCFA reverses itself, or state governments develop better HMO report cards, Medicaid consumers (and their advocates) will continue to face an acute information shortage.

The weakest of the three systems is the one we have for nursing homes, which may be described as a semi-public compliance system. In this instance, data gathered for regulatory purposes by state surveyors (or inspectors) have been disseminated to consumers on a limited basis but without significant adaptation to take consumer needs into account. If they wish to do so, consumers may obtain raw data on the most recent deficiencies (or code violations) identified by state surveyors for particular nursing homes. The raw data may offer some useful insights for savvy consumers who understand whom to contact (HCFA's regional office), how to read the charts (known as OSCAR 4 reports), and which deficiencies are more worrisome than others. For most consumers, however, the raw data are inaccessible and incomprehensible.

To make regulatory compliance data more user friendly, some nursing home advocacy groups have published report cards based on such data for a metropolitan area or a state. Some newspapers have also produced report cards from time to time.(12) Although such efforts are commendable, they have not been regularized or routinized. Also, they are based on data gathered to determine whether individual facilities are complying with the law, not whether they are offering care of acceptable or superior quality. While the two concepts are undoubtedly related, they are not identical.

Thanks to federal law, all fifty states now have in place a patient assessment system, known as the Minimum Data Set (MDS) system. In six states, a subset of quality indicators has been tested and validated for more general use (Zimmerman et al. 1995). In principle, MDS data could be used to evaluate and compare nursing homes within or even across states. Alternatively, MDS data might be combined with OSCAR data to produce a more comprehensive report. Thus far, however, the states have made extremely limited progress in developing nursing home report cards, partly because of political opposition from the nursing home industry. As a Texas official (Wilson 1997) explains, "Everybody's comfort level goes down real quickly when you try to put a number on people."


Several years ago Relman (1988) argued that a new era of "assessment and accountability" had dawned in health care. Considerable progress has been made in measuring quality for hospital patients, health plan members, and nursing home patients, among others. Some progress also has been made in controlling for variations in patient risk, at least with respect to hospital patients. Although cost containment remains a dominant goal, we are beginning to take quality seriously as well.

Yet many obstacles remain. A risk-adjusted performance system is beginning to emerge for hospitals, but it seems to be an elusive goal for nursing homes and HMOs. The nursing home sector's semipublic compliance system does not come close to meeting consumers' information needs. The managed care sector's loosely coupled indicator system is more promising but continues to suffer from inadequate participation, standardization, and validation.

Even the hospital sector has a long way to go. That was illustrated recently when the Joint Commission on Accreditation of Health Care Organizations (JCAHO) announced new "performance-based" accreditation standards for hospitals and other health care agencies (Moore 1997). It turns out that hospitals will be free to choose from as many as sixty information systems and that they will be free to select the indicators on which their performance is judged. Under such circumstances, hospitals can include in report cards only the information they wish consumers to know. Such selective reporting is likely to cast most hospitals in a highly favorable light and limit opportunities for sound consumer choices. While individual states have made considerable progress in developing sophisticated hospital report cards, the national picture is much bleaker.

As these struggles continue, private organizations have plugged some holes in our information safety net. The private sector has made some valuable contributions in demonstrating how user-friendly report cards can be produced and in preparing tailor-made report cards for local metropolitan areas. The private sector's role will continue to be important, because user-friendly formats affect use and because consumers are particularly interested in comparing health options within their metropolitan areas (Hanes and Greenlick 1996). Nevertheless, state governments will probably be the most important determinants of whether we develop viable risk-adjusted performance systems for hospitals, HMOs, and nursing homes throughout the United States.

Clearly, report cards have not solved the problem of information asymmetries in health care markets. Just as clearly, however, report cards have the potential to reduce the information gap between consumers and producers. Scientists, consumers, and providers expect different things from report cards. It is not easy to reconcile these expectations, but some report cards have managed to do so. These report cards might well serve as templates for public and private health care entrepreneurs who seek to promote quality through accountability.
Exhibit 1
Hospital Report Cards

                                Validity           Comprehensiveness
California, State               (****)             (***)
Florida, State                  (***)              (****)
New York, State                 (*****)            (***)
Pennsylvania, State             (****)             (***)
U.S. News                       (***)              (**)

                               Comprehensibility      Relevance
California, State              (***)                  (***)
Florida, State                 (**)                   (****)
New York, State                (***)                  (***)
Pennsylvania, State            (***)                  (****)
U.S. News                      (*****)                (**)

                               Reasonableness        Functionality
California, State              (****)                (***)
Florida, State                 (***)                 (***)
New York, State                (***)                 (****)
Pennsylvania, State            (***)                 (***)
U.S. News                      (****)                (**)

(*) = POOR

(**) = FAIR

(***) = GOOD

(****) = VERY GOOD

(*****) = EXCELLENT
Exhibit 2
HMO Report Cards

                                Validity           Comprehensiveness
CCHRI, California               (****)             (****)
California, State               (*)                (***)
N.Y. State, Insurance           (***)              (*)
N.Y. State, Health              (****)             (****)
Florida, State                  (***)              (**)
Health Pages, PGH               (***)              (****)
Newsweek                        (***)              (***)
U.S. News                       (***)              (**)

                               Comprehensibility      Relevance
CCHRI, California              (*****)                (*****)
California, State              (*)                    (**)
N.Y. State, Insurance          (***)                  (**)
N.Y. State, Health             (***)                  (****)
Florida, State                 (***)                  (***)
Health Pages, PGH              (*****)                (****)
Newsweek                       (*****)                (*****)
U.S. News                      (*****)                (****)

                               Reasonableness        Functionality
CCHRI, California              (***)                 (****)
California, State              (*****)               (*)
N.Y. State, Insurance          (*****)               (***)
N.Y. State, Health             (***)                 (***)
Florida, State                 (*****)               (***)
Health Pages, PGH              (***)                 (***)
Newsweek                       (***)                 (***)
U.S. News                      (*****)               (***)

(*) = POOR

(**) = FAIR

(***) = GOOD

(****) = VERY GOOD

(*****) = EXCELLENT
Exhibit 3
Nursing Home Report Cards

                     Validity   Comprehensiveness

Florida, State       (***)      (***)
CANHR, California    (*)        (**)
Consumer Reports     (***)      (**)

                     Comprehensibility   Relevance   Reasonableness

Florida, State       (***)               (**)        (*****)
CANHR, California    (***)               (**)        (**)
Consumer Reports     (****)              (**)        (***)


                     Functionality

Florida, State       (**)
CANHR, California    (*)
Consumer Reports     (***)

(*) = POOR

(**) = FAIR

(***) = GOOD

(****) = VERY GOOD

(*****) = EXCELLENT


(1) For a fuller discussion of these perspectives and the criteria discussed below, see Gormley and Weimer (forthcoming).

(2) Some of these tasks are contracted out in particular states. For example, California contracts out certain statistical analysis tasks to the University of California Davis and the University of California San Francisco.

(3) Pennsylvania excludes hospitals with fewer than thirty (AMI, CABG) cases per year. California excludes hospitals that list diabetes as a complicating factor in an unusually low number of cases. The concern is that record keeping is poor at such hospitals (California OSHPD 1996b).

(4) The c statistic in logistic regression is roughly analogous to the R squared statistic in ordinary least squares regression (or the coefficient of determination). The c statistic measures the percentage of instances in which a patient who dies in the hospital is assigned a higher probability of death by the model's statistical predictors than a patient who survives to discharge.
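The verbal definition in footnote 4 can be stated formally (a standard formulation of the concordance index; the notation here is ours, not the original's):

```latex
c \;=\; \Pr\left(\hat{p}_i > \hat{p}_j \,\middle|\, y_i = 1,\; y_j = 0\right)
```

where $\hat{p}_k$ is the model's predicted probability of in-hospital death for patient $k$ and $y_k$ indicates whether the patient actually died. A value of $c = 0.5$ indicates discrimination no better than chance, while $c = 1$ indicates that every patient who died was assigned a higher predicted risk than every patient who survived.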

(5) Atlas (formerly known as MedisGroups) is a software system for medical outcomes analysis developed by MediQual Systems Inc., a proprietary company based in Westboro, Mass.

(6) The coding is done by persons known as medical records technicians.

(7) Florida uses the APR-DRG system developed by 3M Health Information Systems to classify cases. This system was already in use before Florida decided to publish a report card.

(8) California's administrative data base is similar to, but not identical to, the MEDPAR data base used by HCFA in developing its hospital mortality models. Unlike the MEDPAR data base, which encompasses only senior citizens, California's administrative data base includes younger patients as well.

(9) To verify this, I compared the NCQA accreditation status of Newsweek's forty-three HMOs with the accreditation status of all HMOs. Of Newsweek's sample, 56 percent were fully accredited in 1996, as opposed to 15 percent of all HMOs during the same year. This dramatic gap may be due in part to the fact that Newsweek surveyed relatively large HMOs, which are probably more likely to seek and receive (full) accreditation. However, it is also reasonable to suppose that HMOs with poorer track records were more likely to decline Newsweek's invitation to participate.

(10) The Fs are for residents' rights, use of restraints, and government/industry response; the D is for quality of care; and the C is for enforcement.

(11) Some regional offices charge a fee for this information, while others do not.

(12) See, for example, Young (1996), which systematically assesses all of Michigan's nursing homes.


Anders, George. 1996 "New Rules Press HMOs to Disclose Data." Wall Street Journal, (July 16):3.

Belkin, Lisa. 1996 "But What About Quality?" New York Times Magazine, (Dec. 8): 68-106.

Berry, Donald. 1997 Delaware Department of Health and Social Services. Telephone interview, Jan. 6.

Brennan, Troyen, and Berwick, Donald. 1996 New Rules: Regulation, Markets, and the Quality of American Health Care. San Francisco: Jossey-Bass.

California Advocates for Nursing Home Reform (CANHR). 1996 1995 Report Card. San Francisco: CANHR.

California Cooperative HEDIS Reporting Initiative. 1996 Report on Quality of Care Measures. San Francisco: CCHRI.

California Department of Corporations. 1996 Health Care Service Plan Enrolee Complaint Data. Sacramento: Dept. of Corporations. Aug.

California Office of Statewide Health Planning and Development (OSHPD). 1996a Report of the California Hospital Outcomes Project, Acute Myocardial Infarction, vol. 1, Study Overview and Results Summary. Sacramento: OSHPD. May.

1996b Report of the California Hospital Outcomes Project, Acute Myocardial Infarction, vol. 2, Technical Appendix. Sacramento: OSHPD. May.

Castles, Anne. 1997 Pacific Business Group on Health. Telephone interview, June 30.

Chassin, Mark; Hannan, Edward; and DeBuono, Barbara. 1996 "Benefits and Hazards of Reporting Medical Outcomes Publicly." New England Journal of Medicine 334:6:394-98.

Corrigan, Janet. 1995 "How Do Purchasers Develop and Use Performance Measures?" Medical Care 33:1:JS18-JS24.

Corrigan, Janet, and Rogers, Lisa. 1995 "Comparative Performance Measurement for Health Plans." In Vahe Kazandjian, ed. The Epidemiology of Quality, 84-106. Gaithersburg, Md.: Aspen.

Davis, Michael. 1997 Vermont Health Care Authority. Telephone interview, Jan. 6.

Edgman-Levitan, Susan and Cleary, Paul. 1996 "What Information Do Consumers Want and Need?" Health Affairs 15:4:42-56.

Epstein, Arnold. 1995 "Performance Reports on Quality Prototypes, Problems, and Prospects." New England Journal of Medicine 333:1:57-61.

Florida Agency for Health Care Administration. 1996a 1996 Guide to Hospitals in Florida. Tallahassee: Agency for Health Care Administration.

Florida Agency for Health Care Administration. 1996b Florida Medicaid Prepaid Health Plan Review. Tallahassee: Agency for Health Care Administration, Sept.

Florida Agency for Health Care Administration. 1996c Guide to Nursing Homes in Florida. Tallahassee: Agency for Health Care Administration.

Freudenheim, Milt. 1996 "The Grading Becomes Stricter on HMOs." New York Times. July 16, D1.

Garland, Gary. 1997 North Dakota Department of Health. Telephone interview, Jan. 6.

Genco, Frank. 1997 Legislative Assistant to Texas State Rep. Glen Maxey. Telephone interview, Jan. 15.

Ghali, William; Ash, Arlene; Hall, Ruth; and Moskowitz, Mark. 1997 "Statewide Quality Improvement Initiatives and Mortality after Cardiac Surgery." Journal of the American Medical Association 277:5:379-82.

Gold, Marsha, and Wooldridge, Judith. 1995 "Surveying Consumer Satisfaction to Assess Managed-Care Quality: Current Practices." Health Care Financing Review 16:4:155-76.

Gormley, William Jr., and Weimer, David. forthcoming Organizational Report Cards. Cambridge, Mass.: Harvard University Press.

Green, Jesse; Winfield, Neil; Krasner, Mel; and Wells, Christopher. 1997 "In Search of America's Best Hospitals." Journal of the American Medical Association 277:14:1152-55.

Hanes, Pamela, and Greenlick, Merwyn. 1996 Oregon Consumer Scorecard Project: Final Report. Portland: Oregon Health Policy Institute.

Hannan, Edward. 1997 SUNY-Albany. Telephone interview, Jan. 13.

Hannan, Edward; Kilburn, Harold Jr.; Lindsey, Michael; and Lewis, Rudy. 1992 "Clinical versus Administrative Data Bases for CABG Surgery: Does It Matter?" Medical Care 30:10:892-907.

Hannan, Edward; Kilburn, Harold Jr.; Racz, Michael; Shields, Eileen; and Chassin, Mark. 1994 "Improving the Outcomes of Coronary Artery Bypass Surgery in New York State." Journal of the American Medical Association 271:10:761-66.

Hibbard, Judith, and Jewett, Jacquelyn. 1996 "What Type of Quality Information Do Consumers Want in a Health Care Report Card?" Medical Care Research and Review 53:1:28-47.

1997 "Will Quality Report Cards Help Consumers?" Health Affairs 16:3:218-28.

Hopkins, David. 1997 Pacific Business Group on Health. Telephone interview, Feb. 25.

Iezzoni, Lisa; Ash, Arlene; Coffman, Gerald; and Moskowitz, Mark. 1992 "Predicting In-Hospital Mortality." Medical Care 30:4:347-59.

Isaacs, Stephen. 1996 "What Do Consumers Want to Know?" Health Affairs 15:4:31-41.

Kertesz, Louise. 1996 "California Report Card Finds HMOs Just Average." Modern Healthcare (July 1):2.

Montague, Jim. 1996 "Report Card Daze." Hospitals and Health Networks 5:33-38.

Moore, J. Duncan Jr. 1997 "JCAHO Tries Again." Modern Healthcare (Feb. 24):2-3.

New York State Department of Health. 1997 "1995 Quality Assurance Reporting Requirements." Albany.

New York State Insurance Department. 1996 "1995 Annual Ranking of Health Insurance Complaints." Albany.

Pauly, Mark. 1992 "Effectiveness Research and the Impact of Financial Incentives on Outcomes." In Stephen Shortell and Uwe Reinhardt, eds. Improving Health Policy and Management, 151-93. Ann Arbor, Mich.: Health Administration Press.

Pennsylvania Health Care Cost Containment Council. 1995 "Coronary Artery Bypass Graft Surgery, Technical Report." vol. 4, June.

Relman, Arnold. 1988 "Assessment and Accountability: The Third Revolution in Medical Care." New England Journal of Medicine 319:18:1220-22.

Roohan, Patrick. 1998 New York State Department of Health. Telephone interview, Mar. 13.

Rubin, Rita, and Bettingfield, Katherine. 1996 "Rating the HMOs." U.S. News & World Report (Sept. 2):52-63.

Schneider, Eric, and Epstein, Arnold. 1996 "Influence of Cardiac Surgery Performance Reports on Referral Practices and Access to Care." New England Journal of Medicine 335:4:251-56.

Smart, Dean. 1996 California Public Employees Retirement System (CALPERS). Telephone interview, Dec. 9.

Spragins, Ellyn. 1996 "Does Your HMO Stack Up?" Newsweek (June 24):56-63.

Stowers, Charlene. 1996 Texas Department of Human Services. Telephone interview, Dec. 20.

Thompson, Frank. 1994 "The Quest for Quality Care: Implementation Issues." In John DiIulio and Richard Nathan, eds., Making Health Reform Work, 85-113. Washington, D.C.: Brookings.

Thompson, Joseph. 1997 National Committee for Quality Assurance. Remarks at Georgetown University, Apr. 24.

U.S. General Accounting Office (GAO). 1994 Health Care Reform: "Report Cards" Are Useful but Significant Issues Need to Be Addressed. GAO/HEHS-94-219. Washington, D.C.: GAO, Sept.

U.S. General Accounting Office (GAO). 1995 Health Care: Employers and Individual Consumers Want Additional Information on Quality. GAO/HEHS-95-201. Washington, D.C.: GAO, Sept.

U.S. Health Care Financing Administration (HCFA). 1996 "Operational Policy Letter #47." Washington, D.C.: HCFA Office of Managed Care, Dec. 23.

U.S. Physician Payment Review Commission (PPRC). 1997 Annual Report to Congress, 1997. Washington, D.C.: PPRC.

Vining, Aidan, and Weimer, David. 1988 "Information Asymmetry Favoring Sellers: A Policy Framework." Policy Sciences 21:4:281-303.

Vladeck, Bruce. 1994 "The Consumer Information Strategy." Journal of the American Medical Association 272:3:196.

Weisbrod, Burton. 1988 The Nonprofit Economy. Cambridge, Mass.: Harvard University Press.

Whipple, Christine. 1996 Pittsburgh Business Group on Health. Remarks at the annual meeting of the Association for Public Policy Analysis and Management. Pittsburgh, Nov. 1.

Whistler, Bradley. 1997 Alaska Public Health Office. Telephone interview, Jan. 6.

Wilson, Sue. 1997 Texas Department of Human Services. Telephone interview, Aug. 13.

Young, Alison. 1996 "Good Care is Possible." Detroit Free Press. Oct. 11, p. 1.

Zelman, Walter. 1996 The Changing Health Care Marketplace. San Francisco: Jossey-Bass.

Zimmerman, David; Karon, Sarita; Arling, Greg; Clark, Brenda; Collins, Ted; Ross, Richard; and Sainfort, Francois. 1995 "Development and Testing of Nursing Home Quality Indicators." Health Care Financing Review 16:4:107-27.

The author would like to thank Jean Mitchell, Mark Peterson, and David Weimer for helpful comments and suggestions.
COPYRIGHT 1998 Oxford University Press
Author: Gormley, William T., Jr.
Publication: Journal of Public Administration Research and Theory
Date: Jul 1, 1998