
The Effects of Disseminating Performance Data to Health Plans: Results of Qualitative Research with the Medicare Managed Care Plans.

Objective. To assess the information needs and responses of managed care plans to the Medicare Managed Care Consumer Assessment of Health Plans Study (MMC-CAHPS®).

Data Sources/Study Setting. One hundred sixty-five representatives of Medicare managed care plans participated in focus groups or interviews in the spring of 1998, 1999, and 2000.

Study Design. In 1998 focus groups were conducted with representatives of managed care plans to develop and test a print report of MMC-CAHPS results. After the reports were disseminated focus groups and interviews were conducted in 1999 and 2000 to identify perceptions, uses, and potential enhancements of the report.

Data Collection/Extraction Methods. The study team conducted a total of 23 focus groups and 12 telephone interviews and analyzed the transcripts to identify major themes.

Principal Findings. In 1998 participants identified the report content and format that best enabled them to assess their performance relative to other Medicare managed care plans. In 1999 and 2000 participants described their responses to and uses of the report. They reported comparing the MMC-CAHPS results to internal surveys and presenting the results to senior managers, market analysts, and quality-improvement teams. They also indicated that the report's usefulness would be enhanced if it were received within six months of survey completion and if additional data analysis was presented.

Conclusions. Focus group results suggest that the MMC-CAHPS report enhances awareness and knowledge of the comparative performance of Medicare managed care plans. However, participants reported needing additional analysis of survey results to target quality-improvement activities on the populations with the most reported problems.

Key Words. Health plans, managed care, Medicare, quality improvement, report cards

In recent years the public reporting of health care performance data has become more prevalent and prominent through the efforts of accreditation agencies, public and private purchasers, and provider organizations such as state hospital associations and health plans (Epstein 1998, 2000; Leatherman and McCarthy 1999; Smith, Rogers, Dreyfus, et al. 2000). Public reporting is thought to lead to quality improvement in many ways. First, these reports provide information that allows consumers and other purchasers to select a health plan or provider on the basis of quality as well as cost. Thus, the reports stimulate health care organizations to improve their services to capture more market share. For this reason public reporting of performance information is recommended as a strategy for mobilizing managed competition in the health care market (Enthoven 1993). Similarly, public reporting can foster quality improvement through market accountability. Consumers and purchasers can use the information as leverage to influence providers and health care organizations to improve quality of care and be more responsive to patient needs and desires (Daniels and Sabin 1998). Yet another way that public reporting enhances and supports quality-improvement efforts is by providing health care organizations with standardized measures for monitoring and evaluating their performance relative to others. These standardized measures then facilitate the comparison of outcomes across organizations and allow for the identification of benchmarks and best practices (The President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry 1998). Thus, public performance reports can be thought of as an "information intervention" whose intended outcome is an increased awareness of health care quality and a subsequent increase in quality-improvement activities in health care organizations.

Research on public reporting of health care quality information has primarily focused on the needs of consumers and on its use by purchasers. Research that explores its effects on the hospitals and health plans being evaluated is limited in comparison. A recent review of published research on the topic found evidence that health care organizations are indeed responsive to the public release of performance data, especially in competitive markets (Marshall et al. 2000). For example, one study assessed the response of hospitals in Pennsylvania to a report about the outcomes and charges for coronary artery bypass graft surgery. This study found that the public release of performance information by a state agency encouraged hospitals to make changes in marketing, governance, and clinical care (Bentley and Nash 1998). However, the present review of the literature revealed that the research conducted to date has focused on the response of hospitals to reports about performance in a limited geographic area. There has been no published research in peer-reviewed journals about how other health care organizations, such as managed care plans, respond to information that is collected and reported at a national level.

One project that collects and publicly reports comparative health plan performance data is the Medicare Managed Care Consumer Assessment of Health Plans Survey (MMC-CAHPS). Initiated in 1997 by the Health Care Financing Administration (HCFA), MMC-CAHPS is a national survey that assesses the quality of Medicare managed care plans from the beneficiaries' perspective. Administered annually, the survey evaluates different aspects of outpatient care including access, patient-provider communication, and service quality (Schnaier, Sweeny, Williams, et al. 1999). One objective of the MMC-CAHPS project is to provide health plans with comparative information about members' experiences that will help plans "identify problems and improve the quality of care and services they provide to beneficiaries" (HCFA 1998). The purpose of the study described here was to solicit the input of health plans on the design and format of the report and obtain their feedback on its perceived usefulness for quality improvement.

Merely collecting performance information and making it available is not enough to ensure that the results will be used as intended. The information must be presented in a clear and effective manner to maximize the likelihood that the findings will influence and guide quality-improvement efforts (Morris, Fitz-Gibbon, and Freeman 1987). Indeed, some reports have presented data so poorly that their audiences ignored them when making the very decisions that the reports were supposed to influence (Tumlinson, Bottigheimer, Mahoney, et al. 1997). As one researcher said, "The way in which plan performance on prevention or consumer satisfaction is presented may be as influential in decisions as the actual level of performance" (Hibbard, Slovic, and Jewett 1997). Graphical displays of data, for example, can help audiences to focus on major findings and identify important trends. Similarly, the narrative content provides the context that enhances the comprehension and credibility of the results. Both components need to be tested to ensure that they do not overwhelm the intended audience with unnecessary details or leave out essential information. For these reasons it is critical to conduct audience research to develop effective reports that provide comparative performance information.

STUDY DESIGN AND RESEARCH OBJECTIVES

As shown in Figure 1, the study involved three rounds of report development and testing through focus groups and interviews with representatives of Medicare managed care plans. The first round of focus groups (N = 9) was conducted between February and June 1998 to assess the plans' information needs and report preferences regarding the MMC-CAHPS results. The results of these focus groups were used to design the "information intervention," a print report of the MMC-CAHPS results that was then distributed to all Medicare managed care plans in August 1998. The second round of research comprised seven focus groups and five telephone interviews and was conducted between March and April 1999. The purpose of this research was to assess the effectiveness of the MMC-CAHPS report as an information intervention and solicit feedback on how to modify the next year's MMC-CAHPS report to make it more useful. After updating the data and improving the report design the second MMC-CAHPS report was distributed to Medicare managed care plans in July 1999. Finally, a third round of focus groups (N = 7) and interviews (N = 7) was conducted between February and March 2000 to assess the plans' responses to and uses of the modified report.

In year one the objectives of the study were to assess health plan information needs and to test alternative reporting formats. Research questions addressed during this phase included the following: Who are the likely report users within the health plan? What are their preferences in terms of report content, format, and delivery?

In years two and three the research objectives were to assess plan responses to and uses of the MMC-CAHPS reports distributed in the previous year. Questions addressed in this phase included:

Report Content. Was the content clear, comprehensible, and relevant to the intended audiences within the plan? Did the report include an appropriate level of detail? Did the report include enough background information and narrative explanation to aid comprehension and interpretation of the results? Did the report contain the information health plan representatives were interested in?

Data Analysis and Displays. Was the presentation of the data easy to understand and use? Was there a perceived need for additional analysis of the MMC-CAHPS data? If so, what analyses were perceived as most useful for identifying problems and opportunities for quality improvement? Which geographic comparisons (state, regional, or national) were most relevant?

Report Distribution and Use. Which persons in the plan received the report? Did they receive the report in a timely manner or did they have difficulty locating the report? Did the report recipients review and use the results? If so, how? Did the MMC-CAHPS report enhance health plan knowledge of and interest in beneficiary perceptions of health plan quality? Did the public report lead them to identify problems with their quality of care and services and to develop quality-improvement plans?

METHODS

Two trained moderators conducted focus groups with health plan representatives using a structured discussion guide. This discussion guide included open-ended questions about participants' perceptions of the report as well as detailed questions about specific issues of concern such as preferred formats and data analysis. Research assistants observed the groups and took notes. Because of schedule conflicts some plan representatives were not able to participate in the focus groups. These individuals were offered the opportunity to provide comments about the MMC-CAHPS report by telephone.

All participants were assured of the confidentiality and anonymity of their comments, and the discussions were tape recorded. Focus group participants were offered refreshments, travel expenses, and a $50 honorarium as incentives. In year two participants in telephone interviews received no financial incentives, but in year three they were offered $25 for a one-hour telephone interview.

Participant Recruitment

To ensure that the focus group participants reflected the diversity of Medicare managed care plans, the researchers selected one state from each of the ten HCFA regions, usually the state with the largest number of Medicare managed care plans. The researchers then sent a letter to one or more representatives from each of the Medicare managed care plans in that state inviting them to participate in a focus group at a central location in their state or region. Those unable to attend a focus group were invited to participate in a telephone interview.

The research team identified potential focus group participants by telephoning all health plans in the state and asking for the person in charge of the Medicare managed care product. In year one health plan contact information was obtained from a variety of sources: HCFA records, information from the Association of Health Maintenance Organizations, the 1997 Directory of Managed Care Organizations, and directory assistance. Multiple sources were necessary because of a high percentage of out-of-date staff names and telephone numbers. In years two and three the research team selected potential participants from the list of Medicare health plan representatives who had received the previous year's MMC-CAHPS report. These report recipients were asked to identify someone responsible for reading and using the MMC-CAHPS report. The persons most commonly identified were managers of government programs, quality improvement, or market research.

Participant Characteristics

Many of the health plan representatives who were invited to participate did not attend a focus group or participate in an interview. For example, in year two, of the 104 representatives contacted only 43 were able to attend a focus group or schedule a telephone interview. In year three, of the 159 representatives contacted 50 were willing to participate and 37 actually completed an interview or focus group. Lack of participation was often caused by logistic issues such as scheduling conflicts and difficulty identifying a focus group facility that was close to all health plans in a state. Also, the deadline for report production meant that all research needed to be concluded within a short period. Finally, it is probable that those who did not participate found the report to be less important and less interesting than those who did participate.

In all three years focus group participants represented a variety of occupations and departments within managed care plans including managers responsible for quality improvement, government programs, and market research. In the first year of the project nine focus groups were conducted involving a total of 80 representatives from 48 Medicare managed care plans in eight states (California, Florida, Illinois, Maryland, Massachusetts, New York, Rhode Island, and Washington state). In the second year of the project seven focus groups and five interviews were conducted involving a total of 43 representatives from 33 Medicare managed care plans in ten states (California, Colorado, Florida, Georgia, Massachusetts, New York, Oregon, Pennsylvania, Rhode Island, and Washington state). In the third year of the project seven focus groups and seven interviews were conducted from February to March 2000 involving a total of 37 representatives from 27 Medicare managed care plans in ten states (Arizona, Colorado, Florida, Louisiana, Massachusetts, Missouri, New York, Ohio, Oregon, and Pennsylvania).

There was some overlap in the states, health plans, and persons that participated in the three rounds of focus groups. For example, 13 health plans were represented in two of the three project years and nine health plans participated in all three years. Part of this overlap occurred because a few states contain the majority of Medicare managed care enrollees and a few large national chains dominate the Medicare managed care market. However, the researchers also actively recruited some of the previous year's participants to assess whether the report addressed the needs that they had identified in the previous year. In this sense participants could validate whether the modifications made the report more effective. In addition new health plans and states were recruited each year to assess whether the report met the needs of health plans that had not had the opportunity to participate in previous years.

Characteristics of the Participating Medicare Managed Care Plans

Representatives from more than 100 Medicare managed care plans participated in the focus groups or interviews over the three project years. There are a number of ways to define "health plan" as a unit of analysis. In this study a health plan was defined as an organization with a Medicare managed care contract or contracts within a single state. By this definition an organization that has Medicare managed care contracts in California and in New York would be considered two plans, and one that has two contracts in Northern California and three in Southern California would be considered one plan. As displayed in Table 1, plans ranged in size from approximately 250 enrollees to more than 600,000 enrollees with a mean of 46,000 enrollees. Half were not-for-profit plans. Half had an independent practice association model, about a quarter had a group model, and the remainder had a staff model. All but one of the plans had been established in the 1980s or 1990s. The health plans that were represented in the focus groups or interviews were more likely to be nonprofit plans. In addition there were many participants from large national chains.
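
To make the unit-of-analysis rule above concrete, the following is a minimal sketch (in Python, with hypothetical organizations and contract numbers) of how contracts could be grouped into "plans" under this definition; it is an illustration only, not the procedure used by the study team.

```python
# Minimal sketch of the unit-of-analysis rule described above: all Medicare managed
# care contracts held by one organization within a single state count as one "plan",
# while contracts in different states count as separate plans. The organizations and
# contract numbers below are hypothetical.
from collections import defaultdict

def count_plans(contracts):
    """contracts: iterable of (organization, state, contract_id) tuples."""
    plans = defaultdict(set)
    for organization, state, contract_id in contracts:
        plans[(organization, state)].add(contract_id)  # contracts in one state collapse to one plan
    return len(plans)

contracts = [
    ("Example Health", "CA", "H-0501"),  # two Northern California contracts ...
    ("Example Health", "CA", "H-0502"),
    ("Example Health", "CA", "H-0503"),  # ... plus a Southern California contract: still one plan
    ("Example Health", "NY", "H-3301"),  # a New York contract counts as a second plan
]
print(count_plans(contracts))  # prints 2
```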

Analysis and Interpretation of Results

The moderators, observers, and note takers analyzed the results of the focus groups and interviews independently by reviewing their notes and the transcripts to identify themes and quotations related to the major study questions described above. Any differences in interpretation were subsequently discussed and resolved by reviewing the transcripts. The research team wrote a summary of results after each focus group and interview, and the key findings were then presented to and discussed with the funding agency, HCFA, and other collaborators on the MMC-CAHPS project. This panel of experts also reviewed the final report and provided comments and questions about how to interpret the results. In year three, as a further step to validate the results, the final focus group report was also sent to all participants for their review and comment.

Qualitative research methods such as focus groups and interviews are effective techniques for identifying and exploring attitudes, perceptions, and information needs (Debus 1986). They are also useful in report development for testing key messages and alternative report formats and data displays (U.S. Department of Health and Human Services 1999). When assessing report effectiveness one can use focus groups and interviews to observe how participants with similar backgrounds respond differently to reports as an information intervention. In focus groups the interaction between participants can generate new and unexpected insights. The discussion can yield rich findings about the ways in which a report affects (or does not affect) the audience's awareness of, trust in, and use of that information. On the other hand, focus group results can be affected by bias. First, people who do not find the report important or relevant enough to warrant a two-hour discussion are unlikely to participate in the focus group. Second, the views of more extroverted and articulate members are more likely to be represented. Dominant participants may also sway the opinions of others in the group. Although focus group results may not be representative and generalizable to the population of interest, they yield useful feedback from potential report users that can help guide the report-design process. They can also reveal how readers respond to the information and thereby indicate whether the report elicits the intended reaction.

In contrast to focus groups, telephone interviews provide less opportunity for bias or influence from other participants. In addition some participants may be more responsive in a one-on-one discussion, and the interviewer can explore one person's perceptions in depth without fear of losing the attention of other participants (Debus 1986). However, interviews do not generate as many spontaneous discoveries or ideas as focus groups often do. There may also be more distractions and interruptions during a telephone interview, and respondents may therefore be less attentive and engaged than when in a focus group. Nevertheless, in this project conducting telephone interviews with participants who were unable to participate in focus groups allowed the research team to capture information from plans located in more remote areas in states where managed care penetration was lower and where health plans were geographically far away from each other. In this way the telephone interviews helped to increase the diversity of health plans represented in the results.

PRINCIPAL FINDINGS

The MMC-CAHPS report for health plans was developed through an iterative process that included extensive testing of its components with its intended audience. The final product, a detailed report for each of the three survey rounds, reflects the suggestions of the health plans that participated in the focus groups and interviews. In contrast to the consumer report, which displays six survey measures on the Medicare web site, the report for health plans is a lengthy analytic report in print format. The report begins with a short executive summary comparing the performance of all Medicare health plans in the state on nine summary measures. This summary and the other data displays in the report highlight results that were better or worse than the state or regional average. (See Figure 2 for an example of such a display.) To meet health plan requests for detailed explanation of the results, the report for health plans also includes the results for all 87 survey questions as well as an extensive methodology section describing the development of the survey instrument and the methodology used for data collection, validation, and analysis. The appendix of the report includes a copy of the survey instrument, a glossary, and a section addressing frequently asked questions.

The health plan representatives who participated in the focus groups and interviews had different perceptions of and responses to the MMC-CAHPS report, and those reactions changed over time. The following section describes the results of the three rounds of focus groups and interviews in terms of five major themes: the credibility of the report, concerns about public reporting, preferred displays of comparative performance, information to support quality improvement, and the logistic challenges of producing effective reports.

Credibility of the Report

In the first year when the report was being designed, many participants had questions about how the MMC-CAHPS was developed and implemented. While most expressed no doubts about the reliability and validity of the survey, a few noted that Medicare beneficiaries are a diverse population that faces a number of linguistic, cultural, and cognitive challenges that might affect their interpretation of the questions and their ability to evaluate health plan quality. In addition a few had concerns about the validity of the survey for assessing comparative performance across different types of health plans:

One of the concepts behind the development of CAHPS... was being able to administer it across types of products and populations and come up with something that would put everyone on a level playing field. My opinion is that you can't take a small HMO in Brooklyn and compare it with [a large national chain like] US Healthcare.

The number of participants expressing such doubts diminished substantially by the third year of the project, and the few who continued to question the results did so less fervently. In contrast most participants expressed a greater acceptance and trust in the results: "It helps that [CAHPS] is a nationally syndicated survey and that plans across the country accept it." Another participant noted:

I felt [the MMC-CAHPS results] were probably very credible. I think we all did at our plan. We knew that we had had some problems, even the first year when [the results] were so bad. If we did not think it was credible, we would not have started investigating what could have been causing some of those below-average scores.

These comments indicate some of the reasons why participants' reactions to the MMC-CAHPS changed over time. First, participants may have learned more about the survey development and data validation by reading the methodology section in the MMC-CAHPS report. Second, many participants found that the MMC-CAHPS results were consistent with results from internal surveys. Third, many acknowledged the results as important because the National Committee for Quality Assurance (NCQA) was using some MMC-CAHPS measures for accreditation purposes.

However, participants from managed care plans that serve populations with a history of low response rates (such as non-English speakers and those enrolled in both Medicare and Medicaid) continued to have concerns about the methods used to reach those populations. As a result they continued to question whether the views of these vulnerable subgroups were adequately represented in the survey results. As one said, "I just didn't think it represented our true population membership because of the very low Hispanic response rate." In response to these concerns the MMC-CAHPS survey team intensified their efforts to reach the Hispanic population in subsequent years.

Concerns About Public Reporting

During the first year of the project many participants expressed apprehension about the imminent release of the survey results. A persistent theme in the first year was concern about how external audiences, such as the press and public, would use the MMC-CAHPS results. A number of participants expressed fear that the results would be misinterpreted and misused by journalists and politicians who are biased against managed care: "The press will interpret [the report] however they want to interpret it, and anything that can be twisted is going to be." Further,

In today's market, anything that is turned in to HCFA can be leaked or handed to the press at any given time by anyone who works there. So everything we see here, whether we like it or we don't like it, we have to ultimately look at how it's going to affect [us if the results are] printed on the front page of The San Francisco Chronicle.

In year two representatives from managed care plans in areas where the media published some summary scores were disturbed by the potentially confusing presentation of MMC-CAHPS results. Plans in Colorado, for example, complained that the Denver Post only reported the percentage of people who rated their health plan a perfect ten on a scale of zero to ten. This yielded reported scores ranging from 44 percent to 54 percent, which plans thought would have a negative connotation with consumers accustomed to thinking of anything below 70 percent as a failing grade. As one participant said, "It's not fair. It's not a complete picture. It allows the consumer to assume the worst." Another representative from a managed care plan in California described her plan's response to the media coverage as "totally reactionary and defensive":

We're really diverted from what the purpose is supposed to be, which is [quality improvement], to dealing with a hostile media... All the top-level people are scrambling, saying "What the heck is going on? You [in the quality-improvement department] are telling us we're doing okay because of the internal [studies], and we're getting killed because this is what just came out on the LA Times front page."... All this negative stuff is out there that the company is having to divert resources to deal with.

By the third year of the project, however, very few health plan representatives expressed concerns about the misuse of publicly reported MMC-CAHPS results. In fact, a few commented that the public reports actually enhanced their image with beneficiaries:

Seniors actually did go out on the [Medicare Compare] web site when [MMC-CAHPS results] first came out, and they called us and they said, "We see you've scored well here.... You must be the best health plan and provide the best health care."

Reactions to Comparative Performance Displays

The vast majority of participants indicated that the most important part of the MMC-CAHPS report was the analysis and display of the comparative performance data. In the first year a considerable amount of effort was spent on developing and testing displays that clearly distinguished performance that was above or below average without drawing attention to differences that were not statistically significant:

I think it's important to get across that some of these differences mean something and some don't. People really jump to conclusions fast if they see even the slightest bit of a difference. They need help understanding that the difference that they see on the page is not always meaningful.

In years two and three several focus group participants noted that senior management teams were particularly responsive to the graphical displays of comparative performance: "The board of trustees is very concerned about those arrows [depicting significantly higher or lower performance compared to regional averages]."
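
As an illustration of how such arrows might be assigned, the sketch below uses a simple two-sided z-test comparing a plan's mean score with the regional mean. The statistical method, function name, and numbers are assumptions made for illustration; they do not reproduce the actual MMC-CAHPS significance methodology described in the report.

```python
# A minimal sketch, assuming a two-sided z-test on the difference between a plan's
# mean score and the regional mean; the actual MMC-CAHPS analysis may differ.
import math

def performance_arrow(plan_mean, plan_sd, plan_n, region_mean, region_sd, region_n, z_crit=1.96):
    """Return an arrow marking performance significantly above or below the regional average."""
    se = math.sqrt(plan_sd**2 / plan_n + region_sd**2 / region_n)  # standard error of the difference
    z = (plan_mean - region_mean) / se
    if z > z_crit:
        return "up"       # significantly better than the regional average
    if z < -z_crit:
        return "down"     # significantly worse than the regional average
    return "no arrow"     # difference not statistically significant

# Hypothetical numbers for illustration only.
print(performance_arrow(8.6, 1.8, 550, 8.2, 1.9, 12000))  # prints "up"
```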

Participants also had a number of suggestions about how to improve the comparative performance displays. Many noted that the most relevant benchmark for them was the average performance of plans in their market area or state rather than the average for the HCFA region that was used in the first two MMC-CAHPS reports: "We're concerned about the area in which we do business, and what [plans] the people in those areas are exposed to." Another noted:

Florida markets are so different from one to the other, that if you begin to merge Miami with Tampa and Jacksonville you lose too much information. Down here the benefits in the Miami market are totally different than what's offered by plans in Jacksonville. Market penetration within the state even is very different. I would say you would have to be almost market specific in Florida for the reports to be meaningful.

In addition some participants took issue with the presentation of the data obtained from the ten-point scale items. As noted earlier some felt that this display, which aggregated the scale into zero to seven, eight to nine, and ten, exaggerated poor performance and underestimated average and reasonably good performance. Furthermore, there were questions about why NCQA, which uses MMC-CAHPS data for accreditation purposes, aggregated the same items into segments of zero to six, seven to eight, and nine to ten. [1]
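
The difference between the two banding schemes can be illustrated with a short sketch that aggregates zero-to-ten ratings into the report's bands (zero to seven, eight to nine, ten) and the NCQA bands (zero to six, seven to eight, nine to ten). The function and the sample ratings are illustrative assumptions, not the project's analysis code or MMC-CAHPS data.

```python
# Minimal sketch of aggregating 0-10 rating responses into bands; data are hypothetical.
from collections import Counter

def band_shares(ratings, cut_points):
    """Return the share of ratings in each band; cut_points lists the inclusive upper
    bound of every band except the last, e.g. (7, 9) yields the bands 0-7, 8-9, and 10."""
    counts = Counter()
    for r in ratings:
        band = sum(r > c for c in cut_points)  # index of the band this rating falls into
        counts[band] += 1
    total = len(ratings)
    return [counts[i] / total for i in range(len(cut_points) + 1)]

sample = [10, 10, 9, 8, 8, 7, 6, 10, 9, 5]     # hypothetical ratings
print(band_shares(sample, (7, 9)))  # report bands 0-7, 8-9, 10  -> [0.3, 0.4, 0.3]
print(band_shares(sample, (6, 8)))  # NCQA bands   0-6, 7-8, 9-10 -> [0.2, 0.3, 0.5]
```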

Finally, some participants wanted more information in the report about the managed care plans against which they were being compared--including plan type, enrollment, service area, and information on mergers or continuing Medicare participation. They felt that this would help them to understand the differences in their performance. [2]

Information to Support Quality Improvement

Participants noted that the MMC-CAHPS report supported quality improvement within their health plans in a number of ways. Many indicated that they had presented the MMC-CAHPS results to several internal audiences including the board of directors and staff in senior management, marketing research, and quality improvement. In addition a number of managed care plans had used the MMC-CAHPS report to validate the results of internal beneficiary satisfaction surveys. Some reported using the report to meet federally mandated requirements:

[The MMC-CAHPS report] is one of the many other tools or reports that we can use when we are doing things like selecting quality improvement projects.... There are a lot of [quality improvement system for managed care] standards, and you base it off member input or member feedback and this is just one of those sources. So it helps us in our compliance activities.

Although most participants reported preferring simple reports highlighting comparative strengths and weaknesses many also required detailed results so they could better analyze their performance and target specific areas for quality improvement. There were numerous requests in all three years for access to the raw survey data, which HCFA could not release because of concerns about protecting the confidentiality of survey respondents. As one quality-improvement manager commented, "I don't like prepackaged information. I prefer the raw data." Another concurred, saying, "I need the raw data that goes behind this report. This [print document] is beautiful, but it's fixed."

The primary reason health plans wanted raw data was to conduct a detailed analysis of performance by beneficiary characteristics so that they could target quality-improvement activities to the subpopulations with the most reported problems. Similarly, a few participants noted that they would like to link the MMC-CAHPS to specific providers to identify those that require attention or intervention to improve the quality of their services:

I think that if we have information that helps us understand what the variations are within plan, say from one provider to the next, it helps us prioritize. If there's a big range in satisfaction across different providers, then we know which provider to start with to find out what's going on. So provider-level data would be great.

To respond to these requests the report released in 2000 provides information about how MMC-CAHPS results varied for beneficiaries in vulnerable subgroups such as those who are nonwhite, have less than a high school education, or are frail or disabled. In addition more analysis of the MMC-CAHPS results will be made available on HCFA's web site, and HCFA is setting up a mechanism to handle health plan requests for special analysis.

A final barrier to using the MMC-CAHPS results for quality improvement was competition among Medicare managed care plans. Many participants noted that the most effective way to use these results for quality improvement was to collaborate with other plans to identify and share best practices, but because plans were all competing for market share representatives from other plans did not want to share information:

I suppose if we weren't all so competitive then it would give us an opportunity to go to a fellow plan that you saw in [the MMC-CAHPS report] and list something that you did poorly on and another plan did well on and say, "How did you guys do so great at this?" Of course we're too competitive so we don't do that.

In some cases the market in which the health plans operate changes so rapidly that they have difficulty identifying their colleagues and competitors:

One of the exciting things about this is it's supposed to allow us to compare ourselves with competition in the marketplace. But the industry, especially in New York, is consolidating with HMOs, begetting HMOs, dying and merging... to the point where it was very hard for me to see who was the competition.

Logistic Challenges to Producing Effective Reports

For reports to be useful tools for quality improvement they must be received by the right person at the right time. One recurring theme in the focus groups and interviews was how logistic issues related to report production and distribution impinged upon the effectiveness of the MMC-CAHPS report as an information intervention.

One frequently mentioned shortcoming of the report was that the managed care plans received the data more than a year after the reported experiences had taken place. Many said that data that are more than six months old have lost most of their usefulness and relevance for quality improvement purposes because of the rapid changes within the managed care marketplace such as plan withdrawals, new benefit packages, mergers, and acquisitions:

I think a lot of people at our plan didn't find it very useful because it was so old, and we already knew most of this information.... [We already have] initiatives around some of the areas where we aren't doing so well.

We are hesitant to use it when the data may not be up to date. That is a huge, huge issue.

It's very hard to take data that's a year and a half old and do something with it. In the industry, the way it is now and the way the regulatory/accrediting things are working, you need to show frequent data points to show improvements for service or clinical issues such as access or courteousness. By getting your report a year later, there's not a lot that you can do with that in terms of showing improvement.

Although HCFA did shorten the time between data collection and report dissemination some of the lag time cannot be avoided. The MMC-CAHPS asks respondents about experiences in the previous six months, and it takes time to validate and analyze the data and produce reports.

Another logistic difficulty was identifying the most appropriate report recipient. Because of frequent personnel changes at managed care plans, the contact person identified in HCFA's records was often no longer the person most likely to review and use the report. When the health plans were asked to identify the appropriate recipient many had difficulties selecting one because the report could be used by staff in many different departments including government programs, customer service, and quality improvement. Sometimes one person at a health plan's national headquarters would be the recipient for more than 30 reports for several local offices. In these cases the health plan staff in the local offices often did not know who was the designated recipient for the report. [3]

CONCLUSIONS

The results of extensive focus group research with more than 150 health plan representatives suggest that the MMC-CAHPS reports did increase readers' awareness and knowledge of quality of care and services of Medicare managed care plans. Participants stated that the report was useful for assessing their performance relative to their competitors and spoke of using the reports to inform and educate senior managers and quality-improvement teams and validate internal satisfaction surveys.

Similar to the findings noted by Marshall et al. (2000) and references therein, initial apprehension about public reporting yielded to more acceptance and use of the results for monitoring performance. There are a number of reasons that participants' reactions may have changed over time. First, they may have become more familiar with the survey and learned more about how it was developed and tested. This may have increased their confidence in the validity of the results. Second, some participants found that the MMC-CAHPS results were consistent with the findings from internal beneficiary surveys. Third, the fact that media coverage was not as extensive or unfavorable as they anticipated may have increased their acceptance of the results. Indeed it seemed that the participants who continued to question the validity of the results in the third year were those with lower-than-average performance.

Participants also suggested ways to enhance the report's effectiveness as a tool for quality improvement. For example, they noted that additional analysis of the survey results by demographics and disease condition would help target quality-improvement activities on the populations and programs with the most reported problems. They also highlighted the importance of disseminating the report within six months of data collection so that the results could be used to inform quality improvement.

Finally, some participants noted that competition between health plans acts both as a stimulus and a barrier to quality improvement. On one hand, health plans report intensifying their quality-improvement efforts when they realize that their performance is below the reported average or benchmark. On the other hand, competition can inhibit the sharing of best practices among plans operating in the same market.

While focus group results are not necessarily representative and generalizable, they can help researchers assess participants' perceptions and reactions to an intervention such as the MMC-CAHPS report. In this study the focus groups and interviews yielded important information about how health plan representatives responded to a report that compares the performance of Medicare managed care plans. It is probable that the health plan representatives who did not find the MMC-CAHPS report important would not participate in a focus group or interview. Nevertheless, the respondents' candid critiques of the perceived shortcomings of the MMC-CAHPS report suggest that the focus group findings were not unduly affected by courtesy bias.

The user feedback obtained in the focus groups and interviews with Medicare managed care plans led to several enhancements to the content, design, and distribution of the MMC-CAHPS report. These revisions included using the state rather than regional average as a benchmark, disseminating results on CD-ROM to facilitate the reproduction and dissemination of the results within health plans, and providing more information about how the survey results vary for different subpopulations. In this way the study results were used not only to evaluate the report as an information intervention but also to enhance its effectiveness as a tool to guide quality improvement.

While this study suggests that public reporting can stimulate interest in comparative performance, it does not provide evidence that public reporting leads to improved health care quality. To assess the long-term effects of public reports such as MMC-CAHPS additional research will be needed to understand exactly how health plan representatives use the information in their quality-improvement efforts. There are many unanswered questions about how comparative performance information is transformed into action plans and then into improved health care quality. How is the information diffused throughout the organization? Which measures are most useful, to whom? What additional analysis is needed to identify problem areas, and what information is needed to suggest possible solutions? To understand this complex process would require a number of case studies, observations, and in-depth interviews with the many report users within health plans. In addition more research needs to be conducted to assess the effect of public performance reports. Several years of performance data need to be analyzed to determine whether the health plans that receive these reports improve their performance over time. Are plans with lower-than-average results more likely to improve than those with above-average results? Do the health plans attribute their improved performance to the MMC-CAHPS report? Thus, additional research is needed both on the process in which these reports are used for improving health care quality and on the long-term effect of these reports on health plan performance.

The MMC-CAHPS project has major implications for health care policy. It is a new initiative predicated on the assumption that collecting and publicly reporting information from a national consumer survey will stimulate Medicare health plans to increase the quality of care to compete for market share. This study provides some evidence that this strategy is having the intended effects. Health plans are concerned about how they are performing relative to others, and some are taking steps to improve their scores. Although it is not yet clear exactly how they use this information and whether it actually leads to significant improvements in the quality of care as perceived by Medicare beneficiaries, the fact that it has stimulated awareness and interest in health care quality is important in and of itself.

ACKNOWLEDGMENTS

The authors wish to thank the representatives of the Medicare managed care plans that participated in the focus groups and interviews.

NOTES

(1.) The zero-to-ten scale rating items were aggregated in this way because data analysts have found that this clustering of responses better distinguishes variations in the performance of Medicare managed care plans. Because older patients tend to report more favorably on their care, important variations in responses among the Medicare population are lost when the middle and higher categories are made more inclusive.

(2.) This information was provided in the report distributed in 2000.

(3.) This problem was remedied by calling all health plans to confirm contact information one month before distributing the report and by including a cover letter noting that the report should be distributed to staff in quality improvement, market research, or government programs. In addition, in the third year a copy of the report was made available on CD-ROM to facilitate the dissemination of the report to the multiple audiences within the health plan.

REFERENCES

Bentley, J. M., and D. B. Nash. 1998. "How Pennsylvania Hospitals Have Responded to Publicly Released Reports on Coronary Artery Bypass Graft Surgery." Joint Commission Journal on Quality Improvement 24 (1): 40-49.

Daniels, N., and J. Sabin. 1998. "The Ethics of Accountability in Managed Care Reform." Health Affairs 17 (5): 50-64.

Debus, M. 1986. Methodological Review: A Handbook for Excellence in Focus Group Research, pp. 3-10. Washington, DC: Academy for Educational Development.

Enthoven, A. C. 1993. "The History and Principles of Managed Competition." Health Affairs (Suppl.): 24-48.

Epstein, A. M. 2000. "Public Release of Performance Data: A Progress Report from the Front." Journal of the American Medical Association 283 (14): 1884-1886.

-----. 1998. "Rolling down the Runway: The Challenges Ahead for Quality Report Cards." Journal of the American Medical Association 279 (21): 1691-96.

Health Care Financing Administration (HCFA). 1998. "Implementation of Medicare CAHPS." Contract No. 500-95-0057, Task Order No. 4, p. 4. Washington, DC: HCFA.

Hibbard, J. H., P. Slovic, and J. J. Jewett. 1997. "Informing Consumer Decisions in Health Care: Implications from Decision-Making Research." Milbank Quarterly 75 (3): 395-414.

Leatherman, S., and D. McCarthy. 1999. "Public Disclosure of Health Care Performance Reports: Experience, Evidence, and Issues for Policy." International Journal for Quality in Health Care 11 (2): 93-95.

Marshall, M. N., P. G. Shekelle, S. Leatherman, and R. H. Brook. 2000. "The Public Release of Performance Data: What Do We Expect to Achieve? A Review of the Evidence." Journal of the American Medical Association 283 (14): 1866-74.

Morris, L. L., C. T. Fitz-Gibbon, and M. E. Freeman. 1987. How to Communicate Evaluation Findings, p. 9. Newbury Park, CA: Sage Publications.

The President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. 1998. "Quality First: Better Health Care for All Americans. Final Report to the President of the United States," pp. 73-87. Washington, DC: U.S. Government Printing Office.

Schnaier, J. A., S. F. Sweeny, V. S. L. Williams, B. Kosiak, J. S. Lubalin, R. D. Hays, and L. D. Harris-Kojetin. 1999. "Special Issues Addressed in the CAHPS Survey for Medicare Managed Care Beneficiaries." Medical Care 37 (3): MS69-MS78.

Smith, D. P., G. Rogers, A. Dreyfus, J. Glasser, B. G. Rabson, and L. Derbyshire. 2000. "Balancing Accountability and Improvement: A Case Study from Massachusetts." Joint Commission Journal on Quality Improvement 26 (5): 299-312.

Tumlinson, A., H. Bottigheimer, P. Mahoney, E. Stone, and A. Hendricks. 1997. "Choosing a Health Plan: What Information Will Consumers Use?" Health Affairs 16 (3): 229-38.

U.S. Department of Health and Human Services (DHHS). 1999. "Writing and Designing Print Materials for Beneficiaries: A Guide for State Medicaid Agencies," pp. 266-75. Washington, DC: DHHS.
Table 1: Characteristics of the Medicare Managed Care Plans Whose Representatives Participated in the MMC-CAHPS Focus Groups from 1998 to 2000

Characteristic                          Number
No. of enrollees
  Range                                 246-602,474
  Mean                                  46,263
Profit status (% not-for-profit)        50
Managed care plan model (%)
  Independent practice association      50
  Group                                 27
  Staff                                 11
  Missing information                   12
Maturity (%)
  Plan established in the 1970s         1
  Plan established in the 1980s         35
  Plan established in the 1990s         51
  Missing information                   13