
Putting Research to Work: Reporting and Enhancing the Impact of Health Services Research.

When research findings go from the printed pages of professional journals to everyday application by health care institutions, clinicians, and patients, one of our most important missions as health services researchers is realized. This kind of research translation and implementation embodies the theme of the 2001 Academy for Health Services Research and Health Policy annual meeting--Research to Action.

John E. Porter, former chairman of the House of Representatives subcommittee responsible for funding the Agency for Healthcare Research and Quality (AHRQ), provided me with the best reminder of this mission. Three years ago, when I testified before his subcommittee regarding AHRQ's FY 1999 budget, [1] the Illinois congressman asked several tough but fundamental questions about the impact of government-funded research. Asking about the results of the Patient Outcomes Research Teams (PORTs), Mr. Porter commented, "This is the point at which we really want to provide some focus because, in the past, and not necessarily applying to this agency, the measurement might be how many PORTs did you do, how many reports were generated? Well, it does not matter how many reports are out there if nobody ever reads them or does anything with them.... What we really want to get at is not how many reports have been done, but how many people's lives are being bettered by what has been accomplished. In other words, is it being used, is it being followed, is it actually being given to patients?"

Mr. Porter's challenging question has become the touchstone at AHRQ for measuring the impact of research that we sponsor and conduct. In fact, it is so ingrained in what we do on a daily basis at AHRQ that we refer to it as "the Porter question." Ours is a user-driven research agenda, and we must remember our constituency--people who need information to make decisions about health care.

At the heart of Congressman Porter's question is one word--impact. He wanted illustrations of how our research directly affected patient outcomes or at least influenced clinical practice. To respond effectively to the Porter question, AHRQ has been tracking and recording illustrations of research that has had documented impact.

For example, researchers at the University of Wisconsin at Madison showed, in two AHRQ-funded projects, that HIV-positive patients who were provided with home-based computer access to the Comprehensive Health Enhancement Support System (CHESS), itself developed through AHRQ-supported research, experienced lower health care costs, fewer hospitalizations, and shorter hospital stays than patients without access to CHESS. In one study, treatment costs were reduced by about $400 per month, and patients spent 15 percent less time in the doctor's office (Gustafson et al. 1994).

THE DIFFICULTY OF TRACKING IMPACT

Establishing this type of impact takes time and considerable resources. The question "What is the impact of AHRQ-funded research?" might seem--to the world outside health services research--easy to answer. But in health services research, impact is rarely immediate, nor does it necessarily unfold in a direct, linear fashion whereby one article leads quickly to one important change in health outcomes.

Demonstrating that research has led to tangible effects in identifiable individuals is admittedly difficult, and it can be costly. As a member of an Agency-funded biliary tract disease PORT at the University of Pennsylvania, I worked with colleagues to study the relative effectiveness and cost-effectiveness of different surgical approaches to treating gallbladder disease. We had established a network of hospitals within the state to test ways of using these outcomes results to improve practice patterns. Unfortunately, the project's budget had to be cut, and our efforts to translate research into practice and to test the effectiveness of the dissemination and implementation program could not be supported.

Furthermore, while AHRQ proactively seeks updates on how Agency-sponsored research is being used, we often hear about the implementation of research by serendipity. For example, during my first few weeks at AHRQ, I learned about the results of the University of Pittsburgh pneumonia PORT from a flyer I received in my previous capacity as chief of the medical service at Georgetown University Hospital and a participant in a number of managed care plans. Several Washington-area plans sent out flyers encouraging doctors to learn about the PORT results so that we could care for patients with community-acquired pneumonia more effectively. But to address a question as important as the one John Porter asked me in my first Appropriations Subcommittee hearing, we need more than serendipity. We need systems in place to track and report impact, and we need help from the researchers we fund.

Although the impact of research may be cumulative and indirect, I believe we still have the responsibility to demonstrate how, as the theme of this year's Academy meeting implies, research in action does shape our health care future.

A WAY TO SHOW IMPACT

As we at AHRQ try to stimulate the translation of research findings and search for evidence that shows an established effect on health outcomes, we may learn that a health care organization has adopted a policy based on AHRQ-funded research. Often, however, several years will pass before we can know what effect it has had on patient care, and ascertaining its effect on patient outcomes is even more difficult. Similarly, we may know that some clinicians are changing their practices based on evidence about effectiveness from an AHRQ-sponsored study, but that is different from knowing how overall practice patterns are being influenced and what the effect is on clinical outcomes.

To address this need to demonstrate the impact of research on people's health, we can use a model that shows different levels of the impact of research. This model was developed by AHRQ staff and consultants who conceived of a pyramid of outcomes that included four different levels of impact (Figure 1), beginning (at the pyramid's bottom) with impact on knowledge and further research (level one) and ascending (at the pyramid's peak) to impact on health outcomes (level four). In between are impact on policies (level two) and impact on clinical practice (level three).

The first level of impact represents research that contributes to the health care knowledge base, leads to future research, or both. Level one also includes tools and methods for research, instruments and techniques to assist clinical decision making, and studies that identify areas in which scientific knowledge is absent but needed. For example, an important building block of the knowledge base for improving patient safety is an AHRQ-funded study by Bates and colleagues, who found that adverse drug events occur in 6.5 percent of hospital admissions and result in average additional lengths of stay of 2.2 days and costs of $3,244 (Bates, Spell, Cullen, et al. 1997). This study was one of those that launched the nation's concern about medical errors and AHRQ's patient safety research agenda.

The second level of impact is research that results in the creation of a policy or program (for example, by professional organizations, health plans, hospitals, legislative bodies, regulators, or accrediting organizations). For instance, a chlamydia screening measure developed by an AHRQ-funded effort was included in the National Committee for Quality Assurance (NCQA) draft 2000 edition of the Health Plan Employer Data and Information Set (HEDIS).

At level three, impact is defined as research that results in a change in what clinicians or patients do, or changes in a pattern of care. For example, Tenet Health Systems adopted the prediction rule from the pneumonia PORT as part of an effort to use quality indicators for process improvement in its 111 acute-care hospitals. Tenet case managers record the risk factors, and the program automatically calculates the patient's severity index and probability of death. This information is used to take patients' risk into account in Tenet's pneumonia quality of care reports, and these reports are used by physicians working in process improvement teams.
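
The prediction rule Tenet adopted from the pneumonia PORT (commonly known as the Pneumonia Severity Index, or PSI) sums points for age, comorbidities, and examination findings and maps the total to a mortality risk class. The sketch below shows how such a scoring rule works; it uses only a few of the rule's published point values (Fine et al. 1997), and the function names, the subset of inputs, and the example patient are illustrative assumptions, not a reproduction of Tenet's program.

```python
# A minimal sketch of a PORT/PSI-style pneumonia severity score. Only a
# handful of the rule's predictors are shown; the inputs and names here
# are illustrative, not Tenet's actual implementation.

def pneumonia_severity_score(age, female, nursing_home, comorbidity_points,
                             resp_rate, systolic_bp, pulse):
    """Sum demographic, comorbidity, and examination points into one score."""
    score = age - 10 if female else age  # age in years; women score age - 10
    if nursing_home:
        score += 10                      # nursing home residence
    score += comorbidity_points         # e.g., congestive heart failure +10
    if resp_rate >= 30:
        score += 20                      # respiratory rate >= 30/min
    if systolic_bp < 90:
        score += 20                      # systolic blood pressure < 90 mm Hg
    if pulse >= 125:
        score += 10                      # pulse >= 125/min
    return score

def risk_class(score):
    """Map the total score to PSI risk classes II-V (class I is assigned
    clinically to low-risk patients before any scoring)."""
    if score <= 70:
        return "II"
    if score <= 90:
        return "III"
    if score <= 130:
        return "IV"
    return "V"

# Example: a 78-year-old man admitted from a nursing home with heart
# failure, a respiratory rate of 32, and a systolic blood pressure of 85.
score = pneumonia_severity_score(78, False, True, 10, 32, 85, 100)
print(score, risk_class(score))  # prints: 138 V (highest-risk class)
```

A production system like the one described would also convert the score into a predicted probability of death, typically via the mortality rates observed for each risk class in the PORT's validation cohort.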

Level four is impact on actual health outcomes. Examples include the reduction in treatment costs and in time spent in the doctor's office for HIV-positive patients using the CHESS system, and the following example on stroke prevention. AHRQ-sponsored research on the use of warfarin in patients over 65 with atrial fibrillation was put into practice by the Medicare peer review organization (PRO) in California, where use of the drug was estimated to have prevented 70 strokes and saved $2.6 million annually. When the findings were put into practice through 26 PRO projects, the percentage of patients receiving warfarin appropriately increased 23 percent, and patients were adequately anticoagulated 20 percent more often (AHQA 2000).

CAHPS[R] AS AN EXAMPLE OF LEVELS OF IMPACT

Another way to understand the pyramid of outcomes is by focusing on an example of AHRQ-sponsored research that works at each level: the Consumer Assessment of Health Plans, or CAHPS[R].

Designed as an instrument to give people information to help them assess and choose among health plans, CAHPS resulted from a 5-year grant from the Agency to Harvard Medical School, the Research Triangle Institute, and RAND. Together, these investigators studied which qualities are important to people when choosing among health plans and learned how to measure those qualities in ways that yielded valid, reliable results. CAHPS has had an effect at each level of impact.

Level 1: Impact on Further Research. In addition to its use as a survey instrument for research, CAHPS itself has been the subject of research. Investigators have published approximately 100 articles on the development and use of CAHPS data. [2] Currently, the National CAHPS Benchmarking Database (NCBD) project is supporting ten research projects, including studies on the impact of race and ethnicity.

Level 2: Impact on Policies. CAHPS data are used by policymakers at both the federal and state levels. At the national level, the NCQA, which accredits health plans covering 40 million Americans, incorporated CAHPS into its HEDIS Member Satisfaction Survey to evaluate and accredit managed care plans.

Level 3: Impact on Practice. Our research portfolio includes numerous examples of plans and employers that use CAHPS to measure patient and employee experiences with their health plans. For example, The Alliance, an employer health care purchasing cooperative in Madison, Wisconsin, recently used the second CAHPS survey to produce its report QualityCounts--Medical Group Report. An evaluation of the report showed overall readership of 74 percent and recall of 35 percent.

Level 4: Impact on Outcomes. Today, CAHPS data--through the various health plans incorporating them--are available regarding the experience of 90 million Americans through the Medicare program, states, and private employers. Through the feedback we continually receive about CAHPS, we have evidence that it is making a difference in people's decision making. For example, in a laboratory experiment conducted by RAND, subjects, when given the choice among inexpensive plans, chose those plans with the highest CAHPS scores. In a CAHPS demonstration project in the state of Washington, those persons facing a switch in plans chose plans with higher CAHPS scores.

CAHPS also illustrates the point that the levels of impact are interrelated (i.e., impact at one level may be a prerequisite for impact at another level). For example, the feedback we receive from users at each of these levels informs our research and in turn leads to improvement of CAHPS.

CONVERGING EVIDENCE OF IMPACT

Often, the connection between a particular research project and health outcomes is indirect. To understand the impact of health services research on outcomes, we may need a new way to make these connections. It may be that the connection can be understood best by recognizing how the body of research sometimes comes together to tell a story that no individual project can tell. In this way, we may think of research as converging to tell the story of impact. This involves identifying a family of research, of converging journal articles: that is, articles whose research findings, when considered together, establish the connection between research and outcomes.

For example, research may show that a change in a clinical practice leads to better outcomes. Other research may show that an intervention designed to change clinical practice patterns, such as a quality improvement program, can cause that change in practice. In this case, as Figure 2 shows, the two studies converge to make the connection between research that developed and tested the quality improvement intervention and research that demonstrated that the improved practice led to improved outcomes. This is, of course, a syllogism; if A leads to B, and B leads to C, then A must lead to C.

Therefore, the idea of converging research, and of converging articles, is that the link can be made by marrying research results. In this illustration, two streams of research converge and make the link between quality improvement, improved practice, and improved outcomes. As a result, impact can be demonstrated for the research that led to the quality improvement program, and the Porter question can be answered for the research on quality improvement, but only because it converged with other research that showed the improved outcomes from improved practices.

An example of converging research is the marriage of outcomes research indicating that aspirin, beta blockers, and thrombolytic drugs can improve the health of heart disease patients with an AHRQ-supported study designed to increase adherence, in 37 Minnesota hospitals, to American College of Cardiology/American Heart Association guidelines recommending those drugs for eligible patients. The study found that in hospitals using local medical opinion leaders, the proportion of eligible elderly heart attack patients given aspirin increased by 21 percent, and the proportion given beta blockers increased by 33 percent. Taken by itself, this opinion leader research cannot demonstrate an improvement in outcomes, but its convergence with the outcomes data on the effect of this practice pattern shows how the quality improvement exercise can be linked to better outcomes (AHQA 2000).

A CALL TO ACTION

As health services researchers, particularly those of us who are stewards of the public purse, we must keep in mind that the ultimate purpose of our work is to improve health care and, as a result, the health of the public. One of the ways research helps us achieve that goal, as well as improve access to care and the affordability of that care, is to demonstrate what works in improving quality, outcomes, costs, and access. By doing so, we can test not only the outcomes of clinical services but also the outcomes of organizational and financial efforts to improve the cost, access, and outcomes of those services. The question posed by former Chairman Porter is timeless. To continue addressing it, and to address it more fully, we need your help.

The challenge of answering the question about whether health services research has made a difference is one we can take on together. AHRQ-funded researchers should keep the Agency informed of how their work is being used. We will continue to develop systems to track impact, so keep us posted on how your research is being used, and provide us with any leads you may hear about regarding other Agency-funded research. Please send this information to the Division of Public Affairs, Office of Health Care Information, Agency for Healthcare Research and Quality, 2101 East Jefferson Street, Rockville, MD 20852, or e-mail it to kmurray@ahrq.gov.

John M. Eisenberg, M.D., is the Director of the Agency for Healthcare Research and Quality.

NOTE

(1.) At that time, 1998, AHRQ was the Agency for Health Care Policy and Research.

(2.) For a list of these articles, readers are welcome to contact the Division of Public Affairs, Office of Health Care Information, AHRQ by sending an e-mail message to kmurray@ahrq.gov.

REFERENCES

American Health Quality Association (AHQA). 2000. A Measure of Quality--Improving Performance in American Health Care. Washington, DC: AHQA.

Bates, D. W., N. Spell, D. J. Cullen, E. Burdick, N. Laird, L. A. Petersen, S. D. Small, B. J. Sweitzer, and L. L. Leape. 1997. "The Costs of Adverse Drug Events in Hospitalized Patients. Adverse Drug Events Prevention Study Group." Journal of the American Medical Association 277 (4): 307-11.

Center for Health Policy Research and Education, Duke University. 1995. "Secondary and Tertiary Prevention of Stroke Patient Outcomes Research Team." Seventh Progress Report, AHCPR Contract No. 282-91-0028. Durham, NC: Duke University.

Gustafson, D. H., R. P. Hawkins, E. W. Boberg, and E. Bricker. 1994. "Impact of Computer Support on HIV-Infected Individuals--Final Report of Grant No. HS06177."

Hearings Before a Subcommittee of the Committee on Appropriations, House of Representatives. 1998. One Hundred Fifth Congress, Second Session, Part 3, Department of Health and Human Services, Public Health Service, Washington, DC, March 4.

Report on Medical Guidelines and Outcomes Research, February 18, 1999.

Stryer, D., S. Tunis, H. Hubbard, and C. Clancy. 2000. "The Outcomes of Outcomes and Effectiveness Research: Impacts and Lessons from the First Decade." Health Services Research 35 (5, Part 1): 977-93.

The Alliance. QualityCounts--Medical Group Report. Madison, WI: The Alliance.
Figure 1: Levels of Impact
Level 4: Impact on health care outcomes
Level 3: Impact on clinical practice
Level 2: Impact on policies
Level 1: Impact on knowledge and further research
Note: Stryer et al. (2000).
