
Peer review: a new tool for quality improvement.

In this article ...

Consider the basic prescription for adopting the quality improvement model of peer review and use a self-evaluation to rate your current peer review program.

What is the value of peer review in your organization? Is it likely that the program makes a significant ongoing impact on the quality, safety and patient-oriented outcomes of care? Or do you take it for granted that, in an era when health care organizations have embraced quality improvement principles, peer review has become an anachronism?

Based on my research and industry experience, I've found that many physician executives would like to obtain greater value from peer review and would say their programs could be vastly improved.

For this reason alone, it may be premature to declare the death of peer review. In fact, I have come to view peer review as a new frontier for quality improvement: both in terms of a field that is ripe for the systematic application of quality improvement principles; and as a refurbished tool to assist in the unending quest for better patient care.

Finding a dearth of literature on how to improve peer review, I undertook to assess the current state of the field together with Evan Benjamin, MD, chief quality officer at Baystate Health in Springfield, Mass.

With support from the ACPE, the University HealthSystem Consortium, Premier, Inc., and seven state hospital associations, we conducted an online survey of physician executives and hospital leaders. The survey covered 39 items related to peer review program structure, process, governance and outcomes, guided by the framework shown in Figure 1 (view the complete survey at www.wilson-edwards.com/survey.htm).

[FIGURE 1 OMITTED]

We ultimately acquired data from 339 institutions spanning the full spectrum of size and location, including 61 major teaching hospitals. The study offers a roadmap where none previously existed.

We observed wide variation in practices, but also some constants. Peer review is practically synonymous with, though not limited to, retrospective medical record review. Cases are identified through generic screens for adverse events, among other methods. Peer review is conducted in committees.

Important decisions are generally made by consensus, although in major teaching hospitals decision making by the department chair runs a close second. The overall staffing commitment to support peer review activity is relatively small: a median of 1.1 FTE per 100 beds, a mere 0.2 percent of the average hospital staffing. Most peer reviewers (80 percent) are not compensated.
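
As a point of reference, those two staffing figures are mutually consistent only if total hospital staffing averages roughly 5.5 FTE per bed. The arithmetic below is my own illustration, not a survey result:

  # Illustrative arithmetic only: reconciling the two staffing figures.
  # 1.1 FTE per 100 beds equals 0.2 percent of total staffing only if
  # hospitals average roughly 5.5 FTE per bed overall.
  peer_review_fte_per_bed = 1.1 / 100              # 0.011 FTE per bed
  implied_total_fte_per_bed = peer_review_fte_per_bed / 0.002
  print(implied_total_fte_per_bed)                 # -> 5.5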

But our key finding was that specific practices predict the level of belief that peer review has a significant, ongoing impact on the quality and safety of care. Chief among them are:

* Recognition for outstanding clinical performance

* Standardization and governance of peer review process

* Integration with hospital performance improvement activity

* Timeliness

* Identification of contributory clinician-to-clinician issues during case review

The package makes sense only in the context of a quality improvement framework. Therefore, some additional background on the history of peer review might help to put this finding in perspective.

History

Medicine has long relied on peer review to protect its professionalism and the quality of patient care. In 1979, the Joint Commission rolled out standards calling for an organized program of quality assurance (QA) that significantly shaped practice. (1)

As incorporated into peer review process, the QA model presumed that problems in care delivery could be minimized through inspection of untoward events to identify the underlying failure of professional judgment. As a result, peer review came to be conducted with the primary objective of determining whether substandard care occurred and, if so, who was responsible.

This was tantamount to endorsing the false belief that simply trimming one tail of the distribution will substantially improve overall performance (see Figure 2). Today, punitive processes are felt to be antithetical to quality improvement efforts.

[FIGURE 2 OMITTED]
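
A quick calculation shows why. Assuming normally distributed performance (an illustrative assumption on my part), culling even the worst 2.5 percent of performers raises the mean by only about 0.06 standard deviations:

  # Illustrative sketch: effect of trimming the lower tail of a
  # standard normal distribution of performance. Uses only the
  # standard library (erf for the normal CDF).
  from math import exp, pi, sqrt, erf

  def phi(z):   # standard normal density
      return exp(-z * z / 2) / sqrt(2 * pi)

  def Phi(z):   # standard normal cumulative distribution
      return 0.5 * (1 + erf(z / sqrt(2)))

  z0 = -1.96                        # cut the worst ~2.5 percent
  lift = phi(z0) / (1 - Phi(z0))    # mean of the truncated distribution
  print(round(lift, 2))             # -> 0.06 standard deviations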

The QA model was also wasteful. It captured only a small fraction of clinical performance data available to the review process. Moreover, it missed the opportunity to:

* Identify and correct problems in care processes and interfaces, which pose an ongoing threat to care quality and safety and which are at least twice as frequent as practitioner error

* Leverage data from aggregate reporting and trend analysis to promote improvement

* Deal with the gray zone of performance below the threshold for adverse action, before a serious adverse event occurs, while the personal and organizational cost for correction is low

Over the past decade, hospitals have adopted the quality improvement (QI) principles that successfully transformed other industries. Through a variety of tools and techniques aimed at standardizing and improving processes, and by providing performance feedback, QI seeks to "shift the curve" toward higher performance (see Figure 3). The potential for contemporary quality improvement methods to improve the quality of care is now widely accepted.

[FIGURE 3 OMITTED]

Within the domain of peer review, however, the tension between the QA and QI models has yet to be resolved. Peer review still seems to be widely practiced as a binary judgment of competence rather than as an assessment of clinical performance. We found that the tools commonly employed are simply not capable of measuring performance.

A minority of programs surveyed make structured ratings on multiple elements of performance. Only three use measurement scales that satisfy basic standards for reliability. The Joint Commission requirements for focused and ongoing professional practice evaluation (FPPE/OPPE) have been a mixed blessing. While they upped the ante for better performance measurement, they have inadvertently perpetuated the confusion between performance and competence.

Competence vs. performance

Competence is an enduring quality that is unlikely to change quickly in the absence of a physician health problem. On the other hand, QI principles warn us that performance is context-sensitive and is likely to vary when processes are not well-controlled.

This is not to say that peer review is never about competence. Rather, it is to suggest that the principal task should be to evaluate performance. If performance consistently or egregiously falls short of expectations, then the cause needs to be investigated: whether stemming from system issues, individual competencies, or both together.

Thus, peer review naturally serves as a major input to the FPPE/OPPE process, which in turn results in an assessment of competence to perform requested privileges.

So what would a QI model for peer review look like? Based on the findings from our survey, I created a peer review program self-evaluation tool (see the related article below) that will allow you to rate your own program. By design, the tool uses a 100-point scale and can be scored without a calculator.

As a reference, I estimated the total score for each survey respondent. Based on the assumptions used, the maximum achievable score was 91. The distribution of Total Scores ranged from 0 to 86 with a mean of 45, and satisfactorily approximated a normal curve.

A higher total score is strongly associated with a higher level of perceived quality impact, explaining 49 percent of the variance. Figure 4 graphically demonstrates the relationship of the mean total score to the estimates of quality impact.

[FIGURE 4 OMITTED]

A 10-point increase in total score is associated with a three-fold increase in the likelihood of higher quality impact and a doubling of the likelihood of higher reported medical staff satisfaction with the program.
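
To unpack those numbers: if the reported "likelihood" ratios are read as odds ratios (an assumption; the model form is not given here), the effect compounds multiplicatively:

  # Assumption: the three-fold "likelihood" per 10 points is an odds
  # ratio from a logistic-type model (the model form is unspecified).
  from math import exp, log

  beta_per_point = log(3.0) / 10             # ~0.11 log-odds per point
  print(round(exp(beta_per_point * 10), 1))  # 10-point gain -> 3.0x odds
  print(round(exp(beta_per_point * 20), 1))  # 20-point gain -> 9.0x odds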

The self-evaluation tool effectively differentiates hospitals on key parameters of peer review program structure, process and governance. The observed mean score of 45 on a scale of 100 reveals the large gap between current practice and the potential of the QI model, at least in the survey population. This tool should enable leaders to persuasively communicate improvement opportunity and monitor progress.

The self-evaluation items comprise a set of practices that form a working-draft QI model for peer review. I believe that additional factors will also prove important.

Quality improvement work is built on goals, performance measures and tests of process change. Therefore, to achieve better outcomes from peer review, physician and hospital leaders will need to promote experimentation and the sharing of results, and to think more deeply about how to measure peer review effectiveness.

There is likely room for improved efficiency in peer review case identification. Generic screens have low specificity for negligence and substandard care. (1) We know nothing about their value in identifying learning opportunities, both individual and organizational. This is an important area for further study.

Graber found that broadening the scope of peer review quadrupled the number of problems identified and greatly magnified the number of quality improvement projects initiated. (2)

One way to reset the frame for identifying cases for peer review would be to statistically monitor care process or outcomes measures (e.g., core measures, surgical complication rates). Such explicitly defined and relatively objective measures also serve to evaluate performance.
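
Here is a minimal sketch of that monitoring idea in Python, using a Shewhart p-chart as one conventional choice (our survey did not prescribe a method); the monthly figures are made up for illustration:

  # Illustrative sketch: flag periods whose surgical complication rate
  # exceeds the upper 3-sigma limit of a p-chart, as one way to trigger
  # focused case review from an outcomes measure.
  from math import sqrt

  def pchart_signals(complications, cases):
      """Return indices of periods above the upper control limit."""
      pbar = sum(complications) / sum(cases)   # overall complication rate
      signals = []
      for i, (x, n) in enumerate(zip(complications, cases)):
          ucl = pbar + 3 * sqrt(pbar * (1 - pbar) / n)
          if x / n > ucl:
              signals.append(i)
      return signals

  # Made-up monthly complication counts and surgical case volumes
  complications = [4, 3, 5, 14, 4, 3]
  cases = [120, 110, 130, 125, 118, 122]
  print(pchart_signals(complications, cases))  # -> [3]: the fourth month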

The relatively subjective assessments made in peer review complement such measures. That is one more reason to adopt structured peer review forms with reliable rating scales: they will not only provide better performance measurement, but will also offset any inefficiency in case identification by extracting useful data from every review.

I have found that, if well-designed, such tools can easily capture 10 times the information from peer review, with virtually the same reviewer effort. When aggregated, as few as five to eight repeat measures can reliably compare performance among physicians. (3)
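
The reliability math behind that five-to-eight figure is not shown here, but the Spearman-Brown prophecy formula is the standard way to project how reliability grows as reviews are aggregated. A minimal sketch, assuming an illustrative single-review reliability of 0.4:

  # The article cites the empirical result (3) without the math; the
  # Spearman-Brown prophecy formula is a standard projection. The
  # single-review reliability of 0.4 is an assumed illustrative value.
  def spearman_brown(rho1, k):
      """Reliability of the mean of k measures, each with reliability rho1."""
      return k * rho1 / (1 + (k - 1) * rho1)

  for k in (1, 5, 8):
      print(k, round(spearman_brown(0.4, k), 2))
  # -> 1 0.4, 5 0.77, 8 0.84: five to eight reviews reach the 0.7-0.8
  # range conventionally considered adequate for comparing individuals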

As an added bonus, structured review offers a more balanced and palatable platform for constructive feedback in comparison to an evaluation focused solely on the presence or absence of deficiencies.

Case volume was a factor contributing to perceived quality impact. This may seem surprising in relation to QI principles that seek to minimize the cost of inspection by designing quality into the process. A benefit was apparent above 1 percent of hospital inpatient volume. The explanation may be that clinical care is largely a poorly controlled process.

Adverse event rates typically exceed 3 percent, with physician management issues contributing to about a third of these. Thus, programs with review volume lower than 1 percent are likely passing over a lot of performance improvement opportunities.
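
To make the arithmetic concrete, consider a hypothetical hospital with 10,000 annual inpatient discharges (an assumed figure, not from the study):

  # Concrete arithmetic for an assumed hospital of 10,000 annual
  # inpatient discharges (an illustrative figure, not from the study).
  discharges = 10000
  reviews = int(0.01 * discharges)            # 1% review volume -> 100 cases
  adverse_events = int(0.03 * discharges)     # 3% adverse event rate -> 300
  physician_related = adverse_events // 3     # about a third -> 100
  print(reviews, adverse_events, physician_related)
  # Reviewing fewer than ~100 cases a year would not even cover the
  # expected physician-related adverse events, let alone near misses.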

Our study found merit in active medical staff governance of the peer review process and in communication to trustees. It would be difficult to systematically standardize and improve peer review processes without attentive leadership.

Where the medical executive committee is challenged to manage all its responsibilities, oversight of peer review might well be delegated to a subcommittee with representation from the MEC and key review committee chairs. The governing body must promulgate the vision, maintain the standards, facilitate linkages, create accountability, and otherwise act to assure the program's success.

In summary, my basic prescription for adopting the QI model of peer review includes these key ingredients:

* Distinguish between performance and competence

* View peer review as clinical performance measurement and improvement

* In each review, seek out whatever can be learned to improve both individual performance and the system of care

* Standardize review committee processes

* Use structured case review forms with reliable scales

* Aggregate data and analyze trends on a regular basis

* Provide timely and balanced performance feedback, including the recognition of excellence

* Monitor the outcomes of peer review

* Govern the process effectively

References

(1.) Sanazaro PJ, Mills DH. A critique of the use of generic screening in quality assessment. JAMA, 265(15), 1991.

(2.) Graber ML. Physician participation in quality management: expanding the goals of peer review to detect both practitioner and system error. Jt Comm J Qual Improv, 25(8), 1999.

(3.) Sanazaro PJ, Worth RM. Measuring clinical performance of internists in office and hospital practice. Med Care, 23(9), 1985.

RELATED ARTICLE: Peer Review Program Self-Evaluation Tool

Rate the following 13 aspects of your peer review program against the criteria provided. The maximum allowable points for each item are indicated. On items for which the response is unknown or indeterminate, score 0 points. The maximum possible Total Score is 100 points. (A short scoring sketch follows the last item.)
Standardization of Process (up to 10 points)

Criteria                                           Points

Peer review process is highly standardized.
The oversight committee approves all variation.      10

The process is largely standardized, but there
may be some unapproved variation                      6

The process is standardized, although there may
be significant variation                              4

The process may be somewhat standardized, but
variation is substantial                              0

Score

Structured Review (up to 10 points)

Criteria                                                          Points

Case review is documented by rating multiple elements of
performance on a template selected to match the specific type of
clinical activity being reviewed, possibly including an
overall score, a case analysis, etc.                                10

Case review is documented on a single template that rates
multiple elements of performance common to all
medical care, possibly including an overall score, etc.              6

The only rated element is an overall score                           2

Case review is generally unstructured or undocumented                0

Score

Recognition of Excellence (up to 10 points)

Criteria                                                Points

We have a method to identify and regularly provide        10
recognition for outstanding clinical performance

We occasionally recognize outstanding performance          5

We seldom, if ever, recognize outstanding performance      0

Score

Effective Governance of Process (up to 10 points)

Criteria                                                       Points

An oversight committee regularly reviews data involving the
peer review process and its outcomes, with meaningful
discussion directed towards ongoing improvement of the
process (irrespective of discussions about individual
performance issues)                                              10

There is regular review of data involving the process and its
aggregate outcomes, with little or no discussion                  5

There is little or no attention to the process and its
aggregate outcomes                                                0

Score

Rating Scale Reliability (up to 10 points)

Criteria                                                     Points

We rate elements of an individual's clinical performance
using scales with seven or more intervals
from best to worst                                             10

We use scales with five or six intervals from best to worst     5

Rating scales are either not part of our process, have less
than five intervals, or only score deviation from
the standard of care                                            0

Score

Reviewer Participation (up to 10 points)

Criteria                                                   Points

We have excellent participation by reviewers in the
peer review process                                          10

We have very good participation by reviewers in the
peer review process                                           8

We have good participation by reviewers in the
peer review process                                           4

At best, reviewer participation is only fair                  0

Score

Integration with Performance Improvement Activity (up to 10 points)

Criteria                                                Points

Peer review is highly interdependent with the
hospital's Performance Improvement
(Quality/Safety Improvement) process                      10

Peer review is at least fairly well-connected to the
hospital's PI process                                      5

At best, peer review is only somewhat connected
to the hospital's PI process                               0

Score

Identification of Improvement Opportunities (5 or 0 points)

Criteria                                                     Points

In each review, we look for process improvement
opportunities, including clinician-to-clinician issues, in
addition to evaluating individual clinical performance          5

In each review, we do little more than ask, "Was the
standard of care met?"                                          0

Score

Board Involvement (5 or 0 points)

Criteria                                                  Points

Trustees periodically receive information about
peer review activity beyond that which would be
reported in relation to an adverse action                   5

Trustees are only provided information in relation to
adverse actions                                             0

Score

Timely Performance Feedback (5 or 0 points)

Criteria                                                         Points

Cases are reviewed and opportunities for improvement are           5
communicated on average within three months of an occurrence

On average, more than three months is required                     0

Score

Case Review Volume (5 or 0 points)

Criteria                                                    Points

The total annual volume of cases reviewed is at least 1%       5
of hospital inpatient volume

The total annual volume is less than 1% of                     0
hospital inpatient volume

Score

Pertinent Diagnostic Studies (5 or 0 points)

Criteria                                                     Points

Pertinent diagnostic studies are routinely examined along       5
with the medical record

Only the medical record and the relevant diagnostic             0
reports are reviewed

Score

Monitoring Adverse Event Trends (5 or 0 points)

Criteria                                                      Points

Trends in adverse event rates (either globally or                5
by event type) are monitored as an outcome measure of peer
review activity by committees, departments or governance

Trends in adverse event rates are not monitored in the           0
context of peer review outcomes

Score
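
For convenience, here is a minimal sketch, in Python, of tallying the Total Score from the 13 items above. The point values come directly from the rubric; the item keys are invented labels for illustration:

  # Minimal tally sketch for the 13-item tool above. Point values come
  # directly from the rubric; the item keys are invented labels.
  ALLOWED = {
      "standardization":        {10, 6, 4, 0},
      "structured_review":      {10, 6, 2, 0},
      "recognition":            {10, 5, 0},
      "governance":             {10, 5, 0},
      "scale_reliability":      {10, 5, 0},
      "reviewer_participation": {10, 8, 4, 0},
      "pi_integration":         {10, 5, 0},
      "improvement_opps":       {5, 0},
      "board_involvement":      {5, 0},
      "timely_feedback":        {5, 0},
      "case_volume":            {5, 0},
      "diagnostic_studies":     {5, 0},
      "event_trends":           {5, 0},
  }

  def total_score(scores):
      """Sum the 13 items; unknown items score 0 (maximum: 100)."""
      total = 0
      for item, allowed in ALLOWED.items():
          value = scores.get(item, 0)   # unknown/indeterminate -> 0 points
          if value not in allowed:
              raise ValueError(f"{item}: {value} not in {sorted(allowed)}")
          total += value
      return total

  print(total_score({"standardization": 6, "recognition": 10}))  # -> 16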


By Marc T. Edwards, MD, MBA

Wilson-Edwards Consulting, West Hartford, CT

marc@wilson-edwards.com
