
Performance measures in ambulatory care. (Perspectives in Ambulatory Care).

AMBULATORY CARE ORGANIZATIONS face many challenges in the new millennium and are encountering increasing pressure to improve the quality of care, treatment, and service they provide while reducing costs. Multiple sites, geographic logistics, finite resources, and a variety of care delivery models challenge ambulatory care organizations to operationalize financially sound practices that ensure survival. Despite a shift in care delivery from inpatient to ambulatory care, performance measurement systems in ambulatory settings, such as freestanding or hospital-based group or individual physician practices, have not been widely developed. According to Roski and Gregory (2001), the Health Plan Employer Data and Information Set (HEDIS), the most widely used performance measurement set in the United States, includes a number of measures that evaluate preventive and chronic care delivery in ambulatory care settings. The HEDIS measures incorporate areas such as effectiveness, access, availability of care, utilization of services, and satisfaction with the experience of care. In addition, other quality measures/indicators can be derived from sample datasets, such as:

* Healthy People 2010 (DHHS)

* ANA Advisory Committee for Community Based Nonacute Care

* AHRQ: Agency for Healthcare Research and Quality (formerly AHCPR)

* NLHI: National Library of Healthcare Indicators

* Centers for Disease Control and Prevention (CDC)

* American Academy of Pediatrics (AAP)

Ambulatory care organizations can achieve higher quality at lower cost by implementing an effective performance improvement (PI) program. This article provides an overview of a performance improvement model, sample performance measures, and resources for creating a credible and sustainable ambulatory care performance measurement program.

Balanced Scorecard/Report Card

A need exists for organizations to assess and monitor performance in a variety of areas. In 1992, departmental operational indicators were first developed as a means of identifying efficiency opportunities within the professional revenue cycle at the Cleveland Clinic Foundation (CCF) Division of Medicine (Kaatz, Sargeant, Kay, Ahmad, & Stoller, 2000). Kaatz et al. (2000) explained that these first-generation tabular displays were "busy" and did not permit identification and display of acute points for review, leading to the development of a second-generation performance review instrument known as a balanced scorecard. Kaatz et al. (2000) show how inquiry into several operational aspects of health care delivery at CCF related to financial operations, service/quality, utilization/productivity, and access has effected change and improvement. The data are displayed on a performance wheel that summarizes the identified indicators, allowing easy comprehension of performance in various operational aspects and a visual appreciation of the interaction among these aspects.

A trend toward report cards became popular with the increase in managed care. Castaneda-Mendez, Mangan, and Lavern (1998) state that health care organizations should develop a balanced scorecard with measures that reveal the interdependency of business values, employee values, and patient values. The balanced scorecard or dashboard concept can assist leaders in developing and monitoring a strategic plan by creating a vision of the future behavior, performance, and perception of the organization using performance-improvement principles (Castaneda-Mendez et al., 1998). Management must review the balanced scorecard periodically to evaluate and take corrective and preventive action on planned initiatives that do not contribute to the strategic plan. Felix and Pyle (2000) note that "tools that document data on the health status, which can be affected by nurses, will begin to illuminate the value nurses add to the health care equation. These data, in turn, will provide recognition for the role nurses can play" (p. 419). The dashboard or balanced scorecard, commonly presented in a graphic format, is a way to view relationships among separate indicators of cost and quality that, together, give an indication of overall performance. Organizations have created varying types of report cards, with management identifying the specific indicators or measures considered important to track or the information most important to the consumers they serve.

Performance measures commonly identified by organizations in the literature are generally classified into the following categories: access to care, utilization and productivity, financial operations, and quality/service (see Table 1).

National Organizations and Quality

Accreditation from a national organization such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Accreditation Association for Ambulatory Health Care (AAAHC), the National Committee for Quality Assurance (NCQA), or the Community Health Accreditation Program (CHAP) provides a framework for assessing performance-improvement systems within ambulatory care organizations, evaluating both the administrative and clinical aspects of their operations. By requiring the application of a performance-improvement system, accreditation ensures that organizations evaluate their key systems in a routine, systematic, and continuous manner and that ownership of this work exists at all levels of the organization. There must be active participation at the physician, executive, and line-staff levels for the program to be effective.

The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) defines performance measurement in health care as representing what is done and how well it is done. The goal is to accurately understand the basis for current performance so that better results can be achieved through focused improvement actions. Performance improvement ensures that the organization designs processes well and systematically monitors, analyzes, and improves its performance to improve patient outcomes (JCAHO, 2002). The customers we serve are concerned about, and increasingly aware of, the basic structures that exist to measure quality, as well as health care organizations' report cards of quality that become public record.

UCSF Performance-Improvement Model

The University of California, San Francisco (UCSF) is an academic medical center that includes more than 900 physicians, a 500-bed hospital, and outpatient clinics in more than 75 specialties. A nationally designated Comprehensive Cancer Center and the UCSF Children's Hospital are recognized centers of excellence. The mission, vision, and values at UCSF are integral to staff performance and the development and implementation of all PI activities.

The PI program at UCSF Medical Center measures performance indicators using an aggregate quality dashboard. Performance indicators are measurable characteristics of products, processes, services, and systems used to track and improve performance. A dashboard is a tool that consists of the indicator, the actual performance, a performance benchmark, an indicator value, a data source, a benchmark source, trends, and comments. Information is collected and reported on a quarterly basis.
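For illustration only, the dashboard elements listed above (indicator, actual performance, benchmark, sources, trend, comments) could be modeled as a simple record. The field names and the no-show example below are hypothetical sketches, not taken from the UCSF tool itself:

```python
from dataclasses import dataclass, field

@dataclass
class DashboardEntry:
    """One row of a quarterly quality dashboard (hypothetical field names)."""
    indicator: str             # measurable characteristic being tracked
    actual: float              # performance observed this quarter
    benchmark: float           # target or comparison value
    data_source: str           # where the actual value came from
    benchmark_source: str      # where the benchmark came from
    trend: list = field(default_factory=list)  # prior quarters' values
    comments: str = ""

    def meets_benchmark(self, higher_is_better: bool = True) -> bool:
        # Flag whether performance is at or better than its benchmark.
        if higher_is_better:
            return self.actual >= self.benchmark
        return self.actual <= self.benchmark

# Example: a hypothetical no-show-rate indicator (lower is better)
entry = DashboardEntry(
    indicator="No show appointment rate (%)",
    actual=6.5,
    benchmark=8.0,
    data_source="scheduling system",
    benchmark_source="internal target",
    trend=[9.1, 8.2, 7.0],
)
print(entry.meets_benchmark(higher_is_better=False))  # True: 6.5 <= 8.0
```

A real dashboard would add the reporting period and pull actuals from the data source automatically; this sketch only shows the shape of one entry.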

A problem or process in need of improvement may be noted from the dashboard's trended data or from other sources in the organization. The process for addressing these issues is the UCSF Medical Center's PI Model, called IMADIM, which stands for: Identify, Measure, Analyze, Design, Implement, and reMeasure. A summary of the IMADIM PI model is provided in Table 2. An example of an ambulatory PI project utilizing clinical-performance measures in the IMADIM format is shown in Figure 1.

The PI process is crucial for health care leaders seeking to improve outcomes, adapt to change, and deliver cost-effective, high-quality patient care. Regulators, payers, and patients will continue to demand performance-based data that document compliance with quality standards and benchmarks. Using the dashboard as a tool to track organizational performance measures and the progress of critical indicators, together with a strong PI program, can help organizations cope with the changing health care environment.
Figure 1.
Performance Improvement Program

Ambulatory Pediatric Dialysis Clinical Performance Measures
Performance Improvement Program
created by James D. Cooke, RN, CNN, Manager and Anthony A. Portale, MD,
Chief of Pediatric Nephrology

Overview    Clinical Performance Measures 2002 is the performance
            improvement (PI) program of the Ambulatory Pediatric
            Dialysis Unit (PDU). This PI program is a comprehensive,
            continuous process that is adapted quarterly to the
            clinical needs of the PDU's patients. The goals of this
            program include:
              1. Provide adequate, effective, and safe treatment for
                 End Stage Renal Disease (ESRD)
              2. Provide accurate monitoring of patient laboratory
                 values
              3. Provide effective nutrition counseling to improve
                 growth and nutritional status and to promote general
                 health
              4. Provide effective patient education to improve
                 adherence to the dialysis prescription
              5. Vaccinate all ESRD patients for hepatitis B infection

Identify    The Pediatric Dialysis Unit has identified 7 major
            components of medical, nursing and nutritional care for
            continuous monitoring and intervention:
              1. Anemia status
              2. Bone health
              3. Adequacy of dialysis treatment
              4. Nutritional status
              5. Cardiac/PVD status
              6. Growth status
              7. Hepatitis B Vaccination

Measure     Clinical measurements (indicators) for both the
            hemodialysis and peritoneal dialysis programs of the PDU
            are collected monthly and include:


              Hemodialysis

              1. Hemoglobin
              2. Ferritin
              3. Transferrin saturation
              4. Ca/PO4 product
              5. Parathormone
              6. Potassium
              7. Albumin
              8. Hepatitis B vaccination
              9. Urea reduction ratio
              10. Urea kinetics
              11. Interdialytic weight gains
              12. Diastolic blood pressure
              13. Growth status (indicators)
              14. Access/blood infection rate

              Peritoneal Dialysis

              1. Hemoglobin
              2. Ferritin
              3. Transferrin saturation
              4. Ca/PO4 product
              5. Parathormone
              6. Potassium
              7. Albumin
              8. Hepatitis B vaccination
              9. Urea kinetics
              10. Growth status (indicators)
              11. Access/blood infection rate

            Measurements of each indicator are taken monthly for each
            patient eligible to participate in the PI program. Most
            chronic kidney disease (CKD) patients admitted to the PDU
            for more than 60 days participate. Ineligible patients
            include: patients within their first 30 days of CKD
            admission, acute renal failure patients temporarily
            dialyzing in the PDU while awaiting recovery of renal
            function, CRF patients with stays of less than 60 days, and
            guest CKD patients staying less than 60 days. Data, along
            with factors for individual outliers, are collected by the
            hemodialysis and PD registered nurses.
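One indicator in the lists above, the urea reduction ratio, has a standard published formula: URR = (pre-dialysis BUN - post-dialysis BUN) / pre-dialysis BUN x 100. A minimal sketch of that calculation follows; it is not part of the PDU program itself, and the target threshold a given program uses may differ:

```python
def urea_reduction_ratio(pre_bun: float, post_bun: float) -> float:
    """Urea reduction ratio (URR), in percent, from pre- and
    post-dialysis blood urea nitrogen (BUN) values."""
    if pre_bun <= 0:
        raise ValueError("pre-dialysis BUN must be positive")
    return (pre_bun - post_bun) / pre_bun * 100.0

# Example: pre-BUN 70 mg/dL, post-BUN 21 mg/dL -> URR of about 70%
print(urea_reduction_ratio(70, 21))
```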

Analyze     Raw data are collected by clinical staff, provided to the
            patient care manager (PCM), and plotted on a spreadsheet.
            Line and bar graphs illustrate trending of the data. Most
            data are organized by the monthly mean/median and the
            number of patients achieving the established goal; this
            varies by indicator (see attached graphs and charts). The
            PCM analyzes the data and outliers to create
            recommendations for consideration by the PDU PI Committee.
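The aggregation described above (a monthly mean plus a count of patients achieving the established goal) could be sketched as follows; the hemoglobin values and the goal value are invented for illustration:

```python
from statistics import mean

def summarize_indicator(values, goal, higher_is_better=True):
    """Summarize one month of per-patient indicator values:
    monthly mean and the number of patients achieving the goal."""
    n_at_goal = sum(
        1 for v in values
        if (v >= goal if higher_is_better else v <= goal)
    )
    return {"mean": mean(values), "n_at_goal": n_at_goal, "n": len(values)}

# Hypothetical hemoglobin values (g/dL) with an illustrative goal of >= 11
print(summarize_indicator([10.2, 11.5, 12.0, 9.8, 11.1], goal=11.0))
# -> mean about 10.92; 3 of 5 patients at goal
```

A spreadsheet does the same arithmetic; the point is that each indicator yields both a central tendency and a goal-attainment count for trending.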

Design      PI Committee membership consists of the PCM, Chief of the
            Division of Pediatric Nephrology, Medical Director of
            Pediatric Renal Transplantation, Pediatric Nephrologists,
            Renal Nutritionist and any other interested party. The PI
            Committee meets quarterly and considers year-to-date (YTD)
            data including: trending, patient outliers, staffing
            issues, equipment, and medical supplies. The PI committee
            recommends changes in PDU interventions, policies,
            procedures, protocols, equipment and medical supplies. PI
            reports are shared with the Medical Center Department of
            Infection Control and their input is integrated into the
            design and implementation phase of the PI program.

Implement   Possible strategies for responding to negative trends and
            outliers include:
              1. Revision of policies, procedures, and protocols
              2. Review of the patient education program
              3. Revision and/or creation of new patient education
                 materials
              4. Review of the validity and applicability of targets
                 and goals
              5. Review of updated national benchmarks for adjustment
                 of targets and goals
              6. Creation of incentive-based programs for pediatric
                 patients in which adherence to the treatment
                 prescription is the critical factor for certain PI
                 measures
            PI committee recommendations are implemented by the PCM,
            physicians, clinical staff, and the nutritionist.

reMeasure   Clinical performance measures (indicators) will continue to
            be collected monthly and analyzed quarterly. The PI
            committee will review the YTD PD data quarterly and will
            make recommendations to address negative trends and
            outliers.
Table 1.
Key Performance Measures in Ambulatory Care

Access to Care

* Appointment availability; for example, next vs. 2nd or 3rd available
* Bumped/rescheduled appointment rate
* No show appointment rate
* Wait time (exam room, waiting room)
* Cancellation rate
* Referral request turnaround
* Consult request turnaround
* Availability of urgent or walk-in appointments
* Telephone access (abandonment rate, average time before answered, total number of calls by agent, response time for clinical triage)

Utilization and Productivity

* Space/exam room utilization
* Visits per exam room
* Number of specialty referrals
* Number of ED visits
* Number of visits conforming to CPT codes
* Staff mix per visit
* Staff turnover rate
* Support staff FTEs per MD
* Expenses per visit
* Relative Value Units (RVUs): RVUs per visit, total RVUs, RVU variance to budget

Financial Operations

* Charge timeliness and accuracy (charge lag)
* Co-payment and cash collection rate
* Rejection/denial rate
* Accounts receivable days
* Insurance/
* Total visit volume: new patient, % change to prior, visit variance to budget
* Total direct cost per visit
* Revenue and expense per visit
* Billing timeliness and accuracy (billing lag)

Quality and Service

* Patient satisfaction
* Staff satisfaction
* PI projects; site specific
* Sedation outcomes
* Immunization rates
* Diabetes compliance (Hgb A1C, annual eye exams, foot care)
* Point of care testing compliance
* Population-specific guidelines, e.g., CHF, COPD, CF
* Prevention of tobacco use

Table 2.

IMADIM: Performance Improvement Model at UCSF Medical Center

I = Identify the Problem or Process for Improvement

1. Define the problem being addressed by the project. What trended data or anecdotal evidence do you have to indicate a need for improvement or to suggest concerns over the quality of care/service? (Report baseline data.)

2. How/why was this problem selected?

a. Targeted population is high dollar, high volume/high risk, low volume/high risk, and/or problem prone.

b. Project is consistent with mission, values, goals, and patient priorities of the organization.

c. Issue is a key theme for accreditation or other regulatory body, e.g., JCAHO, NCQA.

d. Sentinel event or near miss incident identified a process for improvement.

e. Other practical considerations: Costs: Can you pay for it? How much time will it take? Is it under your organization's control? Are staff members resistant to change? Are staff overloaded?

3. Who are the members of the PI team? Ensure adequate representation from all involved disciplines.

M = Measure the Current Performance of the Problem

1. Were the baseline data sufficient to understand and/or further define the problem, or did you need to collect additional data?

a. Are your data Quantitative: expressed in time, dollars, or patient expectations?

b. Are your data Accurate: is record keeping organized?

c. Are your data Informative: can they be translated into information that will guide change?

d. Are your data Graphic: can they be expressed in run charts, control charts, histograms, or dashboards?

e. Are your data Comparative: can you compare performance using internal benchmarks, comparable organizations, standards, and best practices?

2. What, if any, additional data were collected to further define the problem? Provide/attach results.

3. Indicators you use must be SMART: Specific, Measurable, Attainable, Relevant, and Time-based.

A = Analyze the Current Process for Improvement

1. Examine all the steps in the process you have selected for improvement.

2. Make certain that input from all disciplines involved in the process is represented.

3. Describe the method(s) used to analyze the process or processes of care that you are trying to improve. (Use flowcharts, fishbone diagrams, or other graphic tools.)

4. What did you learn from this analysis? Summarize distinct points that can be addressed in a redesigned process. What conclusions did you draw to help further refine your problem statement and provide a basis for targeting improvement strategies?

D = Design the Improved Process

1. Based on your analysis and baseline data, what areas were targeted for improvement and/or redesign and how did you select them?

2. Describe improvement strategies or the new process using a flow chart.

3. Highlight new process steps.

I = Implement the Improved Process/Improvement Strategies

1. How are you planning to implement the improvement strategies? What are the steps involved and the timeline for implementing the new process? Use a Gantt chart, MS Project task list, or activity network diagram to display the plan. Who is responsible for each step? Do you have "buy-in" from the key stakeholders?

2. If you have already implemented improvement strategies, how are things going and when will the outcomes be reviewed (for example, frequency)?

3. Did you build in a "pilot period" to check your group's assumptions?

M = reMeasure Performance after Implementation

1. How do you plan to measure the effectiveness of your improvement strategies and monitor response to your implementation, including additional costs, if any? Provide a timeline for completion.

2. If you have remeasured, what are your results (show trends)? What conclusions can you draw from your analysis of the remeasurement process?

3. What modifications have you made, if any, to your original plan?

4. How will you sustain the improvements? What are the project team's next steps?

Performance Measurement Publications

Joint Commission on Accreditation of Healthcare Organizations

Cost Effective Performance Improvement in Ambulatory Care. (2003).

Tools for Performance Measurement in Health Care. (2002).

Using Performance Improvement Tools in Ambulatory Care. (2001).

For more information on these publications visit: or call (630) 792-5800.

Organizations and Web Sites

* Agency for Healthcare Research and Quality

* American Academy of Ambulatory Care Nursing (AAACN)

* American Nurses Association

* American Society for Quality

* Center for Clinical Effectiveness Loyola University

* Institute for Healthcare Improvement

* Institute of Medicine

* International Council of Nurses

* Joint Commission on Accreditation of Healthcare Organizations

* Leapfrog Group

* Medical Group Management Association

* National Association for Healthcare Quality

* National Center for Patient Safety (NCPS)

* National Committee for Quality Assurance

* National Guideline Clearinghouse

* National Institutes of Health

* National Institute of Nursing Research

* President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry

* University HealthSystem Consortium

NOTE: This column is written by members of the American Academy of Ambulatory Care Nursing and edited by REBECCA LINN PYLE, MS, RN, Regional Nursing Guidelines Coordinator, Kaiser Permanente, Denver, CO. For more information about the organization, contact: AAACN, East Holly Avenue, Box 56, Pitman, NJ 08071-0056; (856)256-2300; (800)AMB-NURS; FAX (856)589-7463; E-mail:; Web Site:


Castaneda-Mendez, K., Mangan, K., & Lavern, A. (1998). The role and application of the balanced scorecard in healthcare quality management. Journal of Healthcare Quality, 20(1), 10-13.

Felix, K., & Pyle, R.L. (2000). Clinical performance improvement. In J. Robinson (Ed.), Core curriculum for ambulatory care nursing (p. 419). Philadelphia: W.B. Saunders Company.

Joint Commission on Accreditation of Healthcare Organizations. (2002). Tools for performance measurement in healthcare: A quick reference guide. Oakbrook Terrace, IL: Author.

Kaatz, T., Sargeant, M., Kay, R., Ahmad, M., & Stoller, J. (2000, September/October). Balancing the perfect scorecard. MGM Journal, pp. 30-40.

Roski, J., & Gregory, R. (2001). Performance measurement for ambulatory care: Moving towards a new agenda. International Journal for Quality in Health Care, 13(6), 447-453.

SHIRLEY M. KEDROWSKI, MSN, RN, is Consultant/Interim Patient Care Director, Ambulatory Services (2002-2003), UCSF Medical Center, San Francisco, CA; and Past President (2000-2001) of the American Academy of Ambulatory Care Nursing. For more information, e-mail

CINDY WEINER, RS, RN, is Manager, Endoscopy Department, UCSF Medical Center, San Francisco, CA; and Masters Student, Nursing Administration, UCSF School of Nursing.
COPYRIGHT 2003 Jannetti Publications, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2003 Gale, Cengage Learning. All rights reserved.

Article Details
Author: Kedrowski, Shirley M.; Weiner, Cindy
Publication: Nursing Economics
Date: Jul 1, 2003