
An integrated model for measuring management performance.

The terms of national debate having swung from whether to whither national health reform,(1,2) we are faced with a plethora of potential pathways. Witness the 26 proposals before the 102nd Congress.(3) Yet, while there is tremendous discontent with the status quo,(4) we remain deeply divided on fundamental reform values.(5) In addition to political equivocation,(6) there is a lack of clear scientific information on "value added" by even large health care organizations.(7) As one listens to the diverse voices of the national health debate,(8-14) one is struck by a common need--a single system by which the needs of patients, payers, and providers can be projected, analyzed, and more predictably satisfied. What is needed is a model of organizational performance that compels attention to the proper balance among the sometimes competing, sometimes synergistic forces of quality, cost, and access; takes into account patient perceptions; produces clear targets for continuous quality improvement (CQI); and yields a display easily understood by professionals and laymen alike.

Choosing to use existing U.S. civilian and military databases rather than start anew, we set out to design such a model. This project was undertaken to find a methodology capable of satisfying the immediate needs of payers, providers, and patients. It was also initiated to approach a more basic issue, one amenable to exploration within the context of our military health delivery system. To our knowledge, today's health management databases tend to focus on single or, at best, paired parameters and thus fail to convincingly capture the way in which health care organizations operate, which is across the functions of cost, quality, and access simultaneously. As soon as any element is left out, someone with an interest is disenfranchised. Thus, to the degree that existing databases have failed to express the totality of care, they have failed as well to inspire significant practice pattern changes and/or management efficiencies. In taking on the project, we recognized potentially enormous difficulties in finding enough commonality of definition, purpose, and data to achieve a meaningful product. We were spurred on by the belief that without some new perspective on how quality, cost, and access interrelate, medical managers, opinion leaders, and legislators might never be able to break "the American health policy gridlock."(15) Or worse, in the absence of such information, legislative solutions might be imposed that would ultimately prove far more costly than those they replaced.

Methods

Measuring Health Care Performance

In 1990-91, quality, cost, and access data were collected from Department of Defense (DOD) and civilian contract sources. The data were generated by the 22 medical treatment facilities (MTFs) assigned to the Strategic Air Command (SAC). Functioning as staff-model HMOs, they provide care to 800,000 patients across the United States, have a real property value of $700 million, and have an operations and maintenance budget of $160-190 million annually.

These data proved adequate for development and employment of the model. Starting with the premise that the ideal would be high-quality, low-cost, and high-access care at each medical facility, consensus panels began to quantify the three factors. The panels reasoned that, in a CQI milieu, anything to the good side of current normative (mean) values merited favorable placement on a scale, provided the scale had been leveled by such modifiers as severity, case mix, and patient perceptions. After incorporating these variables in the model, distribution curves and tests of statistical validity showed that, for each factor, data from across the facilities were comparable. The use of all databases cost the command $5,000/facility/year. The incremental cost of implementing the model was 20 percent of this figure.

A Three-Dimensional Representation

The presence of the three interrelated factors suggested the use of a three-dimensional model. Because the measure of each factor might be considered a continuous function, the use of response surface displays might have been elected. Instead, using the logic of the consensus panels, values for the three factors at the treatment facilities were displayed as high or low relative to a known norm. Each factor was given equal weight, because, at a given moment, quality, cost, or access might be paramount to a user. The factors were assembled in a cube projection of eight octants, so that it was possible to depict quality, cost, and access in a simultaneous, integrated relationship (figure 1, below). MTFs placing in the most favorable octant would be viewed as meeting or exceeding established criteria for high-quality, low-cost, high-access care. Conversely, those deficient in one or more measures would place elsewhere in the "management cube." By color coding the octants of the cube (all three factors favorable = "green," one unfavorable = "yellow," two or more unfavorable = "red"), the relative performance of the MTFs became apparent at a glance. For those MTFs not "in the green," the model provided clear quantification of what would move them from one octant to another. MTFs "in the green" became empirical models of successful staffing, resource, and administrative combinations for other facilities to emulate.

Mathematical Relationships

The positioning of a treatment facility in the cube is indicative only of relative rating. But, within the context of CQI, this is by no means disadvantageous. In fact, as an inherent mathematical function, should aggregate scores improve over time, arithmetic means will change and the graphical display will become a self-adjusting CQI tool--provided an institution is interested not only in its own progress, but also in its progress in relation to peer facilities.

Quality

Quality of medical care has been defined as a structure, a process, and an outcome.(16) Quality also has a strong perceptual element on the part of the patient. The consensus panels agreed that both objective and subjective elements deserved to be addressed but that greater weight should fall to objective outcome data. First, therefore, normative values were obtained. Because the Air Force was already a subscriber, the 10 indicators of the Maryland Quality Indicator Project (QIP) and its national database were used.(17) To these indicators (table 1, right) were added adjustments for severity of illness(18-22) and patient perceptions,(23) yielding our Quality Equation.

[MATHEMATICAL EXPRESSION OMITTED]

where

i = QIP indicator

[Beta] = an adjustment factor based on an average severity indexing for a particular treatment facility

I = the QIP indicator value at a particular treatment facility

[Micro] = the mean value of the QIP indicator for all treatment facilities

[Sigma] = the standard deviation for a QIP indicator based on all treatment facilities

[epsilon] = a patient perception adjustment factor (-.1,0,.1)

The basic equation is a summation over all 10 quality measures. The severity index applied was that of Gonnella,(19) generated for us under a contract. Quality indicator systems on the market today that have inherent severity indexing would not need a separate data source.
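The published expression is omitted from this copy. From the variable definitions above--a severity factor [Beta] applied to each facility indicator I, compared against the all-facility mean [Micro] and standard deviation [Sigma], summed over the 10 indicators, with a patient perception term [epsilon]--one consistent reconstruction (offered here as an assumption, not the published form) is:

```latex
% Hypothetical reconstruction of the Quality Equation from the text's
% variable definitions; the published expression is omitted in this copy.
Q \;=\; \sum_{i=1}^{10} \frac{\beta_i I_i - \mu_i}{\sigma_i} \;+\; \epsilon
```

This form agrees with the later statements that severity-adjusted indicator values are converted to standard normal values, that the 10 standardized values are summed with equal weight, and that a negative total indicates better-than-norm performance.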

Shown in figure 2, right, is the process whereby the adjustment factor is actually calculated. Each treatment facility receives a score, denoted by "I" in the equation, for each quality indicator used. The value "[Micro]" represents the average score for all hospitals for that indicator.

In order to move from a generic quality indicator to a severity-indexed product, the indicator has to be associated with a diagnosis-related group (DRG), a medical diagnosis code (MDC), or another medical grouping. Then, an all-hospital average (represented by "[Beta]I" in the Quality Equation) can be calculated for each quality indicator, permitting comparisons between a single hospital and the aggregate norm.

If, as was our case, severity values follow a Gaussian or normal distribution, the area under the curve can be used as an adjustment factor, moving a facility's product toward or away from the mean, depending on whether its patients were less ill than, as ill as, or more ill than the mean. In setting up the calculations, each facility would have an [Alpha] and a [Beta] value for each indicator. The basic equation would not change under conditions of other than a normal distribution, except that it would need to be expressed mathematically in its proper non-Gaussian standardized value for each indicator used.

By converting all quality indicator values to a standard normal value and treating all 10 indicators of equal importance, all 10 standardized values can be added together. As we set up the equation, if the result is a negative value, then, overall, the hospital is performing better than the norm and achieves a "high" quality rating to this point.
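The standardization-and-summation step can be sketched as follows. This is a minimal illustration under the assumed equation form (severity factor times indicator value, standardized against the all-facility mean and standard deviation); the helper name and default severity factor of 1.0 are inventions for the example.

```python
# Sketch of the quality standardization step: each severity-adjusted
# indicator value is converted to a standard normal value and the ten
# standardized values are summed with equal weight. A negative total
# means the facility performs better than the norm.
def quality_score(values, means, sds, severity=None):
    """Sum of standardized, severity-adjusted indicator values."""
    severity = severity or [1.0] * len(values)  # assume no adjustment if absent
    return sum(
        (b * i - mu) / sd
        for b, i, mu, sd in zip(severity, values, means, sds)
    )

# Example: an infection rate of 2.0 against an all-facility mean of 3.0
# (sd 1.0) standardizes to -1.0, i.e., better than the norm.
score = quality_score([2.0], [3.0], [1.0])
```

Note that because the quality indicators count adverse events, lower raw values produce more negative standardized values, which is why a negative sum maps to a "high" quality rating.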

Finally, patients' perceptions of quality are brought into play. For our purposes, Air Force survey data were used.(23) While the proper magnitude of this adjustment might be argued, the consensus panel decided that it should be of sufficient numerical strength to move performers that are near an octant interface into the octant or out of it, but not powerful enough to dislodge performers that are solidly within an octant. Because our data yielded quality values ranging from about -2 to +2, patient perception additives were assigned that would account for about 5 percent of the measured distribution and satisfy the panel's wishes:

+.1, if the patient surveys rate poor or very poor.

0, if the patient surveys indicate a neutral response.

-.1, if the patient surveys rate good or very good.

Cost

The same basic methodology is reflected in the overall Cost Equation.

[MATHEMATICAL EXPRESSION OMITTED]

where

i = the inpatient/outpatient indicator

[Beta] = an adjustment factor for case weight, severity, and the ratio of direct military cost to total military cost

I = direct military cost per catchment area employee

[Micro] = the benchmark cost against which facility costs are compared

For purposes of this article, the normative figure ([Micro]) in this case is a single value, one provided in a Mayo Clinic "benchmark" study.(24) As with the Quality Equation, there is a summation, this time of inpatient and outpatient costs. And there are [Alpha] and [Beta] compilations, now for three separate elements--case mix, severity index, and a "target recapture" goal. For the Strategic Air Command, the last element represents the portion of total costs not borne by the direct military care system (CHAMPUS insurance costs). Calculation of [Beta] is shown in figure 3, above.
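As with the Quality Equation, the published expression is omitted from this copy. By analogy with the quality formulation--an adjustment factor [Beta] applied to each cost value I, compared against the benchmark [Micro], summed over the inpatient and outpatient arms--a plausible reconstruction (an assumption only; note that a dispersion term [Sigma] is not defined in the cost variable list and is inferred here from the stated parallel with the quality calculation) is:

```latex
% Hypothetical reconstruction, by analogy with the Quality Equation;
% sigma_i is assumed from the text's statement that cost calculations
% "proceed in a fashion similar to those for the quality adjustment."
C \;=\; \sum_{i \in \{\text{inpatient},\,\text{outpatient}\}} \frac{\beta_i I_i - \mu}{\sigma_i}
```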

Recognizing that case mix and severity indexing are not the same (e.g., a terminally ill cancer patient might require few resources for care), the panels believed both elements should be included in this equation. They likewise believed that the relative success or failure of an institution to achieve a major cost recapture goal should be included. In this regard within the military system, both for Congress and the Department of Defense, a major interest item is the CHAMPUS insurance bill. In a civilian system, it could be virtually whatever executive management wished.

Once again, cost data followed a normal distribution, allowing calculations to proceed in a fashion similar to those for the quality adjustment. And as before, had the distribution not been normal, the basic setup would apply but would use a mathematical expression suitable for non-Gaussian distribution.

As with the quality adjustment, cost values are subtracted from .5 (the cumulative area at the mean of a standardized normal distribution), permitting up to a 50 percent shift in direction. Because out-of-pocket costs to patients in the military system are nominal, no patient perception adjustment ([Epsilon]) was used. In the civilian market, one would add an epsilon to this equation.

Access

For our purposes, access reflects the actual ability of military beneficiaries to get desired outpatient/inpatient treatment at a military treatment facility. It is also a product of inpatient bed capacity and outpatient appointments, specialty mix, and willingness of patients to use the system. Lack of access is reflected in care sought through CHAMPUS and, thus, "lost opportunity" for care is somewhat proportional to CHAMPUS costs. In a civilian setting, it might just as easily equate to a competitor's share of a desired market.

Development of a quantitative measure for access does incorporate the idea of "lost opportunity." Regarding inpatients, it consists of those patients opting for CHAMPUS when room in an MTF is available, up to an (optimal) 85 percent bed occupancy rate.(25) Lost opportunity for outpatients is derived from the number of patients seeking outside care up to an optimal 98 percent appointment fill.(26) The full Lost Opportunity Access Equation is expressed below.

[MATHEMATICAL EXPRESSION OMITTED]

where

i = inpatient or outpatient

C = CHAMPUS average daily patient load or outpatient visits

D = military hospital average daily patient load or outpatient visits

G = the goal for bed utilization or outpatient appointments

R = the number of beds in the military hospital or the number of available appointments

[Epsilon] = an adjustment for patient perception of access to military direct care (-.1,0,.1)

There is summation across outpatient and inpatient arms for both direct military care (substitute any civilian institution) and the measured portion of it that goes to CHAMPUS (substitute any civilian competitor). Patient perceptions are again taken into account, the strength of the shift this time being 10 percent of the distribution. Succinctly stated, this equation formalized the expression of access as a function of lost opportunity and patient perception.
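The inpatient arm of the lost-opportunity calculation can be sketched as below. The published equation is omitted from this copy, so the form here is inferred from the narrative (a measured value near unity means the 85 percent bed-utilization goal is met, and CHAMPUS patients count as lost opportunity only up to the goal-adjusted capacity); the function name and the split into a utilization ratio and a lost-opportunity count are illustrative assumptions.

```python
# Hypothetical sketch of the inpatient lost-opportunity measure.
# D = facility average daily patient load, R = number of beds,
# G = utilization goal (0.85 for beds), C = CHAMPUS average daily load.
def access_measure(D: float, C: float, R: float, G: float):
    """Return (utilization ratio, lost-opportunity patient load)."""
    capacity = G * R                       # goal-adjusted daily capacity
    lost = min(C, max(0.0, capacity - D))  # CHAMPUS care that could be recaptured
    return D / capacity, lost

# Example: 40 inpatients/day in a 50-bed facility with an 85% goal
# (capacity 42.5) and 10 CHAMPUS inpatients/day elsewhere.
utilization, lost = access_measure(D=40.0, C=10.0, R=50.0, G=0.85)
# utilization near 1.0 indicates the bed-utilization goal is being met;
# lost is capped at the 2.5 beds/day of unused goal-adjusted capacity.
```

The outpatient arm follows the same shape, with appointments in place of beds and a 98 percent fill goal in place of 85 percent occupancy.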

As set up, the equation can be used to examine inpatient access as follows. If the measured value for access is close to unity, the facility is reaching its goal of 85 percent bed utilization. If, despite that finding, CHAMPUS use (C) is large, additional beds may be needed, either through construction or increased staffing. If, on the other hand, the access value is low, something (real or perceived quality issues, specialty mix, available operating room times, etc.) has motivated patients to seek inpatient care outside the observed facility.

In the outpatient arena, if appointments are not filled and measured values are low, staffing may be in excess of need. Conversely, if actual figures exceed 100 percent of predicted capacity, a well-motivated but overworked staff may be the reason, and extra help might be in order.

Results

Three MTFs were excluded from the study. Two are clinics having no inpatient costs, and the third was destroyed by a tornado and was operating under radically different circumstances during the study. For the remaining 19 MTFs, all the comments that follow pertain to integrated performance across the parameters of quality, cost, and access. First, and as might be expected from the basic high/low approach, systemwide data displayed a spectrum of hospital performance. Nevertheless, military facilities all registered remarkably low per-capita costs when compared to the "Olmsted County benchmark."(24)

Adjusted through the Consumer Price Index to 1991 dollars, the "benchmark" yielded an average $2,130/employee figure. By comparison, direct care in the military system averaged $590/employee, and total per-capita costs in the Strategic Air Command system (direct care plus CHAMPUS) averaged $911/employee. ("Employee" was taken to mean active duty member, retiree, and dependent of retiree but not active-duty dependent, because this category of beneficiary was, to our understanding, not included in the Olmsted County capitation scheme.) Therefore, all MTFs qualified for a "low-cost" octant of the management cube. Four facilities (figure 4, below) also showed favorable results across the quality and access arms, earning an "in the green" rating. Seven had only one unfavorable measurement (either quality or access), giving them a "yellow" rating. None of the MTFs had all three measures to the unfavorable side. But nine had unfavorable results for both quality and access, placing them "in the red" (figure 5, page 36). For the quality arm, the proportion of "in the red" facilities would not have been predicted from unintegrated civilian-external peer review, Maryland QIP, military inspection, or JCAHO reports.

The model also yielded facility-specific information, down to the clinical service and provider levels. For example, when one MTF was examined in detail, some targets for CQI became immediately apparent. To achieve "in the green" overall performance matching that of the top four SAC facilities, the MTF needed to reduce hospital-acquired infections, unplanned hospital readmissions, and unplanned admissions following ambulatory procedures by 6 percent each over its current rates. Outlier services were OB/GYN and general surgery for the diagnoses cesarean section and postoperative wound infection. Also, for access to care, the MTF needed to recapture an average of only two more CHAMPUS outpatients and one CHAMPUS inpatient per day in order to move it into "the green."

Discussion

It has long been known that interrelationships exist between quality, cost, and access. But, to our knowledge, there has been no systematic exploration of the simultaneous performance of health care organizations across all three elements. Ideally, a model attempting to do so would incorporate the features enumerated at the outset of this article. Were these features achieved, the information coming from such a model would be of immediate use to providers, payers, and patients alike. In prototype, the "management cube" meets the objective of this project and begins to address the objections that inevitably follow when the measure of any one function is left out.

Because the model is prototypical, this article has been longer on methodology than on detailed results. While the model is largely the product of a military database, we've made an effort to show applicability in civilian settings. Across the various measures, the relative weights assigned to epsilon--the patient perception value--are among the most contentious. These might easily be differently weighted, depending on consumer interest and/or expert panel deliberations in nonmilitary settings.

The cost data are intriguing. While full exploration of them would be the subject of another article, it is obvious that there are patient population differences and accounting methodologies to take into account. Regarding the former, our patients seek care in the direct military system seven times more frequently than do their age- and sex-matched civilian counterparts. Yet, when they are ill, our data show that their severity index is half that predicted for their counterparts, and, at least against "the benchmark," costs for their care are very much lower. Still, it must be cautioned that the billed per-capita cost to civilian patients or payers may not be at the same baseline as is the billed per-capita cost to taxpayers through the military system, despite our efforts to match methodologies.

We chose the "lost opportunity" approach to access because we realized that demand may be infinitely fungible but an organization's capacity is not. It is much more directly measured, and, when set against some portion of care delivered elsewhere in the community, it is a real reflection of a facility's ability to meet a region's demand for either extra or a different mix of care. Our model thus deals with real access, not just the ability to pay.

Regarding the nine "in the red" facilities, the "low access" findings are probably a direct reflection of true difficulties patients face in getting into the system, with "lost opportunity" data and patient perceptions reinforcing one another. With respect to quality, the case is different. Two hospitals had raw outcomes (unadjusted by our indexing) for cesarean sections and complications of surgical procedures that were too high, so they clearly had intrinsic problems to resolve. But the other seven actually had initial values a bit to the good side of national norms. However, adjustments for severity (their patients weren't as ill) and for patient perceptions (they weren't as good) moved them out of "high-quality" octants relative to their Air Force peers. For these seven MTFs, the message really was to work on patient perceptions and recapture the care of somewhat sicker patients (which, by a combination of severity index and case weight, we call "intermediate" illness). As mentioned earlier, we had not anticipated from other monitors that the seven facilities would display as they did on the quality arm of the cube. Upon reflection, we concluded that cube placement was actually quite equitable in a CQI sense, and the display provided insight that would otherwise have been lacking.
Table 1. The Maryland Quality Indicator Project

Quality Indicators

I. Hospital-Acquired Infections
II. Surgical Wound Infections
III. Infant Mortality
IV. Neonatal Mortality (1801 grams only)
V. Perioperative Mortality
VI. Cesarean Section
VII. Unplanned Readmissions
VIII. Unplanned Admissions
IX. Unplanned Returns to Special Care Unit
X. Unplanned Returns to Surgical Suite



References

1. Coddington, D., and others. "Health Care Reform: How Hospitals, Physicians Should Prepare." Health Care Strategic Management, 10(4):15-8, April 1992.

2. Lundberg, G. "National Health Care Reform--The Aura of Inevitability Intensifies." JAMA 267(18):2521-4, May 13, 1992.

3. AMA Group on Health Policy. "Health System Reform Proposals During the 102nd Congress." AMA Delegates' handout, June 1992.

4. Iglehart, J. "The American Healthcare System. Introduction." New England Journal of Medicine 326(14):962-7, April 2, 1992.

5. Goldfield, N. "Why We Cannot Agree on the Direction of Health Reform: An Exploration of American Values." Physician Executive 18(4):16-22, July-Aug. 1992.

6. Brown, L. "The National Politics of Oregon's Rationing Plan." Health Affairs 10(2):29-51, Summer 1991.

7. Lawrence, D. "Fulfilling the Potential." Healthcare Forum Journal 35(2):31-7, March-April 1992.

8. Health Access America, 2nd Ed. Chicago, Ill.: American Medical Association, 1992.

9. Todd, J., and others. "Health Access America--Strengthening the U.S. Health Care System." JAMA 265(19):2503-6, May 15, 1991.

10. Wilensky, G., and Rossiter, L. "Coordinated Care and Public Programs." Health Affairs 10(4):62-77, Winter 1991.

11. Reinhardt, U. "An All-American Health Reform Proposal." Roll Call, April 19, 1992.

12. Lawrence, D. "The High Cost of Health." GAO Journal 13:14-5, Summer/Fall 1991.

13. Ball, J. "Health Care Reform and the Practice of Medicine." Montgomery-Dorsey Symposium, Vail, Colo., July 24, 1992.

14. Rother, J. "Health Care America-- Meeting America's Health Care Needs." Washington, D.C.: Legislation and Public Policy Division, American Association of Retired Persons, 1992.

15. Reinhardt, U. "Breaking American Health Policy Gridlock." Health Affairs 10(2):97-103, Summer 1991.

16. Lohr, K. "Outcome Measurement: Concepts and Questions." Inquiry 25(1):37-50, Spring 1988.

17. Quality Indicator Project, developed by the Maryland Hospital Association's Council for Quality Health Care, 1301 York Road, Suite 800, Lutherville, Md. 21903-6087.

18. Gross, P., and others. "Description of Case-Mix Adjusters by the Severity of Illness Working Group of the Society of Hospital Epidemiologists of America (SHEA)." Infection Control and Hospital Epidemiology 9(7):309-16, July 1988.

19. Gonnella, J. Disease Staging: Clinical Criteria. Santa Barbara, Calif.: McGraw-Hill, 1986.

20. Aronow, D. "Severity-of-Illness Measurement: Applications in Quality Assurance and Utilization Review." Medical Care Review 45(2):339-66, Fall 1988.

21. Thomas, J., and Longo, D. "Application of Severity Measurement Systems for Hospital Quality Management." Hospital and Health Services Administration 35(2):221-43, Summer 1990.

22. Markson, L., and others. "Clinical Outcomes Management and Disease Staging." Evaluation in the Health Professions 14(2):201-27, June 1991.

23. U.S. Air Force Patient Care Survey, Directorate of Professional Affairs and Quality Assurance, Office of the Surgeon General, Bolling AFB, D.C., May 1990.

24. Campion, M., and others. "The Olmsted County Benchmark Project: Primary Study Findings and Potential Implications for Corporate America." Mayo Clinic Proceedings 67(1):5-14, Jan. 1992.

25. Reeves, P., and others. "Estimating Requirements." In Introduction to Health Planning. Washington, D.C.: Information Resources Press, 1979, p. 185.

26. Anderson, H. "Outpatient Planning: Still More Art Than Science?" Hospitals 64(24):26-32, Dec. 20, 1990.

BG (Ret.) Michael J. Torma, USAF, MD, was Command Surgeon, Strategic Air Command, when this article was written. He now is Chair, Surgical Services, Presbyterian Healthcare System, Dallas, Tex. LTC (Ret.) Bernard W. Galing, USA, PhD, and COL (Ret.) Merton A. Quaife, USAFR, MD, are consultants in Omaha, Neb. CAPT Robert J. Palmer, USAF, MS, MBA; LCDR Suzanne K.S. West, USN, MS; MAJ Deborah C. Brown, USAF, MS; COL David K. Kentsmith, USAFR, MD; COL Patricia Chappell, USAFR, MS; and COL David C. Schutt, USAF, MD, are all on active or reserve duty with their respective military services.
COPYRIGHT 1993 American College of Physician Executives

Author:Schutt, David C.
Publication:Physician Executive
Date:Sep 1, 1993
Words:3962