
Analyzing and reducing laboratory quality costs

You can actually improve quality while cutting quality costs; here's how to do it.

Many laboratory professionals are convinced that diagnosis related groups diminish the quality of clinical lab testing. They believe cost-effectiveness and quality are inversely related, and thus any policy that cuts quality control costs must also cut quality. Unfortunately, this viewpoint delays the implementation of much needed changes in laboratory quality assurance programs.

The purposes of this article are to define and examine the relationship between quality, quality costs, and quality control, and to suggest ways to increase quality while decreasing quality costs.

What is quality? Juran, the industrial quality and productivity guru, defines quality as producing a product that is "fit for use." This definition is user-oriented because fitness is determined by the consumer of the product or service. That is, quality, like beauty, is in the eye of the beholder.

Quality can also be defined in technical terms. In this context, quality means providing a product or service that conforms to certain tolerances or specifications. This definition is technical because technicians and engineers, not the consumer, set the tolerances. This latter definition is the one most widely accepted by the laboratory community.

What are the costs of quality? In the laboratory, quality costs are generally perceived as the money spent to produce clinically meaningful test results. In industry, though, quality costs are defined as the costs "of making, finding, repairing, or avoiding defects." The distinction is important: Industry associates quality costs with defective products, whereas the lab connects them with good products. Laboratorians believe you get what you pay for, so the more you spend, the better the quality.

By industry's definition, if a patient specimen is analyzed on a precise and accurate instrument by an individual who never makes a mistake, each test result is perfect. If all test results are error-free, zero quality costs are generated. It follows that changes in the test system that lead to fewer errors reduce quality costs and improve quality.

But laboratorians may be slow to take advantage of this because of their price-tag mentality about quality. They run the same number of controls and reagent blanks on a new, stable instrument as on the old one, and they set tight standards of performance in the mistaken belief that this will produce a better product. So a change in attitude is also necessary to reduce quality costs.

With this perspective, quality costs are divided into four broad categories: prevention, internal failure, appraisal, and external failure. Preventive quality costs include the expenses of training new personnel and providing continuing education for existing employees. The expenses of developing, implementing, and testing new procedures are preventive costs, as are the costs of maintaining and servicing instruments.

Appraisal and internal failure costs are closely related. The latter are the costs that disappear if there are no errors in the testing process. If tests are not repeated due to instrument malfunction, operator error, etc., the laboratory incurs zero internal failure costs.

Appraisal or inspection costs are the costs of detecting internal failures. These include the expense of running controls, calibrators and standards, and the cost of participating in interlab proficiency surveys.

External failure costs result when a defective product escapes internal detection and reaches the consumer. In a business setting, consumers usually return defective products to the company, which must then fix the product or refund the consumer's money. In either case, the company suffers a definite cost.

In the laboratory, the alternatives are not so cut-and-dried. If an erroneous test result reaches the physician, two things could happen. The physician may realize the result is incorrect because it doesn't fit in with the clinical circumstances and repeat the test. If he asks the laboratory to repeat the test at no charge, or the lab is operating in a prospective payment system, the laboratory incurs the extra cost. If, however, the laboratory bills the patient for the repeat test, then the cost is transferred to the patient or his insurer.

Alternatively, the physician could act on the erroneous test result, and in so doing harm the patient. Clearly this sequence of events leads to a cost, sometimes a very dear one.
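The four categories can be collected into a simple grouping, a natural starting point for a lab accounting system. The sketch below is illustrative only; the line items are paraphrased from the discussion above, and the structure is ours, not the article's:

```python
# Illustrative grouping of laboratory quality costs into the four
# broad categories: prevention, appraisal, internal failure, external failure.
quality_cost_categories = {
    "prevention": ["personnel training", "continuing education",
                   "new-procedure development and validation",
                   "instrument maintenance and service"],
    "appraisal": ["controls", "calibrators and standards",
                  "interlab proficiency surveys"],
    "internal failure": ["repeats after instrument malfunction",
                         "repeats after operator error"],
    "external failure": ["unbilled physician-requested repeats",
                         "consequences of acted-on erroneous results"],
}

for category, items in quality_cost_categories.items():
    print(category, "->", "; ".join(items))
```

Tracking line items under these headings is what lets a laboratory see where quality dollars actually go.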

Some quality costs are easy to calculate. Education, instrument maintenance, quality control materials, and lab proficiency surveys are line items on most lab budgets. But other less tangible quality costs, such as labor time and wear and tear on instrumentation, are more difficult to quantitate and are usually underestimated. A lab accounting system is necessary to track these expenses accurately and pinpoint trouble areas.

Table I displays test volume and quality cost data for a 400-bed, non-teaching hospital laboratory. The difference between total tests and billable procedures is the number of controls, duplicate specimens, standards, calibrators, and patient repeats. Direct costs are the costs of the resources used in the testing process, i.e., labor and supply expenses.

This laboratory spent $5,375,035 on testing last year. It performed 402,231 billable procedures and 278,003 controls, standards, calibrators, and patient repeats, for a total of 680,234 tests. Because the testing process is not perfect, $1,520,065, or about 28 per cent of total direct costs, was spent preventing and detecting errors. These are the quality costs. Chemistry, the largest department, accounts for 63 per cent of all quality costs.

Figure I displays the cost per test for each department. For the laboratory overall, the cost per billable procedure is $13.36. Of this amount, $9.58 is the analytical cost and $3.78 is the quality cost.
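The arithmetic behind Table I and Figure I can be checked with a short calculation; the figures are the article's, while the variable names are ours:

```python
# Figures from Table I (400-bed, non-teaching hospital laboratory).
total_direct_costs = 5_375_035   # dollars spent on testing last year
billable_procedures = 402_231
nonbillable_tests = 278_003      # controls, standards, calibrators, repeats
quality_costs = 1_520_065        # prevention + appraisal + failure costs

total_tests = billable_procedures + nonbillable_tests
quality_share = quality_costs / total_direct_costs
cost_per_billable = total_direct_costs / billable_procedures
quality_cost_per_billable = quality_costs / billable_procedures

print(total_tests)                          # 680234 total tests
print(round(quality_share * 100))           # 28 per cent of direct costs
print(round(cost_per_billable, 2))          # 13.36 per billable procedure
print(round(quality_cost_per_billable, 2))  # 3.78 of that is quality cost
```

The remaining $9.58 per billable procedure is the analytical cost, consistent with Figure I.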

Once quality costs are quantitated, the next step is to trim wasteful expenses. This is accomplished by setting realistic accuracy and precision standards for each laboratory assay, by choosing the most efficient test methodologies, and by deciding how frequently and how closely to monitor those methodologies for conformance to the standards.

In order to set reasonable standards, it's necessary to define "fit for use" criteria. Since physicians are the consumers who use lab results to diagnose, monitor, and treat diseases, logically they and not the laboratorian should set these criteria for the lab assays.

Recently, Skendzel, Barnett, and Platt conducted a physician survey on lab utilization. Table II lists the precision requirements for selected lab tests as determined by this survey. The tolerance range is the degree of precision a physician expects from a test, or in other words, the performance standard of the assay. For example, if a physician feels a significant change between two sequential blood glucose measurements is 30 mg/dl, then the tolerance range for a glucose measurement is 30 mg/dl, and the assay must be able to discern a 30 mg/dl difference in the analyte.

The process capability of a test methodology must fall within this tolerance range in order to provide the required degree of precision. The process capability defines the maximum level of precision the methodology can achieve when operating in a state of statistical control. It is the best the process can do. Process capability reflects inherent random variation and is defined as six times the assay's standard deviation.

For example, a blood glucose assay with a process average of 100 mg/dl and a standard deviation of 2.8 mg/dl has a process capability of 16.8 mg/dl (6 times 2.8). If the true glucose value of a blood specimen is 100 mg/dl, then about 99.7 per cent of the measurements made on the specimen by this method fall between 91.6 mg/dl and 108.4 mg/dl (±3 SD). In other words, the physician can be about 99.7 per cent confident that the given laboratory value for glucose is within ±8.4 mg/dl of the true patient value. Similarly, the physician can be about 95 per cent sure that the lab value is within ±5.6 mg/dl (±2 SD) of the true value.
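A minimal sketch of the glucose arithmetic, assuming normally distributed random error as the text does:

```python
# Glucose example: process average and SD are from the text.
mean = 100.0   # true glucose value, mg/dl
sd = 2.8       # assay standard deviation, mg/dl

process_capability = 6 * sd   # inherent random variation, defined as 6 SD
print(round(process_capability, 1))   # 16.8 mg/dl

# About 99.7% of measurements fall within ±3 SD of the true value.
print(round(mean - 3 * sd, 1), round(mean + 3 * sd, 1))   # 91.6 108.4

# About 95% fall within ±2 SD.
print(round(mean - 2 * sd, 1), round(mean + 2 * sd, 1))   # 94.4 105.6
```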

The capability ratio describes the relationship between the physician-defined tolerance requirements and the process capability of an assay or method. The capability ratio is calculated by dividing the process capability by the tolerance range. Using the glucose example, if the process capability is 16.8 and the tolerance is 30, the capability ratio is .56 (Figure II).

Capability ratios can be used to divide laboratory assays into two broad categories: assays that provide more precision than required (capability ratios <1); and assays that lack the precision to meet tolerance needs (capability ratios >1).
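The ratio calculation and the two-way classification above can be sketched as follows; `capability_ratio` and `classify` are hypothetical helper names, not from the article:

```python
def capability_ratio(sd, tolerance):
    """Process capability (6 SD) divided by the physician-set tolerance range."""
    return 6 * sd / tolerance

def classify(ratio):
    """Place an assay in one of the two broad categories."""
    if ratio < 1:
        return "more precision than required"
    return "lacks the precision to meet tolerance needs"

# Glucose example from the text: SD 2.8 mg/dl, tolerance 30 mg/dl.
r = capability_ratio(2.8, 30)
print(round(r, 2))   # 0.56
print(classify(r))   # more precision than required
```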

Although quality control strategies differ for these two groups, their object is the same: to identify and eliminate assignable variation from the testing process. As a general rule, assignable variation shifts the process average of instrument-dominant methodologies such as automated chemistry assays because instruments tend to err consistently, whether high or low. In operator-dominant methodologies such as RIA chemistry, blood banking, and microbiology, assignable variation increases the amount of variation from the process average because human beings make random mistakes.

A laboratory test result is an intangible measurement. Unlike manufacturing in industry, a laboratory test doesn't produce a concrete product to inspect and sort. So quality control efforts must be aimed at monitoring the testing process. Monitoring is accomplished by placing biologic controls on each test run. If the controls fall within the tolerance range, one can assume the assay is performing properly and the patient results are fit for use.

Detection of assignable variation depends on the placement of quality control limits on each assay and on the frequency that each assay is monitored. If the QC limits are set too loosely or the assay is infrequently monitored, the lab risks releasing bad test results. This increases external failure costs. If the limits are set too tightly or the assay monitored too often, then many valid results are not released. This increases internal failure and appraisal costs. Obviously if an assay delivers more precision than the user requires and remains stable for prolonged periods, it makes economic sense to loosen the controls and eliminate the cost of rejecting fit-for-use tests.
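The accept/reject decision for a single run can be sketched as a simple limit check; the target, SD, and control values below are hypothetical, chosen only to illustrate the trade-off:

```python
def in_control(control_value, target, sd, k=3.0):
    """Accept a run when the control result falls within target ± k·SD."""
    return abs(control_value - target) <= k * sd

# Hypothetical glucose control: target 100 mg/dl, SD 2.8 mg/dl.
print(in_control(104.0, 100.0, 2.8))   # True: within ±3 SD, release results
print(in_control(110.0, 100.0, 2.8))   # False: out of control, investigate
# Raising k lowers appraisal and internal failure costs but raises the
# risk of external failures; lowering k does the reverse.
```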

The ideal clinical laboratory assay has a low capability ratio and a reliable performance record. If the capability ratio is .50, for example, then the precision of the assay is tight relative to the tolerance requirements; iron and glucose are two examples of very precise assays. Even if assignable variation enters the system and shifts the process average two or three standard deviations, fit-for-use results are still produced. If the control limits are set at ±3 SD, then out-of-control situations can be investigated and corrected without holding up or repeating patient results. This greatly reduces internal failure costs.

If the assay is stable and remains in control over several days, it's not necessary to monitor its performance on a run-by-run, shift-by-shift basis. This significantly reduces appraisal costs.

This concept has been in practice in industry for a long time. In the Japanese car industry, the capability ratios on metal casting processes are all below .60. The result is that Japanese manufacturers never shut down an assembly line; even when the process starts to drift, they can correct it before they produce bad cars. American capability ratios are much higher, which is one reason our factories are less efficient.

At the other end of the spectrum are the assays with high capability ratios and poor or unpredictable performance records. These assays lack the precision to meet the tolerances and go out of control at unpredictable intervals.

The only way to improve the precision of these assays is to change the process or repeat the measurement. Changing the process means either tinkering with the method or choosing a new process with less random variation. Adding more or less reagent or lengthening the incubation times are examples of changing a process; occasionally, through trial and error, the tinkerer gets lucky and performance improves. Some of the random variation becomes assignable and is eliminated from the system. But it's much easier and usually less expensive to simply pick a new method with a lower standard deviation than to struggle to improve an existing process.

Making multiple measurements of the same patient specimen is the second way to improve the precision of a result. The use of multiple measurements is based on the fact that the distribution of specimen means is narrower than the distribution of individual values: the standard deviation of the mean of n measurements is the single-measurement standard deviation divided by the square root of n. It is therefore possible to halve the imprecision of a method by quadrupling the number of measurements. If the glucose sample is run four times and the results are averaged, the process capability decreases from 16.8 mg/dl to 8.4 mg/dl. Of course, there is an economic trade-off. Does the increased precision justify the laboratory's higher processing costs?
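The halving follows directly from the square-root rule; a quick check using the glucose figures:

```python
# The SD of the mean of n replicates is sd / sqrt(n), so n = 4 halves it.
sd_single = 2.8                  # single-measurement SD, mg/dl
n = 4
sd_mean = sd_single / n ** 0.5   # 1.4 mg/dl

print(round(6 * sd_single, 1))   # 16.8 process capability, single run
print(round(6 * sd_mean, 1))     # 8.4 after averaging four runs
```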

It is possible to maintain quality in a cost-efficient laboratory while minimizing quality costs. A lab accounting system can identify the areas that need attention. Internal failure costs can be significantly reduced if the lab uses physician input to set realistic performance standards for each assay. Appraisal costs can be reduced by selecting test methodologies with low capability ratios and reliable performance records. Higher precision may necessitate a greater immediate investment, but in the long run it pays for itself in quality cost savings.
COPYRIGHT 1986 Nelson Publishing

Article Details
Author: Sharp, James W.; Warshaw, Myron M.; McLaughlin, Susan B.
Publication: Medical Laboratory Observer
Date: Oct 1, 1986