
Problem solving and decision making with proficiency data; analysis of proficiency surveys can help a lab troubleshoot snags in its procedures.

Have you looked at your laboratory's proficiency survey results lately? If you're like most laboratorians, you probably glanced down the columns quickly to make sure there were no asterisks or other symbols signaling problems, and then promptly filed the report for later reference.

If you did note some problems, you might have studied the report a little longer--perhaps grumbling about something being wrong with the proficiency sample you received--but then dismissed the bad news as a one-time fluke.

If that's all you generally get out of proficiency survey results, you are overlooking a valuable resource in such areas as trouble-shooting laboratory procedures and making purchasing decisions.

Approximately 7,000 laboratories in the U.S. contribute to the College of American Pathologists' Proficiency Survey database. We will describe how the survey can serve as a technical and managerial tool, but first it's important to call attention to the CAP publications that help you use it.

"Interlaboratory Comparison Program' booklets come at the start of each year's program. They provide guidance on how to evaluate your results (and on how to fill out the worksheets and submit your data in the first place).

A publication called "Summing Up,' produced four times annually and included with returned survey data, reviews broad trends in past results and reports on upcoming survey changes. It often yields clues for improving future performance.

Finally, there are participant summaries that recap how laboratories performed on various analytes with different methods. The sheets give means, standard deviations, coefficients of variation, and ranges of values found (or, for qualitative tests, percentages of positive and negative results) for each specimen. These data furnish an industry-wide context when you want to assess your laboratory's proficiency and the relative accuracy and precision of your lab's methods.
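The statistics on those summary sheets are straightforward to reproduce. The sketch below, a minimal illustration and not CAP's actual calculation, shows how a peer-group mean, standard deviation, coefficient of variation, and range would be derived from a set of reported results (the function name and sample values are ours):

```python
import statistics

def peer_summary(results):
    """Summarize one analyte's peer-group results the way a
    participant summary sheet does: mean, SD, CV, and range.
    Illustrative sketch only; not CAP's published algorithm."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)       # sample standard deviation
    cv = 100.0 * sd / mean               # coefficient of variation, in percent
    return {"mean": mean, "sd": sd, "cv": cv,
            "range": (min(results), max(results))}

# Hypothetical peer results for one specimen (e.g., calcium, mg/dl):
summary = peer_summary([4.0, 4.2, 3.8, 4.1, 3.9])
```

A low CV across many participating labs suggests a precise method; a mean far from the target value suggests a bias shared by that method group.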

Let's look at the possible uses you might make of proficiency survey results. We'll start with troubleshooting of procedures.

In most cases, if a problem exists with a procedure or instrument, it will be identified in the summaries of performance by all participants. For example, in recent slide tests for rheumatoid factor, more than 97 per cent of the participants reported a survey specimen negative. But most labs using one manufacturer's procedure reported the specimen positive, disagreeing with all other procedures and the consensus. The conclusion can be drawn that there is a difficulty with this particular procedure.

A laboratory's own unacceptable results should be compared with results on previous surveys to see if the problem is an ongoing one. The unacceptable results should also be compared with daily quality control records to determine whether a systematic error exists. Because the CAP often includes values from very low normals to very high normals for each constituent, the lab should verify the linearity of each procedure. The protocol given as an example in Figure I lists other steps that should be taken when troubleshooting unacceptable results in chemistry.
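One common way to decide whether a survey result deserves this kind of troubleshooting is to express it as a standard deviation index, the number of peer-group SDs the lab's value sits from the peer mean. The sketch below is our illustration, with an assumed |SDI| > 2 flagging threshold; it is not the CAP's own grading rule:

```python
def sdi(lab_result, peer_mean, peer_sd):
    """Standard deviation index: distance of the lab's result from
    the peer-group mean, in units of the peer-group SD."""
    return (lab_result - peer_mean) / peer_sd

def needs_troubleshooting(lab_result, peer_mean, peer_sd, limit=2.0):
    """Flag a result whose |SDI| exceeds the limit (2 SD assumed here),
    i.e., one worth checking against prior surveys and daily QC."""
    return abs(sdi(lab_result, peer_mean, peer_sd)) > limit
```

A flagged result on a single survey may be a fluke; the same analyte flagged on consecutive surveys, or an SDI that drifts steadily in one direction, points to a systematic error.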

Survey data can spotlight a number of specific problems. For example, evaluations of most urinalysis dipstick results are based on consensus of survey participants. Since the ketone portion of the dipstick is the first to become insensitive to trace amounts of positive constituent, false-negative results in a particular laboratory may indicate the sticks have been exposed to too much humidity and should be replaced.

Laboratory personnel have difficulty interpreting the dipstick color of positive protein and bilirubin results. Survey results consistently indicate that automated instruments are much more accurate in reading the dipsticks.

Several of the chemistry constituents, including potassium, calcium, sodium, and chloride, are evaluated on the basis of arbitrary acceptable performance criteria rather than actual standard deviation calculations. Other tests are evaluated according to mean results obtained by particular methods. A uric acid evaluation, for example, would summarize survey results for the uricase, phosphotungstate, and iron reduction methods. Alert participants monitor the evaluations closely and may change a procedure in their lab if it does not compare favorably with another type of procedure.
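Method-specific evaluation amounts to grouping every reported value by the method used before computing summary statistics. A minimal sketch of that grouping, with hypothetical method names and values patterned on the uric acid example:

```python
from collections import defaultdict
import statistics

def means_by_method(reports):
    """Group (method, value) survey reports by method and return the
    per-method mean, mirroring how an evaluation would summarize
    uricase, phosphotungstate, and iron reduction results separately."""
    groups = defaultdict(list)
    for method, value in reports:
        groups[method].append(value)
    return {method: statistics.mean(values)
            for method, values in groups.items()}

# Hypothetical uric acid reports (mg/dl) from three method groups:
method_means = means_by_method([
    ("uricase", 5.0), ("uricase", 5.2),
    ("phosphotungstate", 5.6), ("iron reduction", 5.4),
])
```

Comparing your method group's mean against the others is precisely the check that might prompt a lab to switch procedures.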

For the past few years, the CAP has sold, as an option, duplicate vials of its chemistry survey specimens. These usually come in lyophilized form and can be stored in a freezer. Knowing what values your lab and others obtained for the original survey specimen enables you to use the duplicate as a check against daily quality control material. In addition, the extra specimen can serve as a nonbiased control when you try out a new procedure or instrument. It's better than relying on a control supplied for demonstration by the manufacturer.

We have also been able to use data from the surveys to hold in-services on slide identification and problem solving. The source material for the workshops comes from several proficiency series that include urine or blood cell slides. In one survey, the CAP received the following five different identifications on a hematology slide: segmented neutrophil, Auer rod, toxic granulation, Barr body, and Dohle inclusion body. Only one--toxic granulation--was correct. If a lab had another answer, an in-service could explore factors contributing to the error.

No other resource provides as much information as the CAP's quarterly participant summaries about how a given method or instrument has performed in the hands of so many laboratories. For this reason, they are an excellent shopping guide. A lab can use them to compare its current methods or instruments with products under consideration for purchase. Mean results give you a reasonable idea of an instrument's future accuracy, and coefficients of variation will tell you about its future reliability or precision.

One chemistry analyzer consistently demonstrated good accuracy and precision for BUN when the proficiency sample was in the normal range. When an abnormal BUN of 51 mg/dl was tested, however, the acceptable range of responses (±2 standard deviations) was 36-64 mg/dl. That's quite a variance.
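Working backward from that window shows why it is so wide: a ±2 SD band of 36-64 mg/dl implies a peer mean near 50 mg/dl and a standard deviation of about 7 mg/dl, a CV of roughly 14 percent. A minimal sketch of the arithmetic (the function name and back-calculated values are ours, not published CAP figures):

```python
def acceptable_range(peer_mean, peer_sd, k=2.0):
    """Acceptable-response window of +/- k SD around the peer mean.
    With an assumed mean of 50 mg/dl and SD of 7 mg/dl, this
    reproduces the 36-64 mg/dl window cited for the abnormal BUN."""
    return (peer_mean - k * peer_sd, peer_mean + k * peer_sd)

low, high = acceptable_range(50.0, 7.0)  # assumed peer statistics
```

The wider the window an instrument group needs at abnormal concentrations, the less precision you can expect from that instrument exactly where clinical decisions depend on it.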

Asking sales representatives how their product performs on the CAP survey is the right intention but wrong approach. Let the buyer beware: If you're planning a purchase, you should personally check the product's track record against others in the survey.

Besides the benefits already cited, proficiency surveys can play a major role in standardizing your lab with others throughout the country, maintaining accreditation, and documenting quality.

There is little doubt that proficiency surveys have had an impact on the standardization of laboratory testing. Perhaps the most obvious example has been the consensus that labs have reached on glucose results by way of interlaboratory comparisons. Regardless of whether a glucose test is performed in North Platte, Columbus, New York City, or Los Angeles, results have been standardized for accuracy and reliability. Consequently, if a patient travels across the country, his or her glucose values will show minimal fluctuation due to differences in laboratory testing.

Although the CAP has said that a lab's results on the proficiency surveys are not to be used as the sole criterion for CAP accreditation, most states do want to monitor proficiency surveys. Proficiency testing, whether by CAP or another recognized agency, is a requirement for Medicare certification.

The Summer 1982 issue of "Summing Up" stated the CAP's philosophy: "The primary mission of this program has been to follow and to document the state of the art of laboratory medicine as it evolves. We strive toward uniformity by consensus . . . not standardization by regulation." This important premise allows laboratory practitioners in the private sector to do for themselves what the Government might otherwise mandate through legislation.

The final and perhaps most obvious role of proficiency survey testing is documenting quality. Even if indications of a problem seldom appear, proficiency surveys represent money well spent to corroborate how well a laboratory is performing. Most hospital labs could take a lesson from reference labs that use this documentation of quality as a marketing tool.

Figure I: Troubleshooting an unacceptable chemistry survey result
COPYRIGHT 1986 Nelson Publishing

Article Details
Author: Snyder, John R.; Glenn, David W.
Publication: Medical Laboratory Observer
Date: Feb 1, 1986
Words: 1298