# Keeping two automated cell counters in calibration synch.

A minor variation in calibration between instruments or a slight
drift can produce major discrepancies in results. This quality control
approach overcomes the problem.

Several years ago, with workload climbing toward the current volume of 600 blood specimens per day, our hematology laboratory acquired its second Coulter S Series cell counter. That helped ease the work flow, but it created another challenge. Since either instrument or both could be in use at any time, we wanted to be able to assure physicians in our 852-bed hospital that a change in patient results reflected a change in patient status, not a change in instrumentation.

This meant more than ensuring that each instrument maintained calibration, accuracy, and precision. Now we would have to see that the counters were calibrated as closely as possible and that their confidence limits were nearly the same.

Why develop side-by-side quality control for identical instruments that are already calibrated correctly? Because even a minor variation in calibration or a slight drift can lead to major discrepancies in test results. We needed a simple QC program that would quickly identify problems before they jeopardized an entire day's workload.

The plan we devised in 1983 uses the tools at hand: commercial controls and calibrators, patient specimens, and the laboratory computer for statistical analysis. We initially calibrate the cell counters with a commercial calibrator. Then we perform our side-by-side studies of calibration and precision with patient specimens. Since commercial controls are costly, we decided to reserve them for accuracy checks.

Actual specimens are inexpensive, readily available, and representative of the hospital's patient population. The decision to let the laboratory computer handle the statistical analysis was mostly a matter of convenience; a simple calculator could be used instead.

Incidentally, the same basic principles of our multiple-instrument quality control plan hold whether the instruments are identical or come from different manufacturers. Here's how the QC plan works:

* Selecting QC specimens. Results falling at the extreme ends of the range are likely to bias the statistical calculations, so it's best to choose patient specimens that are not too abnormal. We do not exclude abnormals altogether, however, because quality control specimens should reflect the patient population.

As soon as possible in the morning, a technologist gathers 10 specimens whose values on one of the cell counters meet the laboratory's quality control criteria (Figure I). We set up the criteria based on our experience with these counters.

* Obtaining QC data. The technologist records values for the white blood count, the red blood count, hemoglobin, mean corpuscular volume, and the platelet count (any other parameter can be monitored if one wishes). Then the technologist runs the same 10 QC specimens on the second analyzer and again records values for the five parameters.

The two sets of values are used to compute the mean differences between cell counters for each parameter, along with the confidence limits for these differences. If the instruments have been properly calibrated and running without problems, their values should be very close.
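The mean-difference computation described above is straightforward; a minimal sketch in Python follows. The ten paired WBC values are invented for illustration, not taken from the article's data:

```python
import statistics

# Paired results for one parameter (e.g., WBC) from the two counters;
# the ten specimen values below are hypothetical.
counter_a = [6.2, 7.8, 5.4, 9.1, 4.8, 6.9, 8.3, 5.9, 7.2, 6.5]
counter_b = [6.3, 7.7, 5.5, 9.0, 4.9, 7.0, 8.2, 6.0, 7.1, 6.6]

# Specimen-by-specimen differences between the counters
diffs = [a - b for a, b in zip(counter_a, counter_b)]

# Mean difference: should sit close to zero when calibration agrees
mean_diff = statistics.mean(diffs)
print(f"mean difference: {mean_diff:+.3f}")
```

The same loop is repeated for each of the five monitored parameters.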

* Using the QC data. The easiest way to determine confidence limits for our Coulter QC plan is to compute the standard deviation of duplicate specimens for each parameter being monitored. (Figure II shows the WBC calculations.) Our laboratory uses the common ± 2 SD, or 95 per cent, confidence limits.
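One common form of the standard-deviation-of-duplicates calculation is s = sqrt(Σd²/2n), where d is the difference between paired runs and n the number of pairs. Assuming that is the formula behind Figure II, a day's 2 SD value could be computed like this (specimen values again hypothetical):

```python
import math

# One day's paired WBC runs on the two counters (illustrative numbers)
run_1 = [6.2, 7.8, 5.4, 9.1, 4.8, 6.9, 8.3, 5.9, 7.2, 6.5]
run_2 = [6.3, 7.7, 5.5, 9.0, 4.9, 7.0, 8.2, 6.0, 7.1, 6.6]

diffs = [a - b for a, b in zip(run_1, run_2)]
n = len(diffs)

# Standard deviation of duplicates: s = sqrt(sum(d^2) / 2n)
sd_duplicates = math.sqrt(sum(d * d for d in diffs) / (2 * n))
two_sd = 2 * sd_duplicates
print(f"SD of duplicates: {sd_duplicates:.4f}  2 SD: {two_sd:.4f}")
```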

We built a database over 30 consecutive days, computing and recording the standard deviation for the five parameters on 10 specimens daily, tested with both instruments. That seemed long enough to us; significant changes are unlikely to occur over a more extended period.

If unreasonable values had turned up during the 30 days, though, we would have eliminated them from the database and compensated by collecting data for a few more days. Figure III lists examples of unreasonable standard deviation values.

Frequent broad differences between the instruments indicate a possible calibration problem. The problem should be resolved before further data are recorded.

After collecting 30 reasonable 2 SD values for each parameter, we find the mean 2 SD value and the ± 2 SD around that mean. To do this, we use the formula for computing standard deviation of replicates (see Figure IV for a 10-day WBC calculation).
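Setting the chart limits from the 30-day database can be sketched as follows. The daily 2 SD values are stand-in numbers, and the ordinary sample standard deviation is used here in place of the replicate formula the article references in Figure IV:

```python
import statistics

# Thirty daily 2 SD values for one parameter (stand-in data; in practice
# these come from the 30-day side-by-side database described above)
daily_two_sd = [0.12 + 0.01 * (i % 5) for i in range(30)]

# Center line and dispersion of the charted values
mean_2sd = statistics.mean(daily_two_sd)
sd_of_values = statistics.stdev(daily_two_sd)

# Levey-Jennings limits: +/- 2 SD warning, +/- 3 SD outlier limit
upper_2sd = mean_2sd + 2 * sd_of_values
upper_3sd = mean_2sd + 3 * sd_of_values
print(f"mean {mean_2sd:.3f}  2 SD limit {upper_2sd:.3f}  3 SD limit {upper_3sd:.3f}")
```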

Once we obtain the mean standard deviation, the confidence interval around that mean, and ± 3 SD as an outlier limit, we can prepare Levey-Jennings graphs for the five Coulter parameters, charting each daily 2 SD value as a point. These graphs provide a practical means for monitoring instrument calibration and precision on a daily basis. Figure V illustrates how the graphs help us check day-to-day precision of WBCs and platelets.

When instrument calibration has been maintained and the precision between instruments is good, charted points should bounce back and forth across the mean. The closer a point is to zero, the more closely calibrated the two instruments are. Points generally remain within ± 2 SD; occasionally one may fall between ± 2 SD and ± 3 SD. We investigate trends or shifts that appear on the graph, particularly when a point falls beyond ± 3 SD, as they may indicate calibration drift or loss of precision on one or both instruments. Examining results obtained with commercial controls helps determine which instrument has a calibration or precision problem.
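The daily review rule just described reduces to a simple classification against the chart limits. A minimal sketch, assuming the mean and SD come from the 30-day database:

```python
def flag_point(value, chart_mean, chart_sd):
    """Classify one daily 2 SD point on a Levey-Jennings chart.

    chart_mean and chart_sd define the center line and the limits
    at +/- 2 SD (warning) and +/- 3 SD (outlier), per the plan above.
    """
    dev = abs(value - chart_mean)
    if dev > 3 * chart_sd:
        return "investigate: beyond 3 SD"
    if dev > 2 * chart_sd:
        return "watch: between 2 SD and 3 SD"
    return "in control"

# Hypothetical chart parameters and today's point
print(flag_point(0.21, 0.14, 0.014))   # prints "investigate: beyond 3 SD"
```

Trends and shifts (several consecutive points on one side of the mean) still need a human eye on the chart; this only screens single points.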

* Updating the QC data. We update the Levey-Jennings charts every six months. Using the last 30 to 60 data points on the graph for each parameter, we compute the standard deviation of replicates to establish the mean and the ± 2 SD and ± 3 SD limits for the next six-month period. In general, these numbers don't vary much from one update to the next. For the purpose of determining the limits, points falling outside ± 3 SD on the current chart are eliminated as outliers.
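The six-month update, with outliers beyond ± 3 SD of the current chart dropped before the new limits are set, might look like this (a sketch; the sample standard deviation stands in for the replicate formula, and all numbers are hypothetical):

```python
import statistics

def updated_limits(points, prev_mean, prev_sd):
    """Recompute the chart center and dispersion from recent points,
    excluding outliers beyond +/- 3 SD of the current chart."""
    kept = [p for p in points if abs(p - prev_mean) <= 3 * prev_sd]
    new_mean = statistics.mean(kept)
    new_sd = statistics.stdev(kept)
    # New chart limits: new_mean +/- 2*new_sd and +/- 3*new_sd
    return new_mean, new_sd

# Ten in-range points plus one outlier that gets excluded
points = [0.13, 0.15] * 5 + [0.50]
mean, sd = updated_limits(points, prev_mean=0.14, prev_sd=0.014)
print(f"new mean {mean:.3f}, new SD {sd:.4f}")
```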

Our hematology volume has continued to grow, and today we have four Coulter S Series cell counters-three in the main laboratory and another in an outpatient Stat lab at a nearby clinic. After our side-by-side quality control on the first two cell counters, the 10 QC specimens are passed along for a calibration check on the other instruments.

We have found our quality control program to be a simple and workable way to monitor the calibration and precision of more than one cell counter. Inspection agencies agree. The College of American Pathologists, for one, has accepted the approach.

The author is senior clinical technologist in quality assurance for hematopathology at the University Hospital, Ann Arbor, Mich.

Author: Kay L. Lantis
Publication: Medical Laboratory Observer
Date: Nov 1, 1987