
Tracking hematology QC with a concise visual display.

CBC quality control data are often voluminous and unusable. An ordinary magnetic scheduling board consolidates test results and keeps them in plain sight for easy assessment.

Coordinating and comparing quality control for more than one CBC analyzer is no easy chore. Our large university hospital has three such analyzers, located in the routine, Stat, and outpatient clinic laboratories. The combined QC program generates reams of data. Although the extensive paperwork is necessary for documentation, it is highly unmanageable. Without manipulation, much of this available material can't be used at all.

Our hematology department had always done the QC needed. We wrote procedures for all tests and implemented a formal training program for new hires. We investigated unusual results and assembled a team of top-notch instrument troubleshooters. The QC program yielded accurate and reproducible results, but we weren't able to handle the data efficiently. Most of the data collected dust and did little to expedite the day-to-day routine. What we needed was a quality control plan that would work for us, not against us.

Interpreting CBC quality control data is particularly difficult. Standards are lacking; so is any extended stability for commercial controls. Some laboratories favor commercial whole blood controls or calibrators for quality control. Others use moving averages (Bull's algorithm, or XB), which monitor RBC indices in consecutive patient specimens. Some labs periodically retest patient specimens, using them as secondary controls to check the day's precision and accuracy. Still others rely on inter-instrument comparison or reference methods to check instrument performance. Using two or more methods is not unusual.

A three-level check (normal, abnormal high, abnormal low) on an eight-parameter CBC (monitoring WBC, RBC, HGB, HCT, MCV, MCH, MCHC, PLT) totals 24 checks per analyzer. Since we recently brought in our third analyzer, the tally is now 72 QC checks. XB analysis, generated automatically on the larger machines, adds three to six more checks per instrument. Comparisons between instruments add at least five checks per pair of analyzers, depending on the number of parameters being monitored.
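The tally above is simple arithmetic. As a quick sketch (the counts come straight from the text; the variable names are mine):

```python
# Tallying the section's routine CBC QC checks, per the counts above.
LEVELS = 3       # normal, abnormal high, abnormal low
PARAMETERS = 8   # WBC, RBC, HGB, HCT, MCV, MCH, MCHC, PLT
ANALYZERS = 3    # routine, Stat, and outpatient clinic laboratories

per_analyzer = LEVELS * PARAMETERS        # 24 checks per analyzer
all_analyzers = per_analyzer * ANALYZERS  # 72 checks across the section

print(per_analyzer, all_analyzers)  # 24 72
```

XB and inter-instrument checks then stack on top of those 72, which is what makes the paper trail so hard to scan by eye.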

Frustration and wasted time ultimately provided the motivation to revamp the system. Whenever we had a QC problem or question, I would pull the necessary reports, peruse the results, and prepare detailed charts to track the analyzer's performance. In a particularly trying week, I might repeat this process two or even three times, duplicating earlier effort. Yet there seemed no simple way to bring these charts--culled from the computer and assorted manual records--up to date without creating even more paper and work.

Recalibration entailed examining data from several QC logs, scanning 30 to 40 pages of QC computer printouts, and reviewing one to two weeks of XB printouts. Investing all this time over and over again was extremely exasperating. The notes and charts that I forever seemed to be making inspired me to find a way to summarize the section's QC data regularly. The trick was to find a system that would allow me to study the results of all our QC methods for all our CBC instrumentation simultaneously.

The solution was staring me in the face--at least, it is now: a magnetic scheduling board. I divided a 24X36-inch board gridded with one-inch squares into four vertical sections--one for QC data on each CBC analyzer and one for inter-instrument comparison. Writing on magnetic chips with a washable marking pen, I post new data regularly. The chips, in assorted shapes and colors, visually differentiate the types of data recorded. The kind of information I post, and how often, is noted in Table 1.

A detail from the hematology QC summary board is shown in Figure 1. The headings COUP, FAST, and STP are the computer code names we use for our three CBC analyzers. We post data chronologically using the small dark-blue squares labeled "Start" to indicate the oldest posting. When a column reaches the bottom of the board, we return to the top of that column. Since some columns fill up faster than others, we use the "Start" magnets to zero in on the most recent results. I also skip a line to provide a failsafe point of demarcation.

Round magnets denote the time frame for data being posted. We use light-green squares and circles for the XB data; light blue for weekly FAST versus COUP blind duplicate comparisons; light pink for weekly STP vs. FAST blind duplicate comparisons; and white rectangles and circles for commercial control data.

There is one other posting: dark-pink magnets, which stand for instrument recalibrations (Recal). Because any instrument recalibration may influence quality control data or the agreement between the blind duplicates, it must be recorded. Today's CBC analyzers perform so well that we recalibrate every quarter or so. The board keeps us alert to any developing trends that may indicate the need to recalibrate.

* Across the board. We easily determine the values to post by using a weekly computer printout that includes the calculated mean of our QC results and the manufacturer's assay mean and acceptable range, which we enter into the computer for each new lot number of controls. Here's a rundown of the data displayed in Figure 1.

The first posting in the COUP area is the lab section's commercial control data for the week of July 4-11, 1989. (We post these findings on Tuesdays.) The mean WBC for the control's low level (L) exactly matched the manufacturer's assay mean (L = 0). Our mean for normal level (N) was 0.1 higher than the assay mean (N = +0.1). The mean for high level (H) was 0.7 higher than the manufacturer's assay mean (H = +0.7)--the white blood count for this level was approximately 28,000.

The XB data are the second posting in the COUP section. These show the mean per cent difference from target value for several parameters on 400 patient specimens. (We call these 20 batches of 20 specimens each our "20-20 report.") When the 20th specimen (of the 20th consecutive batch) goes through the analyzer, the computer screen flashes a signal instructing the technologist to print the report, which includes all calculations for that batch and the previous 19. Since the analyzer does not provide cumulative data, we hand calculate the means for the 20-20 reports in about five minutes. We process 400 specimens every two or three days and post the updated results as soon as the 20-20 reports are in hand.
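For readers curious about the shape of that hand calculation, here is a minimal sketch: average each parameter's per-batch mean per cent difference from target across the accumulated batches. The batch figures below are invented for illustration, not taken from our reports.

```python
# Averaging each parameter's mean per cent difference from target
# across the accumulated 20-20 batches. All numbers are illustrative.
batch_means = {
    "MCV":  [0.4, -0.2, 0.1],    # per-batch mean % difference from target
    "MCHC": [-1.0, -0.8, -1.2],
}
report = {param: sum(vals) / len(vals) for param, vals in batch_means.items()}
print(report)
```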

The manufacturer considers anything less than a 3 per cent difference from the target value acceptable. In our experience, however, troubleshooting--and possibly recalibration--is indicated when a parameter's difference from target exceeds 1 to 2 per cent for more than two successive 20-20 batches.
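That decision rule can be sketched in a few lines. The function name and the exact cutoff here are illustrative choices, not part of our written procedure:

```python
def needs_review(pct_diffs, limit=2.0, run=2):
    """Flag when the absolute per cent difference from target exceeds
    `limit` for more than `run` successive 20-20 batches.
    `pct_diffs` is oldest-first; names and thresholds are illustrative."""
    streak = 0
    for d in pct_diffs:
        streak = streak + 1 if abs(d) > limit else 0
        if streak > run:
            return True
    return False

print(needs_review([0.5, 2.4, 2.6, 2.8]))  # True: three straight batches over 2%
print(needs_review([0.5, 2.4, 1.1, 2.8]))  # False: the streak was broken
```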

A simple calculation of the MCHC XB data converts the per cent difference from target value into the actual mean MCHC. This conversion adds a touch of reality to a very closely monitored index. Although the analyzers provide data for this RBC index, I do not post XB data for MCH, simply for lack of room. I do review all data in the 20-20 report, though, and keep a copy on file.
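The conversion itself is one line of arithmetic. A sketch, assuming an illustrative XB target of 34.0 g/dL for MCHC (the target value is my example, not a quoted figure):

```python
def mean_from_pct_diff(target, pct_diff):
    """Recover the actual mean from a per cent difference from target."""
    return target * (1 + pct_diff / 100.0)

# A -1.5% XB difference against an assumed 34.0 g/dL MCHC target:
print(round(mean_from_pct_diff(34.0, -1.5), 2))  # 33.49
```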

The data under FAST and STP present the same types of information, although there is a minor variation in the way we handle XB data for the STP. Since the clinic's CBC workload is still somewhat low, it takes up to two weeks to accumulate the necessary 20 batches of 20 specimens. To keep the data as current as possible, we generate a report every Friday (for however many batches have been processed) and then do the calculations. The circular magnets under STP show the date for the XB posting and specify the number of batches. One week it might be (N = 10); the next, (N = 13). Note that the COUP, FAST, and STP data are independent. Even so, we find that comparing commercial control data for the same time period helps us assess instrument calibration.

The second posting under STP notes the recalibration of HGB done on July 11 (7/11). This was simply a matter of fine-tuning the hemoglobin parameter in response to slightly low XB and commercial control values. True, the variation was only 0.1--and the clinic's analyzer was practically brand-new--but we like to see the instruments agree.

The first posting under Comparisons gives the results of the blind duplicate comparison for two of our CBC analyzers--FAST in the main lab and COUP in the Stat lab--from July 3 through 7, 1989. The information for the comparisons comes directly from the computer quality control report. The QC function in our computer will define blind duplicate files by using patient specimens as "controls."

Because the computer compares only two instruments at a time, evaluating all the analyzers requires running separate blind duplicates and making the comparison ourselves. Therefore, we select random patient specimens originally tested on the COUP, retest them on the FAST, and compare the results. On Monday through Friday we pull three specimens tested in the Stat lab and send them up to the FAST analyzer in the main lab. The technologist retests the specimens and files the results. The following Monday, I calculate the average differences of the 15 control specimens for each parameter and post my findings on the board.
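The Monday arithmetic is just a per-parameter mean of the retest differences. A minimal sketch, with invented specimen values (three pairs shown rather than the full 15):

```python
def average_difference(pairs):
    """Mean FAST-minus-COUP difference for one parameter.
    `pairs` holds (fast_value, coup_value) per retested specimen;
    the values below are made up for illustration."""
    return sum(fast - coup for fast, coup in pairs) / len(pairs)

wbc_pairs = [(7.1, 7.2), (5.4, 5.5), (9.8, 9.9)]  # WBC in thousands (K)
print(round(average_difference(wbc_pairs), 2))  # -0.1
```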

As the WBC column under Comparisons in Figure 1 shows, the FAST analyzer averaged 0.1(K) lower for blind duplicate specimens initially tested on the COUP. The average difference for RBC was -0.03M. I arbitrarily chose the main lab's CBC analyzer as the reference instrument. Thus, differences are expressed in terms of the FAST analyzer.

Although the posting shows that the FAST averaged 0.03M less than the COUP for RBC, it's all a matter of interpretation, because the COUP RBC also averaged 0.03M higher than the FAST. To save space, I post the difference in values rather than posting both values.

The platelet count comparison--the last column--is given as a percentage, whereas all other blind duplicate values are expressed in absolute numbers. Our computer's flexibility allows us this distinction. Since platelet counts may go to 1 million or more, it is unrealistic to expect results on random specimens to agree within, say, 10. A percentage gives us a much better idea of what's really going on.
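The platelet comparison can be sketched the same way, switching from absolute to relative differences (the counts below are again invented):

```python
def mean_pct_difference(pairs):
    """Average per-specimen per cent difference (FAST vs. COUP).
    A relative measure suits platelets, whose counts span a wide range."""
    pcts = [100.0 * (fast - coup) / coup for fast, coup in pairs]
    return sum(pcts) / len(pcts)

plt_pairs = [(250, 245), (480, 470), (95, 97)]  # platelet counts, in thousands
print(round(mean_pct_difference(plt_pairs), 1))  # 0.7
```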

The second posting under Comparisons compares blind duplicate specimens tested on the STP and FAST analyzers. Using magnets of different colors for the two inter-instrument comparisons makes it easier to review the data. The location of the Start magnet in the Comparison section illustrates the way in which we wrap data around the board to maintain the chronologic order of results.

I post recalibrations ("Recal" on the magnets) in this column as well as in the section representing values from the analyzer being evaluated. This duplication makes it easier and faster to review the information. Otherwise, I would have to scan the previous columns for possible recalibrations before I could properly evaluate the comparisons. This posting must, of course, identify which analyzer has been recalibrated.

* Not a cure-all. My summary board, although a great time saver, is not a QC cure-all: It cannot make effective quality control decisions by itself. It does, however, help me make my own decisions more efficiently. I've been using the board for more than four years now, and I love it. In fact, when we began setting up the clinic lab, one of my first official acts was to order a larger magnetic board to accommodate data from the third CBC analyzer.

That was some months ago, and the board has absorbed the addition with little fuss. That's the beauty of this system: You can always add another column if you need it--and if you have room for a larger board. Once the board has been set up, maintenance is minimal--but you must be diligent. I update the board at least three times a week. It takes no more than 10 minutes to review the new data and post the numbers.

This time is definitely well spent. The board provides the entire hematology staff with continuous, up-to-date quality control information and greatly reduces the aggravation quotient of CBC monitoring. For my part, I enjoy having masses of previously unmanageable data available at a glance by simply looking up from my desk. I call the board my "armchair quality control" and expect it to grace my office wall for a long, long time.

[Tabular Data Omitted]
COPYRIGHT 1989 Nelson Publishing
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Author: Yapit, Martha K.
Publication: Medical Laboratory Observer
Date: Nov 1, 1989

