
A method for optimizing and validating institution-specific flagging criteria for automated cell counters.

The accurate and timely delivery of differential white blood cell (WBC) count results by the hematology laboratory is crucial in many clinical settings, including acute infections, hematologic malignancies, and the administration of chemotherapy. Automated cell counters were introduced by Wallace Coulter in 1953 and have since replaced the microscope as the instrument of choice for most peripheral blood differential WBC counts. The advantages of automated cell counters over the microscope include faster turnaround times, significantly lower labor costs, lack of interobserver variation, and results with greater statistical validity. (1)

Modern automated cell counters provide a reliable differential WBC count for samples that are within reference range and for those that exhibit only quantitative abnormalities. Qualitatively abnormal sample results, such as those with abnormal or immature cells, still require the preparation of a slide and microscopic analysis. (2) The instruments use flags (electronic or printed alerts) to notify the user that the automated differential WBC count may not be correct and requires review. The factors that trigger these flags vary with the underlying technology of the cell counter; in most cases, the flags have some level of specificity for the presence of certain abnormal findings. For example, instruments that use light scatter and fluorescence will flag samples that contain cell populations in certain areas of their scattergrams with a blast flag. The presence of one or more flags does not mean that a specific abnormality, or indeed any abnormality, is present; it indicates only an increased probability of an abnormality, which can be excluded or confirmed only by slide review.

The presence of a flag on a sample, therefore, usually prompts the laboratory to prepare and review a slide with a microscope or a digital imaging device. (3) If no qualitative WBC abnormalities are identified on the slide and the quantitative distribution of the normal cell populations on the slide corresponds to the automated differential, the results from the instrument may be released. Otherwise, a full microscopic differential WBC count may be indicated.

The percentage of differential WBC count samples that are flagged by an automated cell counter and submitted for microscopic review is known as the review rate. Factors that influence the review rate include the patient population served (eg, the review rate will be higher in patients with hematologic or oncologic disease because of the immature or abnormal WBCs often observed in these patients), the type of automated cell counter, and the settings of the flagging thresholds of the automated cell counter. The latter variable is most easily influenced by individual laboratories.

Most automated cell counters are installed with factory-set or factory-recommended settings for the thresholds. It is then up to the individual laboratories to adjust the thresholds to the clinical needs of their patients and clinical staff. These adjustments involve a careful balancing of the potential risk of missing any abnormal cells (which favors thresholds set at very low values) and the increase in turnaround time, manual labor, and cost caused by an increase in the review rate (which favors thresholds at high values). In many cases, laboratories opt either to use the factory-set defaults or to make adjustments over time by a process of trial and error without a clear understanding of the exact consequences of the adjustments.

On the Sysmex XE-2100 and XE-5000 (hereafter, the XE-2100/5000; Sysmex, Kobe, Japan) analyzers, more than 30 different flags are used to indicate the possible presence of qualitative and quantitative abnormalities of red blood cells, WBCs, or platelets in a sample. Our study investigated 5 of those flags that indicate the possible presence of abnormal WBCs and that have user-adjustable thresholds. (4,5) Using those 5 flags as a model system, we developed a systematic method for optimizing the thresholds at which the cell counter's flags are triggered.


MATERIALS AND METHODS

The Sysmex XE-2100/5000 Line of Automated Cell Counters

The Sysmex XE-2100/5000 line of automated cell counters establishes a 5-part differential WBC count using an optical technique that includes forward scatter, side scatter, and side fluorescence as well as a method of electric impedance. (6,7) An upgraded software package (XE-IG Master, Sysmex) allows users to obtain an extended differential WBC count with the addition of an immature granulocyte count, consisting of promyelocytes, myelocytes, and metamyelocytes, to the standard 5-part differential test. (8,9)

The 5 WBC-specific flags with user-definable thresholds that we used as our model system are called (1) blasts, (2) immature granulocytes, (3) left shift, (4) atypical lymphocytes, and (5) abnormal lymphocytes/lymphoblasts flags. These flags are generated by patterns in the scattergrams that are typical for certain abnormalities. We did not study WBC count flags whose triggering thresholds were not adjustable by the user or flags that indicate possible red blood cell or platelet abnormalities.

Manual Differential WBC Counts

Blood smears were prepared using the SP-1000i slidemaker-stainer (Sysmex; adult samples) or using the push-pull method with a spreader slide (pediatric samples) and stained with Wright-Giemsa. Manual differential WBC counts using the standard microscopic technique followed the laboratory's standard operating procedure, which is based on the Clinical and Laboratory Standards Institute guidelines. (10) One hundred WBCs were counted for each manual differential WBC count test.

In our laboratory, slides prepared from flagged samples are scanned by technologists for the presence of one or more abnormal cell populations, as summarized in Table 1. These criteria are based on the recommendations of the International Consensus Group for Hematology Review and modified based on the clinical needs of our institution. (11) The modifications were based on an informal survey of the practices of other laboratories, a review of the literature, consultation with the clinical staff, and the experience of our laboratory leadership. If no abnormal cell populations are found and the instrument's differential results appear representative of the relative distribution of cells on the slide, the automated differential WBC count results are released; if one or more of the conditions listed in Table 1 is present, a full microscopic differential WBC count is performed. We used these criteria to identify cases as true-positives (one or more of the criteria was met, and a microscopic differential was necessary) or false-positives (none of the criteria was met, a microscopic differential was not necessary, and the automated differential WBC count result was released).
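In code, the Table 1 decision rule reduces to a set of strict threshold comparisons. A minimal sketch; the thresholds are those of Table 1, but the category names are hypothetical identifiers, not the laboratory's actual schema:

```python
# Thresholds from Table 1, in percent of the manual differential count.
# Category names are hypothetical identifiers, not the laboratory's schema.
CRITERIA = {
    "blasts_plasma_hyperseg": 1,   # blasts, plasma cells, hypersegmented neutrophils
    "meta_myelocytes": 3,          # metamyelocytes and/or myelocytes
    "atypical_lymphocytes": 5,
    "bands": 7,
    "nrbc": 1,                     # nucleated red blood cells
}

def is_true_positive(differential):
    """Return True if any category strictly exceeds its Table 1 threshold."""
    return any(differential.get(cat, 0) > limit for cat, limit in CRITERIA.items())
```

Under this rule a sample with 8% bands qualifies as a true-positive, whereas one with exactly 7% bands does not, because the Table 1 criteria are strict inequalities.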

Study Samples

The study was performed in the Core Laboratory of the Columbia University Medical Center campus of the New York-Presbyterian Hospital (New York), a tertiary care, academic medical center serving a large inpatient and outpatient population, including adult and pediatric hematology-oncology services. The study samples were routine patient specimens that had a differential WBC count ordered by the clinician and a microscopic slide reviewed because one or more of the 5 flags of interest was triggered at the factory default settings.

Statistical Analysis

Statistical analysis was performed using Excel software (Microsoft, Redmond, Washington). Receiver operating characteristic curves were constructed using Stata 10.0 (StataCorp, College Station, Texas) software. The optimal settings for the flagging thresholds were described using positive predictive values (PPVs) and efficiency, and they were derived using the maximized Youden index (YI). The PPV was defined as

PPV = True Positives/All Positives

The YI is a function of both sensitivity and specificity and is used as a summary of the diagnostic effectiveness of an assay at various cutoffs. (12) The threshold for an assay when the YI is maximized, therefore, represents the best performance profile of a test, which is the largest vertical distance from the diagonal to the receiver operating characteristic curve. The YI can be seen as a simplified measure of area under the receiver operating characteristic curve and a method of minimizing regret in medical decision making. (13,14) The YI assumes a value between 0 and 1, with 1 representing the most effective cutoff, and is calculated with the equation

YI = Sensitivity + Specificity - 1

The maximized YI was used to select the optimized thresholds of the 5 flags, both for their specific abnormal findings and for the presence of any abnormalities in the WBC counts. Efficiency is defined as

Efficiency = (True-Positives + True-Negatives) / All Cases

Efficiency can best be understood as the proportion of samples that an assay (or threshold) correctly classifies as disease or nondisease; equivalently, it is the probability that any given result, positive or negative, is correct. As such, efficiency is a useful measure for comparing 2 assays or various thresholds. (15)
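The three measures defined above follow directly from a 2 × 2 classification table. A minimal Python sketch; the counts shown are hypothetical, not data from this study:

```python
def ppv(tp, fp):
    """Positive predictive value: fraction of flagged samples that truly require review."""
    return tp / (tp + fp)

def youden_index(tp, fp, tn, fn):
    """YI = sensitivity + specificity - 1; 1 marks the most effective cutoff."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

def efficiency(tp, fp, tn, fn):
    """Proportion of all samples that the flag classifies correctly."""
    return (tp + tn) / (tp + fp + tn + fn)

# Hypothetical counts for one flag at one threshold:
tp, fp, tn, fn = 40, 60, 250, 10
print(round(ppv(tp, fp), 2))                    # 0.4
print(round(youden_index(tp, fp, tn, fn), 2))   # 0.61
print(round(efficiency(tp, fp, tn, fn), 2))     # 0.81
```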

Study Design.--Optimization Set.--A group of 502 specimens that had been flagged by one or more of the 5 flags of interest when thresholds were set at factory-default levels was used as the optimization set. The numeric value of each of the 5 differential WBC count-specific flag variables was extracted from the automated cell counter and incorporated into an Excel spreadsheet.


Optimization began by raising the cutoff of each flag from the factory default setting in increments of 10 units and calculating the YI of each level for the identification of the specific abnormality denoted by the flag (eg, the blast flag for blasts). The threshold yielding the highest YI was chosen for each flag.
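The incremental search just described amounts to a one-dimensional grid search over cutoffs. The flag values, outcome labels, and function name below are hypothetical illustrations of the procedure, not the laboratory's actual software:

```python
def best_threshold(flag_values, is_abnormal, start, stop, step=10):
    """Scan cutoffs upward from the factory default in fixed increments and
    return the (cutoff, YI) pair that maximizes the Youden index."""
    best = (None, -1.0)
    for cutoff in range(start, stop + 1, step):
        tp = sum(v >= cutoff and a for v, a in zip(flag_values, is_abnormal))
        fp = sum(v >= cutoff and not a for v, a in zip(flag_values, is_abnormal))
        fn = sum(v < cutoff and a for v, a in zip(flag_values, is_abnormal))
        tn = sum(v < cutoff and not a for v, a in zip(flag_values, is_abnormal))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        yi = sens + spec - 1
        if yi > best[1]:
            best = (cutoff, yi)
    return best

# Hypothetical flag readings and slide-review outcomes for 8 samples:
values = [120, 250, 90, 300, 180, 60, 270, 140]
labels = [False, True, False, True, True, False, True, False]
cutoff, yi = best_threshold(values, labels, start=100, stop=300)
```

With these toy data, the search settles on the lowest cutoff that cleanly separates the two groups; in practice, ties and near-ties are where clinical judgment enters.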

The Figure depicts graphically the process of choosing a cutoff that maximizes the YI using the receiver operating characteristic curve of the abnormal lymphocyte/lymphoblast flag for its abnormality (atypical lymphocytes or lymphoblasts). Note that although sensitivity of the flag decreases with optimization, the maximized YI represents the optimal balance between true-positives and false-positives.

We next varied the threshold of each flag from the previously optimized point, in 10-unit increments, to find the threshold corresponding to the highest YI for each flag in the presence of any abnormality. The 5 flags were adjusted in the following order: blasts, abnormal lymphocytes/lymphoblasts, immature granulocytes, atypical lymphocytes, and left shift. In addition to maximizing the YI, we also used clinical judgment in this step. For example, we made the clinical decision that we were willing to accept a significant number of missed bands and some missed myelocytes and metamyelocytes, but we would not tolerate missing a single blast.

Validation Set.--We applied the optimized flagging thresholds derived from the optimization set to a new, separate set of 378 samples that had been flagged by at least one of the 5 flags set at factory-default levels. The PPV, the efficiency, and the resultant review rate were calculated for the validation set using the optimized criteria. Additionally, all cases that would have been missed by our new criteria were enumerated and followed up for evaluation.
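The validation step can be sketched as follows: fixed, previously optimized thresholds are applied to a held-out sample set, and the PPV and review rate are recomputed. The threshold values are those reported in Table 3; the record layout and field names are hypothetical:

```python
# Optimized thresholds (Table 3); field names are hypothetical.
OPTIMIZED = {"blast": 200, "immature_granulocyte": 250, "left_shift": 200,
             "atypical_lymphocyte": 150, "abnormal_lymphocyte": 200}

def needs_review(sample):
    """A sample is flagged for slide review if any value meets its threshold."""
    return any(sample[flag] >= cutoff for flag, cutoff in OPTIMIZED.items())

def validate(samples, total_cbc_count):
    """Return (PPV, review rate) for the flagged subset of a held-out set."""
    reviews = [s for s in samples if needs_review(s)]
    tp = sum(s["truly_abnormal"] for s in reviews)
    ppv = tp / len(reviews) if reviews else 0.0
    review_rate = len(reviews) / total_cbc_count
    return ppv, review_rate
```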

Additional Blast Cases.--To increase the number of cases with actual leukemic blasts in the differential WBC count tests, we collected additional samples, flagged at factory-set criteria, which were ultimately confirmed by manual differential tests to harbor more than 1% blasts. A total of 14 cases were recruited, and the values of each of the 5 flags were recorded to assess whether the case would have been detected by optimized criteria.


RESULTS

Performance of the 5 Flags at Factory Default Settings

The abnormality-specific PPV of each of the 5 flags was between 5.4% (PPV of the blast flag for the presence of blasts) and 33% (PPV of the immature granulocyte flag for the presence of myelocytes and/or metamyelocytes) (Table 2). When we considered the PPV of each flag for any abnormality, the PPV ranged from 8.6% (PPV of the abnormal lymphocyte/lymphoblast flag for any WBC abnormal finding on manual differential) to 64% (PPV for the blast flag for any WBC abnormal finding). The combined PPV for any abnormal finding of any one or more of the 5 flags at factory-default thresholds was 23%. Sensitivity and negative predictive values could not be assessed because our sampling scheme included no cases that were negative at factory-default settings.

Optimized Thresholds

The thresholds for the 5 flags were optimized for detection of their specific abnormalities, as well as for detection of any abnormality, as described in "Materials and Methods," to the values shown in Table 3.

The results of optimization on the abnormality-specific PPV of each flag and the PPV of each flag for any abnormality are given in Table 2 for comparison to factory-default thresholds. The abnormality-specific PPV of all but the atypical lymphocyte flag was improved, whereas all overall PPVs were improved by optimization. The overall PPV of the 5 flags for any abnormality increased to 31%, and the efficiency of flagging overall was 52%. Additionally, the flagging efficiencies of each flag for its specific abnormality and for any abnormality were calculated, and these data are also summarized in Table 2.

Validation of the Optimized Settings

A second, independent set of 378 samples was used to validate the optimized settings. Application of the optimized thresholds instead of the factory-default settings to these samples would have reduced the number of false-positive samples from 275 to 106 (Table 4). The overall PPV of the 5 flags for any abnormality in the differential white blood cell count increased from 27% at factory-default settings to 37% with the optimized thresholds.

Effects of the Adjustment of the Flagging Thresholds on the Review Rate

The review rate in the validation set for the 5 flags studied was 6.5% of all complete blood cell counts with differentials when the factory-default thresholds were used. When the optimized thresholds were applied, the review rate for the 5 flags of interest dropped to 2.9%. These data are summarized in Table 4.

Clinical Effect of False-Negative Cases

The breakdown of abnormalities observed in the manual differential WBC counts of the validation set (n = 378) was 4 samples (1.1%) with blasts, 48 samples (12.7%) with more than 3% myelocytes and/or metamyelocytes, 51 samples (13.5%) with more than 5% atypical lymphocytes, and 50 samples (13.2%) with more than 7% bands. Many of these samples had more than one abnormality; 103 different samples (27.2%) had one or more abnormalities. With the optimized thresholds, 41 samples (10.8%) in the validation set that had been flagged by the factory-default settings and truly harbored significant pathology were missed (false-negatives). No cases of blasts were missed. The abnormalities present in these cases are summarized in Table 5.

Additional Blast Cases

Fourteen additional cases flagged at factory-default settings and proven, by manual differential WBC count, to harbor blast cells were analyzed. All 14 cases were detected by our optimized criteria; 12 (86%) by the blast flag and 2 (14%) by a combination of the other 4 user-adjustable flags. Optimization did not result in any cases of missed blasts.


COMMENT

In this study, we have described a method for optimizing flagging thresholds on an automated cell counter that allows laboratories to safely reduce the number of differential WBC counts that require preparation of a slide (the review rate). Through a combination of a maximized YI and clinical judgment, we were able to adjust the thresholds of 5 flags on the Sysmex XE-2100/5000 line of hematology analyzers and reduce the review rate from those 5 flags from 6.5% to 2.9%.

Overall, the optimized thresholds resulted in an improved PPV of each flag, either for its particular abnormality or for any abnormality in the differential WBC count. The exception was the atypical lymphocyte flag, whose PPV decreased slightly in the optimization process. The generally low PPV of the blast flags (blasts and abnormal lymphocytes/lymphoblasts) for their specific abnormalities at both factory-default and optimized settings is a necessary consequence of the clinical need to detect all cases of blasts, which requires relatively nonspecific flagging. Interestingly, although each flag is generally a poor predictor of its specific abnormality (ie, the abnormality after which it is named), the flags (with the exception of the atypical lymphocyte flag) have good PPVs for the detection of any abnormality.

Comparisons of the efficiency rates of flagging reported in the literature on automated cell counters are inherently difficult because of different definitions of clinically significant abnormalities and true-negatives and the use of different types and models of automated cell counters in different patient populations. With these limitations, the efficiency rates of the differential WBC count flags in our study are very similar to those reported by Lacombe and colleagues (16) for the Cobas Argos 5 Diff (ABX/Roche Hematology Division, Montpellier, France) and the Technicon H2 (Technicon Instruments, Tarrytown, New York), and by Ruzicka and coworkers (17) for Sysmex XE-2100 instruments.

Previous studies (18,19) have shown that flagging sensitivity is dependent on the total WBC count, with a lower sensitivity in leukopenic samples and a lower specificity in samples with WBC counts greater than 10,000/µL. However, another study (17) showed only a mild effect of the WBC count on overall efficiency. We did not examine this potential confounder.

Although the adjusted thresholds afforded a decrease in the number of false-positive flags, there were cases with abnormalities that were missed by our new criteria, which would have been flagged by factory-default settings. Most of the missed cases had either more than 5% atypical lymphocytes (n = 26) or more than 7% bands (n = 9). Studies indicate that "the band count is a nonspecific, inaccurate, and imprecise laboratory test" with a review of the literature providing "little support for clinical utility of the band count in patients over 3 months of age." (20(p101)) As our laboratory routinely prepares slides for all newborns, we were not concerned that underreporting of band forms because of changes in our flagging thresholds would have an adverse clinical effect.

The difficulty in correctly classifying lymphocyte findings either as within reference range or as atypical was pointed out in 1977 by Koepke. (21) Using data based on proficiency-sample glass slides sent to more than 4000 laboratories, he reported a coefficient of variation of 88% for the atypical lymphocyte count. (21) More recently, van der Meer and colleagues (22,23) sent PowerPoint presentations of WBCs to 671 technologists at 114 hospital laboratories. That study (22) also found significant interobserver variability in the classification of lymphocytes as atypical or within reference range. Furthermore, when the same cell was shown twice in the PowerPoint presentation, it was classified by 210 of the 617 observers (34%) as a different subtype. (22,23) Because of the limited reproducibility of the atypical lymphocyte count, we felt that missing some cases with increased numbers of atypical lymphocytes was acceptable.

Although no cases of more than 1% blasts were missed by our optimized settings in our validation set, we were concerned that we did not have a sufficient number of cases to adequately test the new criteria. For that reason, an additional 14 cases, flagged by factory-default criteria and confirmed to harbor blasts by manual differential, were analyzed. No cases of blasts would have been missed using optimized criteria, although the optimized blast flag only detected 12 of the 14 cases. The additional 2 cases were detected by a combination of the 4 remaining flags. We conclude that our optimized thresholds are, at minimum, no worse at the detection of blasts than the factory-default settings.

We were concerned, however, about the 10 cases of increased myeloid progenitors missed by our optimized criteria. The percentage of immature granulocytes is a reproducible parameter and is important in the diagnosis of many disease states. (9) Review of patient histories showed that 6 of the missed cases represented acute infectious processes (cryptococcal meningitis, methicillin-susceptible Staphylococcus aureus bacteremia, infectious diarrheal disease, pediatric sepsis, and 2 cases of urinary tract infection in immunocompromised hosts). Follow-up on 3 additional samples revealed 1 patient who was recovering from extensive excision of a facial squamous cell carcinoma, 1 patient with sickle-cell disease and pain crisis, and 1 patient with a new onset pericarditis of hitherto undefined etiology. The final case was one of previously diagnosed chronic myeloid leukemia. The accurate enumeration of myeloid precursors was clinically important in these cases.

The goal of our protocol was to improve the PPV of our system of flags, thereby reducing the number of manual differential WBC counts performed on false-positive specimens. The number of missed immature myeloid cells is a consequence and limitation of our method of optimization. Use of the maximized YI optimizes the relationship between true-positives and false-positives, thereby improving the PPV. However, that improved PPV came at the expense of a decrease in the negative predictive value (NPV), particularly in the area of immature myeloid precursors, such as myelocytes and metamyelocytes. Our analysis was limited to improving the PPV because our data set included only samples flagged at factory-default settings, thereby precluding estimation of a true baseline measure of the NPV.

We considered reducing our immature granulocyte flag back to the factory setting, thereby reducing the number of missed myeloid progenitors from 10 to 5. Doing so would increase the number of false-positives in our sample from 106 to 113 and the review rate from 2.9% to 3.1%, not a substantial increase. Four of the other 5 missed cases would be detected only by reducing the left shift flag to the factory-default setting. However, doing so would increase our false-positive count from 106 to 148 and, consequently, our review rate to 4%, while decreasing the PPV to 34.1% and the efficiency to 54%. The last missed case would have been detected only by decreasing the blast flag to 110.
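The trade-off arithmetic in the preceding paragraph can be checked from the reported figures. A back-of-the-envelope sketch; note that the total number of CBCs with differentials is inferred from the reported review rate, not stated directly in the text:

```python
# Back-of-the-envelope check of the trade-off figures discussed above.
# The total number of CBCs with differentials is not reported directly;
# it is inferred from the 378 flagged validation samples and the 6.5%
# factory-default review rate.
flagged_factory = 378
factory_review_rate = 0.065
total_cbcs = flagged_factory / factory_review_rate   # ~5815 implied CBCs

# Optimized settings: 106 false-positives at an overall PPV of 37%.
fp_opt = 106
ppv_opt = 0.37
tp_opt = fp_opt * ppv_opt / (1 - ppv_opt)            # implied true-positives
review_rate_opt = (tp_opt + fp_opt) / total_cbcs     # ~0.029, ie, 2.9%
```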

An additional mechanism by which missed myeloid progenitors might be avoided is the concurrent introduction of the XE-IG Master software with the new flagging system. That software allows the reporting of a parameter called immature granulocytes (promyelocytes, myelocytes, and metamyelocytes) directly from the analyzer, without a slide review. (9,24) Additional studies are required to validate the use of that technology with our optimized flagging settings.

The optimized flagging criteria described here reduce the review rate from the 5 flags studied from 6.5% to 2.9%. Further studies will be necessary to similarly decrease the review rates from other user-adjustable flags on our analyzers.

In conclusion, we have developed a method for optimizing the thresholds for quantitative flags on automated cell counters and for reducing review rates. We improved the overall PPV of each flag for any abnormal finding and achieved flagging efficiencies similar to those of studies using other analyzers. Although this study was performed on the Sysmex XE-2100/5000 line of analyzers, the overall approach to optimization can be used on any hematology analyzer that uses quantitative flagging criteria.

The authors thank Barbara J. Connell, MS, MT SH(ASCP), for advice and careful reading of the manuscript. This work is dedicated to the memory of Daniel J. Fink, MD, MPH, who initiated the research that culminated in this article.


References

(1.) Pierre RV. Peripheral blood film review: the demise of the eyecount leukocyte differential. Clin Lab Med. 2002;22(1):279-297.

(2.) Hoyer JD. Leukocyte differential. Mayo Clin Proc. 1993;68(10):1027-1028.

(3.) Kratz A, Bengtsson HI, Casey JE, et al. Performance evaluation of the CellaVision DM96 system: WBC differentials by automated digital image analysis supported by an artificial neural network. Am J Clin Pathol. 2005;124(5):770-781.

(4.) Briggs C, Harrison P, Grant D, Staves J, Chavada N, Machin SJ. Performance evaluation of the Sysmex XE-2100 automated haematology analyser. Sysmex J Int. 1999;9(2):113-119.

(5.) Gould N, Connell B, Dyer K, Richmond T. Performance evaluation of the Sysmex XE-2100, automated hematology analyzer. Sysmex J Int. 1999;9(2):120-128.

(6.) Fujimoto K. Principles of measurement in hematology analyzers manufactured by Sysmex Corporation. Sysmex J Int. 1999;9(1):31-40.

(7.) Hiroyuki I. Overview of automated hematology analyzer XE-2100. Sysmex J Int. 1999;9(1):58-64.

(8.) Briggs C, Kunka S, Fujimoto H, Hamaguchi Y, Davis BH, Machin SJ. Evaluation of immature granulocyte counts by the XE-IG master: upgraded software for the XE-2100 automated hematology analyzer. Lab Hematol. 2003;9(3):117-124.

(9.) Ansari-Lari MA, Kickler TS, Borowitz MJ. Immature granulocyte measurement using the Sysmex XE-2100. Relationship to infection and sepsis. Am J Clin Pathol. 2003;120(5):795-799.

(10.) National Committee for Clinical Laboratory Standards. Reference Leukocyte Differential Count (Proportional) and Evaluation of Instrumental Methods. Villanova, PA: NCCLS; 1992. Approved NCCLS document H20-A.

(11.) Barnes PW, McFadden SL, Machin SJ, Simson E. The international consensus group for hematology review: suggested criteria for action following automated CBC and WBC differential analysis. Lab Hematol. 2005;11(2):83-90.

(12.) Schisterman EF, Perkins NJ, Liu A, Bondell H. Optimal cut-point and its corresponding Youden Index to discriminate individuals using pooled blood samples. Epidemiology. 2005;16(1):73-81.

(13.) Hilden J, Glasziou P. Regret graphs, diagnostic uncertainty and Youden's Index. Stat Med. 1996;15(10):969-986.

(14.) Pekkanen J, Pearce N. Defining asthma in epidemiological studies. Eur Respir J. 1999;14(4):951-957.

(15.) John R, Lifshitz MR, Jhang J, Fink DJ. Post-analysis: medical decisionmaking. In: McPherson RA, Pincus MR, eds. Henry's Clinical Diagnosis and Management by Laboratory Methods. 21st ed. Philadelphia, PA: Elsevier; 2007:68-75.

(16.) Lacombe F, Cazaux N, Briais A, et al. Evaluation of the leukocyte differential flags on an hematologic analyzer: the Cobas Argos 5 Diff. Am J Clin Pathol. 1995;104(5):495-502.

(17.) Ruzicka K, Veitl M, Thalhammer-Scherrer R, Schwarzinger I. The new hematology analyzer Sysmex XE-2100: performance evaluation of a novel white blood cell differential technology. Arch Pathol Lab Med. 2001;125(3):391-396.

(18.) Korninger L, Mustafa G, Schwarzinger I. The haematology analyser SF-3000: performance of the automated white blood cell differential count in comparison to the haematology analyser NE-1500. Clin Lab Haematol. 1998;20(2):81-86.

(19.) Thalhammer-Scherrer R, Knobl P, Korninger L, Schwarzinger I. Automated five-part white blood cell differential counts: efficiency of software-generated white blood cell suspect flags of the hematology analyzers Sysmex SE-9000, Sysmex NE-8000, and Coulter STKS. Arch Pathol Lab Med. 1997;121(6):573-577.

(20.) Cornbleet PJ. Clinical utility of the band count. Clin Lab Med. 2002;22(1):101-136.

(21.) Koepke JA. A delineation of performance criteria for the differentiation of leukocytes. Am J Clin Pathol. 1977;68(1)(suppl):202-206.

(22.) van der Meer W, Scott CS, de Keijzer MH. Automated flagging influences the inconsistency and bias of band cell and atypical lymphocyte morphological differentials. Clin Chem Lab Med. 2004;42(4):371-377.

(23.) van der Meer W, van Gelder W, de Keijzer R, Willems H. The divergent morphological classification of variant lymphocytes in blood smears. J Clin Pathol. 2007;60(7):838-839.

(24.) Briggs C, Kunka S, Pennaneach C, Forbes L, Machin SJ. Performance evaluation of a new compact hematology analyzer, the Sysmex pocH-100i. Lab Hematol. 2003;9(4):225-233.

Anthony Sireci, MD; Robert Schlaberg, MD, MPH; Alexander Kratz, MD, PhD, MPH

Accepted for publication January 25, 2010.

From the Department of Pathology, Columbia University College of Physicians and Surgeons, New York, New York; and the Clinical Laboratory Service, New York-Presbyterian Hospital, New York. Dr Schlaberg is now with the Department of Pathology, University of Utah, Salt Lake City.

The authors have no relevant financial interest in the products or companies described in this article.

Reprints: Alexander Kratz, MD, PhD, MPH, Columbia University Medical Center, Core Laboratory, 622 W 168th St, PH3-363, New York, NY 10032 (e-mail:
Table 1. Abnormal Findings Qualifying as True-Positives on Manual
Differential White Blood Cell Counts

Abnormality                                           Threshold, %
Blasts, plasma cells, hypersegmented
  neutrophils (>6 lobes)                              >1
Metamyelocytes and/or myelocytes                      >3
Atypical lymphocytes                                  >5
Band forms                                            >7
Nucleated red blood cell counts                       >1

Data are modified from the recommendations of the International
Consensus Group for Hematology Review. (11)

Table 2. Number of False-Positive Samples and the Positive Predictive
Values (PPVs) of Each Flag for Its Specific Abnormal Finding
(Optimization Set, n = 502)

                                       Blast  IG    Left   Atyp   Abn Lymph/
Category                               Flag   Flag  Shift  Lymph  Lymphoblast
Factory-set thresholds
  False-positives, No.                 53     163   84     72     149
  Abnormality-specific PPV, %          5.4    24    33     13     7.4
  Overall PPV, %                       64     37    45     19     8.6
Optimized thresholds
  False-positives, No.                 35     111   44     34     22
  Abnormality-specific PPV, %          7.9    29    41     11     29
  Abnormality-specific efficiency, %   93     76    85     89     91
  Overall PPV, %                       71     45    52     21     29
  Overall efficiency, %                80     74    77     72     74

Abbreviations: Abn Lymph, abnormal lymphocyte; Atyp Lymph, atypical
lymphocyte; IG, immature granulocyte.

Table 3. Flag Thresholds for Factory-Default and Optimized Settings

Flag                               Factory Default   Optimized
Blast                              99                200
Immature granulocyte               159 (a)           250
Left shift                         99                200
Atypical lymphocyte                99                150
Abnormal lymphocyte/lymphoblast    99                200

(a) The factory-default setting for the immature granulocyte flag is
99; however, as a result of previous adjustments by our laboratory,
the threshold was set at 159 at the time the study was initiated.

Table 4. False-Positives, Positive Predictive Values (PPV), Review
Rate, and Efficiency of the 5 Flags in the Validation Set (n = 378)

Threshold        False-Positives,  Overall  Review   Overall
                 No.               PPV, %   Rate, %  Efficiency, %
Factory default  275               27       6.5      NA
Optimized        106               37       2.9      61

Abbreviation: NA, not available.

Table 5. False-Negative Samples Resulting From
Optimized Criteria in the Validation Set (n = 378)

Abnormality                             Samples, No.  Mean, %  Range, %
>7% bands                               10            14.75    8-24
>5% atypical lymphocytes                28            9.52     5-33
>3% metamyelocytes and/or myelocytes    6             4.50     3-6
COPYRIGHT 2010 College of American Pathologists

Publication: Archives of Pathology & Laboratory Medicine, October 1, 2010.