
Assessing the quality of glucose monitor studies: a critical evaluation of published reports.

Glucose monitors, when used in conjunction with appropriate interventional treatment, can effectively improve glycemic control (1). The US Food and Drug Administration (FDA)[3] has cleared more than 200 glucose monitors for home and institutional use (2). To ascertain whether or not a monitor is acceptable for its intended use, the FDA carefully reviews clinical and laboratory evidence provided by the device manufacturer (3).

Clinicians are advised to evaluate medical devices before initial use (4). Some glucose monitor evaluations published by clinicians have reported poor results and have concluded that data from glucose monitors are unreliable (5), unsatisfactory (6), or show concentration dependency (7). Other clinicians have reported positive results--in some cases using the same monitor--and have concluded that data from glucose monitors are accurate and meet performance expectations (8, 9). This inconsistency in the literature is problematic because it causes confusion and may slow adoption of new indications for glucose monitors (e.g., continuous glucose monitoring).

Four potential sources of error must be considered in the evaluation of any analytical device: (a) analytical imprecision, (b) analytical bias, (c) protocol-specific bias, and (d) random patient interferences (10). Device manufacturers are generally knowledgeable about these sources of error and carefully follow procedures to control them. Bias and imprecision are controlled by testing products that conform to specifications, protocol-specific bias by adherence to careful study design, and random patient interferences by inclusion and exclusion criteria for recruitment of study participants.

Clinicians performing evaluation studies also need to be cognizant of protocol-design factors and potential sources of error (11). Guidelines have been published in an attempt to educate clinicians on proper study methodology and reporting (12, 13). Although the Standards for Reporting Diagnostic Accuracy (STARD) guidelines are intended for studies of diagnostic accuracy (13) rather than for studies of analytical performance, many of the items of the STARD checklist are important for the readers' interpretation of either type of study. The purpose of our study was to compare recent reports on blood glucose monitor performance to these guidelines.

[FIGURE 1 OMITTED]

Materials and Methods

SEARCH STRATEGY AND REPORT CRITERIA

We searched the PubMed database for articles from August 2002 to November 2006 using combinations of the words: blood glucose, performance, evaluation, accurate, accuracy, point-of-care, meter, glucometer, and monitor. The reference lists of the selected articles were also reviewed and personal files were hand searched for additional reports. Studies selected for inclusion were published analytical evaluations of marketed, handheld, blood glucose monitoring systems that used a laboratory method as a comparison method. We excluded studies that were not in English, studies of nonhuman blood samples, and studies of continuous monitoring and noninvasive devices. Our PubMed search terms and details were as follows: ("blood glucose"[MeSH Terms] OR blood glucose[Text Word]) AND (performance[Text Word] OR evaluation[Text Word] OR accurate[Text Word] OR accuracy[Text Word] OR point-of-care[Text Word] OR meter[Text Word] OR meters[Text Word] OR glucometer[Text Word] OR glucometers[Text Word] OR monitor[Text Word] OR monitors[Text Word]) AND ("2002/08/01"[PDAT] : "2006/11/01"[PDAT]) AND English[lang] AND "humans"[MeSH Terms].
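For illustration only, the search described above could be reproduced programmatically. The following sketch assumes Biopython's Entrez module and a placeholder contact e-mail address; it is not part of the original study and simply submits the query string given above to PubMed.

    from Bio import Entrez  # assumes Biopython is installed

    Entrez.email = "your.name@example.org"  # NCBI requires a contact address (placeholder)

    # Query string reproducing the search terms and date window described above.
    query = (
        '("blood glucose"[MeSH Terms] OR blood glucose[Text Word]) AND '
        '(performance[Text Word] OR evaluation[Text Word] OR accurate[Text Word] OR '
        'accuracy[Text Word] OR point-of-care[Text Word] OR meter[Text Word] OR '
        'meters[Text Word] OR glucometer[Text Word] OR glucometers[Text Word] OR '
        'monitor[Text Word] OR monitors[Text Word]) AND '
        '("2002/08/01"[PDAT] : "2006/11/01"[PDAT]) AND '
        'English[lang] AND "humans"[MeSH Terms]'
    )

    handle = Entrez.esearch(db="pubmed", term=query, retmax=2000)
    result = Entrez.read(handle)
    handle.close()
    print(result["Count"], "records found;", len(result["IdList"]), "PMIDs returned")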

ASSESSMENT

One reviewer (J.E.) screened the titles from the computer-based search to determine relevant articles for retrieval. If the title did not provide enough information to decide whether or not to include the study, the abstract was read. The full article was retrieved if the abstract did not provide enough information. Studies were eliminated if both reviewers (J.M., J.E.) agreed that the report did not meet inclusion criteria. We obtained printed copies of all articles meeting our inclusion criteria. To evaluate the quality of reporting, we chose the 25-item STARD checklist (13, 14). However, because whole blood glucose monitors are not diagnostic devices, 5 STARD criteria (checklist items 1, 9, 12, 21, and 23) were deemed not applicable and were not scored. Because study methodology should be evaluated independently of the quality of the reporting (15), we developed an additional 18-item method checklist based on Clinical and Laboratory Standards Institute (CLSI) C30-A2 (12).

In analyzing published reports, we found that not all STARD or CLSI factors were obvious or clearly reported. In addition, the omission of procedural statements from a report was considered to indicate only that the procedures were not reported, not that they were not performed. Therefore, we assigned a yes (1-point) or no (0-point) value to each recommendation on our checklists depending on whether the authors had (a) included the recommended procedure in the report, (b) included supporting data confirming the use of the recommended procedure, or (c) acknowledged in the report that the recommended procedure had been considered. Differences in interpretation and discrepancies in ratings between the 2 reviewers were rare and were settled by consensus after additional review of the report for supporting evidence.

Each checklist item was given a numerical value of 1 point. The combined checklist comprised 20 STARD (reporting) items and 18 CLSI (methodological) items, for a maximum of 38 points. Percentages were calculated relative to the 38 total points, the 20 STARD points, or the 18 CLSI points, as appropriate. Correlations with P values <0.05 were considered significant.
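As a worked illustration of this scoring scheme (a hypothetical sketch, not the authors' analysis code), each report can be represented as two sequences of yes/no checklist outcomes and scored as follows:

    # Hypothetical scoring helper: 1 point per satisfied item, 20 applicable
    # STARD (reporting) items plus 18 CLSI (methodological) items, 38 points total.
    STARD_ITEMS = 20
    CLSI_ITEMS = 18

    def score_report(stard_met, clsi_met):
        """stard_met and clsi_met are sequences of booleans, one per checklist item."""
        stard_points = sum(bool(x) for x in stard_met)
        clsi_points = sum(bool(x) for x in clsi_met)
        total_points = stard_points + clsi_points
        return {
            "STARD %": 100.0 * stard_points / STARD_ITEMS,
            "CLSI %": 100.0 * clsi_points / CLSI_ITEMS,
            "total points": total_points,
            "total %": 100.0 * total_points / (STARD_ITEMS + CLSI_ITEMS),
        }

    # Example: 12 of 20 STARD items plus 8 of 18 CLSI items gives 20 of 38 points (~53%),
    # the median observed in this review.
    print(score_report([True] * 12 + [False] * 8, [True] * 8 + [False] * 10))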

Results

A total of 1407 titles/abstracts were retrieved, of which 93 were initially proposed for inclusion (Fig. 1). On further review, 41 of these studies were found ineligible and were excluded, either because of inappropriate test samples (i.e., not fresh human whole blood) or because of the use of an inappropriate reference method (e.g., methods not traceable to materials or methods of higher order). For the 52 selected reports published between August 2002 and November 2006, the scores ranged from a high of 32 points (84%) to a low of 8 points (21%) (Table 1). Overall, the scores of the glucose monitor reports were low (median, 20 of 38 points, or 53%). No published report incorporated 100% of the quality factors recommended by STARD or CLSI. The CLSI checklist developed by the authors, with the percentage of reports conforming to each of the 18 CLSI recommendations (range 2%-92%), is shown in Table 2; the STARD checklist, with the percentage of reports conforming to each of the 20 STARD recommendations (range 0%-100%), is shown in Table 3.

No significant trend was found when the report scores were grouped by journal type or assessed by date of publication (P = 0.5). Neither the source journal nor dates of publication were found to be predictive of report conformity to published recommendations.

Discussion

Our study shows that the average glucose monitor report addressed only ~50% of the combined CLSI and STARD recommendations and that the overall quality of reports is low. Compliance with these recommendations varied widely (range 21%-84%), and none of the 52 reports conformed to all recommendations. These findings suggest that many investigators disagree with, are unaware of, or neglect the published CLSI and STARD recommendations for conducting and reporting glucose monitor evaluation studies.

A report's procedural statements, especially how and when monitor and reference measurements are performed, provide important information regarding the quality and reproducibility of the study. We found that only 42% of the studies reported this information (Table 3) and only 13% reported following appropriate sample timing and handling procedures (Table 2). Control of sampling time is important because after a carbohydrate load blood glucose can change rapidly at a sampling site (63). Postcollection control of sample handling time is also important because glycolysis can cause rapid glycemic change, depending on the hematocrit (64). If either of these circumstances is not controlled, observed differences in the data could be caused by glycemic concentration differences in the comparative samples instead of differences between the 2 methods.

We observed that many investigators made a number of assumptions. Some assumed that the concentration of glucose in capillary and venous blood is equivalent, although equivalence cannot be assumed for individuals in the postprandial state (65). In addition, only 29 (56%) of 52 studies reported testing the same sample with both the monitor and the comparative method (Table 2); for the other 23 reports, observed differences in the data may be attributable to glycemic concentration differences in the comparative samples. Most investigators also assumed that there is little error associated with their reference method; only 19% checked the bias of their reference method with traceable materials (Table 2), although reference glucose methods can have a total error of up to 10% (66). Only 1 study reported following the CLSI advice to check that duplicate reference tests were stable and acceptable.
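For illustration, a check of duplicate reference results along the lines of that CLSI advice might look like the sketch below; the exact tolerance rule (and whether the percentage or the absolute limit governs) is our assumption here, not a quotation from CLSI C30-A2.

    def reference_duplicates_acceptable(ref1_mgdl, ref2_mgdl):
        """Flag duplicate reference glucose results that fail to agree within
        4% or 4 mg/dL (0.22 mmol/L). Treating the larger of the two tolerances
        as the limit is an assumption made for this sketch."""
        mean = (ref1_mgdl + ref2_mgdl) / 2.0
        tolerance = max(0.04 * mean, 4.0)  # mg/dL
        return abs(ref1_mgdl - ref2_mgdl) <= tolerance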

Reports differed considerably with regard to the use of appropriate acceptance criteria for glucose monitor performance. Many reports used expert opinion, medical society opinion, or their own acceptance criteria, whereas relatively few used the CLSI acceptance criteria for glucose monitors (12) (Table 2), which are identical to the acceptance criteria published by the International Organization for Standardization (ISO) (67).
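As a worked example of those acceptance criteria (summarized in Table 2, items 16 and 17), a single monitor result could be checked against the laboratory average as in this hypothetical sketch:

    def meets_clsi_iso_criterion(monitor_mgdl, lab_mean_mgdl):
        """Accuracy criterion as summarized in Table 2: below 75 mg/dL (4.2 mmol/L),
        the monitor result must fall within +/-15 mg/dL (0.83 mmol/L) of the
        laboratory average; at or above 75 mg/dL, within +/-20%."""
        if lab_mean_mgdl < 75.0:
            return abs(monitor_mgdl - lab_mean_mgdl) <= 15.0
        return abs(monitor_mgdl - lab_mean_mgdl) <= 0.20 * lab_mean_mgdl

    # Example: a monitor reading of 130 mg/dL against a laboratory mean of 110 mg/dL
    # differs by 20 mg/dL (about 18%), which meets the criterion; 135 mg/dL would not.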

One limitation of our study was the restriction of our search to English-language reports, although we believe that inclusion of studies published in other languages would not alter our conclusions. In addition, glucose monitor evaluation studies exist (not revealed by our search) that are, in our opinion, of relatively high quality. The Scandinavian Evaluation of Laboratory Equipment for Primary Health Care (SKUP) has performed a number of monitor evaluation studies and has issued reports. A review of 2 such reports found that SKUP followed 100% of CLSI recommendations and 85% of STARD recommendations (68, 69). These 2 reports emphasized monitor training, performed a thorough reference method evaluation, tested monitors in duplicate, tested the reference method before and after the monitor testing, checked that duplicate reference tests agreed within 4% or 0.22 mmol/L (4 mg/dL) to ensure both method and glycemic stability, and emphasized control of elapsed time and glycolysis. Unfortunately, these SKUP reports are not indexed in the PubMed database.

To our knowledge, no single published checklist includes the key reporting and methodological factors for glucose monitor evaluations. We selected the STARD and CLSI checklists because they are published and both contain important elements. Our study shows that although a large number of glucose monitor evaluation studies have been published over a 4-year period, investigators did not address many of the variables that can adversely affect internal and external validity. All glucose-monitoring systems have performance limitations [e.g., hematocrit extremes (70)] that are included in the published manufacturer labeling, yet we found several studies in which the devices were evaluated under off-label conditions. The availability of glucose monitors, and the ease with which clinicians can use them to perform evaluation studies, is nearly unique among in vitro tests. With the growing incidence of diabetes and new technologies for measuring blood glucose on the horizon, it is reasonable to believe that the number of such studies will continue to grow. We believe that a checklist combining key elements of the STARD and CLSI recommendations, if published and used, would help to improve the quality of monitor evaluation studies and could form the basis for future checklists applicable to continuous monitoring and noninvasive devices. Such a tool has the potential to improve the quality of future studies.

We conclude that none of the glucose monitor evaluation reports in our review conform to all published quality recommendations, and that the overall quality of reports is low. The range of conformance to STARD and CLSI recommendations varied widely, suggesting that many of the researchers did not follow published recommendations for study design and methodology, an omission that may have adversely affected study quality. Future studies evaluating glucose monitoring systems should be carefully designed and should follow published recommendations for methodological and reporting quality.

Grant/funding support: LifeScan, Inc., provided funding for this study.

Financial disclosures: Both authors are employees of LifeScan, Inc., a Johnson & Johnson company, and both hold equity interests in Johnson & Johnson.

Acknowledgements: We thank Drs. David Horwitz and David Price for their helpful suggestions and comments. A portion of this work was presented in poster format at the 2006 AACC Annual Meeting, Chicago, IL.

Received November 21, 2006; accepted March 16, 2007. Previously published online at DOI: 10.1373/clinchem.2006.083493

References

(1.) Standards of Medical Care in Diabetes. Diabetes Care 2006; 29(Suppl):S4-42.

(2.) Gutman S, Bernhardt P, Pinkos A, Moxey-Mims M, Knott T, Cooper J. Regulatory aspects of invasive glucose measurements. Diabetes Technol Ther 2002;4:775-7.

(3.) Review Criteria Assessment of Portable Blood Glucose Monitoring In Vitro Diagnostic Devices Using Glucose Oxidase, Dehydrogenase, or Hexokinase Methodology. HHS Publication FDA-96-604. Rockville, MD: US Department of Health and Human Services, Public Health Service, Food and Drug Administration, Center for Devices and Radiological Health, Office of Device Evaluation, Division of Clinical Laboratory Services; 1996.

(4.) Nichols JH. Interpreting method evaluations. Diabetes Technol Ther 2002;4:623-5.

(5.) Finkielman JD, Oyen LJ, Afessa B. Agreement between bedside blood and plasma glucose measurement in the ICU setting. Chest 2005;127:1749-51.

(6.) Ho HT, Yeung WKY, Young BWY. Evaluation of "point of care" devices in the measurement of low blood glucose in neonatal practice. Arch Dis Child Fetal Neonatal Ed 2004;89:F356-9.

(7.) Martin DD, Shephard MDS, Freeman H, Bulsara MK, Jones TW, Davis EA, et al. Point-of-care testing of HbA1c and blood glucose in a remote Aboriginal Australian community. Med J Aust 2005; 182:524-7.

(8.) Miendje Deyi VY, Philippe M, Alexandre KC, De Nayer P, Hermans MP. Performance evaluation of the Precision PCx point-of-care blood glucose analyzer using discriminant ratio methodology. Clin Chem Lab Med 2002;40:1052-5.

(9.) The Diabetes Research in Children Network (DIRECNET) Study Group. A multicenter study of the accuracy of the One Touch Ultra home glucose meter in children with type 1 diabetes. Diabetes Technol Ther 2003;5:933-41.

(10.) Krouwer JS. How to improve total error modeling by accounting for error sources beyond imprecision and bias [Letter]. Clin Chem 2001;47:1329-30.

(11.) Binette TM, Cembrowski GS. Diverse influences on blood glucose measurements in the ICU setting. Chest 2005;128:3084-5.

(12.) NCCLS. Point-of-Care Blood Glucose Testing in Acute and Chronic Care Facilities; Approved Guideline--Second Edition. NCCLS document C30-A2 (ISBN 1-56238-471-6). NCCLS, 2002.

(13.) Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem 2003;49:7-18.

(14.) Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Chem 2003;49:1-6.

(15.) Huwiler-Muntener K, Juni P, Junker C, Egger M. Quality of reporting of randomized trials as a measure of methodologic quality. JAMA 2002;287:2801-4.

(16.) Kendall DM, Kaplan RA, Paulson CF, Parkes JL, Tideman AM. Accuracy and utility of a 10-test disk blood glucose meter. Diabetes Res Clin Pract 2005;67:29-35.

(17.) Garg SK, Carter JA, Mullen L, Folker AC, Parkes JL, Tideman AM. The clinical performance and ease of use of a blood glucose meter that uses a 10-test disk. Diabetes Technol Ther 2004;6:495-502.

(18.) Chen ET, Nichols JH, Duh SH, Hortin G. Performance evaluation of blood glucose monitoring devices. Diabetes Technol Ther 2003; 5:749-68.

(19.) Kristensen GBB, Nerhus K, Thue G, Sandberg S. Standardized evaluation of instruments for self-monitoring of blood glucose by patients and a technologist. Clin Chem 2004;50:1068-71.

(20.) Baum JM, Monhaut NM, Parker DR, Price CP. Improving the quality of self-monitoring blood glucose measurement: a study in reducing calibration errors. Diabetes Technol Ther 2006;8:347-57.

(21.) Kilo C, Pinson M, Joynes JO, Joseph H, Monhaut N, Parkes JL, et al. Evaluation of a new blood glucose monitoring system with auto-calibration. Diabetes Technol Ther 2005;7:283-94.

(22.) St-Louis P, Ethier J. An evaluation of three glucose meter systems and their performance in relation to criteria of acceptability for neonatal specimens. Clin Chim Acta 2002;322:139-48.

(23.) Buhling KJ, Henrich W, Kjos SL, Siebert G, Starr E, Dreweck C, et al. Comparison of point-of-care-testing glucose meters with standard laboratory measurement of the 50g-glucose-challenge test (GCT) during pregnancy. Clin Biochem 2003;36:333-7.

(24.) Larbig M, Forst T, Mondok A, Forst S, Pfutzner A. Investigation of the accuracy of the blood glucose monitoring device Prestige IQ. Diab Nutr Metab 2003;16:257-61.

(25.) Michel A, Kuster H, Krebs A, Kadow I, Paul W, Nauck M, et al. Evaluation of the Glucometer Elite XL device for screening for neonatal hypoglycemia. Eur J Pediatr 2005;164:660-4.

(26.) Chlup R, Payne M, Zapletalova J, Komenda S, Doubravova B, Reznickova M, et al. Results of self-monitoring on glucometer systems Advance and Optium in daily routine. Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub 2005;149:127-39.

(27.) Kilo C, Dickey WT, Joynes JO, Pinson MB, Baum JM, Parkes JL, et al. Evaluation of a new blood glucose monitoring system with auto-calibration for home and hospital bedside use. Diabetes Res Clin Pract 2006;74:66-74.

(28.) Puntmann I, Wosniok W, Haeckel R. Comparison of several point-of-care testing (POCT) glucometers with an established laboratory procedure for the diagnosis of Type 2 diabetes using the discordance rate: a new statistical approach. Clin Chem Lab Med 2003;41:809-20.

(29.) Tieszen KL, New JP. Alternate site blood glucose testing: do patients prefer it? Diabet Med 2003;20:325-8.

(30.) Lippi G, Salvagno GL, Guidi GC, Negri M, Rizzotti P. Evaluation of four portable self-monitoring blood glucose meters. Ann Clin Biochem 2006;43:408-13.

(31.) Khan AI, Vasquez Y, Gray J, Wians FH, Kroll MH. The variability of results between point-of-care testing glucose meters and the central laboratory analyzer. Arch Pathol Lab Med 2006;130: 1527-32.

(32.) Rivers SM, Kane MP, Bakst G, Busch RS, Hamilton RA. Precision and accuracy of two blood glucose meters: FreeStyle Flash versus One Touch Ultra. Am J Health Syst Pharm 2006;63:1411-6.

(33.) Hawkins RC. Evaluation of Roche Accu-Chek Go and Medisense Optium blood glucose meters. Clin Chim Acta 2005;353:127-31.

(34.) Dai K, Tai D, Ho P, Chen C, Peng W, Chen S, et al. Accuracy of the EasyTouch blood glucose self-monitoring system: a study of 516 cases. Clin Chim Acta 2004;349:135-41.

(35.) Cohen M, Boyle E, Delaney C, Shaw J. A comparison of blood glucose meters in Australia. Diabetes Res Clin Pract 2006;71: 113-8.

(36.) Dhatt GS, Agarwal M, Bishawi B. Evaluation of a glucose meter against analytical quality specifications for hospital use. Clin Chim Acta 2004;343:217-21.

(37.) Oyibo SO, Pritchard GM, Mclay L, James E, Laing I, Gokal R, et al. Blood glucose overestimation in diabetic patients on continuous ambulatory peritoneal dialysis for end-stage renal disease. Diabet Med 2002;19:693-6.

(38.) Kiattimongkol W, Watanachai A, Suprasongsin C. Evaluation of the bedside glucose monitoring system in neonatal units. J Med Assoc Thai 2003;86:883-8.

(39.) Mohan V, Deepa R, Shefali AK, Poongothai S, Monica M, Karkuzhali K. Evaluation of One Touch Horizon: a highly affordable glucose monitor. J Assoc Physicians India 2004;52:779-82.

(40.) Demers J, Kane MP, Bakst G, Busch RS, Hamilton RA. Accuracy of home blood glucose monitors using forearm blood samples: FreeStyle versus one Touch Ultra. Am J Health Syst Pharm 2003;60:1130-5.

(41.) Bohme P, Floriot M, Sirveaux MA, Durain D, Ziegler O, Drouin P, et al. Evolution of analytical performance in portable glucose meters in the last decade. Diabetes Care 2003;26:1170-5.

(42.) Savoca R, Jaworek B, Huber AR. New "plasma referenced" POCT glucose monitoring systems: are they suitable for glucose monitoring and diagnosis of diabetes? Clin Chim Acta 2006;372:199-201.

(43.) Greenhalgh S, Bradshaw S, Hall CM, Price DA. Forearm blood glucose testing in diabetes mellitus. Arch Dis Child 2004;89: 516-8.

(44.) Aboezz R, Miller DR. Accuracy of portable blood glucose monitors. J Am Pharm Assoc (Wash DC) 2005;45:514-6.

(45.) Corstjens AM, Ligtenberg JJM, van der Horst ICC, Spanjersberg R, Lind JSW, Tulleken JE, et al. Accuracy and feasibility of point-of-care and continuous blood glucose analysis in critically ill ICU patients. Crit Care 2006;10:R135.

(46.) Kanji S, Buffie J, Hutton B, Bunting PS, Singh A, McDonald K, et al. Reliability of point-of-care testing for glucose measurement in critically ill adults. Crit Care Med 2005;33:2778-85.

(47.) Rao LV, Jakubiak F, Sidwell JS, Winkelman JW, Snyder ML. Accuracy evaluation of a new glucometer with automated hematocrit measurement and correction. Clin Chim Acta 2005;356: 178-83.

(48.) Kavsak PA, Zielinski N, Li D, McNamara PJ, Adeli K. Challenges of implementing point-of-care testing (POCT) glucose meters in a pediatric acute care setting. Clin Biochem 2004;37:811-7.

(49.) Solnica B, Naskalski JW, Sieradzki J. Analytical performance of glucometers used for routine glucose self-monitoring of diabetic patients. Clin Chim Acta 2003;331:29-35.

(50.) The Diabetes Research in Children Network (DIRECNET) Study Group. Accuracy of newer-generation home blood glucose meters in a diabetes research in children network (DirecNet) inpatient exercise study. Diabetes Technol Ther 2005;7:675-80.

(51.) Elusiyan JBE, Adeodu OO, Adejuyigbe EA. Evaluating the validity of a bedside method of detecting hypoglycemia in children. Pediatr Emerg Care 2006;22:488-90.

(52.) Wehmeier M, Arndt BT, Schumann G, Kulpmann WR. Evaluation and quality assessment of glucose concentration measurement in blood by point-of-care testing devices. Clin Chem Lab Med 2006; 44:888-93.

(53.) Boyd R, Leigh B, Stuart P. Capillary versus venous bedside blood glucose estimations. Emerg Med J 2005;22:177-9.

(54.) Ajala MO, Oladipo OO, Fasanmade O, Adewole TA. Laboratory assessment of three glucometers. Afr J Med Med Sci 2003;32: 279-82.

(55.) Pavlicek V, Garzoni D, Urech P, Brandle M. Inaccurate self-monitoring of blood glucose readings in patients on chronic ambulatory peritoneal dialysis with icodextrin. Exp Clin Endocrinol Diabetes 2006;114:124-6.

(56.) Solnica B, Naskalski JW. Quality control of SMBG in clinical practice. Scand J Clin Lab Invest 2005;65(Suppl):80-6.

(57.) Kulkarni A, Saxena M, Price G, O'Leary MJ, Jacques T, Myburgh JA. Analysis of blood glucose measurements using capillary and arterial blood samples in intensive care patients. Intensive Care Med 2005;31:142-5.

(58.) Velazquez Medina D, Climent C. Comparison of outpatient point of care glucose testing vs venous glucose in the clinical laboratory. P R Health Sci J 2003;22:385-9.

(59.) Choubtum L, Mahachoklertwattana P, Udomsubpayakul U, Preeyasombat C. Accuracy of glucose meters in measuring low blood glucose levels. J Med Assoc Thai 2002;85:S1104-10.

(60.) Meex C, Poncin J, Chapelle JP, Cavalier E. Analytical validation of the new plasma calibrated Accu-Chek[R] test strips (Roche Diagnostics). Clin Chem Lab Med 2006;44:1376-8.

(61.) Apperloo JJ, Vader HL. A quantitative appraisal of interference by icodextrin metabolites in point-of-care glucose analyses. Clin Chem Lab Med 2005;43:314-8.

(62.) Nobels F, Beckers F, Bailleul E, De Schrijver P, Sierens L, Van Crombrugge P. Feasibility of a quality assurance programme of bedside blood glucose testing in a hospital setting: 7 years' experience. Diabet Med 2004;21:1288-91.

(63.) Ellison JM, Stegmann JM, Colner SL, Michael RH, Sharma MK, Ervin KR, et al. Rapid changes in postprandial blood glucose produce concentration differences at finger, forearm, and thigh sampling sites. Diabetes Care 2002;25:961-4.

(64.) Sidebottom RA, Williams PR, Kanarek KS. Glucose determinations in plasma and serum: potential error related to increased hematocrit. Clin Chem 1982;28:190-2.

(65.) Voss EM, McNeill L, Cembrowski GS. Assessing the accuracy of your blood glucose meter. LifeScan Publication AW 055-272. Milpitas, CA: LifeScan, 2003.

(66.) Ehrmeyer SS, Laessig RH, Leinweber JE, Oryall JJ. 1990 Medicare/CLIA final rules for proficiency testing: minimum intralaboratory performance characteristics (CV and bias) needed to pass. Clin Chem 1990;36:1736-40.

(67.) ISO 15197:2003(E). In vitro diagnostic test systems: requirements for blood-glucose monitoring systems for self-testing in managing diabetes mellitus. First edition, 2003-05-01. Geneva, Switzerland: International Organization for Standardization; 2003.

(68.) Scandinavian evaluation of laboratory equipment for primary health care, SKUP. Report on OneTouch Ultra, SKUP/2005/39. http://www.SKUP.nu (accessed February 2007).

(69.) Scandinavian evaluation of laboratory equipment for primary health care, SKUP. Report on OneTouch GlucoTouch, SKUP/2005/40. http://www.SKUP.nu (accessed February 2007).

(70.) Tang Z, Lee JH, Louie RF, Kost GJ. Effects of different hematocrit levels on glucose measurements with handheld meters for point-of-care testing. Arch Pathol Lab Med 2000;124:1135-40.

[3] Nonstandard abbreviations: FDA, Food and Drug Administration; STARD, Standards for Reporting Diagnostic Accuracy; CLSI, Clinical and Laboratory Standards Institute; SKUP, Scandinavian Evaluation of Laboratory Equipment for Primary Health Care.

JOHN MAHONEY [1]* and JOHN ELLISON [2]

Departments of [1] Global Product Support and [2] Clinical Research, LifeScan, Inc.

* Address correspondence to this author at: LifeScan, Inc., 1000 Gibraltar Dr., M/S 31, Milpitas, CA 95035-6312. Fax 1-408-941-9892; e-mail address jmahoney@lfsus.jnj.com.
Table 1. Clinical glucose monitor studies and their conformance
to 38 recommended study factors [STARD (20) + CLSI (18) = 38].

Study                          Total number of conforming     Percentage, %
                               statements or data

Kendall, 2005 (16) 32 84
Garg, 2004 (17) 32 84
Chen, 2003 (18) 30 79
Kristensen, 2004 (19) 28 74
Baum, 2006 (20) 28 74
Kilo, 2005 (21) 27 71
St-Louis, 2002 (22) 27 71
Buhling, 2003 (23) 26 68
Larbig, 2003 (24) 25 66
Michel, 2005 (25) 24 63
Chlup, 2005 (26) 24 63
Kilo, 2005 (27) 24 63
Puntmann, 2003 (28) 23 61
Tieszen, 2003 (29) 23 61
Lippi, 2006 (30) 23 61
Khan, 2006 (31) 23 61
Rivers, 2006 (32) 23 61
Hawkins, 2005 (33) 22 58
Dai, 2004 (34) 22 58
Cohen, 2005 (35) 22 58
Dhatt, 2004 (36) 22 58
Oyibo, 2002 (37) 22 58
Kiattimongkol, 2003 (38) 21 55
Mohan, 2004 (39) 20 53
Demers, 2003 (40) 20 53
Miendje Deyi, 2002 (8) 20 53
Bohme, 2003 (41) 20 53
Savoca, 2006 (42) 20 53
Greenhalgh, 2004 (43) 19 50
Aboezz, 2005 (44) 19 50
Corstjens, 2006 (45) 19 50
Kanji, 2005 (46) 19 50
Rao, 2005 (47) 17 45
Martin, 2005 (7) 17 45
Kavsak, 2004 (48) 17 45
Ho, 2004 (6) 17 45
Solnica, 2003 (49) 17 45
DirecNet, 2005 (50) 17 45
Elusiyan, 2006 (51) 17 45
Wehmeier, 2006 (52) 17 45
DirecNet, 2003 (9) 16 42
Boyd, 2005 (53) 15 39
Ajala, 2003 (54) 15 39
Pavlicek, 2006 (55) 15 39
Solnica, 2005 (56) 14 37
Finkielman, 2005 (5) 14 37
Kulkarni, 2005 (57) 14 37
Medina, 2003 (58) 14 37
Choubtum, 2002 (59) 14 37
Meex, 2006 (60) 14 37
Apperloo, 2005 (61) 10 26
Nobels, 2004 (62) 8 21

Table 2. CLSI quality recommendations for glucose monitor evaluation
studies, and the percentage of 52 studies that were found to contain
conforming statements or data pertaining to these CLSI
recommendations.

CLSI (NCCLS) C30-A2 factors for glucose monitor evaluation studies, grouped by topic; the percentage of the 52 studies conforming to each factor is given in brackets.

Blood sample
 1. Blood sample type (e.g., venous, capillary) is appropriate for the monitor method. [90%]
 2. Blood hematocrit is checked to be within the monitor's acceptable range. [33%]

Blood sample collection method
 3. Appropriate anticoagulants, blood additives, or preservatives (if used). [87%]
 4. Catheter is properly flushed of IV solution prior to sampling (if done). [79%]
 5. Skin is cleaned and dried prior to puncture (if done). [29%]

Blood sample handling
 6. Monitor and reference method are both tested from the same sample. [56%]
 7. Blood is tested (or centrifuged) within 5 min of the monitor test; centrifuged plasma is tested with the reference method within 60 min of the monitor test. [13%]

Monitor method
 8. Operators are trained to the manufacturer's instructions. [58%]
 9. Monitor is tested in duplicate. [46%]

Reference method
10. Laboratory method is checked for stability and for being within its QC limits. [44%]
11. Laboratory method is tested in duplicate. [23%]
12. Laboratory method is verified with NIST standard reference materials (optional). [19%]
13. Laboratory duplicates are within 4% or 0.22 mmol/L (4 mg/dL), or else are excluded. [2%]

Statistics and acceptance criteria
14. Distribution of glucose in blood samples spans the monitor's measurement range. [92%]
15. Specimen sample size is ≥40 specimens. [85%]
16. For glucose <4.2 mmol/L (75 mg/dL), the monitor result is accurate if within ±0.83 mmol/L (15 mg/dL) of the laboratory average. [25%]
17. For glucose ≥4.2 mmol/L (75 mg/dL), the monitor result is accurate if within ±20% of the laboratory average. [37%]
18. Individual monitor results are compared to the mean of duplicate results from the laboratory analyzer. [10%]

Table 3. STARD recommendations applied to 52 published glucose
monitor evaluation studies, and the percentage of these studies
that were found to contain conforming statements or data regarding
STARD recommendations.

STARD factors for reporting diagnostic accuracy, grouped by section or topic; the percentage of the 52 studies conforming to each item is given in brackets (N/A, not applicable; item not scored).

Keywords
 1. Identify the article as a study of diagnostic accuracy (recommended MeSH heading "sensitivity and specificity"). [N/A]

Introduction
 2. State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups. [100%]

Participants
 3. The study population: the inclusion and exclusion criteria, setting, and locations where the data were collected. [73%]
 4. Participant recruitment: Was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard? [71%]
 5. Participant sampling: Was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If not, specify how participants were further selected. [50%]
 6. Data collection: Was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)? [100%]

Test methods
 7. The reference standard and its rationale. [94%]
 8. Technical specifications of material and methods involved, including how and when measurements were taken, and/or cite references for index tests and reference standard. [42%]
 9. Definition of and rationale for the units, cutoffs, and/or categories of the results of the index tests and the reference standard. [N/A]
10. The number, training, and expertise of the persons executing and reading the index tests and the reference standard. [56%]
11. Whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test, and describe any other clinical information available to the readers. [0%]

Statistical methods
12. Methods for calculating or comparing measures of diagnostic accuracy, and the statistical methods used to quantify uncertainty (e.g., 95% CIs). [N/A]
13. Methods for calculating test reproducibility, if done. [75%]

Participants
14. When the study was done, including beginning and ending dates of recruitment. [21%]
15. Clinical and demographic characteristics of the study population (e.g., age, sex, spectrum of presenting symptoms, comorbidity, current treatments, recruitment centers). [65%]
16. The number of participants satisfying the criteria for inclusion who did or did not undergo the index tests and/or the reference standard; describe why participants failed to receive either test (a flow diagram is strongly recommended). [88%]

Test results
17. Time interval from the index tests to the reference standard, and any treatment administered between. [42%]
18. Distribution of severity of disease (define criteria) in those with the target condition; other diagnoses in participants without the target condition. [23%]
19. A cross tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, the distribution of test results by the results of the reference standard. [81%]
20. Any adverse events from performing the index tests or the reference standard. [4%]

Estimates
21. Estimates of diagnostic accuracy and measures of statistical uncertainty (e.g., 95% CIs). [N/A]
22. How indeterminate results, missing responses, and outliers of the index tests were handled. [33%]
23. Estimates of variability of diagnostic accuracy between subgroups of participants, readers, or centers, if done. [N/A]
24. Estimates of test reproducibility, if done. [73%]

Discussion
25. Discuss the clinical applicability of the study findings. [96%]