
Controlling error in laboratory testing.

While responsibility for accurate and timely test results generally lies with the laboratory, many problems can arise before the submitted specimens have been analyzed. Such errors can be monitored and controlled properly only by understanding the process well enough to identify potential sources of error. Furthermore, correct interpretation of test results requires knowledge of analytical and biological as well as pathophysiological sources of variation, their expected magnitude, and the time course over which changes can occur in health and disease. Lists of factors causing these variations have been published.[1] Figure I illustrates the relationship between analytical and biological variation, which can exist over both the short and long term.

It is difficult for the lab to establish effective methods for monitoring and controlling preanalytical variables because many of these factors occur outside the traditional laboratory areas. Nevertheless, monitoring preanalytical variables can succeed with the coordinated effort of many individuals and hospital departments outside the laboratory. Each group must recognize the importance of its own and others' efforts in maintaining a high quality of service.

Many steps intervene between the physician's initial request for a test and the arrival of the specimen for analysis. Figure II identifies preanalytical problems that must be addressed by the physician ordering a test and by the laboratory staff in collecting and processing the specimen. Preinstrumental sources of variation include those introduced during blood collection, such as the effect of the patient's posture and the use of a tourniquet. The two sources of error in laboratory testing, preanalytical and analytical variability, are characterized by short-term variation, taking place within one day, and long-term variation, taking place over a greater span of time.

Figure II

Preanalytical variables in collection and preparation of specimens and samples

Step — Potential variance (error)

Test order (physician):
* Wrong patient identified
* Special requirements not listed
* Patient improperly prepared

Processing test request:
* Incorrect name printed on order entry requisition
* Request lost or delayed

Preparing tube/syringe (laboratory/station):
* Wrong collection tube
* Mislabeled specimen
* Wrong time scheduled
* Excessive amount of anticoagulant used

Drawing blood:
* Wrong patient drawn
* Specimen inadvertently
* Blood diluted with IV fluids or other substances
* Tourniquet kept on patient too long, altering blood composition

Transport to laboratory:
* Specimen delivery delayed
* Unsuitable conditions for delivery (specimen exposed to bright sunlight, excessive temperature changes)
* Specimen not iced properly for transport (blood gases)

Specimen handling in laboratory:
* Blood hemolyzed
* Aliquot tubes contaminated
* Improper centrifuge speed used
* Improper mixing of blood
* Excessive time delay
* Evaporation from open test tubes

Specimen storage before analysis:
* Wrong temperature
* Improper exposure to light
* Contamination from stoppers, aliquot tubes

Logging of test request and specimen:
* Transcription error
* Specimen out of sequence
* Specimen missed


Laboratory data can be interpreted correctly only when valid. Discerning whether this is the case requires an understanding of errors that may have affected the results (Figure III). When several laboratory values are abnormal and accommodate a physiological model, confirming a diagnosis may be a straightforward task. Ironically, it is often most difficult to interpret an abnormal result that constitutes an isolated finding among normal values. In such cases the test is usually repeated and its result corroborated by confirmatory analyses such as measuring other metabolites that gauge the same function or a related one. An example is the use of both creatinine and urea nitrogen as indicators of renal function.

Figure III

Some other sources of error in laboratory testing

Related to patient (biological)

* Emotional stress
* Inadequate or excessive physical activity
* Inappropriate posture
* Patient nonfasting; specimen taken too soon (e.g., triglycerides measured after inadequate fast)
* Caffeine or alcohol ingested
* Abnormal metabolites induced by medication
* Measured metabolites interfered with by medication

Related to specimen (preanalytical)

* Specimen taken from venous return side of dialysis machine
* Specimen collected through indwelling catheter contaminated with heparin, citrate, or other substance
* Inappropriate concentration or wrong anticoagulant used
* Transport time to laboratory too long (whole blood)

Related to technique (analytical)

* Aliquots inappropriately sampled
* Instruments inaccurately calibrated
* Certified reference materials not used in validating method
* Written analytical protocols inaccurate and not updated
* Assay variability too great
* Inappropriate reference range reported for specific methodology

The process of interpretation involves careful consideration of all possible sources of error and variability: preanalytical and analytical errors, biological variability, and pathophysiology. Analytical variability may be especially important if a patient's progress is being monitored to evaluate the course and/or effect of therapy. As the values change, it is desirable to know whether the alteration represents improvement, deterioration, or merely random variation. For example, sodium may have a variability of ±3 mmol/L. When the test value is 135 mmol/L, the likelihood that the real value falls within the range of 132 to 138 mmol/L is 95%. Thus, values of subsequent specimens that lie within the same range may not indicate a true change in the patient's status.
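The sodium example above can be sketched in a few lines of code. This is a minimal illustration, not a clinical rule: it assumes, as the text does, that ±3 mmol/L approximates the 95% limits of analytical variability, and it treats two results whose 95% ranges overlap as possibly differing only by random variation.

```python
# Sketch: could a follow-up result reflect analytical noise rather than a
# true change? Assumes +/- 3 mmol/L 95% limits for sodium, as in the text.

def plausible_range(value, half_width=3.0):
    """95% range around a measured value given +/- half_width variability."""
    return (value - half_width, value + half_width)

def may_be_random_variation(first, second, half_width=3.0):
    """True if the two results' 95% ranges overlap, i.e. the difference
    could be explained by analytical variability alone."""
    lo1, hi1 = plausible_range(first, half_width)
    lo2, hi2 = plausible_range(second, half_width)
    return lo1 <= hi2 and lo2 <= hi1

print(plausible_range(135.0))              # the 132-138 range from the text
print(may_be_random_variation(135, 137))   # overlap: could be noise
print(may_be_random_variation(135, 145))   # no overlap: likely a real change
```

A wider `half_width` would be appropriate for analytes with larger coefficients of variation, such as the hormone and drug assays discussed later.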

Even if a value lies outside the normal range, it may not reflect a physiological abnormality. Biological variation in laboratory values may be derived from a single subject (intra-individual variation) or from variation between subjects (inter-individual variation) (Figure IV). Intra-individual variation incorporates all factors that may affect the composition of body fluids.

Figure IV

Factors of biological variability that can throw laboratory values outside the normal range

1. Normal range is inappropriate for comparison because the individual differs from the norm in:

* Race
* Gender
* Age
* Body build or weight
* Biological rhythm (circadian rhythm, menstrual cycle, pregnancy, menopause, other)
* Diet
* Physical activity
* Stress
* Prolonged bed rest

2. Patient is normal but value lies outside reference range (usually 95% of the range of values encountered in a healthy reference population).

* Sources of variation in lab test results. Many kinds of error may skew the findings of laboratory tests. These include random error, random variation, constant and proportional determinate (systematic) error, and methodologic or physiologic interference.

Random errors occur haphazardly and unpredictably. The wrong patient's specimen may be mistakenly used, for example, or the reagent incorrectly prepared. Other types of random error include misreading instruments, chance errors in calculation, and transcription errors, including the transposing of digits. Random errors are inherent in all analyses; their causes usually cannot be traced. Imprecision is caused by random error, which has a Gaussian distribution described by a bell-shaped curve. Sources of random variation influence each measurement differently; the result may be higher or lower than it should be and may vary considerably or little in magnitude.
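The Gaussian character of random error described above can be demonstrated by simulation. The true value and standard deviation below are purely illustrative, not figures from the article; the point is that with Gaussian noise roughly 95% of replicate results fall within two standard deviations of the true value.

```python
import random
import statistics

# Sketch: random (indeterminate) error modeled as Gaussian noise around the
# true concentration. TRUE_VALUE and SD are hypothetical.
random.seed(42)

TRUE_VALUE = 100.0   # assumed true analyte concentration
SD = 2.0             # assumed method imprecision (one standard deviation)

replicates = [random.gauss(TRUE_VALUE, SD) for _ in range(10_000)]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)
print(f"mean ~ {mean:.1f}, sd ~ {sd:.1f}")

# With Gaussian random error, about 95% of results lie within +/- 2 sd.
within = sum(abs(x - TRUE_VALUE) <= 2 * SD for x in replicates) / len(replicates)
print(f"fraction within +/- 2 sd: {within:.2f}")
```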

Determinate (systematic) error can be traced to a specific cause. Common causes of inaccuracy include constant or proportional errors. Constant systematic error is an error that is always in the same direction and of the same magnitude even as the concentration of the analyte changes. Proportional systematic error is an error that always occurs in one direction and whose magnitude consists of a percentage of the concentration of analyte being measured.[2]
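The distinction between the two kinds of systematic error is easy to show numerically. The bias figures below are illustrative assumptions: a constant error adds the same absolute offset at every concentration, while a proportional error adds a fixed percentage of the concentration.

```python
# Sketch of constant vs. proportional determinate (systematic) error.
# The +5-unit bias and +10% bias are hypothetical.

def constant_error(true_value, bias=5.0):
    """Same absolute offset at every concentration."""
    return true_value + bias

def proportional_error(true_value, fraction=0.10):
    """Offset is a fixed percentage of the concentration measured."""
    return true_value * (1 + fraction)

for true in (10.0, 100.0, 1000.0):
    print(true, constant_error(true), proportional_error(true))
# The constant error adds 5 at every level; the proportional error adds
# 1, 10, and 100 respectively, growing with the analyte concentration.
```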

A classic example of systematic error is a method-related error caused by the presence of interfering substances in the specimen. These interfering substances may be endogenous metabolites, such as bilirubin; drugs the patient is receiving; or contaminants, such as citrate or heparin, that entered the specimen during collection or processing.

Drug interference may be methodologic (in vitro) or physiologic (in vivo) in origin. Methodologic interference occurs when the drug is actually detected during measurement as if it were the substance of interest. Physiologic interference, which actually represents biological variation, occurs when a drug received by the patient induces an increase in the substance of interest being measured.

* Reasons for variation. Correct interpretation of a laboratory test result should take into account every possible cause of variation in test results: preanalytical, biological, and analytical variation.

[paragraph] Analytical variation. Valid data are essential in making medical decisions. The two most important considerations used in evaluating analytical performance, and thus in appraising validity, are analytical accuracy and analytical precision. Analytical accuracy is agreement between the best estimate of a quantity and its "true" value. Analytical precision is agreement between replicate analyses.

Other important concepts are analytical sensitivity and analytical specificity. The former describes the ability of an analytical method to detect small quantities of the measured metabolite. The latter describes the ability of the analytical method to determine solely the constituent it purports to measure.

Many analytical variables must be controlled to assure accurate measurement. Reliable methods are obtained by a careful process of selection, evaluation, maintenance, and control. Variables include water quality, calibration of volumetric glassware, stability of electrical power, ambient temperature, and temperature of refrigerators. Because centrifuges directly and indirectly affect many methods, they should be consistently monitored throughout the laboratory.

The reliability of analytical results often depends on the calibration procedure used and on the quality of standards and controls. Reference materials of the highest quality and primary reference standards should be employed in calibrating and validating each method. Secondary reference materials should then be used to provide working standards for routine application of the analytical method and to assign values to quality control materials. The controls are used for daily monitoring of the methodology's effectiveness. Selecting the most appropriate reference materials and controls is critical in validating the performance of an analytical method.

Calibration and test material (CTM) should be well characterized and documented, along with the number of different quantities of standard concentrations used and the frequency of use. More detailed guidelines for control of analytic variables are provided in recommendations from the National Committee for Clinical Laboratory Standards.

Contributions of imprecision and inaccuracy to the total analytical variability of test results are additive. They are also largely method-dependent. Coefficients of variation typically range from 1% to 3% for electrolytes, glucose, and blood gases; 3% to 5% for bilirubin, bicarbonate, and creatinine; and 5% to 15% for certain hormone and drug assays.
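The coefficient of variation quoted above is simply the standard deviation expressed as a percentage of the mean. A minimal sketch, using made-up glucose control replicates:

```python
import statistics

# Sketch: coefficient of variation (CV) from replicate analyses of a
# control material. The replicate values are hypothetical.

def coefficient_of_variation(values):
    """CV (%) = 100 * standard deviation / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

glucose_replicates = [98, 101, 99, 102, 100, 100, 97, 103]  # mg/dl, illustrative
cv = coefficient_of_variation(glucose_replicates)
print(f"CV = {cv:.1f}%")
```

A CV of about 2% for these replicates would sit comfortably in the 1% to 3% range the text cites for glucose.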

Information regarding the standard deviations and coefficients of variation for individual tests can be obtained by contacting the clinical laboratory performing them. Inaccuracy is much less predictable, however, and can account for much larger errors - as high as 50% or even 100%. The clinician should contact the laboratory whenever an unexpected result is encountered. Transcription and specimen identification errors can be detected by repeat analysis of the original specimen or of a freshly drawn specimen, usually at no cost to the patient.

Laboratories can provide lists of substances that frequently interfere with analyses, either in vitro or in vivo. If interference is suspected that has not been previously documented, the laboratory will often confirm or rule out the suspected compound by analyzing original specimens that have been spiked with the suspected substance.

[paragraph] Biological variation and "normal range." Correct interpretation of a laboratory test result should consider biological variability, which may be genetically or environmentally determined and which is superimposed on the analytical sources of variation. A patient's test result is compared with the normal range reported by the laboratory. This seemingly simple process is confounded by a number of problems.

First, many reported normal ranges are derived from the literature, and are thus based on studies using sample populations, specimens, analytical methods, or statistical techniques that do not reflect those used by the laboratory itself.

Second, most normal ranges reported in the literature are based on the erroneous assumption that data obtained from measurements performed on a sample population of healthy individuals follow a Gaussian distribution; the authors then calculated the 95% confidence range (x ± 2s). In fact, studies of many different laboratory tests repeatedly demonstrate that while the distributions of several are close to Gaussian, those of the majority are markedly skewed, peaked, or flattened. Serum creatinine and urea nitrogen determinations performed on an appropriately large population of apparently healthy individuals consistently reveal tailing distributions.
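The consequence of assuming normality can be shown numerically. The data below are fabricated and deliberately right-skewed, like the creatinine distributions described above; the Gaussian x ± 2s interval is compared with the nonparametric central 95% of the observed values.

```python
import statistics

# Sketch: Gaussian vs. nonparametric reference intervals on skewed data.
# The 20 values are hypothetical and deliberately right-skewed.

values = sorted([0.7, 0.8, 0.8, 0.9, 0.9, 1.0, 1.0, 1.0, 1.1, 1.1,
                 1.2, 1.2, 1.3, 1.4, 1.6, 1.9, 2.4, 3.1, 4.0, 5.2])

# Gaussian assumption: mean +/- 2 standard deviations.
m, s = statistics.mean(values), statistics.stdev(values)
gaussian_low, gaussian_high = m - 2 * s, m + 2 * s

# Nonparametric: 2.5th and 97.5th percentiles of the observed data
# (simple linear interpolation between order statistics).
def percentile(sorted_vals, p):
    idx = p / 100 * (len(sorted_vals) - 1)
    lo = int(idx)
    frac = idx - lo
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

np_low, np_high = percentile(values, 2.5), percentile(values, 97.5)

print(f"Gaussian:      {gaussian_low:.2f} to {gaussian_high:.2f}")
print(f"Nonparametric: {np_low:.2f} to {np_high:.2f}")
# On skewed data the Gaussian lower limit can even turn negative, an
# impossible concentration; the percentile-based interval cannot.
```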

Third, ambiguity surrounds the term "normal," which statistically refers to the probability function described by the Gaussian curve but which may carry the clinical connotation of "healthy" or "without risk." These laboratory and clinical definitions are not necessarily equivalent. Serum cholesterol values in the range found in some "normal" adults in the United States, for instance, are probably unhealthy because they suggest a risk of atherosclerosis.

It should be noted here that the degree of biological variation (e.g., serum creatinine) in a given individual over time (intra-individual variation) approaches the degree of variability observed between different individuals at any given point in time (interindividual variation). For certain tests, however, intra-individual variation is much narrower than the range of values encountered in the sample population. Examples include serum alkaline phosphatase, immunoglobulins, cholesterol, triglycerides, and hemoglobin. Thus, even changes in a patient's test value that do not exceed the so-called normal range may nevertheless be clinically significant.

Many factors can contribute to the biological variation observed for various metabolites. While a 95% confidence range of 0.6 to 1.2 mg/dl of serum creatinine can be repeatedly obtained for any selected healthy population, for example, variables will still include diet, physical activity, pregnancy, drug intake, and (especially) age and gender, since serum creatinine is related principally to muscle mass. A diet rich in cooked meats raises the level of serum creatinine; creatine present in meat is converted to creatinine during cooking. Dehydration can trigger a rise in serum creatinine. Diuretics and salicylates may elevate serum creatinine levels, whereas ingestion of anticonvulsant drugs may lower them.[1]

[paragraph] Pathophysiologic variability, predictive value. Knowing whether the test result falls within a given normal range does not provide enough information to diagnose disease. The physician and laboratory scientist must also understand diagnostic sensitivity, specificity, and the predictive value of the test as well as the effect of prevalence on predictive value.[6]

Predictive value theory takes into account ranges of values encountered in patients with the disease or abnormality under consideration as well as values encountered in patients with other disorders. Therefore, predictive value theory does not derive a final solution to problems associated with the interpretation of the laboratory test result. Rather, it provides another path to medical decision making that takes into account the probability of various outcomes and involves assigning relative value to each possible outcome or result.
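The effect of prevalence on predictive value mentioned above follows directly from Bayes' rule. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not values from the article:

```python
# Sketch: predictive value of a positive test from sensitivity,
# specificity, and disease prevalence (Bayes' rule). Figures are
# hypothetical.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects true disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test with 95% sensitivity and 95% specificity:
for prevalence in (0.50, 0.05, 0.001):
    ppv = positive_predictive_value(0.95, 0.95, prevalence)
    print(f"prevalence {prevalence:>6.3f}  ->  PPV {ppv:.3f}")
# As prevalence falls, an ever larger share of positives are false
# positives - which is why prevalence matters so much in interpretation.
```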

In addition, no consideration is given to the magnitude of variation in a test result. The disease or abnormality may be classified as present or not, based on whether test results fall outside an arbitrary cutoff point without regard to the magnitude of deviation. Other considerations are the size of the abnormal value itself and the time scale over which the change in test results occurred.

Most laboratory scientists continue to support the concept of distribution of values and the probability of separate distributions for both healthy and sick populations. Nevertheless, confusion has arisen regarding the application of normal limits that are obtained from these distributions, which become action values at which diagnostic or therapeutic procedures take place.

Sunderman[7] and Statland[8] have promoted the concept of using discriminant values, or decision limits, rather than normal or reference values (limits) in clinical interpretation of laboratory test results. These decision limits, which are based on scientific and clinical criteria, sometimes depend on environmental circumstances. In addition, they take into account the prevalence of different states of health or disease. The limits, therefore, vary as a function of the population studied. An example of this is the attempt to define states of health according to specific criteria, such as whether a woman takes oral contraceptives or not. Various pathophysiologic states and degrees of severity in the same patient must also be considered, as seen, for example, in chronic hemodialysis patients before and after dialysis.

* Role of point-of-care testing. Testing at the point of care represents a critical step forward in patient care and in reducing laboratory error and costs. The introduction of such testing, which significantly decreases turnaround time for obtaining laboratory results, permits real-time treatment of patients. The rapidity with which good data are generated makes the role of this system unique.

A physician making critical judgments about patient care must have confidence in the data, since there is rarely enough time to repeat or confirm the tests. For this reason, point-of-care testing should be limited to procedures that produce accurate data with fast turnaround (Figure V).


The crucial nature of this information requires that guidelines be in place to prevent errors. Frantic activity in the trauma, surgery, or intensive care areas may lead personnel to forget to perform procedures that preserve the integrity of the specimen. Proper documentation may also be abandoned in the rush to obtain and communicate test results for patients in imminent danger. A quality assurance program that stresses nonanalytical as well as analytical considerations can eliminate more than 95% of errors associated with such testing.

Recent experience has demonstrated four major preanalytical areas of concern: the patient, the specimen container, the origin of the specimen (artery, vein, catheter, other), and transport.[9-10] Also potentially troublesome is the postanalytical problem of reporting data. Point-of-care testing obviates most of these latter problems while offering earlier and specific diagnosis, faster and more frequent monitoring, and the opportunity to improve patient care and potentially reduce hospital costs.

Such testing systems have evolved over the years into sophisticated, automated, maintenance-free instruments. Equipment now on the market is small enough to be portable and fit at the bedside or in the surgery suite. Such a system can perform all necessary tests in one to three minutes. It is easy to operate and remains accurate and consistent over time. Periodic calibration is done automatically, saving training and operating time.[11] Relatively maintenance free, the instruments require a minimal amount of technical support.

Quality control of the testing components, including ion-specific electrodes, electronics, and calibrators, is done externally. Results may be programmed to appear on a computer screen, in print, or both.

Testing requires only a relatively small volume of heparinized whole blood (0.2 to 0.5 ml). The blood is aspirated directly into the instrument, minimizing the health provider's exposure.

All wastes are disposed of in a self-contained protective container. Reagents and electrodes are quickly and easily exchanged by means of a replaceable cartridge.[12]

Certainly the instrument to be chosen for a given point-of-care testing program should have a favorable cost/benefit ratio. Each institution has to determine for itself whether this technology is advantageous for analytical and clinical efficacy and patient care (Figure VI).

Figure VI

Ideal features of instruments for use at the point of care

Quality assurance

* Automatic calibration to help ensure accuracy
* System lockout features
* Mandatory programmable quality assurance (specified frequency and number of control samples)
* Instrument interpretation of quality control results (pass/fail)
* Operation by authorized users only

Data management capabilities

* Patient results
* Quality control results
* Calibration data

Ease of use

* Portable, for movement between testing sites
* Relatively maintenance free; all reagents, tubing, sensors, and waste receptacle in a self-contained disposable container
* Bar code capability to minimize transcription errors
* Minimal training and troubleshooting
* Minimal operating costs

Flexibility

* Able to run different analyte combinations on one instrument

Using such multianalysis profile systems for point-of-care application reduces preanalytical, analytical, and postanalytical errors in laboratory testing. By demanding minimal operator interaction and maintenance, and providing accuracy and precision, these instruments compare favorably to traditional ones. Those contemplating the prospect of obtaining such a system should consider the opportunities it may present for enhancing their institutions' ability to diagnose and treat critically ill patients quickly and effectively.

As laboratory personnel become technologically proficient in performing tests with point-of-care instrumentation in acute care settings, they concomitantly enjoy an extent of direct interaction with patients not experienced in the typical laboratory setting. New critical care testing modalities, the evolving roles of laboratory personnel, and the movement of the acute care laboratory out of the classical laboratory setting have expanded opportunities for laboratory scientists at many levels of responsibility.

[1.] Siest, G. Reference values: Their concepts and applications. In: Siest, G.; Henny, J.; Schiele, F.; et al., eds. "Interpretation of Clinical Laboratory Tests," pp. 3-25. Foster City, Calif., Biomedical Publications, 1985.
[2.] Westgard, J.O.; Carey, R.N.; and Wold, S. Criteria for judging precision and accuracy in method development and evaluation. Clin. Chem. 20: 825-833, 1974.
[3.] Tietz, N.W. A model for a comprehensive measurement system in clinical chemistry. Clin. Chem. 25: 833-839, 1979.
[4.] National Committee for Clinical Laboratory Standards. "Tentative Guidelines for Calibration Materials in Clinical Chemistry." Document C22-T. Villanova, Pa., NCCLS, 1982 (no longer in print).
[5.] Boyd, J.C., and Lacher, D.A. The multivariate reference range: An alternative interpretation of multiple profiles. Clin. Chem. 28: 259-265, 1982.
[6.] Galen, R.S., and Gambino, S.R. "Beyond Normality: The Predictive Value and Efficiency of Medical Diagnosis," pp. 9-28. New York, John Wiley & Sons, 1975.
[7.] Sunderman, F.W. Current concept of "normal values," "reference values" and "distribution values" in clinical chemistry. Clin. Chem. 21: 1873-1877, 1975.
[8.] Statland, B.E. Establishing decision levels in clinical chemistry. In: Grasbeck, R., and Alstrom, T., eds. "Reference Values in Laboratory Medicine," pp. 207-221. New York, John Wiley & Sons, 1981.
[9.] Fleisher, M., and Schwartz, M. Strategies of organization and service for the critical care laboratory. Clin. Chem. 36: 1557-1561, 1990.
[10.] National Committee for Clinical Laboratory Standards. "Blood Gas Preanalytical Considerations: Specimen Collection, Calibration, and Controls; Tentative Guideline." Document C27-T. Villanova, Pa., NCCLS, 1989.
[11.] Zaloga, G.P. Evaluation of bedside testing options for the critical care unit. Chest 97(suppl.): 185S-190S, 1990.
[12.] Strickland, R.A.; Hill, T.R.; and Zaloga, G.P. Bedside analysis of arterial blood gases and electrolytes during and after cardiac surgery. J. Clin. Anesthesia 1: 248-252, 1989.
COPYRIGHT 1992 Nelson Publishing

Article Details
Title Annotation: Special Supplement: Point-of-Care Testing
Author: Forman, Donald T.
Publication: Medical Laboratory Observer
Date: Sep 1, 1992