
Clinical Chemistry

The use of quality control materials in chemistry has been an accepted laboratory practice for more than 25 years, and not much has changed in the way we work with these materials. Only recently have we begun to question whether the frequency of testing, the analytical statistics used, or the rejection criteria for out-of-control results are appropriate for current technology.

We use quality control samples to obtain early warning of systematic error in our analytic systems, before erroneous results are released. Testing quality control samples in the usual manner is not effective for identifying random error until the random error rate increases and in effect becomes systematic error.

How can we determine appropriate quality control for chemistry laboratories and improve our early warning system? Let's consider several major aspects of the issue:

Frequency of controls. Increased stability of instruments and reagents has changed concepts about the frequency of testing controls. Dry chemistry reagents, such as those used in the Kodak Ektachem and Du Pont ACA analyzers, appear to be quite stable within the same lot. They are not nearly as likely as less stable liquid reagents to cause run-to-run or day-to-day variance in results.

How often controls should be tested to assure reliability of results is determined by the setting in which the testing occurs--the laboratory's prior history of variance, the results of proficiency testing, the type of instrumentation and reagents used, and the level of preventive maintenance.

Optimal frequency can also be related to the cost of quality control. For example, including a control sample with every run is not practical with low-volume or Stat instruments--the cost of quality control may greatly exceed that of running patient specimens. Nor is it practical with such instruments as centrifugal fast analyzers when they are operated in a multitest panel mode.

Our own experience with a centrifugal fast analyzer indicates that in the batch mode, 26 patient specimens can be processed simultaneously, and the reagent cost, including control, ranges from $0.02 to $0.16 per test. However, in a panel mode combining six tests on the same rotor, two of every three positions are occupied by a control sample. With only one patient specimen per rotor, patient throughput is markedly reduced, and the reagent cost per patient result quadruples.
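As a back-of-the-envelope illustration of this arithmetic, the sketch below compares reagent cost per patient result in the two modes. Every figure in it (rotor size, per-test cost, the exact position counts) is a hypothetical placeholder chosen only to show the mechanics, not data from our instrument; the actual multiplier depends on rotor layout and calibrator usage.

```python
# Illustrative cost arithmetic for a centrifugal fast analyzer.
# All figures below are hypothetical, chosen only to show the mechanics.

def cost_per_patient_result(positions_used, patient_results, cost_per_test):
    """Reagent cost per patient result; every occupied position consumes reagent."""
    return positions_used * cost_per_test / patient_results

COST_PER_TEST = 0.10  # dollars; assumed mid-range figure

# Batch mode: one test per rotor, 26 patient specimens plus a few controls.
batch = cost_per_patient_result(positions_used=30, patient_results=26,
                                cost_per_test=COST_PER_TEST)

# Panel mode: six tests for one patient, with two of every three positions
# occupied by controls (assumed 6 patient positions + 12 control positions).
panel = cost_per_patient_result(positions_used=18, patient_results=6,
                                cost_per_test=COST_PER_TEST)

print(f"batch: ${batch:.3f} per patient result")  # ~$0.115
print(f"panel: ${panel:.3f} per patient result")  # $0.300, roughly threefold here
```

With these invented numbers the cost roughly triples; once calibrator positions and repeat testing are counted as well, the ratio climbs toward the fourfold increase noted above.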

Not only does the cost of QC samples and the reagents used to run them rise, but the reduced throughput also drives labor expenses up. Moreover, as QC testing increases, so does the likelihood of random out-of-control alerts with consequent repeat testing.

Choice of QC materials. The ideal control serum should be identical to patient specimens. It should be a liquid human-plasma material, with analytes derived from human sources. It should remain stable for a long period of time even after the vial has been opened.

Unfortunately, no such material is available.

Human-based plasma with analytes of human origin is generally not offered commercially because of cost, the limited supply of such analytes, and the need to use materials that are free of hepatitis and HTLV-III viruses. Lyophilized control materials introduce errors associated with inaccurate reconstitution, incomplete solubility of some analytes, and vial-to-vial variation due to differences in the degree of lyophilization. Analytes such as alkaline phosphatase begin to change in activity shortly after reconstitution of the lyophilized material.1

The control matrix can cause other problems. Some immunoassay systems yield unusual results when performed on nonhuman reconstituted lyophilized control material. Free-drug assays for protein-bound drugs may also fail to react as predicted. Controlling these procedures is difficult with our current QC materials.

Liquid quality control materials present additional problems. Those containing ethylene glycol are not compatible with certain dry chemistry procedures. For example, in analytic systems such as the Ames Seralyzer that use cellulose strips impregnated with chemicals, the ethylene glycol reacts with the cellulose base. Moreover, it acts as a solvent, releasing some of the reagents from the strip.

Ethylene glycol also dissolves the gelatin carrier in film-based systems, such as the Kodak Ektachem. Because of its viscosity, this control material introduces errors in some direct-reading ion-selective electrode systems. It has recently been shown that the viscosity can cause an erroneous result in the patient specimen preceding the liquid control on the SMA II.2

Test concentrations. Some thought should be given to the most appropriate concentrations of an analyte in the QC sample. Ideally, we would monitor a number of points throughout the system's effective testing range. In order to be practical and efficient, however, we can choose certain critical levels. The decision level separating normality from abnormality is an important reference point that should be monitored.

For some analytes, such as creatinine or bilirubin, the decision level is close to the test procedure's lower detection limit. A quality control sample in this range provides relatively little information about the functional integrity of the procedure, reagents, and instrument. Thus it is also important to run a control in the mid-portion of a procedure's analytical range. On enzyme procedures, adjusting a control sample's activity to the high end of the analytically valid range can provide information about a change due to reagent deterioration.

Whole blood systems. Some of the new analytical systems that accept whole blood specimens bring new quality control challenges. On the Abbott Vision, for example, the first step in the analytic process is centrifugation of the whole blood in a cassette placed inside the instrument. Following separation of plasma from cells, the specimen is automatically transferred to the analytic chamber in the cassette. An appropriate quality control sample for this system should be able to test the integrity of the cell separation, the measurement of the resulting plasma, and the analytic process.

With another new analytic system, a whole blood specimen is placed in a chamber that has channels leading to several ion-selective electrodes. The manufacturer expects to produce cards for a variety of analytes, including sodium, potassium, bicarbonate, chloride, urea, and glucose. Since whole blood and serum interact differently with ion-selective electrodes, a whole blood control sample that simulates a patient specimen needs to be developed.

Reagent strip tests for determining glucose levels in whole blood are commonly used in hospital wards, doctors' offices, and the homes of diabetic patients. Because immediate medical decisions leading to significant therapeutic interventions, such as an insulin dosage change, may be based on the test results, it is important to closely monitor the test's functional integrity.

One problem with the strips is that they can rapidly lose reactivity when exposed to moisture or high humidity. And simple instruments that read the reflectance of the strips can change calibration when the battery charge is low. Erroneous readings can also result when the instrument's optics are dirty.

A whole blood control product has recently been introduced for glucose reflectance strips. Protocols for the quality control of capillary glucose testing now need to be developed for use in doctors' offices, wards, and home testing.

Data handling and analysis. For almost 25 years, we have calculated the mean and standard deviation of quality control results, rejecting the 5 per cent of runs that fall outside the 2 SD range. Westgard and others, however, have studied the power of this type of analysis and its rejection criteria and have proposed new rules for rejecting or accepting test runs.3, 4 These new guidelines appear to be more efficient in discriminating between random events and systematic errors.

Although it is somewhat complex to apply, as compared with rejecting runs that exceed 2 SD limits, the Westgard multirule scheme presents a clearly defined plan of action that theoretically results in rejection of fewer analytic runs.
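To make the multirule logic concrete, here is a minimal sketch of the commonly cited rules (1-2s as a warning, 1-3s, 2-2s, R-4s, 4-1s, and 10-x as rejection criteria) applied to control results expressed as z-scores. It is an illustration of the published scheme under simplifying assumptions, not a validated implementation: the full scheme also evaluates rules across two control levels within a run, while this version works on a single chronological series.

```python
# Minimal sketch of the Westgard multirule scheme, applied to control results
# expressed as z-scores (deviations from the target mean, in SD units).
# Simplified: the published rules also run across two control levels within
# a run; here they are applied to one chronological series only.

def westgard(z_scores):
    """Evaluate the most recent control observation.

    z_scores: chronological list of z-scores, most recent last.
    Returns (decision, rule).
    """
    z = z_scores[-1]
    if abs(z) > 3:
        return "reject", "1-3s"          # one control beyond 3 SD
    if len(z_scores) >= 2:
        a, b = z_scores[-2], z_scores[-1]
        if abs(a) > 2 and abs(b) > 2 and a * b > 0:
            return "reject", "2-2s"      # two consecutive beyond 2 SD, same side
        if abs(a - b) > 4:
            return "reject", "R-4s"      # range of consecutive pair exceeds 4 SD
    last4 = z_scores[-4:]
    if len(last4) == 4 and (all(v > 1 for v in last4) or
                            all(v < -1 for v in last4)):
        return "reject", "4-1s"          # four consecutive beyond 1 SD, same side
    last10 = z_scores[-10:]
    if len(last10) == 10 and (all(v > 0 for v in last10) or
                              all(v < 0 for v in last10)):
        return "reject", "10-x"          # ten consecutive on one side of the mean
    if abs(z) > 2:
        return "warning", "1-2s"         # screening rule: inspect, don't reject outright
    return "accept", None

print(westgard([0.4, -1.1, 2.3]))  # ('warning', '1-2s')
print(westgard([2.1, 2.4]))        # ('reject', '2-2s')
```

Note how the 1-2s rule serves only as a screen: a run is rejected only when one of the counting rules confirms that the excursion looks systematic rather than random.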

Rapid analysis and interpretation of quality control data have become commonplace with widespread use of computers. Manufacturers now include quality control programs within new instruments--data are analyzed immediately as the run is performed. Some systems use the traditional mean ± 2 SD decision points, while others have incorporated the Westgard multirule method.

Regional and national computer-based systems--the CAP regional QAS programs, for example--enable subscribing laboratories to compare themselves with peers using the same analytic system. When QC data are submitted on a daily basis, it is possible to rapidly identify systematic problems.

Some manufacturers are considering instruments that require users to run a quality control sample and obtain an acceptable result before an analytic run can be performed. This type of system is especially useful in settings that lack fully trained medical technologists. Test marketing indicates, however, that prospective users don't like this feature and might either override it or not purchase the instrument at all.

QC algorithms (such as the Westgard multirule program), on-line computer QC analysis, and QC programming embedded within analytical instruments have diminished the need to visually inspect Levey-Jennings charts for drift and trends. Because of their decreased usefulness and the limited wall space in open-design laboratories, these charts will be posted less often.

Specimen collection and handling. A vital component of laboratory testing, specimen collection and handling, has defied control. Securing positive patient identification and preserving the integrity of specimens throughout handling, aliquoting, and testing have always been a problem. Machine-readable bar codes on labels may help improve specimen identification. But there are still no effective control methods to assure that the specimen has been obtained from the correct patient, that it is labeled properly, or that the identification has been maintained throughout the handling process.

In therapeutic drug monitoring, glucose testing, and blood gas determinations, knowledge of the exact time of specimen collection is essential for interpretation of results. No strategy has been found to control this important variable, and it remains a frequent source of interpretation error. We need a means of easily and consistently noting the correct time of specimen collection and a means of checking this step.

Unsolved problems. From our discussion, it is apparent that a number of unsolved problems exist. With the evolution of more stable instruments and reagent systems, their miniaturization, and the trend toward random access analysis of different analytes, it's time to reexamine the traditional requirement of running a control sample with every analytic batch. New guidelines regarding the frequency of control analysis should be developed.

We favor an algorithmic scheme because a single set of guidelines for a specific instrument and reagent system will probably be inappropriate in some settings. The algorithm should take into account such factors as the specific instrument and reagent system; its age, previous quality control history, proficiency survey results, and preventive maintenance history; the system's frequency of use; and such ancillary quality control methods as average of normals analysis and delta check procedures.
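No such algorithm has yet been standardized. The sketch below, in which every factor weight and threshold is invented for the example, only illustrates the shape such a scheme might take: score the stability of the system from the factors listed above, then map the score to an interval between control samples.

```python
# Illustrative only: one possible shape for a control-frequency algorithm.
# Every weight, factor, and threshold here is invented for the example.

def runs_between_controls(instrument_age_years,
                          alerts_per_100_runs,   # Westgard multirule alert rate
                          sdi_drifting,          # proficiency SDI varies over time?
                          maintenance_current,   # preventive maintenance up to date?
                          dry_reagents):         # stable dry-chemistry packs?
    """Score system stability and map it to a control interval."""
    score = 0
    score += 2 if dry_reagents else 0
    score += 1 if maintenance_current else -1
    score -= 1 if instrument_age_years > 5 else 0
    score -= 2 if sdi_drifting else 0
    score -= int(alerts_per_100_runs // 2)  # frequent alerts argue for tighter control
    if score >= 3:
        return 20   # very stable, well-documented system
    if score >= 1:
        return 10
    return 1        # unstable or unproven: a control with every run

# A new, well-maintained dry-chemistry analyzer with a quiet QC history:
print(runs_between_controls(1, 1, False, True, True))   # -> 20
# An aging wet-chemistry system with drifting survey results:
print(runs_between_controls(8, 6, True, False, False))  # -> 1
```

Ancillary methods such as average-of-normals analysis and delta checks would enter such a scheme as additional terms, loosening the control interval when they are in routine use.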

One other major problem remains. We need a stable universal liquid-based control material that can work with a variety of systems, including the newer dry reagent chemistry analyzers.

Future challenges. Through objective studies, we must determine the degree to which different components of the analytic process are susceptible to variance. The instrument and reagent system is a major factor.

At one end of the spectrum of instruments, we have the microprocessor-controlled reflectance and transmittance spectrometers that use a double-beam reference system in the photometer. Some of these systems, especially those using a referenced xenon flash light source, are extremely stable. At the other end of the spectrum is the vintage colorimeter using a tungsten filament light source and unsophisticated photometric sensors.

It is necessary to collect data indicating the relative stability and susceptibility to drift of these different classes of photometers. Instruments such as centrifugal fast analyzers that have an internal referencing system should be more stable than some other instruments.

There is great variability in the susceptibility of reagent systems to systematic error. Dry reagent chemistry formulations, especially those that are individually wrapped or stored with a desiccant, are extremely stable. There is little variance from pack to pack within the same lot. Conversely, liquid reagents, especially those that must be prepared by a chemist, show run-to-run variability.

Some types of reagents are inherently less stable than others. The relatively unstable category includes p-nitrophenyl phosphate, used in the alkaline phosphatase reaction, enzyme reagents containing NADH, and reagents for enzyme immunoassays. Glucose oxidase reagents and dye reagents, such as bromcresol green, are extremely stable. It should be possible to develop scoring systems for the relative likelihood that systematic errors will develop with specific instrument and reagent systems.

Other factors that need to be examined and scored relate to specific instruments. Certain brands and models are more stable than others. An instrument's age and its maintenance and repair history are significant factors in determining the level of quality control required.

Indices relating to technologists and other analysts can be developed to predict the necessary level of quality assurance. The analyst's level of training appears to be a key factor, but the amount of experience and supervision should also be considered in predicting quality control requirements. Objective data from within the laboratory may also be valuable as a predictor.

Previous quality control data, including the frequency of alerts in the Westgard-Shewhart multirule analysis and the coefficient of variation of all QC data, should be examined. The values of all controls running through the system, not just those recorded for calculating QC parameters, should be used for this purpose.

Another index that should be investigated for its predictive value is how the laboratory and the specific instrument perform with proficiency survey samples. Of particular usefulness is a changing standard deviation index (SDI), as opposed to a constant bias from group consensus values. A changing SDI indicates variance from time to time in relation to the group.
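For reference, the SDI for a survey event is simply (laboratory result - group mean) / group SD. The sketch below, with hypothetical survey figures, shows the pattern that distinguishes a swinging SDI from a constant bias.

```python
# Standard deviation index (SDI) across proficiency survey events.
# All survey figures below are hypothetical.

def sdi(lab_result, group_mean, group_sd):
    return (lab_result - group_mean) / group_sd

# (lab result, group mean, group SD) for four consecutive survey events
surveys = [(102, 100, 2.0), (97, 100, 2.0), (104, 100, 2.0), (95, 100, 2.0)]
print([sdi(*s) for s in surveys])  # [1.0, -1.5, 2.0, -2.5]: a swinging SDI

# A constant bias would instead look like [1.5, 1.4, 1.6, 1.5]:
# offset from the group consensus, but stable from event to event.
```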

We hope these factors can be used to develop an index for each analytic system, laboratory, and analyst--an index that will allow us to accurately predict the kind of quality control that is appropriate. This technique can better provide early warning of impending problems in the analytic system. We should be alerted before erroneous results, which might compromise patient care, are generated and reported.

1. Bowers, G.N. Measurement of total alkaline phosphatase activity in human serum. Clin. Chem. 21: 1988-1995, 1975.

2. Winter, S.D., et al. "Carryback": Effect of viscous liquid controls on the preceding sample analyzed with the SMA II continuous-flow analyzer. Clin. Chem. 31: 1896-1899, 1985.

3. Westgard, J.O., and Groth, T. Power functions for statistical control rules. Clin. Chem. 25: 863-869, 1979.

4. Westgard, J.O., and Groth, T. A multi-rule Shewhart chart for quality control in clinical chemistry. Clin. Chem. 27: 493-501, 1981.

By Daniel M. Baer and Richard E. Belsey
Medical Laboratory Observer, September 1, 1986 (Quality control in the new environment, part 1)