The technologist's role in quality management of off-site testing

Checking the quality of components used in the test analysis and performing periodic maintenance have been found inadequate to ensure the day-to-day reliability of test results. While component control is the only approach available for quality management in certain areas, such as microbiology, the function of the system as a whole should also be checked whenever possible to make sure results are reasonably precise and accurate.

Monitoring the integrated analytic system--process control. Many laboratories find feedback from knowledgeable clinicians a useful aid in monitoring the reliability of their analytic systems. When laboratory information doesn't fit the patient's clinical status or jibe with other results, these clinicians question whether the difference represents a patient problem or a laboratory anomaly. Doubts can be particularly strong in an office testing facility, where the doctor is probably quite familiar with the patient's condition.

Nevertheless, some clinicians will think it unlikely that a flaw in the testing process could be responsible for anomalies. This narrows the flow of useful feedback. Moreover, even with timely feedback from knowledgeable clinicians, some significant problems will still escape early discovery. Therefore, most laboratories depend on a more systematic approach for monitoring result reliability.

In one approach, patient specimens are sent to a reference laboratory for parallel testing. Conventional laboratories usually do this only when validating a new test method. Once the reliability of the method is established, they use pseudospecimens to monitor the analytic system on a day-to-day basis.

Some off-site testing facilities may find it worthwhile to use actual patient specimens to monitor the day-to-day reliability of their analytic systems. In a hospital, for example, it may be practical to select a portion of all patient specimens for retesting. An office testing facility may choose to retest specimens with abnormal results or all of the specimens that are run on a particular day. Confirmation of an abnormal result by a reference laboratory can raise the clinician's confidence that the result is reliable.

Parallel or repeat testing can, however, be an expensive and shaky means of ensuring result reliability. For example, results erroneously reported to fall within the reference range may be overlooked even though they are clinically significant for the patient's care. Consequently, laboratories use commercially prepared pseudospecimens (controls) containing known analyte concentrations, adjusted to critical decision levels, to check the accuracy and precision of their analytic systems.

Ideally, a pseudospecimen should mimic actual specimens. This may not be possible because the material used must remain stable for an extended period of time. In addition, most reagent systems are designed to provide accurate answers with human specimens, but nonhuman material may be needed to augment a pseudospecimen's level of some analytes in order to reach critical decision levels.

Thus results produced with a pseudospecimen can exhibit significant bias yet still be used to verify a system's day-to-day calibration and performance. Because the material is stable, it is possible to define the expected result variability and develop limits that signal the need to investigate potential problems. Manufacturers' recommendations concerning the frequency of such checks should be followed.

Data recording, analysis, and interpretation. A quality management program for an off-site testing facility generally requires transformation of day-to-day quality control data into information that will quickly tell the operator whether to report the results or hold them until a potential problem can be fully evaluated.

The protocols for data recording and interpretation should be simple, self-explanatory, and minimally time-consuming. Day-to-day data can be recorded either manually or electronically, then evaluated numerically or graphically.

Setting limits. Preliminary analyses are necessary to determine the expected variability of each constituent being tested by an analytic system. This involves calculating the mean and standard deviation for pseudospecimens (Figure I) or the average difference and standard deviation for split patient specimens (Figure II). The math, which requires a calculator that can determine square roots, is usually beyond the capabilities of nontechnically trained off-site testing personnel.

The resulting limits can be used directly (Figure III) or converted to a simple visual tool (Figure IV) with explicit instructions to the operator about the release or recall of patient results. The chart in Figure IV is derived by turning the normal curve on its side and defining the mean and the 2 and 3 SD limits (Figure V).

Data interpretation. The chart can be annotated to indicate that two consecutive control results falling in the yellow area or one in the red area indicates that patient results should be held until the system is determined to be functioning properly. If pseudospecimens are used to check the system's functionality, the operator can record each control result on the chart and easily determine whether the results should be released.
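The hold-or-release rule just described (two consecutive controls in the yellow band between 2 and 3 SD, or a single control in the red area beyond 3 SD) amounts to a short check. This Python sketch is an illustration; the zone names mirror the chart colors, and the function names are assumptions.

```python
def zone(value, mean, sd):
    """Classify a control result by chart zone: green (within 2 SD),
    yellow (between 2 and 3 SD), red (beyond 3 SD)."""
    d = abs(value - mean)
    if d <= 2 * sd:
        return "green"
    if d <= 3 * sd:
        return "yellow"
    return "red"

def hold_results(controls, mean, sd):
    """True if patient results should be held: one red-zone control,
    or two consecutive yellow-zone controls."""
    zones = [zone(v, mean, sd) for v in controls]
    if "red" in zones:
        return True
    return any(a == "yellow" and b == "yellow"
               for a, b in zip(zones, zones[1:]))

# Hypothetical system: mean 100, SD 2
hold_results([101, 99, 103], 100, 2)   # all green: release
hold_results([105, 104.5], 100, 2)     # two consecutive yellow: hold
hold_results([107], 100, 2)            # one red: hold
```

The operator never sees this logic; it is simply what the annotated color chart encodes, so that reading the chart and applying the rule require no calculation at the bench.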

When the split-specimen method is used, the difference between the testing facility and reference laboratory results is not immediately available. So patients have to be advised of a possible problem with the test data and told that changes in management will be deferred until the problem is resolved.

If there is a significant disparity, it cannot be assumed that the reference laboratory result is correct. The problem must be defined, and one must determine which aspect of the testing process is at fault. Since these charts are not excessively sensitive, off-site testing personnel could miss early signs of impending failure--when all of the results are still within the 2 SD limit--that might be picked up by an experienced technologist.

Consequently, the consulting technologist's periodic review of the quality control charts could disclose more subtle indications of potential problems, such as a shift or drift in the data (Figure VI). If the consultant is able to check day-to-day QC results frequently, the process could be simplified. The operator would simply record the results, and the consultant would review the data to see whether the system is functioning properly.
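The shift and drift patterns the consultant looks for can also be screened mechanically. In this sketch a shift is flagged as a run of consecutive results on the same side of the mean, and a drift as a run of results each moving in the same direction; the run length of seven is a common convention assumed here, not a figure from the article.

```python
def detect_shift(results, mean, run_length=7):
    """Flag a shift: run_length consecutive results all above (or all
    below) the mean, even if each is within the 2 SD limit."""
    run, prev = 0, 0
    for r in results:
        s = 1 if r > mean else -1 if r < mean else 0
        run = run + 1 if (s != 0 and s == prev) else (1 if s != 0 else 0)
        prev = s
        if run >= run_length:
            return True
    return False

def detect_drift(results, run_length=7):
    """Flag a drift: run_length consecutive results, each higher
    (or each lower) than the one before."""
    run, prev = 0, 0
    for d in (b - a for a, b in zip(results, results[1:])):
        s = 1 if d > 0 else -1 if d < 0 else 0
        run = run + 1 if (s != 0 and s == prev) else (1 if s != 0 else 0)
        prev = s
        if run + 1 >= run_length:  # n same-direction steps span n + 1 points
            return True
    return False
```

A consultant reviewing charted data applies this kind of test by eye; automating it would matter only if the day-to-day QC results were recorded electronically.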

Problem identification. The off-site testing staff is not equipped to pinpoint and correct the problem when test results are unexpected or seem wrong. Problem identification and correction as well as prevention of similar occurrences in the future require a systematic approach and the knowledge and skill of a medical technologist. The use of a checklist can help guard against omitting a vital step.

Once a potential problem has been identified, a complete clerical check of the test run should be performed. Most laboratory errors are clerical--transposed letters or digits, mistakes in calculations, results entered on the wrong report form, and so on.

If no clerical errors are found, the test procedure should be retraced step by step as a dry run. Look for any departures from the written procedure. Pay special attention to the use of proper reagents, the order of reagent additions, volumes, incubation times, instrument wavelengths, and other instrument settings.

Examine the equipment. Is the pipet working properly, and is it calibrated correctly? Check the temperature of incubators and other temperature-regulated equipment. Determine whether reagents and specimens have been allowed to reach room temperature prior to testing, if that is necessary for the test. Check the instrument's calibration, and look for any obvious sources of malfunction, such as a plugged aperture in a cell counter, dirty optics, or a burned-out light bulb on a photometer.

If the cause of the problem still has not been found, substitute calibrators, controls, and reagents, and repeat the test analysis. Compare with prior analyses such details as the colors of solutions and reactions, volumes in tubes, how the instrument sounds, and the speed of reactions and processes (such as the cell counting cycle in a cell counter). Valuable problem-solving clues may surface when this is done.

Examination of quality control and standardization data can also point to the cause of a problem. If standards appear stable and patient results still seem reasonable but one or more control values are incorrect, the culprit is likely to be a bad control. Start by retesting a new control. If controls and patient results are proportionally altered but the standard still appears to act normally, then the standard may have deteriorated. If all of the readings are unusual, suspect a problem with the reagents or the instrument.
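The reasoning in this paragraph is essentially a small decision table, sketched below. Python, the flag names, and the simplification of "all readings unusual" to an abnormal standard reading are assumptions for illustration; the returned strings paraphrase the article's conclusions.

```python
def likely_cause(standard_reads_ok, patient_results_ok, control_values_ok):
    """Map three yes/no observations about a run to the most likely
    culprit, following the troubleshooting logic described above."""
    if not standard_reads_ok:
        # All readings unusual: suspect the reagents or the instrument.
        return "reagents or instrument"
    if control_values_ok:
        return "no obvious analytic problem"
    if patient_results_ok:
        # Standard stable, patients reasonable, controls off: bad control.
        return "bad control"
    # Controls and patients proportionally altered, standard reads normally.
    return "deteriorated standard"
```

As in the article, a "bad control" verdict would be confirmed by retesting with a new control before any further action is taken.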

Sometimes consultation can help solve a seemingly unsolvable problem. An independent observer may see an obvious factor that has been overlooked. Certain problems, however, require not just any observer but a specialist with expertise in the system. The consultant technologist should know whom to call and what questions to ask.

Records must be maintained on all evaluations resulting from system failures or from day-to-day monitoring of an analytic system. They should include information about any corrective action taken and evidence of review by the professional responsible for the off-site testing facility.

In such facilities, the medical technologist consultant offers many skills essential to managing the quality of testing performed by a staff lacking technical training. The tools used in the conventional laboratory must be modified so that those with a different set of skills and priorities can use them effectively to produce reliable results.

This concludes our series of articles. It should be clear by now that, with thought and ingenuity, the medical technologist can have a significant impact on the quality of patient test analyses performed in off-site facilities.

Table: Figure I Setting limits for day-to-day decisions about system reliability when using pseudospecimens

Table: Figure II Setting limits for day-to-day decisions about system reliability when using split patient specimens

Table: Figure III Acceptable control limits for off-site testing facility

Table: Figure IV Chart for day-to-day QC using pseudospecimens or split patient specimens

Table: Figure V Transformation of normal curve to analyze QC data

Table: Figure VI Interpretation of charted day-to-day control results
COPYRIGHT 1987 Nelson Publishing
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation: part 4
Author: Belsey, Richard; Baer, Daniel M.
Publication: Medical Laboratory Observer
Date: Dec 1, 1987
Words: 1610
