
The technologist's role in quality management of off-site testing.

Quality management includes guidelines to limit the variability of test results, monitoring of analytic systems, steps to correct failures, and records that facilitate future problem solving.

Most medical technologists have some involvement in their laboratory's quality assurance program but are not responsible for its management or supervision. At off-site testing locations, however, the consulting technologist will often assume full professional responsibility for quality assurance.

To emphasize this role, we will use the term "quality management" while reviewing the rationale, principles, and salient features of laboratory quality assurance programs.

* Rationale for quality management. Ideally, a test result should always represent the patient's clinical condition and be absolutely reproducible. Repetitive testing of a single specimen with any available analytic system will, however, yield a family of values rather than a single result (Figure I). These fluctuations are expected and can be predicted if one knows some basic information about the system and the analysis being performed. Several other factors, including some associated with the patient, specimen, or analytic system, can interfere with the determination of the "true" value at the time of test sampling.

Imprecision (the inability to obtain identical values with repetitive testing) or inaccuracy (the inability to determine the true value) becomes an issue when it interferes with the interpretation of results near a critical decision level.

A quality management program serves three functions: It minimizes the risk of producing unreliable (inaccurate or imprecise) results; alerts operators when an analytic system begins to fail; and documents the facility's preventive posture, problem identification, and corrective action. In this way, the program attempts to support the production of consistently reliable results that can be used with confidence.

The clinician faces much uncertainty when dealing with patient care problems and, like an army general, will often prefer to take action on immediately available information rather than delay a decision pending more "perfect" data. The general is trained to factor the reliability of the information on hand into the decision-making process. Although clinicians do this with other aspects of the patient evaluation, many fail to consider the reliability of lab results. Yet the timeliness of results can be an important factor that modifies quality management standards.

An off-site testing facility's quality management program must address the clinical and financial implications of trading accuracy and/or precision for timely test result availability. For example, if a physician's office is located near a reference laboratory that provides both rapid and reliable results, it may be hard to justify doing less accurate testing in-office.

On the other hand, physicians who practice in isolated regions may prefer to do their own testing because send-outs would entail an unreasonably long wait for results. In any event, results must be reliable enough to meet diagnostic and/or therapeutic needs.

* Salient features of quality management. To develop a quality management program, one must evaluate each aspect of the testing process to identify potential sources of increased variability or inaccuracy and determine which have the greatest bearing on production of reliable results in that particular clinical setting.

Such a program generally includes 1) primary prevention through ongoing training, the use of procedure manuals, periodic maintenance, setting component standards, and monitoring components for compliance with guidelines; 2) monitoring of the analytic system for signs of impending failure before erroneous results are reported; 3) a systematic and thorough approach to identifying, evaluating, and correcting failures; and 4) record keeping to facilitate future problem-solving efforts and to document the facility's professionalism.

It is important to determine which of these concerns are the most critical to a particular site and which can be accomplished most easily. This enables the consulting technologist to develop an appropriate quality management program. Figure II lists key components of such a program.

It should be emphasized that adequate record keeping is a cornerstone of risk management programs for most health care institutions. Comprehensive records can document that proper techniques were used to minimize the possibility of error and that corrective action was taken promptly when called for.

Such records also facilitate identification of problems and document recurring difficulties. The technologist consultant must help staff members at an off-site testing facility to understand the importance of records and determine how much record keeping is necessary. One person should be designated to review the records periodically.

* Tools for quality management of off-site testing. The off-site testing facility's quality management needs are superficially similar to those of most other laboratories (Figure II). In the off-site facility, however, the nature of the staff (including its responsibility for other patient care activities), instrumentation, test volume, and the testing environment require a somewhat different approach to principles of quality management.

* Managing the quality of input and output variables-primary prevention. Laboratory testing is a complex function that generally demands the operator's full attention and concentration. The testing facility's space design is a key influence; this area should be conveniently organized and should minimize noise and other distractions. If you become involved in planning an off-site testing facility, also remember the importance of developing a design and policies aimed at protecting patients and personnel from the microbiologic, chemical, electrical, and fire hazards that can be associated with testing.

Many factors before and after the analysis itself can affect the precision and accuracy of test results (Figure III). Off-site testing staffs frequently don't take this into account because they don't appreciate the complexity of the testing process or of their workplace.

A simple written procedure can provide guidelines for patient preparation, specimen collection and protection, specimen rejection, and result reporting. A simple method for tracking a specimen throughout the testing and reporting process is essential.

Half of all erroneous test results can be traced to clerical error. This error rate can be reduced by simplifying the design of forms for easy understanding and use, and by minimizing the number of clerical steps, including transcriptions.

The issue of calculations in an off-site testing facility may seem irrelevant, but the methods some of these sites use to multiply a result by a dilution factor can be critical to the accuracy of reported results. Given the high staff turnover in off-site testing facilities, it's a good idea to print clear instructions right on the forms used for calculations.
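The dilution arithmetic described above can be reduced to a single multiplication. The following is a minimal illustrative sketch (the function name, analyte, and values are hypothetical, not from the article) of the kind of calculation that might be printed as an instruction on a reporting form:

```python
def reported_value(measured, dilution_factor):
    """Scale a result measured on a diluted specimen back to the
    concentration of the original, undiluted specimen."""
    return measured * dilution_factor

# A specimen diluted 1:10 that measures 35 mg/dL is reported as 350 mg/dL.
print(reported_value(35, 10))  # 350
```

Writing the rule out this explicitly, whether in code or on a preprinted form, helps a frequently changing staff apply the dilution factor the same way every time.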

We hope readers already recognize that the operator is an integral component of the analytic system. Both "good quality" and "poor quality" labs tend to use the same reagents, systems, and supplies. The difference generally is in their staffs' ability and desire to produce reliable results.

A quality management program should provide adequate initial and ongoing training and require each staff member to demonstrate an ability to produce reliable results. Training must reflect the off-site testing facility's work priorities and pressures. It should be problem-oriented, dealing with the issues the staff believes are important.

The technologist consultant has to listen to staff members in order to identify their perceived problems and develop the strategies that will help them recognize other important potential problems. A consultant's success depends on his or her creativity and ability to work person-to-person with each staff member.

* Setting the standard for performance-written procedures and policies. Most laboratories use a written procedure to insure the comparability of test results from different technologists. The procedure generally specifies rules for anything that impinges on the precision or accuracy of test data, including patient preparation, specimen handling, the materials and conditions of the test analysis, and detailed method guidelines for each analytic step and the data analysis necessary to derive a reportable result.

Procedure manuals set the standard in many laboratories but are seldom used at the bench. Because of the off-site testing staff's limited understanding of the analytic process, it is important to develop simplified procedure guides for training and for guidance during the actual test analysis.

Some staff members may have trouble recognizing either an absurd result or one that calls for immediate notification of a patient's physician. Since one of the important reasons for maintaining an off-site testing facility is to achieve early recognition of an "abnormal" result or an "action" value, it is critical that prompt reporting guidelines be featured prominently in a procedure guide that is kept at or near the testing or reporting area.

Operators must understand the importance of adhering to processing guidelines. As an educational exercise, they can be asked to run a single control and repeat the analysis five or 10 times during the day. Most of the variability of subsequent results can easily be attributed to slightly different timing or other technical variations in the procedure.

The next step is to review the procedure with the staff to see if analytic tolerances are specified and whether the system can be tricked into reporting an erroneous result with the control specimens. Understanding that analytic variability is expected and predictable is a fundamental concept. It defines the need for a quality management program, which would keep the variability within clinically acceptable limits.
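The repeated-control exercise described above can also be summarized numerically. This is an illustrative sketch only, with made-up control values; it computes the mean, standard deviation, and coefficient of variation (CV) that quantify the day's analytic variability:

```python
from statistics import mean, stdev

# Hypothetical glucose control run ten times during one day (mg/dL).
runs = [98, 101, 99, 103, 97, 100, 102, 98, 101, 99]

avg = mean(runs)
sd = stdev(runs)            # sample standard deviation
cv = 100 * sd / avg         # coefficient of variation, in percent

print(f"mean = {avg:.1f} mg/dL, SD = {sd:.2f}, CV = {cv:.1f}%")
```

Seeing the scatter expressed as a CV makes the lesson concrete: the variability is real, measurable, and predictable, which is exactly why limits can be set around it.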

Periodic maintenance checks of an analyzer's critical components-such as its heating or cooling system, optics, and lubrication and cleanliness-are also part of laboratory quality management. Instruments can be expected to require occasional repairs because of normal wear. Taking proper care of the system during routine processing extends the interval between failures and reduces analytic variability.

To analyze breakdowns, particularly if the technologist consultant is working with a number of the same systems in different sites, it helps if each site has a simple instrument log noting the completion of scheduled maintenance, symptoms that might have presaged the failure, the nature of the failure, what was done to identify the underlying problem, and the required repairs. This practice can save considerable time when dealing with future failures.

Most tests require complex reagents, standards, controls, or supplies to yield a result. Documentation to insure lot-to-lot consistency and comparable results is essential. When a new lot arrives, most large laboratories perform a double run with the component currently in use. Microbiology materials sometimes need to be checked to make sure that they are free of contamination, will support growth (or inhibit it in the case of selective media), and that biochemical reactions function properly.

Although the reagent packs favored by off-site testing facilities may seem to obviate the need for such precautions, some of these systems are subject to premature reagent degradation if exposed to environmental extremes during transportation or storage. The consultant should evaluate the systems used in each site to determine the need for surveillance of component viability and comparability.

* Monitoring the analytic system-result accuracy. Each testing facility should document that its results are accurate and comparable with results produced by other testing facilities. Among the questions to ask:

Is there a significant bias between a testing facility's methods and those used at similar sites elsewhere?

Is the bias consistent, or does it vary among operators?

Is a bias apparent between the testing facility's method and that of other facilities using the same instrument-method combination?

Will it be necessary to report an adjusted value or change the facility's reference range?

Such issues can be addressed through interlaboratory result comparisons with peers using the same methodology in proficiency surveys and by parallel testing of selected patient specimens in a reference laboratory. Despite problems stemming from specimen limitations and fragility, these approaches can give a testing facility confidence that it is reporting the "common coin" for the community that it serves.

Analytic bias in itself is not a problem unless intermittent or unrecognized. Thus it is important to perform two tasks associated with documenting the accuracy or comparability of off-site testing results. First, the bias between a facility's methods and those used by its reference laboratory must be evaluated. Thereafter, periodic monitoring should check that the bias is consistent and that "improved" reagents or different operators using slightly different techniques have not changed the bias or unacceptably increased result variability.
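The first task above, estimating the bias against the reference laboratory, amounts to averaging the paired differences from parallel testing. A minimal sketch follows; the analyte and all paired values are hypothetical, chosen only to illustrate the arithmetic:

```python
from statistics import mean

# Hypothetical paired potassium results (mmol/L):
# the same specimens run on the office analyzer and at the reference lab.
office    = [4.1, 3.8, 5.2, 4.6, 3.5, 4.9]
reference = [4.0, 3.7, 5.0, 4.5, 3.4, 4.8]

differences = [o - r for o, r in zip(office, reference)]
bias = mean(differences)    # positive means the office method reads high

print(f"mean bias = {bias:+.2f} mmol/L")
```

Repeating this calculation periodically, as the article recommends, shows whether the bias stays consistent or drifts when reagent lots or operators change.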

* Proficiency testing programs and split specimen testing. Data returned from a proficiency testing program can be used to identify potential analytic problems, which are represented by the flagged results falling outside acceptable limits. Going back through previous proficiency testing results can be helpful if the current result is the first outlier or if there is a consistent pattern of unacceptable data. Was there a transient problem during processing, or is there a problem with the method? Did the specimen contain some substance that interfered with the analysis?

These questions can be answered by reprocessing the specimen, so it's a good idea to save surplus proficiency testing material under appropriate storage conditions whenever feasible. Whether the reprocessing produces the consensus result or merely reproduces the flagged result, the exercise is valuable in defining the underlying problem (Figure IV).

If a problem with the analytic system is suspected, the troubleshooting guidelines that we will describe in next month's article can be used to identify the source of the difficulty. The important point is that anomalous results must be checked, explained, and, if persistent, corrected.

Identifying excessive variability or inaccuracy with split specimens is a little more difficult than using proficiency testing results, but it can make the information available more rapidly. The development and application of acceptable variability limits between split specimen results using statistical analysis will be discussed in the next article.

A single variability limit for all analyses makes it easier to alert the off-site testing staff to possible problems with the system. If the variability of one method is much greater than another's, the testing staff will quickly recognize the discrepancy by comparing current results with previous differences. If the results vary excessively, as they might in a formal proficiency testing program, it is important to determine the nature of the problem and attempt to correct it.
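One possible form such a single limit might take is a fixed percent-difference check on each split-specimen pair. This is only an illustrative sketch; the limit, function name, and values are assumptions for demonstration, not the statistical approach the authors develop:

```python
LIMIT_PCT = 10.0  # assumed acceptable variability limit, in percent

def split_flag(result_a, result_b, limit_pct=LIMIT_PCT):
    """Return True when a split-specimen pair differs by more than
    limit_pct of the pair's average, signaling excessive variability."""
    avg = (result_a + result_b) / 2
    pct_diff = 100 * abs(result_a - result_b) / avg
    return pct_diff > limit_pct

print(split_flag(100, 104))  # small difference: False
print(split_flag(100, 125))  # excessive difference: True
```

Because the same rule applies to every analysis, staff members need to remember only one limit when scanning split-specimen results.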

* Maintaining records. Records at the off-site testing facility should document that good lab practices have been followed. These records demonstrate the facility's periodic calibration, validation of methods, equipment maintenance, QC, and successful participation in an external proficiency testing program.

When problems do occur, complete records show that they have been identified and corrected. The records associated with a quality management program-including proficiency testing and daily quality control-should be reviewed periodically by the professional responsible for managing the off-site testing facility. Together with documented review and corrective action, these records can be vital in the defense of a medical malpractice suit and are generally required for licensure in those states that regulate off-site testing facilities.

Past records of consultations are another valuable tool for the technologist consultant. Problems tend to occur in patterns. Office-based testing personnel who lack technical training may repeat the same kinds of mistakes seen in other off-site testing facilities. Notes of previous troubleshooting strategies may offer clues for future efforts at problem solving.

Next month, the fourth and final article in this series will deal with monitoring analytic systems on a day-to-day basis and present an approach to identifying problems encountered in off-site testing facilities.

Dr. Belsey is professor and head of chemical pathology at Oregon Health Sciences University, and Dr. Baer is chief of laboratory service at the Veterans Administration Medical Center, both in Portland, Ore.
COPYRIGHT 1987 Nelson Publishing
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation: part 3
Author: Belsey, Richard; Baer, Daniel M.
Publication: Medical Laboratory Observer
Date: Nov 1, 1987

