
Auditing transcription errors: A spot check for lab performance

Like many laboratorians, we faced the prospect of DRGs with at least a little trepidation. The lab seemed to be doing a good job, but it was hard to know for sure. To resolve our uncertainty, we decided to conduct a formal performance audit, targeting the laboratory's transcription error rate for the initial study. Our goals were to establish the error rate for various sections by comparing lab reports with patient charts and--if necessary--to bring the rate down to 1 per cent or less for the entire lab.

A constructive internal audit requires extensive planning and organization, but it is feasible for any computerized laboratory to carry one out and benefit from the findings. The transcription error audit seemed like a good starting point for an ongoing evaluation of overall performance. Because medical technologists aren't necessarily trained in data entry, we believed that errors might be more likely to occur in this area than at the bench.

To our surprise, a literature search for error rates in computerized laboratories from 1980 through 1983 yielded only a single study--a survey of order entry errors.1 The most frequent types of errors included incorrect test entry, missing tests, priority entered incorrectly, wrong time or dates specified, missing venipuncture charges, and tests not performed on weekends.

With so little previous research available, we turned to the laboratory accrediting agencies. Here, too, we seemed to be breaking new ground. Neither JCAH nor CAP had ever launched such a study. That brought us back to the hospital's computer services department. Data processing personnel, we learned, must maintain a 99 per cent level of accuracy. If our medical technologists, without special DP training, could measure up to an error rate of 1 per cent or less, we could confidently state that our operation was running smoothly.

Having set a performance standard, we tackled the audit protocol. To bring the data evaluation down to reasonable proportions, we decided to monitor 25 patients over a 13-week period, running from January 23 through April 25, 1984. Two patients would be selected at random each Monday morning and then followed to discharge or through the 10th day of hospitalization. (The average length of stay at our hospital before DRGs was 9.5 days.)

Selecting from the Monday admissions would provide a good clinical cross section while allowing us to wrap up one set of data before picking up the next study group. The 10-day cutoff would see most of the patients through discharge without adding complicated data from chronic cases. Most tests are usually performed during the first few days of hospitalization anyway. However, if any tests were ordered or pending on day 10, the results would be included in the error evaluation.

Our 425-bed hospital usually admits 40 to 80 patients a day. For the random selection, we used a calculator to generate a series of numbers between 1 and 80. It was sometimes necessary to discard a number from the sequence. For example, if the first two numbers on the list were 3 and 67, but only 46 patients were admitted that Monday, we would cross out 67 and go to the third number on the list. As it turned out, only eight numbers were omitted.
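The discard-and-redraw rule can be sketched in a few lines of modern Python; the function name, parameters, and seed handling below are hypothetical illustrations, not part of the 1984 procedure, which used a calculator-generated number list.

```python
import random

def pick_patients(admission_count, picks=2, max_number=80, seed=None):
    """Draw random admission numbers between 1 and max_number,
    discarding any that exceed the day's actual admission count
    (and any duplicates), as the audit's discard rule did."""
    rng = random.Random(seed)
    chosen = []
    while len(chosen) < picks:
        n = rng.randint(1, max_number)
        # Discard: e.g. 67 is skipped on a Monday with only 46 admissions
        if n <= admission_count and n not in chosen:
            chosen.append(n)
    return chosen

# A Monday with 46 admissions: both picks fall in 1..46
print(pick_patients(46, seed=1))
```
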

We called the admissions office on Monday mornings to find out which two patients matched up with our random numbers for that week. These names were passed along to the seven section supervisors, who then pulled the lab slips for any work done on day 1, the day of admission. On day 3, they collected the data sheets for day 2. If a patient still had lab results pending from day 1, the supervisor for that section kept checking the records until the report was finally filed.

This process continued for as long as the patients remained in house or had lab work on order, through their 10th day on the audit. Most patients were discharged within a week, but outstanding lab work was monitored on a few after discharge. We sometimes had to wait for final microbiology results or for procedures such as radioisotope studies, which are performed only on certain days. To minimize recordkeeping, we sent supervisors a memo whenever a patient joined or graduated from the study group.

Once we received the technologists' worksheets from all lab sections, it was our job to compare these findings with the results posted on the patient charts. The laboratory computer made it possible to gather the necessary clinical data. The log identified all of the daily test orders and indicated whether a test was pending or completed. We also used the computer data records to pull results either by patient or by test.

To expedite the final evaluation, we tallied the test results as the data came in and did a side-by-side comparison of the laboratory data sheets and the computer's records to check for transcription errors. Once the study officially ended, we spent another four weeks preparing the final report. In all, we reviewed 3,361 individual test results for the 25 patients; 3,355 results were charted correctly.

All six incorrect results involved transcription errors. We did not find any transposition errors or obvious analytical mistakes. Figure I shows the total number of test results, errors, and the error rates for our 10 laboratory sections. This gave the lab an overall accuracy rate of 99.8 per cent; the error rate of 0.2 per cent was well within our 1 per cent goal. With so few errors, it wasn't feasible to pinpoint any trends, and we didn't even try to identify the technologists involved.
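The arithmetic behind these figures can be checked in a minimal Python sketch, using only the totals reported above:

```python
# Totals reported in the audit
total_results = 3361
correct = 3355
errors = total_results - correct           # 6 transcription errors

error_rate = errors / total_results * 100  # in per cent
accuracy = 100 - error_rate

print(f"Error rate: {error_rate:.1f}%")    # 0.2%
print(f"Accuracy:   {accuracy:.1f}%")      # 99.8%
```
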

Although hematology was charged with five separate errors, the section actually made just two clerical mistakes. For example, a comma must follow each response when CBC results are entered. With a single transcription slip, "1,2" becomes "12," and all subsequent results are misaligned. Thus one computer typo turned into four mistakes, for a hematology error rate of 0.6 per cent. This type of mistake happens so easily that we must carefully check all information entered before making it part of the permanent data record. The single transcription error in urinalysis gave that section a 0.2 per cent rating.
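The misalignment is easy to reproduce. In the hypothetical sketch below (the field names and values are illustrative, not the lab's actual CBC entry format), one missing comma shifts every later value under the wrong field, and a simple comma count catches the slip before the entry is accepted:

```python
FIELDS = ["WBC", "RBC", "HGB", "HCT"]  # illustrative CBC field order

def parse_cbc(entry):
    """Split a comma-delimited CBC entry into labeled results."""
    values = entry.split(",")
    return dict(zip(FIELDS, values))

def entry_ok(entry):
    """One comma must follow each response but the last."""
    return entry.count(",") == len(FIELDS) - 1

good = parse_cbc("7.2,4.8,14.1,42")   # every value under its own field
bad = parse_cbc("7.24.8,14.1,42")     # one missing comma: values shift,
                                      # and the last field goes unfilled
print(good)
print(bad)
print(entry_ok("7.24.8,14.1,42"))     # False -- reject before charting
```
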

The audit brought two other conditions to light: improper rounding on computer-interfaced instruments and incorrect entry of microbiology results. For some reason, results from various instruments were consistently rounded off--and not always correctly--by the time they reached the lab computer. A 31.1 mEq/L CO2 reading on the chemistry analyzer, for example, became 32 mEq/L when called up from the computer. Though the chemistry technologists watched the analyzer for several days, it never repeated this feat, and we still don't know exactly what happened or why.
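A discrepancy like this could be flagged by cross-checking each charted integer against the analyzer reading rounded to the nearest whole number. The sketch below is a hypothetical check, not part of the original audit procedure (note that Python's round uses ties-to-even, which differs from round-half-up only on exact .5 readings):

```python
def rounding_ok(analyzer_reading, charted_value):
    """A charted whole-number result should equal the analyzer
    reading rounded to the nearest integer."""
    return charted_value == round(analyzer_reading)

print(rounding_ok(31.1, 32))  # the CO2 case from the audit -> False
print(rounding_ok(31.1, 31))  # correct rounding -> True
```
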

When accepting microbiology results, the computer refused to recognize the > sign when followed by a three-digit number. Thus "> 128" became "128" on the patient's chart. We traced the glitch to the translator that allows our Apple to communicate with the mainframe computer. A programming change corrected a problem that could have ultimately affected treatment.
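This is the kind of bug a small parsing guard prevents: split the qualifier off explicitly instead of letting a naive numeric parse drop it. The routine below is a hypothetical sketch of the idea, not the translator's actual fix:

```python
def encode_result(raw):
    """Split a result such as '> 128' into (qualifier, value),
    preserving a '<' or '>' qualifier rather than silently
    discarding it during numeric conversion."""
    raw = raw.strip()
    if raw and raw[0] in "<>":
        qualifier, number = raw[0], raw[1:].strip()
    else:
        qualifier, number = "", raw
    return qualifier, float(number)

print(encode_result("> 128"))  # qualifier kept: ('>', 128.0)
print(encode_result("128"))    # plain value:    ('', 128.0)
```
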

The low error rate was encouraging, but we wanted to make sure that our random sample accurately reflected the hospital's caseload and the laboratory's workload. To do this, we grouped the subjects by admitting diagnosis. Thirty-six per cent were admitted for medical treatment of cardiovascular conditions and 32 per cent for general surgery. Twenty per cent were hospitalized for general medical care, and the remaining 12 per cent required cardiovascular surgery. (Figure II shows the number of study patients who had tests ordered in the various sections and the percentage of total results for each.)

The diagnostic breakdown closely paralleled the hospital's usual admissions profile. This meant that we had evaluated a representative cross section of patients, and it indicated that the test requests for the study group reflected our routine workload. It also validated the 0.2 per cent transcription error rate.

The random sample produced a range of 0 to 467 test results per patient. One patient hospitalized for chemotherapy required no laboratory testing at all. At the opposite extreme, another patient underwent numerous panel tests during bypass surgery, which were tallied individually for a total of 467 separate results. The most active sections in the study--hematology, urinalysis, and chemistry--usually handle the bulk of test volume for routine admissions. The critical care chemistry section was also kept busy, however, because nearly half of the patients studied required cardiovascular treatment.

Although the study was helpful, its small sample size limited the audit's value for less active sections, particularly serology and radioisotopes, in which tests are not ordered routinely. Even so, all of the sections performed at least some of the surveyed testing. Chemistry handled tests for 23 of the 25 patients; radioisotopes processed work for just two of them. Individual patient activity ranged from the chemotherapy patient who had no test requests to two patients whose test orders passed through eight of the 10 sections.

The audit's structure created continuity for each patient, monitoring virtually all of them from start to finish. This format, however, did not allow us to audit 25 patients in each section. The original study covered hematology, urinalysis, and chemistry quite well, with more than 20 of the 25 subjects involved in these areas.

We would like to do a second audit, targeting the sections that didn't have much testing the first time. As yet, we haven't worked out a random sampling method that would guarantee 25 patients who require testing in these less active sections. It may be necessary to filter admissions by diagnosis and flag those conditions that suggest heavy microbiology, radioisotope, or other specialized testing.

The audit was time-consuming, but the effort we invested paid off in complete success. The supervisors spent roughly 15 minutes collecting the day's data sheets, and we spent about 10 hours each week evaluating the records. To minimize the tedium and keep up with our regular responsibilities, the two of us worked on the audit on alternate days.

The project met both its goals: The audit established the lab's transcription error rate and showed that it was well below the 1 per cent evaluation standard. The survey also gave our laboratorians the peace of mind that comes with knowing that the lab's computer entry and error detection policy is adequate.

Although our error rate of 0.2 per cent turned out to be very good, it should be emphasized that zero errors is the only truly acceptable standard. Ideally, we should all strive for error-free testing; realistically, we know that we'll see a few mistakes. Because technologists are human and computers are not foolproof, we must acknowledge that things occasionally go wrong.

In addition to conducting sectional audits, we're also contemplating a second full-scale random survey just to make sure the lab is holding the line. We would also like to examine patient identification and order entry protocols but have not yet decided to organize a formal study.

Auditing is both a challenge and a chore. It also takes a great deal of cooperation from the supervisory staff. Our supervisors were involved right from the start, and none of this would have been possible without their help.

A well-planned audit can pinpoint problems and help determine what went wrong and where. If a problem turns up, you can correct it. If the audit shows that the situation is under control, you can relax and move on to other projects. As our only assurance that clinical standards remain high amid constant change, auditing is probably the best investment a lab can make these days.

1. Slockbower, J.M. Blood collection problems: Factors in specimens that contribute to laboratory error. Therapeutic Drug Monitoring Continuing Education Program of the American Association for Clinical Chemistry, 4: 1-6, 1982.

Table: Figure I A transcription error audit

Table: Figure II Audit activity by section
COPYRIGHT 1986 Nelson Publishing

Article Details
Author:Banker, Carol Ann; Polczynski, Mary K.
Publication:Medical Laboratory Observer
Date:Apr 1, 1986
