
Prospective payment: how computers can help us cope.

By Per Winkel and Bernard E. Statland

By stretching our imaginations--always a healthy exercise--we can envision prospective payment as a giant new laboratory computer game.

The game's goals are clear: less lab utilization and more productivity. To reach these goals, however, we must navigate an often confusing maze. Laboratory-based computers can help us in three major ways: automating the manual data handling procedures in the laboratory, controlling utilization, and assessing productivity.

Laboratory computer systems generally do an excellent job in the first category, but they fall short in the other two. We would like to challenge computer vendors to give us an edge in the prospective payment game by developing systems that will control laboratory utilization and assess productivity.

To cope with overutilization, we need to be able to do three things: analyze and monitor laboratory test requests, identify requests that fall outside certain guidelines, and reject or modify requests that are unnecessary or inappropriate. All three functions should be available on the laboratory computer. We have labeled them as follows: TRAC (Test Request Audit Control), TRAP (Test Request Alert Program), and TRIM (Test Request Intervener and Modifier).

TRAC is primarily an auditing function, enabling us to study laboratory ordering patterns after the fact. TRAP and TRIM, on the other hand, operate in real time--the former to alert the laboratory the moment that an unusual test request is received, and the latter to actually intervene and modify requests that fall outside certain preset limits. Figure I summarizes the attributes of all three programs and draws parallels between them and standard quality control procedures.

Let's look more closely at each computer function:

* TRAC. This program analyzes and monitors what requests come into the clinical laboratory, where they are generated, on which patients they are ordered, and the trend of such ordering patterns. These are the analytical, descriptive, and normative aspects of laboratory utilization.

The data base for TRAC includes a patient's name, age, sex, Diagnosis Related Group (DRG), and time and date of admission. Details of transfer to another floor or institution and date of discharge are added to the data base as they become available. On top of this information, each test request must include the name of the requesting physician, time of request, unit cost, and unit charge.
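To make this structure concrete, here is a minimal sketch of the TRAC data base as two linked record types. The sketch is our own, written in Python for illustration; the field names are assumptions, not any vendor's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PatientRecord:
    """Admission data captured for TRAC; field names are illustrative."""
    name: str
    age: int
    sex: str
    drg: str                               # Diagnosis Related Group
    admitted: datetime
    transferred_to: Optional[str] = None   # floor or institution, when known
    discharged: Optional[datetime] = None

@dataclass
class TestRequest:
    """One test request, tied to a patient record."""
    patient: PatientRecord
    test_name: str
    requesting_physician: str
    requested_at: datetime
    unit_cost: float     # what the test costs the laboratory to produce
    unit_charge: float   # what the patient or payer is billed
```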

TRAC can generate a number of reports. Figure II, for example, compares the laboratory and hospitalization costs for 10 hypothetical cases of acute myocardial infarction.

When we allocate test ordering according to physician, service, and ward, we can pinpoint the sources of too much or too little ordering. We can also use TRAC to predict laboratory workload in relationship to a particular medical service. It will tell us how many units of blood are used on average for each open heart surgery case. Crank in an expectation that the number of such operations will double in the next year, and you can then forecast the demand on the blood bank.
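Reports like these reduce to simple aggregations over the request records. A sketch, again our own and building on the record types above:

```python
from collections import defaultdict

def orders_by(requests, key):
    """Allocate test ordering by physician, service, or ward.

    `key` is any grouping function, e.g. lambda r: r.requesting_physician.
    """
    counts = defaultdict(int)
    for req in requests:
        counts[key(req)] += 1
    return dict(counts)

def forecast_blood_units(units_per_case, cases_per_year, growth_factor):
    """Project blood bank demand from expected surgical volume.

    E.g., 4 units per open heart case, 150 cases a year, and volume
    expected to double: forecast_blood_units(4, 150, 2.0) -> 1200 units.
    """
    return units_per_case * cases_per_year * growth_factor
```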

TRAC further enables us to compare utilization in our institution with that in other institutions. Figure III matches our hypothetical data on acute myocardial infarction against data from other hospital labs.

Finally, TRAC can assess the effectiveness of various programs to alter laboratory utilization. Over a period of time, we can see whether utilization patterns change in response to educational programs, incentives, and/or administrative mandates.

* TRAP uses the information gained in TRAC to flag unusual test requests based on the following conditions:

1. The total cost of laboratory tests on a patient is much higher than a predetermined threshold value for length of stay, DRG, or service.

2. The unit cost of one of the tests exceeds a predefined amount. This threshold might be a figure like $120 per test.

3. An unusual combination of tests is ordered. For example, a more specific test (urinary cyclic AMP) is ordered before a more sensitive test (serum calcium).

Figure IV presents two kinds of TRAP reports. The reports are simply alerts. It's up to the laboratory to decide whether to intervene and how. Intervention may just take the form of a phone call to the clinician who ordered the test, asking the reason for the request.
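In code, TRAP's three conditions reduce to a straightforward rule check. The sketch below is our own; the $120 unit-cost threshold comes from the text, and the total-cost limit would in practice be set per length of stay, DRG, or service.

```python
def trap_alerts(patient_requests, total_cost_limit, unit_cost_limit=120.0):
    """Return alert messages for requests that trip TRAP's conditions."""
    alerts = []

    # 1. Cumulative laboratory cost for this patient exceeds the threshold.
    total = sum(r.unit_cost for r in patient_requests)
    if total > total_cost_limit:
        alerts.append(f"total lab cost ${total:.2f} exceeds ${total_cost_limit:.2f}")

    # 2. A single test's unit cost exceeds the preset amount ($120 here).
    for r in patient_requests:
        if r.unit_cost > unit_cost_limit:
            alerts.append(f"{r.test_name}: unit cost ${r.unit_cost:.2f} "
                          f"exceeds ${unit_cost_limit:.2f}")

    # 3. An unusual combination: the more specific test ordered without
    #    the more sensitive screen (urinary cyclic AMP before serum calcium).
    names = {r.test_name for r in patient_requests}
    if "urinary cyclic AMP" in names and "serum calcium" not in names:
        alerts.append("urinary cyclic AMP ordered without serum calcium")

    return alerts
```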

* TRIM is a program with more teeth because it automatically steps in when a test request violates previously agreed-upon guidelines. These guidelines, reached jointly by the laboratory and clinicians, define the conditions under which certain test requests or combinations of requests will not be allowed. The agreements could cancel such procedures as LDH electrophoresis, when the total LDH is below a certain threshold; urine microscopy, when the screening tests are all negative; and creatinine testing, when a value obtained within the last 48 hours is below a certain level or when a BUN is ordered simultaneously.

If a request violates these rules, the computer program will intervene and modify it by rejecting the offending test. A comment is sent back in place of a result.

Ideally, all tests will be requested via computer terminals in the wards. Thus, whenever an "illegal" test request is made, the rationale for refusing it will appear on the CRT screen with a message telling the requesting physician or nurse to call the pathologist if he or she wants to appeal the ruling.

Figure V diagrams the circumstances under which a creatinine level will--or will not--be performed when BUN and creatinine are ordered simultaneously. Note that the TRIM program will automatically reject a request for creatinine under certain conditions.
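The creatinine rule might be coded along the following lines. This is a sketch of the agreement as stated above, not a reproduction of Figure V's exact decision points, and the threshold value is a placeholder the laboratory and its clinicians would set jointly.

```python
def trim_creatinine(bun_ordered_simultaneously, last_value=None,
                    hours_since_last=None, low_threshold=1.2):
    """Accept or reject a creatinine request under TRIM's agreed rules.

    `low_threshold` (mg/dL) is a placeholder. A rejection returns a
    comment in place of a result; the physician may appeal to the
    pathologist.
    """
    if bun_ordered_simultaneously:
        return "REJECTED: BUN ordered simultaneously--creatinine canceled"
    if (last_value is not None and hours_since_last is not None
            and hours_since_last <= 48 and last_value < low_threshold):
        return "REJECTED: creatinine within the last 48 hours was below threshold"
    return "PERFORM"
```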

As outlined here, TRAC, TRAP, and TRIM serve as highly effective quality control devices for laboratory utilization. TRAC is a descriptive program that notes utilization patterns. It merely tracks results, much as proficiency surveys or Levey-Jennings plots monitor laboratory performance. The results are available after the fact and constitute a record.

TRAP, on the other hand, compares results against some predetermined standard and makes a real-time value judgment. It alerts the laboratory to an unusual test request, giving us the option to accept or to reject the request. In the case of TRIM, a request outside accepted limits is automatically rejected. The analogies to quality control are obvious.

These programs work well to decrease utilization, but they don't exist in a vacuum; they exert a financial impact on the lab. Even with decreased utilization, we must still pay for instrument leases, continue to maintain the system, bear the bulk of labor-related expenses, and absorb a proportion of indirect costs. Because these largely fixed costs are now spread over fewer tests, decreasing utilization simultaneously decreases the lab's cost-effectiveness.
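A worked example makes the dilemma plain; the figures below are invented for illustration.

```python
def cost_per_test(fixed_costs, variable_cost_per_test, volume):
    """Unit cost when fixed costs are spread over the test volume."""
    return fixed_costs / volume + variable_cost_per_test

# Invented figures: $100,000 a month in leases, maintenance, labor, and
# indirect costs, plus $1.00 in reagents and supplies per test.
print(cost_per_test(100_000, 1.00, 100_000))  # $2.00 per test
print(cost_per_test(100_000, 1.00, 70_000))   # about $2.43 after a 30% utilization cut
```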

This creates a dilemma that can be solved only by increasing productivity. We may, for example, buy reagents that are less expensive, or add to our automation and decrease labor costs, or reorganize the laboratory to be more effective with fewer employees. How do we decide which of these and other alternatives will successfully increase the laboratory's productivity?

Before we can decide how to improve productivity, we must assess the lab's current productivity, so let's back up a minute for some essential definitions.

The first of these, total cost, equals all the direct and indirect expenses that go to produce the laboratory's products and services. This figure helps us define productivity, which is the total output of products and services divided by the total cost. Our third definition is product-specific unit cost, which is the expense for one result of a particular type of test--a cortisol determination, say.

The last two calculations allow us to compare one laboratory's productivity data with another's, although this can be a tricky process. Two laboratories with identical total volume or costs may differ in productivity due to such factors as product mix and quality-related requirements. If one laboratory performs 200 cortisol tests and 100 sodium tests per day and the other 50 cortisol and 250 sodium tests, their productivities will be vastly different despite the fact that combined volumes are the same.
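Putting invented unit costs on the example shows how far apart the two labs land despite identical volume:

```python
def productivity(output_units, total_cost):
    """Total output of products and services divided by total cost."""
    return output_units / total_cost

# Assume a cortisol result costs $8.00 to produce and a sodium $0.60.
lab_a_cost = 200 * 8.00 + 100 * 0.60   # $1,660 for 300 tests
lab_b_cost = 50 * 8.00 + 250 * 0.60    # $550 for 300 tests
print(productivity(300, lab_a_cost))   # ~0.18 results per dollar
print(productivity(300, lab_b_cost))   # ~0.55 results per dollar
```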

Such issues as turnaround time and analytical performance also skew the comparison. In Laboratory A, the average cost per test for a particular analyte is $0.85, the turnaround time is 24 hours, and the precision is expressed by a 6 per cent coefficient of variation. In Laboratory B, the cost of the same test is $1.80 because the performance requirements are more stringent--an expected turnaround time of one hour and a 2 per cent coefficient of variation.

The volume of a specific test also affects the product-specific unit cost. Laboratory A, where the cost per test was $0.85, may perform 100 tests a day for that analyte. Laboratory B, with a cost per test of $1.80, may perform only 20.

Rather than compare productivity data between labs, a more useful approach is to monitor productivity in your own laboratory. If you measure total costs and cost per test both before and after changes in utilization, you can assess the actual savings and decide what operational changes, if any, to make as a consequence of the decreased volume. We should approach cost efficiency in the same way we do quality control; that is, keeping track of it over a period of time.

Cost information is also vital in assessing alternatives to present systems. As we mentioned, decreasing utilization makes it mandatory that we increase productivity. This is not a blind process, however. We should be able to use the computer first to assess current productivity and then to simulate the productivity that we can expect from changes in present methods. Such changes might include new instrument systems, relocating a Stat laboratory to the central lab, merging one laboratory with another, and so on.

Each alternative must be assessed in terms of total costs as well as cost per test. One reminder: When performing such analyses, don't forget to take into account such noncost factors as turnaround time and analytical performance.

The computer can help analyze laboratory fiscal problems in a number of ways. The following are the most important: monitoring and controlling inventory, monitoring production, monitoring consumption of labor, providing a cost accounting system, and simulating alternative configurations to derive fiscal projections.

We won't attempt a thorough exposition of cost accounting here. Instead, let's concentrate on one area where the laboratory computer can be particularly helpful: monitoring labor. Here's what we suggest.

Whenever a technologist starts working at a particular work station, he or she signs onto the laboratory computer via CRT. When leaving the station, the technologist signs off. In this way, the laboratory computer monitors the actual time spent at each work station. Obviously, total hours in the laboratory will be much greater than the sum of times spent at individual work stations. For example, lunch, breaks, lectures, and merely waiting for work to be assigned are all part of unassigned time.

Comparing the hours actually spent at work stations with results produced gives us more realistic productivity figures. These figures can be used to compare productivity between hospitals and also serve as baseline information against which to compare alternative labor systems for your own laboratory.
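The bookkeeping is simple if the computer logs each sign-on and sign-off as a timestamp pair; the sketch below is our own illustration.

```python
from datetime import datetime

def station_hours(sessions):
    """Hours at a work station from (signed_on, signed_off) datetime pairs."""
    return sum((off - on).total_seconds() / 3600.0 for on, off in sessions)

def results_per_hour(results_produced, sessions):
    """Productivity figured on hours at the station, not hours in the lab."""
    hours = station_hours(sessions)
    return results_produced / hours if hours else 0.0

# Example: one morning session at chemistry, 62 results produced.
on, off = datetime(1984, 2, 1, 8, 0), datetime(1984, 2, 1, 11, 30)
print(results_per_hour(62, [(on, off)]))   # ~17.7 results per hour
```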

You may, for example, want to consider such productivity measures as letting more specimens accumulate before work begins at a station. Or you might consider having technologists work four 10-hour days rather than the traditional five eight-hour days. Another scheduling alternative is to have employees come in for two or three hours per day on a part-time basis rather than the classic 40-hour work week. You can use the lab computer to simulate any of these alternatives, then compare the simulation with your present system.
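The accumulation alternative, for instance, can be simulated by trading per-run setup costs against batch size. All figures here are invented, and turnaround time would have to be weighed alongside the dollar savings.

```python
import math

def daily_cost(specimens_per_day, batch_size, setup_cost, cost_per_specimen):
    """Daily cost when specimens accumulate into batches before each run.

    Each run carries a fixed setup cost (calibrators, controls); larger
    batches mean fewer runs but longer turnaround time.
    """
    runs = math.ceil(specimens_per_day / batch_size)
    return runs * setup_cost + specimens_per_day * cost_per_specimen

# Invented figures: 120 specimens/day, $15 setup per run, $0.50 per specimen.
print(daily_cost(120, 10, 15.0, 0.50))   # 12 runs -> $240 a day
print(daily_cost(120, 40, 15.0, 0.50))   # 3 runs  -> $105 a day
```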

We have outlined just a few ways laboratory computers should help meet requirements imposed by prospective payment. But this assistance will come to pass only if we, as potential customers, make our demands known.

Some computer vendors are beginning to implement elements of the features outlined here. Much more remains to be done.

We trust that other vendors will take up the challenge and give us the tools we need to survive and thrive in the new prospective payment game of the 1980s.