
Predictive value calculator revised.

For the last few years we have been working on microcomputer tools for evaluating clinical laboratory tests. Two years ago, we reviewed our first microcomputer program, the predictive value calculator (PVC), which ran on an Apple computer.

Based on study data, it calculates sensitivity, specificity, predictive value, and efficiency; demonstrates the effect of prevalence on predictive value for a given sensitivity and specificity; completes the fourfold table; and performs a chi-square test for statistical significance. If you already know a test's sensitivity and specificity, these numbers can be entered directly into the program as percentages. The program will also normalize the data for any total population. More than 400 laboratorians requested and received a complete listing of the program in BASIC.
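For readers who want to experiment, here is a minimal sketch of that arithmetic, written in modern Python rather than our original BASIC listing. The function and variable names are our own, not the program's, and the chi-square shown is the standard uncorrected fourfold-table formula.

```python
# A minimal sketch of the PVC arithmetic, assuming the usual fourfold table:
# tp = diseased, test positive   fp = well, test positive
# fn = diseased, test negative   tn = well, test negative

def predictive_value_stats(tp, fp, fn, tn):
    """Statistics reported for a completed fourfold table."""
    n = tp + fp + fn + tn
    return {
        "prevalence": (tp + fn) / n,
        "sensitivity": tp / (tp + fn),      # positives among the diseased
        "specificity": tn / (tn + fp),      # negatives among the well
        "predictive value positive": tp / (tp + fp),
        "predictive value negative": tn / (tn + fn),
        "efficiency": (tp + tn) / n,        # fraction correctly classified
    }

def ppv_at_prevalence(sensitivity, specificity, prevalence):
    """Effect of prevalence on predictive value, from sensitivity and
    specificity entered directly (as fractions, not percentages)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

def chi_square(tp, fp, fn, tn):
    """Uncorrected chi-square for the fourfold table (one degree of freedom)."""
    n = tp + fp + fn + tn
    num = n * (tp * tn - fp * fn) ** 2
    den = (tp + fp) * (fn + tn) * (tp + fn) * (fp + tn)
    return num / den
```

At 90 per cent sensitivity and specificity, for instance, ppv_at_prevalence(0.90, 0.90, 0.01) comes to about 0.08, which is exactly the collapse of predictive value at low prevalence that the program graphs.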

The next step in developing PVC was merging it with a data base program to help evaluate lab data. A PVC data base consists of up to 500 cases with up to 18 tests per case. Each test is a numeric result and represents either a finding or a hypothesis (disease status).

We've demonstrated this program by examining data from a study of lab tests we performed on emergency room patients with a tentative diagnosis of acute appendicitis. It lets us find the best test by displaying receiver operating characteristic (ROC) curves, and the best cutoff point for each test using referent value tables. Predictive value analysis completes the fourfold predictive value table and lists prevalence, sensitivity, specificity, positive and negative predictive values, and efficiency. The effect of prevalence on predictive value and efficiency is then graphed.
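A referent value table and an ROC curve amount to recomputing the fourfold table at every candidate cutoff. The sketch below, with illustrative names only, picks the cutoff that maximizes efficiency; we are not claiming this is PVC's exact selection criterion, just one plausible definition of "best."

```python
# Sketch: build ROC points for one numeric test by sweeping the cutoff,
# reading "result > cutoff" as positive. 'values' holds the test results and
# 'diseased' the matching disease-status flags; both names are illustrative.

def roc_points(values, diseased):
    points = []
    for cutoff in sorted(set(values)):
        tp = sum(1 for v, d in zip(values, diseased) if d and v > cutoff)
        fn = sum(1 for v, d in zip(values, diseased) if d and v <= cutoff)
        fp = sum(1 for v, d in zip(values, diseased) if not d and v > cutoff)
        tn = sum(1 for v, d in zip(values, diseased) if not d and v <= cutoff)
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        specificity = tn / (tn + fp) if tn + fp else 0.0
        efficiency = (tp + tn) / len(values)
        points.append((cutoff, sensitivity, 1.0 - specificity, efficiency))
    return points

def best_cutoff(values, diseased):
    """One plausible 'best cutoff': the one that maximizes efficiency."""
    return max(roc_points(values, diseased), key=lambda p: p[3])[0]
```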

We can enter decision rules as arbitrary Boolean expressions of test patterns, permitting test rules to be evaluated and test algorithms to be developed. Furthermore, these test patterns can be logical or mathematical expressions of tests in the data base.
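In modern terms, each case can be held as a record of numeric results keyed by test name, and a decision rule is simply a Boolean function of those fields. A small sketch follows, using made-up field names (NEUT, CRP, APP) and, for illustration, the two-test parallel rule that turns up later in this article.

```python
# Each case: a record of numeric test results plus the hypothesis field.
# Field names here are stand-ins, not the study's actual labels.
cases = [
    {"NEUT": 7200, "CRP": 2.4, "APP": 1},
    {"NEUT": 3100, "CRP": 0.4, "APP": 0},
    {"NEUT": 5600, "CRP": 0.9, "APP": 0},
    # ... up to 500 cases, up to 18 tests per case
]

def rule(case):
    # A parallel rule as a Boolean expression of tests in the data base;
    # mathematical expressions (ratios, sums of tests) work the same way.
    return case["NEUT"] > 5000 or case["CRP"] > 1.8

# Tally the fourfold table for the rule against the hypothesis APP = 1.
tp = sum(1 for c in cases if rule(c) and c["APP"] == 1)
fp = sum(1 for c in cases if rule(c) and c["APP"] == 0)
fn = sum(1 for c in cases if not rule(c) and c["APP"] == 1)
tn = sum(1 for c in cases if not rule(c) and c["APP"] == 0)
```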

PVC familiarizes you with your data. You can compare tests and determine the most useful cutoff points. You can choose optimal test combinations and rules for interpreting them. Through the ASCP Check Sample Program in Clinical Chemistry, we distributed more than 1,000 Apple disks containing these programs and our appendicitis data, together with a tutorial on how to use them. This version, published by Helene Laboratories, is available for the IBM PC.

For the last several months we have been working on PVC-II, the advanced version of PVC. The original program was written by Sholom Weiss, Ph.D., of Rutgers University and graduate student Prasad Tadepalli. Dr. Weiss's main focus has been artificial intelligence in medicine. He has now developed a module by which the computer can play the so-called game for you. It will analyze your data base and tell you what the best test or tests are and what the decision rules should be. It does everything but write your research paper!

Depending on your data base, it may take several hours to run, but you can load the data base, instruct the program in the evening and let it run all night. While you are sleeping, it will examine the various test permutations and combinations in your data base and, according to your criteria, find the optimal test sequence to make a diagnosis.

Let's see how this works on our appendicitis data, and compare the computer-generated test rules with our own.

We retrieve our data base and select a PVC-II program called Automatic Rule Generation. Figure I shows the first program menu. We select the hypothesis pattern, APP = 1, for the diagnosis of appendicitis. APP is then removed from the data base, but all other fields will be included in the exercise. We can force the inclusion of one or more tests in the process, which is useful if a certain test is popular, cheap, or easy to perform and we want it included in our final selection. We have not forced any tests into the selection process.

In response to questions on the screen, we select sensitivity as the parameter to maximize, and efficiency as the parameter to constrain at a minimum of 90 per cent (Figure II). We will also use up to three tests in the final selection. In other words, using up to three tests, we want to classify 90 per cent of the cases correctly (90 per cent efficiency), and we want a rule that gives the highest possible sensitivity. This will yield a high negative predictive value and let us rule out appendicitis in patients in whom it is being considered. We can now go to bed.
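Under the hood, the overnight run is a large combinatorial search. The sketch below is our reading of that process, not Dr. Weiss's published algorithm: enumerate parallel rules of up to three (test, cutoff) pairs, discard any rule below the efficiency floor, and keep the one with the highest sensitivity (ties broken by efficiency, a choice of ours).

```python
from itertools import combinations, product

def fourfold(cases, combo, cuts, hypothesis):
    """Counts for a parallel rule: positive if any test exceeds its cutoff."""
    tp = fp = fn = tn = 0
    for c in cases:
        positive = any(c[t] > x for t, x in zip(combo, cuts))
        disease = c[hypothesis] == 1
        if positive and disease:
            tp += 1
        elif positive:
            fp += 1
        elif disease:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def best_parallel_rule(cases, tests, hypothesis, max_tests=3, min_eff=0.90):
    """Maximize sensitivity subject to an efficiency floor, as on the menu."""
    best, best_key = None, (-1.0, -1.0)
    n = len(cases)
    for r in range(1, max_tests + 1):
        for combo in combinations(tests, r):
            # Candidate cutoffs: every observed value of each chosen test.
            for cuts in product(*(sorted({c[t] for c in cases}) for t in combo)):
                tp, fp, fn, tn = fourfold(cases, combo, cuts, hypothesis)
                eff = (tp + tn) / n
                sens = tp / (tp + fn) if tp + fn else 0.0
                if eff >= min_eff and (sens, eff) > best_key:
                    best_key, best = (sens, eff), (combo, cuts)
    return best, best_key
```

With 18 tests there are 816 three-test combinations alone, and each must be tried at every combination of observed cutoffs, which is why the run can take all night.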

In the morning, we find the predictive value table shown in Figure III on the screen. It illustrates the first rule found that meets our initial specifications with two tests--90.8 per cent efficiency and 100 per cent sensitivity. The rule is manual neutrophils (absolute count) greater than 5,000 or C-reactive protein greater than 1.8 mg/dl. These two tests, if run on all cases and interpreted in parallel, will yield 100 per cent predictive value negative. Remember, in a parallel rule, all tests are run on all cases and are considered positive if one or more tests are positive, and negative if all are negative.

Additional rules meeting our specifications are found, but only those that improve on earlier rules will be queued to print. Figure IV lists a rule that is better than the earlier one. Here the efficiency is increased to 91.8 per cent, and one additional case is correctly classified. The program tells us this is the best rule found--and we are done.

The best rule found is manual neutrophils (absolute count) greater than 6,600, or manual bands (percentage) greater than 11 per cent, or C-reactive protein greater than 1.8 mg/dl. This is also a parallel rule, with the three tests performed on all cases and considered positive if one or more tests are positive, and negative only if all three tests are negative.

It's interesting for us to go back and compare this computer-generated test rule with the ones we published when we evaluated the data unaided by Automatic Rule Generation. We also came up with a three-test parallel rule: WBC greater than 10,500 or manual bands greater than 11 per cent or C-reactive protein greater than 1.2 mg/dl (Figure V). This human-generated rule is compared with the computer-generated rule in Table I. The computer beat us by two cases, as shown by comparing the fourfold tables in Figures IV and V.

While the human rule is not statistically different from the computer rule, and the difference may indeed be due to chance (random sampling), that is little consolation to the investigators who spent countless hours analyzing these data in search of the best diagnostic rule. This will never happen to us again!
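We have not said here which significance test was used. Since both rules were scored on the same cases, one standard choice is McNemar's test on the discordant pairs; the sketch below is that test, and the counts in the usage line are placeholders, since the paired counts are not reported in this article.

```python
def mcnemar_chi_square(b, c):
    """McNemar's chi-square (1 df, Yates-corrected) for two rules scored on
    the same cases: b and c count the cases one rule classifies correctly
    and the other does not. Values of 3.84 or more reach p = 0.05."""
    return (abs(b - c) - 1.0) ** 2 / (b + c)

# Placeholder counts, not the study's: a two-case edge falls far short of 3.84.
print(mcnemar_chi_square(3, 1))   # 0.25
```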

What next? Well, we've only just begun.

Title Annotation: for evaluating medical tests
Author: Robert S. Galen
Publication: Medical Laboratory Observer
Date: Aug. 1, 1985
