
Tips on technology.


The following 25 pages comprise an anthology of Tips on Technology questions published in MLO during the last year. The questions are grouped under the major laboratory disciplines, and this cross-referenced index will quickly lead you to any topic you want.


Aerosols, 126 Agglutination, 128 AIDS and allergy injections, 126 AIDS and anticoagulants, 128 AIDS and lab coats, 126 AIDS protection, 126, 128 AIDS studies, 128 Allergy injections and AIDS, 126 Anemia, 129 Anticoagulants and AIDS, 128 Anti-D sera, 128 Antithrombin test, 128 Blood drawing, 128, 129 Blood drawing tube, tiger-stripe, 128 Blood specimen re-collection, 126 Blood volume in a child, 128 CDC guidelines for HIV protection, 126 Citrate, 128 Disposal of needles, 128 EDTA, 128 Fibrin clots, 128 Fluoride, 128 Gloves, use of, 128 Hematocrit, 129 Hematocrit, posttransfusion, 127 Hemodilution, 129 Hemoglobin, 129 Hemoglobin, posttransfusion, 127 Heparin, 128 Hepatitis B, 126, 128 HIV protection, 126, 128 Lab coats and AIDS, 126 Laundering of lab coats, 126 Multiple blood drawings in a child, 128 Needle disposal, 126, 128 Needlestick injury, 126, 129 Neonatal blood drawing, 128, 129 Oxalate, 128 Posttransfusion H and H, 127 Pretransfusion testing, 128 Re-collects, acceptable level of, 126 Thrombin, 128 Tiger-stripe blood drawing tube, 128


Albumin, 131 Anabolic steroids and serum lipid values, 135 Angina, 134 Apolipoproteins, 134 Atherosclerosis, 135 Bilirubin, total, 131 Calcium, total, 131 Cancer, prostate, 137 Cardiac enzymes, 134 CBC, 134 Cholesterol, HDL, 134, 135 Cholesterol, LDL, 134, 135 Cholesterol risk factors, 134, 135 Cholesterol, total, 134, 135 CK, 131, 134, 136 CK-MB, 134, 136 CK-MM, 136 Coronary angiography, 134 Coronary heart disease, 134, 136 Creatinine, 131 Delta values, 130 Diabetes, 130, 135 D-xylose absorption test, 137 Electrophoresis, 136 Fructosamine values, 130 FSH, 130 Glucose, 130 Glycated hemoglobin, 130 HCG doubling times, 136 HDL cholesterol, 134, 135 Human chorionic gonadotropin, 136 LDL cholesterol, 134, 135 LH, 130 Lipid tests, 134, 135 Malabsorption, 137 Myocardial infarction, 134, 136 National Cholesterol Education Program, 135 Percutaneous transluminal coronary angioplasty, 134 Phosphorus, 131 Potassium, 131 Prostate-specific antigen, 137 Prostatic acid phosphatase, 137 Protein, total, 131, 137 Radiopaque contrast media and lab tests, 134 Reference ranges, 130 Renografin, 134 Serum lipid levels, 135 Sodium, 131 Steroids, anabolic, 135 Thyroxine, 131 Urea nitrogen, 131 Uric acid, 131 Urine crystals, 134 Urine specific gravity, 134


ACT instruments, QC of, 141 Activated clotting time, 141 Anticoagulation, 138, 139 APTT, 138 Aspirin, 139 Bleeding time, use of leg for, 139 Blood collection standard, 140 Blood culture, 140 Calcium chloride, 141 CBC, 139 Cell counts in EDTA tubes, 139 Citrate, 139, 140 Coagulation standards, 138 Coagulation testing, 138, 140 Controls for PT and PTT, 138 CSF, 140 Culture, blood, 140 Differential count, 138, 141 EDTA, 139, 140 EDTA tubes, cell counts in, 139 Factor VIII, 138 Fluoride, 140 Granulocytes, 140 Heparin, 138, 140 Hepatitis, 141 Idiopathic thrombocytopenic purpura, 139 INR, 138 International Normalized Ratio, 138 Leukocyte count, 138, 140, 141 Leukocytosis, 139 Lymphocytes, reactive vs. atypical, 140 Lymphoproliferative disorders, 138, 141 Monocytes, 140 Mononucleosis, 141 NCCLS standard for blood collection, 140 Neutrophils, 140 Oxalate, 140 Platelet clumping, 139 Platelet satellitosis, 139 Prothrombin time, 138 PTT, 138 QC of ACT instruments, 141 Quality control for WBC values, 138 Reactive vs. atypical lymphocytes, 140 Reporting PT and PTT results, 138 Sequence of specimen drawing, 140 Sodium citrate, 139 Specimen drawing sequence, 140 Stability of cell counts in body fluids, 139 Standards for PT and PTT, 138 Synovial fluid, 140 Thrombocytopenia, 139 Thromboembolism, 139 Thromboplastin, 139, 149 Use of leg for bleeding time, 139 Viral infection, 141 Warfarin, 138 WBC values, quality control for, 138


Acute-phase sera, 142 ANA testing, 145 Antinuclear antibody testing, 145 CAP syphilis survey, 143 CDC syphilis guidelines, 142 Cold agglutinins test, 144 Convalescent-phase sera, 142 Convalescent serum for viral serology, 142 Cryoglobulin, 144 Cryoprecipitation, 144 CSF, 142, 143 Direct antiglobulin test, 144 Fertility, 143 HEp-2 cells, 145 KB cells, 145 Lupus erythematosus, 145 Microsomal antibodies, 145 Mouse kidney substrate, 145 Neurosyphilis, 142 Numerous spermatozoa in elderly, 144 Proficiency testing, VDRL, 143 Pyocystitis, 145 Quality control for VDRL, 142 Rape, examination of specimens, 142 Rapid plasma reagin test, 142 Reporting of spermatozoa in urine, 142 RIA counting, 145 RIA testing, 145 RPR card test, 142 Sperm count, 143 Spermatozoa in urine, 142, 144 Spinal fluid volume, 143 Substrates, ANA, 145 Syphilis tests, 142, 143 Toluidine red unheated serum test, 142 TRUST card test, 142 Urinary sediment, 143 Urine, spermatozoa in, 142 VDRL, 142, 143 VDRL proficiency testing, 143 Viral infection, 142 Viral serology, convalescent serum for, 142 Viremia, 142


Adenosine triphosphate, 148 Anaerobe identification, 146 Anaerobic culture, 146 Antibiotic therapy, 149 Antigen detection, Chlamydia, 150 Antimicrobial therapy, 149 Bacteremia, 149 Bacteriuria rapid tests, 146 Bacteroides fragilis, 146 B. bivius, 146 Bioluminescence, 146 Blood culture, 149 Campylobacter, 146 Chlamydia antigen detection, 150 Chlamydia culture, 150 Culture, anaerobic, 146 Culture, blood, 149 Culture, Chlamydia, 150 Culture, stool, 146 Culture, urine, 148 Diarrhea, 146 Enzyme immunoassay, Chlamydia, 150 F. necrophorum, 146 Gram-negative bacilli, 146 Gram stain, 146, 148 Holding times for urine cultures, 148 Identifying anaerobes, 146 Immunofluorescence, 150 Leukocyte esterase test, 146 Media, blood culture, 149 Nitrite test, 146 Non-resin containing media, 149 P. acnes, 146 Pilonidal cysts, 146 P. magnus, 146 Rapid tests for bacteriuria, 146 Resin media, 149 Septicemia, 149 Stool culture, 146 Urine culture, 148 Urine culture, holding times, 148 Volume of blood culture, 149



We have a laboratory staff of about 370, with many of them in remote satellite locations of up to 100 miles away. Laboratory coats are issued to all staff on an as-required basis, but staff are expected to launder them at home. Since the hepatitis B and in particular the HIV "scare," questions have arisen from a safety standpoint. Our instructions are to wash the coats in detergent in water at 160 F for 25 minutes. If a lower temperature cycle is used (less than 158 F), then bleach at a suitable concentration should be used. Laboratory coats visibly soiled with blood or body fluids should be soaked in household bleach (1:10) for 20 minutes before laundering.(1) Would you comment on this protocol as I have not seen any guidelines specific to this problem?

The cited reference(1) as well as the CDC recommendations(2) provide almost identical recommendations on this issue. They both appear to be derived from the recommendation listed in the "CDC Guideline for Handwashing and Hospital Environmental Control, 1985."(3) All the recommendations related to laundry were rated as Category II. Two studies were cited as showing the equivalence of hot-water washing (greater than 70 C) and lower-temperature washing with bleach.(4,5) These studies dealt primarily with bacterial rather than viral populations. One report(4) indicated that no viruses were recovered from the rinse water of soiled laundry.

Though no studies, to my knowledge, have specifically addressed the effect of laundering methods on HIV-contaminated laundry, the evidence available would indicate the procedures described would render HIV survival in laundry highly improbable. Furthermore, there are no reports to date implicating laundry as a source or means of transmission of HIV infection. Thus it would appear that the CDC recommendations,(3) though not specifically addressing HIV, should be quite satisfactory.

(1)Canada Diseases Weekly Report. Supplement--Recommendations for prevention of HIV transmission in health-care settings. CDWR 13S3, 1987. (2)CDC. Recommendations for prevention of HIV transmission in health-care settings. MMWR 36(Suppl 2S), 1987. (3)Garner, J.S., and Favero, M.S. CDC guidelines for handwashing and hospital environmental control, 1985. Infect. Control 7:231-243, 1986. (4)Blaser, M.J., et al. Killing of fabric-associated bacteria in hospital laundry by low-temperature washing. J. Infect. Dis. 149:48-57, 1984. (5)Christian, R.R., et al. Bacteriological quality of fabrics washed at lower-than-standard temperatures in a hospital laundry facility. Appl. Environ. Microbiol. 45:591-597, 1983.


As part of our quality assurance efforts, we have been tracking the number of re-collected blood specimens. We have not found any references concerning what would be an acceptable level of re-collects. Currently, we have been running close to 3.5 per cent, which includes students and re-collections due to checks on abnormal or unexpected results. Any advice?

I too was unable to find any reference concerning an acceptable level of repeated blood collections. This seems like a very useful figure that could and should be used in quality assurance. If this factor is to be recorded, however, it should be divided into several categories. You have listed two: drawing by students and re-collection to check on abnormal or unexpected results. I would add a category for difficulties in phlebotomy, subdivided into a second venipuncture by the same phlebotomist and a second drawing by another phlebotomist.

I would also add another category of processing misadventures, which would include lost specimens, broken tubes, etc. More categories may suggest themselves. The recording of these figures would make an extremely interesting system of quality assurance by the phlebotomy service and would undoubtedly satisfy a requirement for quality assurance inspections of the Joint Commission.
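The tracking scheme suggested above--re-collects tallied by cause against total draws--can be sketched as a simple categorized counter. The category names and the figures in the example are illustrative only, not from the column.

```python
from collections import Counter

# Hypothetical categories along the lines suggested in the answer above.
CATEGORIES = {
    "student_draw",                # drawing by students
    "abnormal_result_check",       # re-collection to confirm unexpected results
    "difficult_phlebotomy_same",   # second venipuncture by the same phlebotomist
    "difficult_phlebotomy_other",  # second drawing by another phlebotomist
    "processing_misadventure",     # lost specimens, broken tubes, etc.
}

def recollect_report(total_draws, recollect_events):
    """Tally re-collects by category and compute the overall rate (per cent)."""
    tally = Counter(e for e in recollect_events if e in CATEGORIES)
    rate = 100.0 * sum(tally.values()) / total_draws
    return tally, round(rate, 2)

# Illustrative month: 2,000 draws, 70 re-collects -> 3.5 per cent.
tally, rate = recollect_report(
    2000,
    ["student_draw"] * 30
    + ["abnormal_result_check"] * 25
    + ["processing_misadventure"] * 15,
)
```

A report like this, broken out by category, would show at a glance whether a rising rate is driven by training, specimen handling, or clinically requested repeats.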


I work in a physician's lab. In another area of our clinic, many allergy injections are given each day. The nurses use disposable syringes but cut off the needles before throwing them away. What is their level of possible exposure to AIDS in giving these shots? They do not wear gloves. They do not feel there is any possibility of transmission as they only inject intramuscularly. They cut the needles because if left intact they fear reuse by the public who have been known to dig through our trash. The expense of containers large enough for needle disposal also influences them. In the lab, we use puncture-proof containers, but we do not have the large volume that the nurses do.

To date, the risk of acquiring HIV infection through health care activities appears to be extremely small.(1,2) Of the 22 reported cases in the world literature (as of April 1988), the majority were acquired by accidental percutaneous inoculation--most often, needlestick by a needle contaminated with an AIDS patient's blood.(1)

In light of this, the risk to personnel giving allergy shots would vary directly with the proportion of patients that were HIV-infected and the frequency of needlestick injuries following injections. Since it is unlikely that one will be able to exert much control over the incidence of HIV infection in one's practice, efforts at risk reduction logically must be primarily aimed at minimizing the potential for needlestick accidents. Clearly gloves will not contribute to protection against needlesticks.

Although the quantity of blood contaminating a needle following intramuscular injections may be less than that following intravenous procedures, some contamination does occur, and there is no basis in theory or fact (to date) to justify the "feeling" that HIV transmission is not possible following intramuscular injections. The possibility of such transmission has been suggested by statistical analysis of experiences in Africa where needles may be reused and often are inadequately sterilized.(3)

The safety of needle cutting (chopping, nipping, etc.) has been questioned by some.(4) This is based on the potential for aerosolization of viruses during the cutting process and the potential for contaminating the cutter itself. In some settings, the justification for needle clipping would appear to be quite valid. If needle clipping is to be practiced, it should be done by devices designed to contain any aerosols produced by the process.

(1)Centers for Disease Control. Update: Acquired immunodeficiency syndrome and human immunodeficiency virus infection among health-care workers. MMWR 37:229-239, 1988. (2)McCray, E., et al. Occupational risk of the acquired immunodeficiency syndrome among health care workers. N. Engl. J. Med. 314:1127-1132, 1986. (3)Mann, J.M., et al. Risk factors for human immunodeficiency virus seropositivity among children 1-24 months old in Kinshasa, Zaire. Lancet 2:654-656, 1986. (4)Rutala, W.A., and Sarubbi, F.A. Management of infectious waste from hospitals. Infect. Control 4:198-204, 1983.


Posttransfusion hemoglobins and hematocrits have become an issue in our laboratory. A few of our technologists have been taught that posttransfusion H and H should be deferred for at least two hours, but the AABB Manual states 24 hours. When is the earliest a posttransfusion hemoglobin and hematocrit can be drawn to produce meaningful results?

Clinical quality assurance procedures require that posttransfusion hemoglobins and/or hematocrits be done to show that the patient has had an adequate response to a transfusion.

Unfortunately, there is no hard-and-fast method of demonstrating the patient's response. At the end of transfusion, the red cells have been added to the patient's blood stream with a variable amount of fluids and/or plasma. Depending on the relative volumes, this may produce a slight hemodilution or a slight concentration. It is therefore necessary to wait until the mixing of the red cells has taken place and the fluids transfused have equilibrated before one can truly determine the transfusion response. This can vary depending on the patient's cardiac status, renal status, size, and a number of other factors.

Table 15-2 in the American Association of Blood Banks' Technical Manual(1) shows an adult patient's hemoglobin response after the transfusion of a unit of whole blood or a unit of red cells. With a pretransfusion level of 8 gm/dl, a patient would have a level of 8.4 gm/dl soon after the transfusion of one unit of whole blood, compared with 8.7 gm/dl achieved with red cells. After 24 hours, the patient's hemoglobin level becomes 9.2 gm/dl in both instances.

Therefore, to be accurate, it would be best to measure the hemoglobin response at the end of 24 hours. Certainly two hours' delay would be the minimum that could be informative. The results will depend on the patient and also on the amount of blood given, but the two-hour period would seem to be acceptable as long as one realizes this is an approximation, not a definite number. If one wishes to know the definite response, then 24 hours probably is the amount of time that one should wait.
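The figures quoted from the AABB table can be expressed as simple per-unit increments: 0.4 gm/dl immediately after a unit of whole blood, 0.7 gm/dl after a unit of red cells, and 1.2 gm/dl for either product at 24 hours, all starting from the 8.0 gm/dl example. The sketch below only reproduces that worked example; it is not a validated predictor, and real responses vary with the patient factors discussed above.

```python
# Per-unit hemoglobin increments (g/dl) implied by the AABB example in the
# text, which starts from a pretransfusion level of 8.0 g/dl.
INCREMENT_G_DL = {
    ("whole_blood", "immediate"): 0.4,
    ("red_cells", "immediate"): 0.7,
    ("whole_blood", "24h"): 1.2,  # both products equilibrate to the same level
    ("red_cells", "24h"): 1.2,
}

def posttransfusion_hgb(pre_hgb, product, timing, units=1):
    """Estimated hemoglobin (g/dl) after transfusing `units` of `product`."""
    return round(pre_hgb + units * INCREMENT_G_DL[(product, timing)], 1)
```

For example, `posttransfusion_hgb(8.0, "red_cells", "immediate")` reproduces the 8.7 gm/dl figure from the table.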

(1)Technical Manual of the American Association of Blood Banks, 9th ed., p. 266. Arlington, Va., AABB, 1985.


Have any studies been done to determine the effect of anticoagulants (EDTA, fluoride, oxalate, etc.) on the AIDS virus's viability? Is there any possibility of a blood additive being synthesized that would kill the virus yet not affect our lab values?

To date, I am unaware of any published studies on the effect of anticoagulants on the viability of HIV. That anticoagulants may have some antimicrobial activity is well recognized. However, this is usually limited--often to just a few species of bacteria. I can find no reports of antiviral activity of anticoagulants, though it is of interest that heparin is usually recommended when an anticoagulant is required for a viral culture. Until proved otherwise, it would be prudent to assume anticoagulants have no effect on HIV or HBV.

Although almost anything is possible, the likelihood of developing a blood additive that would kill HIV and HBV and have no effect on laboratory examination of the blood seems small. Anticoagulants in common laboratory use, such as EDTA, fluoride, citrate, oxalate, and heparin, all have adverse effects on blood from the standpoint of laboratory assays. Although many measurable parameters of the blood are adversely affected, the degree of alteration varies considerably among the anticoagulants; most hematological studies, for example, are least affected by EDTA.

Thus by judicious use of several agents, we can keep anticoagulant alteration of blood values to a minimum. It is difficult to conceive of a virus-inactivating additive to blood that would not react with at least some components of blood and therefore alter it in some way.


We have a few technologists who draw blood bank specimens using the yellow tiger-stripe tube with thrombin. I have heard that we should not use these "fast chemistry" tubes because the thrombin is made up of anti-D sera. Is that true? What do you recommend?

There is a gray-and-yellow-striped tube from Becton Dickinson that contains thrombin and is designed for rapid chemistry determinations. The Becton Dickinson Technical Service Department informs me that the thrombin in this tube is an animal thrombin. For this reason, it is extremely unlikely to have anti-Rh (D) activity. Some therapeutic human antiserum preparations have been found to contain significant levels of anti-D and have produced a positive indirect antiglobulin test, but I know of no such reports for animal thrombin.

It is the custom in some laboratories to add a very small drop of topical thrombin, thus causing the plasma to clot, but again this topical thrombin is usually of animal origin. This technique probably should not be used since it introduces another source of possible error.

The Technical Manual(1) of the American Association of Blood Banks states that it is permissible to use either serum or plasma for pretransfusion testing since the same antibodies are present in both. It also states that most blood bank technologists prefer serum because plasma may occasionally give small fibrin clots that can be difficult to distinguish from agglutination. In an emergency, plasma can be used and in all probability will not cause any difficulty, but it is easier to allow the blood specimen to clot and use this for testing.

(1)Technical Manual of the American Association of Blood Banks, 9th ed., p. 196. Arlington, Va., AABB, 1985.


Because we are the largest hospital in the state, many infants and children are referred to us from health centers. Sometimes this requires that blood be drawn from these young patients more than once in the same day. We are unable to know the amount of blood previously drawn. The only control we have is how much we draw. We use the rule that half the baby's weight is drawn in cc. Any more than this and the parents are asked to bring the child back to have the remaining blood work drawn. How soon should a child return to complete the blood work? What do you consider to be a safe amount to draw on a child? Do any values change due to exsanguination?

It is difficult to answer your question completely since there are several unknown variables. First, is it possible to change the health centers' procedures so that they keep track of how much blood has been drawn from the infants and children and report it to you? Second, is the baby's weight expressed in kilograms or pounds? Last, I assume that the young patient is sent home after the blood is drawn and brought back again for another phlebotomy.

I checked with several of our pediatricians on this question. An infant's blood volume is 80-90 milliliters per kilogram; therefore, even a full-term infant (greater than 3 kg) has at least a 240 ml blood volume. A single venipuncture removing, in milliliters, half of the baby's weight in pounds would be very conservative (3 ml). In a full-term non-anemic infant, 5-6 ml of blood can easily be drawn without a reaction. A six-month-old of 6 kilograms can easily have 7-10 ml removed, and at one year of age, at least 9-10 ml. Ten to 15 ml can be removed without harm from the one-year-old if the child is non-anemic (meaning a hematocrit greater than 35 per cent) and healthy.

This therefore gives an allowable specimen volume of approximately 1-1/2 ml per kilogram of a child's weight, and this is still erring on the conservative side. Drawing this volume probably should not be repeated more than once a week, and if done for a period of time, the patient should be put on an iron supplement.
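The arithmetic above--a blood volume of 80-90 ml per kilogram and an allowable single draw of roughly 1-1/2 ml per kilogram--can be captured in two short helpers. The function names are mine, the defaults deliberately take the conservative end of each range given in the answer, and none of this substitutes for tracking the individual child's hemoglobin and hematocrit.

```python
def estimated_blood_volume_ml(weight_kg, ml_per_kg=80):
    """Conservative estimate of an infant's blood volume (80-90 ml/kg)."""
    return weight_kg * ml_per_kg

def max_single_draw_ml(weight_kg, ml_per_kg=1.5):
    """Allowable specimen volume of ~1.5 ml per kg, erring on the
    conservative side, per the pediatricians' figures above."""
    return weight_kg * ml_per_kg
```

For a 3 kg full-term infant this gives a 240 ml blood volume; for a 6 kg six-month-old, a maximum single draw of 9 ml, consistent with the 7-10 ml figure quoted.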

The most important thing is to keep close track of the child's hemoglobin or hematocrit. Even when as much as 500 cc is drawn from an adult, the hemoglobin and hematocrit do not change appreciably for about one to two hours after the phlebotomy. At that point and for the next 24 hours, hemodilution occurs and both the hemoglobin and hematocrit levels will decrease somewhat. Therefore a specimen drawn at the time of first admission will show whether the child is chronically anemic. If the child is, then probably less blood should be drawn. If not, the amounts given above should be safe.


We recently hired a phlebotomist from a hospital with a large neonatal nursery. He explained a procedure for drawing venous blood from neonates that involves sticking a hand vein with the tip of a needle. There is no syringe or Vacutainer attached to the needle, and venous blood drips into a collecting tube (i.e., Microtainer tube). Are you familiar with this technique? Can you describe how it is performed, and has it been published in the laboratory literature?

I consulted Betty Clagg, phlebotomy supervisor at Denver General Hospital, about this technique. It has been used for a number of years by phlebotomists there and was originally taught to them by one of the staff pediatricians. They use a 21- or 23-gauge needle. Using finger pressure for venous occlusion, the tip of the needle is placed in a distended vein, and blood is allowed to drip into an appropriately sized tube. It is important to insert the needle at the correct angle.

Clagg believes that it is necessary to be taught by someone skilled in the technique and that it would be difficult to perform properly from written instructions. The technique has not been published.



Please comment on the reporting of fructosamine values. We have been getting results that are below the established reference range (2.0-2.8 mmol/L). At the present time we are doing the test for gestational diabetes only. We have tried several major reference laboratories and experience the same problem: The results are less than their reference range. Do you know whether the reference range was established with "normal" non-pregnant adults or with pregnant females?

Recent papers(1-3) and the experience of the diabetologist at a local medical center that has both an active diabetes treatment program and maternity service confirm the observation that the fructosamine reference range is lower in pregnant women than in other adults. The distribution of fructosamine values in pregnant women appears to decrease with gestational age and to increase with maternal age. The diabetes group at one of our local hospitals considers 2.5 mmol/L as the upper range of normal for a pregnant woman at 28 weeks of gestation. They have not seen a pregnant non-diabetic with a value greater than this. Papers reporting the decreased fructosamine levels in pregnancy generally show a lowering of about 0.1-0.2 mmol/L at the end of the gestational period.

Fructosamine appears to be useful in screening for gestational diabetes. Fructosamine reflects the glucose concentrations over a one- to three-week period. Glycated hemoglobin reflects the glucose values over a one- to two-month period and is less useful in pregnancy, where the reference range changes over time. Moreover, glycated hemoglobin has not been shown to be useful as a screening test for diabetes.
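As an illustration only, the flagging logic implied by the figures above might look like the sketch below. The cutoffs are assumptions drawn from this discussion--2.0-2.8 mmol/L for non-pregnant adults and roughly 2.5 mmol/L as the upper limit at 28 weeks of gestation--not a validated screening rule, and each laboratory should establish its own pregnancy-adjusted range.

```python
def fructosamine_flag(value_mmol_l, pregnant=False):
    """Flag a fructosamine result against the illustrative limits above.

    Assumed limits: non-pregnant upper limit 2.8 mmol/L; pregnant upper
    limit 2.5 mmol/L (the local diabetes group's figure at 28 weeks);
    lower limit 2.0 mmol/L for the non-pregnant range.
    """
    upper = 2.5 if pregnant else 2.8
    if value_mmol_l > upper:
        return "high"
    if value_mmol_l < 2.0:
        return "low"  # common in pregnancy, per the ranges discussed above
    return "within range"
```

Under these assumed limits, a value of 2.6 mmol/L would be flagged high in a pregnant patient but would fall within the non-pregnant range.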

(1)Frandsen, E.K.; Sabagh, T.; and Bacchus, R.A. Serum fructosamine in diabetic pregnancy. Clin. Chem. 34:316-319, 1988. (2)Roberts, A.B., and Baker, J.R. A screening test for diabetes in pregnancy. Am. J. Ob. Gyn. 154:1027-1030, 1986. (3)Van Diejen-Visser, M.P.; Salemans, T.; Van Wersch, J.W.J.; Schellekens, L.A.; and Brombacher, P.J. Glycosylated serum proteins and glycosylated hemoglobin in normal pregnancy. Ann. Clin. Biochem. 23:661-666, 1986.


Although we are a large hospital (1,000 beds), we do not have a large volume of outpatient services. We feel that it is appropriate to split samples, run purchased RIA kits, and compare the results with those of a reference lab, watching appropriate clinical ranges (i.e., an abnormal result is abnormal by both methods, a low normal is low normal by both methods, etc.) and using the kit insert's normal range. Is this adequate? If not, how would we go about establishing a normal range for tests such as FSH and LH?

You are right on the mark! The approach you use to determine reference ranges is the one that we endorse. Essentially, we collect specimens from 30-50 supposedly healthy subjects. We perform the assay under consideration and compare the results with either the proposed reference range indicated in the package insert and/or the values determined in the reference lab. In the first case, we get verification that the reference range is a reasonable one. In the second case, we have the added opportunity of verifying the accuracy (comparative data) using the reference lab. I have very little to add to your procedure.
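The verification described above--assaying specimens from 30-50 supposedly healthy subjects and checking the results against the package-insert range--might be sketched as follows. The 5 per cent tolerance and the sample data are illustrative assumptions, not a published acceptance criterion.

```python
def verify_reference_range(healthy_results, insert_low, insert_high,
                           tolerance=0.05):
    """Return True if the package-insert reference range is verified:
    at most `tolerance` fraction of healthy-subject results fall
    outside [insert_low, insert_high]."""
    outside = sum(1 for r in healthy_results
                  if not insert_low <= r <= insert_high)
    return outside / len(healthy_results) <= tolerance
```

The same loop, run against split-sample results from the reference laboratory instead of the insert range, gives the comparative accuracy check mentioned above.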


Can you provide a method for establishing a clinically significant delta check for our newly incorporated computer system? Presently delta can be programmed in so the technologists can view the most recent previous result, which we appreciate, but per cent delta significance is our concern. We currently multiply the standard deviation of a normal control by 3 and use that as the delta value, but this makes the ranges too tight.

We find the delta check to be useful for internal quality control purposes. Although results on control sera assess analytical bias and variability, the patient delta check also serves another role. It can flag potential errors that result from patient or specimen mixups. For example, in a patient with an alkaline phosphatase of 300 U/L on Monday who then has a value of 100 U/L on Wednesday, we would be alerted that such a change would be most unlikely on the basis of the clinical setting. We might consider a clerical error. Specimen mixups or clerical errors are often not picked up using control sera results.

What delta values should one use to detect potential errors? The reader suggests using three times the standard deviation of the analytical variability determined on normal serum controls. We do not subscribe to that approach. The delta values should be based upon the total expected variation, taking into account the physiologic variation and analytical variation. We customize the delta values to consider clinical significance as well as expected analytical variability.

Ladenson(1) presented an approach to delta values using the per cent change from a previous result to a later result. His values, as well as those determined in our laboratory, are presented in Table I. In the study by Ladenson, the time limit between results is three days except for thyroxine, where it is 30 days.

Dr. Ronald Ng, director of clinical chemistry at Methodist Hospital of Indiana, supplied the delta values used at our institution. The time limits are defined as 30 days for inpatients and 90 days for outpatients. We use both absolute delta values and per cent delta values. In the case of absolute delta values, one determines the difference between the larger result and the smaller result. For the per cent delta value, one determines the difference between the larger result and the smaller result and divides by the smaller result. In contrast, Ladenson divides the difference between the prior result and the later result by the prior result.

As noted in Table I, the program used at Methodist Hospital for determining delta values is a very flexible one. For example, in the case of serum bilirubin, the delta value is 0.8 mg/dl when bilirubin is below 1.9 mg/dl and is 50 per cent when serum bilirubin is greater than 1.9 mg/dl. The choice of delta values is a very empiric one and should be viewed as a threshold value that will pick up clinically significant errors without unduly burdening the laboratory by looking up many, many results.
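The Methodist Hospital scheme described above--an absolute delta limit below a threshold concentration and a per cent delta limit above it--can be sketched as follows, with the bilirubin figures from the text as the worked example (0.8 mg/dl below 1.9 mg/dl, 50 per cent above it). Using the larger of the two results to choose which rule applies is my assumption; the per cent delta divides by the smaller result, as described above.

```python
def delta_check(prev, curr, abs_limit, pct_limit, threshold):
    """Flag a result pair whose change exceeds the configured delta value.

    When the larger result is below `threshold`, an absolute difference
    greater than `abs_limit` is flagged; otherwise a per cent difference
    (relative to the smaller result) greater than `pct_limit` is flagged.
    """
    larger, smaller = max(prev, curr), min(prev, curr)
    if larger < threshold:
        return (larger - smaller) > abs_limit
    pct = 100.0 * (larger - smaller) / smaller
    return pct > pct_limit

# Bilirubin example from the text: 0.8 mg/dl absolute below 1.9 mg/dl,
# 50 per cent above it.
BILI = dict(abs_limit=0.8, pct_limit=50, threshold=1.9)
```

With these settings, a change from 2.0 to 3.5 mg/dl (a 75 per cent delta) is flagged, while a change from 2.0 to 2.8 mg/dl (40 per cent) is not.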

I would be very interested in the experience of other MLO readers regarding their use of delta values both in terms of the number of laboratories using delta values as well as the approach used to define the absolute or per cent values.

(1)Ladenson, J.H. Patients as their own controls: Use of the computer to identify "laboratory error." Clin. Chem. 21: 1648-1653, 1975.



What effect do radiopaque contrast media have on CBC and general chemistry tests? I know of a lab that draws blood immediately after injection of 100 cc of Renografin using the same needle in the same vein before withdrawal to obtain their blood for studies.

Renografin is one of several contrast materials used in x-ray studies of the kidneys, ureters, and bladder. These materials are opaque to x-rays and therefore can be used to visualize those organs. They are complex organic molecules that contain iodine. They cause a markedly increased urine specific gravity and also can form unusual crystals in urine. I have been unable to find documentation of other artifactual effects in the blood of patients who have been given these materials.

In many x-ray departments, these and other contrast media are injected through a scalp-vein needle with catheter or other intravenous catheter that is left in place during the study. There is a temptation to use this indwelling intravenous line to collect blood specimens following injection of the contrast media. When that is done, there is a high risk that the specimen will be contaminated with high concentrations of contrast media.

We have shown that approximately 10 ml of blood must be withdrawn from an indwelling intravenous catheter in order to minimize contamination. While it is preferable not to obtain a specimen from an indwelling intravenous catheter, there are times when this rule cannot be followed. Occasionally patients suffer adverse reactions to contrast media in the x-ray department. In these instances, it may be necessary to quickly obtain a specimen through the catheter. Under these circumstances, approximately 10 ml of blood should be withdrawn and discarded prior to obtaining the specimen for analysis.


Physicians in our hospital order cardiac enzymes on patients undergoing percutaneous coronary angioplasty. Are these always elevated in such patients after having this procedure?

Pauletto and colleagues(1) recently evaluated creatine kinase and CK-MB in patients undergoing percutaneous transluminal coronary angioplasty (PTCA) and compared these results with a control group undergoing diagnostic coronary angiography for stable angina. In this study, the authors drew blood specimens before and at two-hour intervals for the first 12 hours and then at six-hour intervals for the next 12 hours in 24 patients undergoing PTCA for stable angina caused by severe coronary artery stenosis.

CK and CK-MB were normal in the four control patients as well as in 20 of the 24 test subjects. Of the four test patients with abnormal results, one definitely had symptoms and ECG changes of acute myocardial infarction. Three had slight increases of CK-MB, and one of the three also had a slight increase in total CK. Although none of the three patients had definite signs or symptoms of acute myocardial infarction, some may have had myocardial damage secondary to undergoing PTCA.

On the basis of this information, I suggest ordering CK and CK-MB only if there is clinical or ECG evidence of myocardial infarction.

(1)Pauletto, P.; Piccolo, D.; Scannapieco, G.; et al. Changes in myoglobin, creatine kinase, and creatine kinase-MB after percutaneous transluminal coronary angioplasty for stable angina pectoris. Am. J. Cardiol. 59: 999-1000, 1987.


How do we calculate the risk factor for coronary disease based on lipid measurements? Are tables available on the basis of total cholesterol, LDL and HDL cholesterol, or apolipoproteins for calculating risk factor? Which is the most reliable? And can risk factor be calculated on the basis of several parameters by averaging the individual values obtained for each parameter?

Although total cholesterol and LDL cholesterol are the most quantitative risk factors for coronary artery atherosclerosis, other risk factors are also important. Cigarette smoking, hypertension, severe obesity, diabetes mellitus, and a history of coronary heart disease, either in the individual or as premature coronary heart disease in family members, have been demonstrated to increase the risk of coronary artery atherosclerosis. Males are also at a greater risk than females.

A variety of epidemiologic studies have demonstrated a direct relationship between the level of total and LDL cholesterol and the rate of coronary heart disease. These studies have shown relatively little difference in coronary heart disease up to a serum cholesterol of 200 mg/dl. The rate begins to climb between 200 and 400 mg/dl, with the steepest climb in rate above 240 mg/dl. The risk of coronary heart disease for an individual with a cholesterol of 300 mg/dl is approximately four times that of an individual with a cholesterol of 200 mg/dl. Hypertension and smoking are powerful risk factors that add to the cholesterol risk factor.

The recent National Cholesterol Education Program report(1) of the expert panel on detection, evaluation, and treatment of high blood cholesterol in adults recommends using only total serum cholesterol for the initial screening of individuals. For those with a desirable serum cholesterol of 200 mg/dl or lower, no additional testing is performed unless family history or suspicious clinical signs of coronary heart disease are present. For those with a borderline high total serum cholesterol of 200 to 239 mg/dl, but without other risk factors, dietary counseling is recommended but no other testing should be done. For those with a borderline high total serum cholesterol of 200 to 239 mg/dl with definite coronary heart disease or two other risk factors (one of which can be male sex), additional lipid analysis is performed in order to determine the LDL cholesterol level. Individuals with a high total serum cholesterol of greater than 240 mg/dl also should have additional lipid tests.

A rough rule of thumb for estimating the risk of coronary heart disease (CHD) has been developed. The risk factor is compared with an "average" person with a total serum cholesterol of 200 mg/dl with no other risk factors.

1% increase in total cholesterol = 2% increase in CHD risk(2,3)

1% increase in HDL cholesterol (from 45 mg/dl) = 1% decrease in CHD risk(4)
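Read linearly, these rules of thumb can be put into a small sketch. The function name and the linear reading are mine, not the expert panel's; this is only an illustration of the arithmetic, not a clinical tool.

```python
def chd_risk_multiplier(total_chol, hdl_chol):
    """Rough relative CHD risk versus the 'average' person (total
    cholesterol 200 mg/dl, HDL 45 mg/dl), applying the rules of thumb:
    each 1% rise in total cholesterol adds about 2% risk, and each 1%
    rise in HDL above 45 mg/dl removes about 1% risk."""
    pct_tc = (total_chol - 200.0) / 200.0 * 100.0   # percent change in total cholesterol
    pct_hdl = (hdl_chol - 45.0) / 45.0 * 100.0      # percent change in HDL
    return 1.0 + 0.02 * pct_tc - 0.01 * pct_hdl

# A total cholesterol 10% above 200 mg/dl with an average HDL gives
# roughly a 20% increase in risk (multiplier about 1.2).
print(chd_risk_multiplier(220, 45))
```

A patient at 220 mg/dl total cholesterol thus carries roughly 1.2 times the reference risk under this crude reading; real risk assessment, as the answer stresses, must also weigh the nonquantified factors.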

Although other risk factors contribute to the risk of coronary heart disease, the extent to which they do so has not been determined. For this reason, no numeric relationship has been established for their contribution to risk. The expert panel(1) recommends that when two additional risk factors are present, those individuals in the desirable cholesterol range of under 200 mg/dl should be considered as though they are in the borderline high cholesterol range of 200 to 239 mg/dl and those in the borderline high cholesterol range should be considered as though they are in the high cholesterol range (more than 240 mg/dl).

(1)National Cholesterol Education Program: Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults. Arch. Intern. Med. 148: 36-69, 1988. (2)Lipid Research Clinics Program. The Lipid Research Clinics primary prevention trial results: I. Reduction in incidence of coronary heart disease. JAMA 251: 351-364, 1984. (3)Lipid Research Clinics Program. The Lipid Research Clinics primary prevention trial results: II. The relationship of reduction in incidence of coronary heart disease to cholesterol lowering. JAMA 252: 365-374, 1984. (4)Gordon, D.J., et al. High-density lipoprotein cholesterol and coronary heart disease: Four American studies (abstract). Circulation 76(4): part 2, supplement IV, 435, October 1987.


I have read that anabolic steroids might induce changes in serum lipid levels. Has anyone studied this question?

Dr. Hurley and co-workers determined total cholesterol, LDL cholesterol, and HDL cholesterol values before and after anabolic steroids were taken by a number of healthy male subjects.(1) Twelve male athletes ingested anabolic androgenic steroids over a period of four and a half weeks, self-administering the drugs as part of a bodybuilding and weight training program.

The effects on their serum lipid values were dramatic. The total cholesterol value rose from a baseline of 185 mg/dl to 232 mg/dl, more than a 25 per cent increase.

Of significance clinically was the increase in LDL cholesterol, going from 117 mg/dl at baseline to 188 mg/dl after four and a half weeks. The HDL cholesterol decreased from 51 to 23 mg/dl. The consequent ratio of LDL/HDL went from 2.5 to 9.5, or a 280 per cent increase.

Based upon the risk of coronary heart disease, this value is very disconcerting. It appears that even short-term use of anabolic steroids could result in profound changes in the cholesterol risk value for coronary heart disease.

(1)Hurley, B., et al. High-density-lipoprotein cholesterol in bodybuilders vs. powerlifters. JAMA 252: 508-513, 1984.


Our normal values for creatine kinase are 10-180 U/L for males. Is there any valid reason for doing CK isoenzymes on any CK greater than 100, or should they be done only when greater than normal?

This question was referred to Dr. Ronald H. Ng, director of clinical chemistry at Methodist Hospital of Indiana. His answer follows:

Dillon and co-workers(1) have reported documented cases of myocardial infarction in which the total CK activity remained normal while the CK-MB level was elevated. Since the total CK level may be very low in small individuals, sedentary ones, and older people, an increased CK activity after an acute myocardial infarction in these subjects may not exceed the upper limit of the normal range.

In general, there is no need to perform CK isoenzyme determinations on specimens whose total CK activity is less than three-fourths of the upper limit of normal. I have not encountered any positive CK-MB result by electrophoresis when specimens below this cutoff level were analyzed. This is because for each activity unit of CK-MB released from the heart after an acute myocardial infarction, about 5 units of CK-MM are also released. Thus the total CK activity is readily elevated in myocardial infarction. An elevated CK-MB in the presence of a normal total CK should be questioned.
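The reflex rule Dr. Ng describes reduces to a one-line cutoff check. A minimal sketch, with the function name mine and the 180 U/L male upper limit taken from the question:

```python
def needs_ck_isoenzymes(total_ck, upper_limit_normal=180):
    """Reflex rule from the answer: run CK isoenzymes only when total CK
    is at least three-fourths of the upper limit of normal.
    With an upper limit of 180 U/L, the cutoff is 135 U/L."""
    return total_ck >= 0.75 * upper_limit_normal

# A CK of 100 U/L falls below the 135 U/L cutoff, so isoenzymes
# would not be run; a CK of 140 U/L would trigger them.
print(needs_ck_isoenzymes(100), needs_ck_isoenzymes(140))
```

This answers the reader's question directly: the trigger is not "greater than 100" or "greater than normal" but three-fourths of the laboratory's own upper limit.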

Serial monitoring to detect the pattern of a rise and a fall in the CK-MB level would be more assuring of true CK-MB release.

(1)Dillon, M.C.; Calbreath, D.F.; Dixon, A.M.; Rivin, B.E.; Roark, S.F.; Ideker, R.E.; and Wagner, G.S. Diagnostic problem in acute myocardial infarction. Arch. Intern. Med. 142: 33-38, 1982.


Is the human chorionic gonadotropin (hCG) doubling time of two days in the first trimester of pregnancy a useful diagnostic approach for distinguishing a normal pregnancy from an ectopic or abnormal intrauterine pregnancy?

This question was referred to Donald Warkentin, Ph.D., director of clinical chemistry at Overlook Hospital, Summit, N.J. His answer follows:

When used properly with ultrasound and the clinical history, hCG doubling times (DTs) can be helpful, although the commonly cited values appear to be low. Doubling time is defined as the number of days required for serum hCG to double. It has been shown that DTs are shorter during the period from 1.5 to 3 weeks after ovulation than they are at weeks 6 to 8.

According to the literature, the values can vary from 1.4 to 4.8 days, depending on when the specimens were obtained, with considerable overlap between normal and abnormal pregnancies. Shorter DTs can sometimes be seen in multiple births, molar pregnancies, and gestational choriocarcinomas. Significantly longer doubling times and decreases in hCG values have been observed in ectopic and abnormal pregnancies.

Caution needs to be exercised in interpreting DTs because an hCG test cannot pinpoint the tissue that is producing hCG and ultrasound cannot routinely detect a gestational sac until about 28 days after ovulation. In addition, because of the uncertainty of the day of ovulation, single hCG values are not very helpful. At the very least, two specimens should be drawn two to four days apart. The doubling times should be calculated according to the following formula(1): [Mathematical Expression Omitted]

If more than two points are obtained, a linear regression equation can be fitted to the values and the DT calculated by the following formula(1): [Mathematical Expression Omitted]
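The printed formulas were omitted from this reprint and cannot be recovered from the text; the standard two-point and least-squares forms used in this kind of analysis are sketched below as an assumption, not as the cited authors' exact notation.

```python
import math

def doubling_time_two_points(t1, hcg1, t2, hcg2):
    """Two-point doubling time in days: elapsed time multiplied by
    ln(2) divided by the natural log of the hCG ratio."""
    return (t2 - t1) * math.log(2) / math.log(hcg2 / hcg1)

def doubling_time_regression(times, hcg_values):
    """With more than two points, fit ln(hCG) = a + b*t by least
    squares; the doubling time is then ln(2) / slope."""
    n = len(times)
    y = [math.log(v) for v in hcg_values]
    mean_t = sum(times) / n
    mean_y = sum(y) / n
    slope = sum((t - mean_t) * (yy - mean_y) for t, yy in zip(times, y)) / \
            sum((t - mean_t) ** 2 for t in times)
    return math.log(2) / slope

# hCG rising from 100 to 200 mIU/ml over two days doubles once,
# so the doubling time is two days.
print(doubling_time_two_points(0, 100, 2, 200))
```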

In summary, because of the variation in doubling times and the many factors influencing these values, hCG DTs should only be considered along with the patient's clinical history, ultrasound findings, and physical examination results.

(1)Pittaway, D.E.; Reish, R.L.; and Wentz, A.C. Doubling times of human chorionic gonadotropin increase in early viable intrauterine pregnancies. Am. J. Obstet. Gynecol. 152: 299-302, 1985.


What is the diagnostic value of the D-xylose absorption assay?

The D-xylose absorption test has been a valuable diagnostic test for malabsorption for about 50 years. Although infrequently performed, it is still considered a useful test in the differential diagnosis of malabsorption.

D-xylose is a pentose sugar that does not require enzymatic action for its digestion or absorption. It is not metabolized by the body and, consequently, measurement of its urinary excretion is a good indication of the amount of D-xylose absorbed in the intestine. When renal function is normal, a lower than expected urinary excretion of D-xylose indicates that absorption in the GI tract is deficient.
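The interpretation described above is a simple fraction-excreted check. The 25 g oral dose and the 4 g five-hour urine cutoff in this sketch are commonly used adult protocol values I am supplying for illustration; they do not appear in the article, and laboratories should use their own validated protocol.

```python
def excreted_fraction(urine_xylose_g, dose_g=25.0):
    """Fraction of the oral D-xylose dose recovered in the timed
    urine collection; a good index of intestinal absorption when
    renal function is normal."""
    return urine_xylose_g / dose_g

def absorption_normal(urine_xylose_g, cutoff_g=4.0):
    """A five-hour excretion below the cutoff suggests deficient
    absorption in the GI tract (enterogenous malabsorption).
    Dose and cutoff here are illustrative adult values only."""
    return urine_xylose_g >= cutoff_g

# 5 g recovered from a 25 g dose is 20% excretion, above the
# illustrative 4 g cutoff.
print(excreted_fraction(5.0), absorption_normal(5.0))
```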

The most frequent causes of enterogenous malabsorption (malabsorption caused by a defect in the intestine) are sprue, blind loop syndrome, obstruction of the lymphatic flow of the small bowel, small bowel inflammatory disease, and shortened bowel.


What is the present diagnostic value of the prostate-specific antigen? Can it replace prostatic acid phosphatase in diagnosing prostate cancer?

Prostate-specific antigen or PSA is a glycoprotein found in the epithelial cells of the prostatic duct and acini. It has a molecular weight of 33,000 daltons. About 10 years ago Wang and coworkers(1) identified this glycoprotein in human prostatic tissue and raised an antibody against it. These researchers found that PSA was present in normal, benign hypertrophic, and malignant prostatic tissues but not in other human tissues. A monoclonal immunoradiometric assay is available for measuring PSA.

Prostatic acid phosphatase (PAP) is a commonly ordered laboratory test to diagnose prostatic cancer. PAP can be determined either by enzymatic procedures or by immunoassays. In a recent paper on PSA utility in cancer staging, Rock and coworkers(2) compared the immunoassays for PAP and PSA in four stages of prostate cancer as well as in benign prostatic hypertrophy. Prostate cancer can be staged as A through D, based upon extent of tumor, positive lymph nodes, and evidence of metastases.

For benign prostatic hypertrophy, the likelihood of PAP being elevated is 3 per cent versus 63 per cent for PSA; for Stage A, it is 0 per cent for PAP and 52 per cent for PSA; for Stage B, 8 per cent and 78 per cent, respectively; for Stage C, 60 per cent and 100 per cent, respectively; and for Stage D, 90 per cent and 100 per cent, respectively.

Based on this information, it is apparent that PSA is a very sensitive but less specific test for prostatic cancer, while PAP has higher specificity but lower sensitivity. I conclude that PSA should not be used as a screening test for prostate cancer due to its poor specificity; however, either PSA or PAP is an excellent monitor, useful in following patients who have been treated for prostatic cancer and in whom recurrence must always be evaluated.

(1)Wang, M.C., et al. Purification of a prostate-specific antigen. Invest. Urol. 17: 159-163, 1979. (2)Rock, R.C., et al. Evaluation of a monoclonal immunoradiometric assay for prostate-specific antigen. Clin. Chem. 33: 2257-2261, 1987.



Our anesthesiologists seem to question our white counts on presurgical patients frequently. We report by telephone all white counts below 3 × 10^9/L or above 25 × 10^9/L. We run abnormal controls in duplicate, perform a manual white count, and correlate these counts with the blood film. Our quality control data as well as our College of American Pathologists' proficiency testing results are within acceptable limits. As conscientious technologists, we wonder if we are missing something.

You have outlined what would seem to be an acceptable quality control program, although the actual data and limits are not given. But I would presume that these would be in good order.

Several general comments can be made about leukocyte counting. First of all, the choice of a counting instrument affects the precision and accuracy of the counts. Survey participant summaries may give you some insight into the choice of instruments, but they are not the entire story.

Certain patient specimens may be associated with higher rates of spurious counts, e.g., patients on chemotherapy or those with certain lymphoproliferative disorders. But the presurgical patients about whom your doctors are asking would seem not to have those sorts of problems.

Finally, it is necessary to understand that a patient's white count may be quite labile. In other words, the leukocyte count can vary significantly. Drs. Statland and Winkel have nicely documented these variations, and laboratorians as well as physicians must realize that such variations do occur, often to a considerable extent.(1) You might wish to duplicate some of Dr. Statland's studies in your laboratory, in order to highlight these changes.

If you carefully follow an acceptable quality control program while using good instruments and you receive acceptable results on interlaboratory surveys, I should think you can tell your physician users that your blood counts are indeed correct.

(1)Statland, B.E., and Winkel, P. Physiological variability of leukocytes in healthy subjects, in "Differential Leukocyte Counting," Koepke, J.A., ed. Skokie, Ill., College of American Pathologists, 1978.


Our laboratory routinely runs daily standards and controls for PT and PTT on our Coag-a-Mate 2000. The "standards" are normal and high abnormal commercial controls, while the "controls" are the same levels but of different lot numbers. What is the rationale for this, and is this necessary?

Although there are some true standards available for coagulation testing (e.g., Factor VIII standards), it would not be appropriate or useful to use such standards routinely. Some confusion is almost invariably present when hematologists talk about controls, standards, and the "control with assigned values," a very ambivalent term.

It is not evident from the question that the "standard" is being used to calibrate the instrument--indeed one cannot "calibrate" coagulation detectors in the usually accepted sense of the term. It would appear that in reality you are using two control systems when only one would be entirely adequate. You may, in fact, be unnecessarily confusing the operator when one set of plasmas is "in control" and the other is "out of control." Which reflects the truth?

My recommendation is to use a single-level control plasma, preferably at the so-called decision level, for it is at this level that precise testing is most important. A level in the middle of the therapeutic range for anticoagulation with warfarin (PT) or heparin (PTT), respectively, is probably ideal.

It may be that your laboratory uses the so-called "normal standard" for the comparison with patient results. We prefer to report to clinicians the midpoint of the normal range if they wish to calculate a prothrombin time ratio. Our thinking is that the "control" value is an internal quality control matter. You don't report your sodium or chloride control values, do you? In fact, you wouldn't report out patient data unless the controls are in control.

In summary, I believe you are running more controls than necessary. Certainly you should drop the "standards." Finally, keep posted on the continuing development of the International Normalized Ratio (INR) method of reporting prothrombin times. Some of your difficulties will fall away when you buy those calibrated thromboplastins.
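The INR mentioned above is computed from the PT ratio the answer describes (patient PT over the midpoint, or geometric mean, of the normal range) raised to the thromboplastin's International Sensitivity Index. This is the standard WHO formula, not something stated in the article; variable names are mine.

```python
def prothrombin_inr(patient_pt_sec, mean_normal_pt_sec, isi):
    """INR = (patient PT / geometric mean normal PT) ** ISI.
    With an ISI of 1.0 this reduces to the plain prothrombin time
    ratio; calibrated thromboplastins supply the ISI that makes
    results comparable across reagents."""
    return (patient_pt_sec / mean_normal_pt_sec) ** isi

# A patient PT of 24 s against a 12 s mean normal PT, with an
# ISI of 1.0, gives an INR of 2.0.
print(prothrombin_inr(24.0, 12.0, 1.0))  # → 2.0
```

Note how a more sensitive reagent (higher ISI) amplifies the same ratio, which is exactly why uncalibrated PT ratios from different laboratories were hard to compare.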


What are the possibilities that lead to the phenomenon of platelet clumping?

This question was referred to Donald W. McCloskey, M.D., director of hematology at Methodist Hospital of Indiana. His answer follows:

Platelet clumping is a very common problem in the laboratory, frequently leading to reports of spurious thrombocytopenia, sometimes together with spurious leukocytosis. In our laboratory, which runs approximately 700 CBCs a day, the most common cause of platelet clumping is the incompletely anticoagulated specimen. In this situation, clumped platelets together with fibrin strands may be noted, especially on the feather edge of the peripheral smear. Patients with thrombocytopenia should have a peripheral smear reviewed to rule out spurious thrombocytopenia due to platelet clumping or platelet satellitosis.

Platelet clumping, especially in EDTA-anticoagulated specimens, occurs due to antibodies with apparently no in vivo significance. Platelet clumping may be eliminated by drawing the specimen in another anticoagulant such as sodium citrate. Patients with idiopathic thrombocytopenic purpura (ITP), on the other hand, have true thrombocytopenia, generally without platelet aggregates.

Methods for evaluation of in vivo platelet aggregation in patients with thromboembolism are described in the references.(1-4)

(1)Kjeldsberg, C.R., and Hershgold, E.J. Spurious thrombocytopenia. JAMA 227: 628-630, 1974. (2)Solanki, D.L., and Blackburn, B.C. Spurious leukocytosis and thrombocytopenia. JAMA 250: 2514-2515, 1983. (3)Savage, R.A. Pseudoleukocytosis due to EDTA-induced platelet clumping. Am. J. Clin. Pathol. 81: 317-322, 1984. (4)Kohanna, F.H.; Smith, M.H.; and Salzman, E.W. Do patients with thromboembolic disease have circulating platelet aggregates? Blood 64: 205-209, 1984.


I have heard that one could perform a bleeding time on the leg rather than the arm of a patient. This may be important when both arms are immobilized or when it is difficult to find adequate space to perform such a test. Is a bleeding time as accurate on the leg?

Recently Dr. Hertzendorf and co-workers(1) published a paper on this very topic. They determined the bleeding time on 30 healthy volunteers on both their upper and lower extremities. In addition, they also assayed bleeding time two hours after aspirin administration. In performing the test on the leg, they applied a blood pressure cuff to the thigh, maintaining the pressure at 40 mm Hg, and then performed the test on the medial aspect of the calf about three inches below the knee. Results on the arm were performed in the conventional manner.

They found an excellent correlation between the two sites without any clinical or statistical significant differences. In addition, the effect of aspirin in prolonging bleeding time was seen using either the upper or lower extremity of the patient.

Based upon their studies, it appears that bleeding times performed on the arm or the leg are equally precise in normal subjects and also change in the same degree after aspirin ingestion. Such information should be very helpful when confronting a patient whose arm is not readily available because of injury or other factors. These authors recommend reference values up to eight minutes for either the arm or leg.

(1)Hertzendorf, L.R.; Stehling, L.; Kurec, A.S.; et al. Comparison of bleeding times performed on the arm and the leg. Am. J. Clin. Pathol. 87: 393-396, 1987.


Body fluids other than cerebrospinal fluid are transported to our lab in a purple-top (EDTA) tube. Our policy is to perform the cell count within one hour. Does the EDTA extend the integrity of cells found in such fluids to lengthen the time of processing to two or three hours?


The morphologic examination of various body fluid specimens has been receiving increasing attention in the last few years. The availability of cytocentrifuges, which allow for the preparation of excellent cytologic films, has done much to fuel this improvement.

Jeri Walters, a technologist from Milwaukee, has been giving seminars on the hematologic examination of body fluids. In a recent presentation, she touched on the essence of your question, stating that prompt examination of fluids (cerebrospinal, pleural, peritoneal, etc.) was an important consideration. In other words, the cell counts as well as the preparation of morphologic films should be done within an hour or so of collection. We all understand the logistic as well as technical problems this engenders, so I asked her to give me additional evidence.

She provided a particularly pertinent article, published in 1986, on leukocyte survival in cerebrospinal fluid.(1) The authors prepared CSF containing known numbers of neutrophils, lymphocytes, and monocytes and showed that neutrophils decrease rapidly in CSF. At one hour, one-third of the neutrophils were lost, and fully half had disappeared by two hours. The effect of these artifacts on clinical diagnosis can be significant.
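The loss figures just cited can be interpolated to show how quickly a delayed count becomes suspect. This is purely an illustration of the reported data points, not a validated correction factor; the function name is mine.

```python
def neutrophils_remaining(minutes):
    """Approximate fraction of CSF neutrophils still countable after a
    processing delay, linearly interpolated between the reported
    points: 100% at 0 min, about 67% at 60 min, 50% at 120 min."""
    points = [(0, 1.0), (60, 2.0 / 3.0), (120, 0.5)]
    if minutes <= 0:
        return 1.0
    if minutes >= 120:
        return 0.5  # no data beyond two hours in the cited study; clamp
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if t0 <= minutes <= t1:
            return f0 + (f1 - f0) * (minutes - t0) / (t1 - t0)

# Even a 90-minute delay leaves well under two-thirds of the
# neutrophils countable by this interpolation.
print(neutrophils_remaining(90))
```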

The reason for the loss was assumed to be the hypotonicity of the CSF (specific gravity, 1.007). Does the same thing occur in other fluids that usually have higher specific gravities? The authors of a paper on the reproducibility of cell counting in synovial fluid state that they had "confirmed in our own laboratory that leukocyte counts do fall after as little as one hour."(2)

So there is no clear-cut evidence that cell counts in body fluids other than CSF are stable for longer periods of time. The pursuit of an answer seems to be an appropriate activity for further study. Perhaps some of our readers have data they have accumulated that might shed additional light on this question.

(1)Steele, R.W., et al. Leukocyte survival in cerebrospinal fluid. J. Clin. Microbiol. 23: 965-967, 1986. (2)Schumacher, H.R., et al. Reproducibility of synovial fluid analyses. Arthritis Rheum. 29: 770-774, 1986.


In the NCCLS standard for blood collection,(1) the following order of draw of the vacuum tubes is recommended: 1) blood culture, 2) serum, 3) coagulation (citrate and heparin), 4) hematology (EDTA), and finally, 5) special tubes (oxalate or fluoride). For the new clot activator/gel separator tubes, what is the recommended order of draw?

The sequence of specimen drawing has been developed to optimize the quality of blood specimens. For instance, coagulation specimens are collected only after any tissue thromboplastin (which might have been released when the vein is pierced) has been washed away into the previously drawn serum tube, where any excess thromboplastin would have no deleterious effect. Remember how we used to use a two-syringe draw for coagulation tests?

Thus activator/gel separator tubes should be substituted for the serum tube (number 2, above) in the draw order. The draw order rather than the characteristic of the specimen tube is the important consideration.
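The rule can be captured as a small data sketch; the list wording and helper are mine, but the sequence is the NCCLS order quoted in the question, with the gel separator tube taking the serum tube's slot as the answer describes.

```python
# NCCLS-recommended order of draw; a clot activator/gel separator
# tube simply substitutes for the serum tube in position 2.
ORDER_OF_DRAW = [
    "blood culture",
    "serum (or clot activator/gel separator)",
    "coagulation (citrate, heparin)",
    "hematology (EDTA)",
    "special tubes (oxalate, fluoride)",
]

def draw_position(tube):
    """Return the 1-based position of a tube type in the draw order."""
    for i, slot in enumerate(ORDER_OF_DRAW, start=1):
        if tube in slot:
            return i
    raise ValueError(f"unknown tube type: {tube}")

# Gel separator tubes are drawn second, in the serum slot.
print(draw_position("gel separator"))  # → 2
```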

(1)National Committee for Clinical Laboratory Standards. Procedures for the collection of diagnostic blood specimens by venipuncture. 2nd ed. Approved standard H3-A2. Villanova, Pa., NCCLS, 1984.


Some physicians ask that lymphocytes, if other than normal, be categorized as either reactive or atypical. Please elaborate on the distinction between the several different types of lymphocytes.

For many years, morphologists have struggled with the subcategorization of lymphocytes (and also granulocytes). The presence of so-called reactive or atypical lymphocytes provides the clinician with a valuable clue in a patient with a possible viral illness. So it is important to identify these cells if they are present. It has also been recognized that there is a certain amount of variation in the recognition of these cells. Note, for example, the less than unanimous consensus on the CAP morphology surveys that use photomicrographs for cell identification.(1)

In addition, the terms are ambiguous and may give rise to confusion. "Reactive" implies a reaction to something, usually a viral agent, yet there is not a 1:1 relationship between reactive lymphocytes and viral infection. Similarly, "atypical" implies infectious mononucleosis or viral hepatitis to most clinicians, but "atypical" carries a connotation of malignancy to others.

Because of these semantic problems, the 1977 College of American Pathologists' Conference on Differential Leukocyte Counting recommended that the terms "reactive" and "atypical" no longer be used.(2) It was recommended that two types of differential leukocyte counts be provided as acceptable alternatives, depending upon the medical indications for the leukocyte count.

The first type of count, a screening differential, is suitable to ascertain whether the count is normal or abnormal but not otherwise specified. Instruments now can perform this screening function quite well.

The second type, a definitive differential count, is used to diagnose specific diseases or to follow therapy.

Subcategories of lymphocytes, such as so-called atypical or reactive lymphocytes, are not required for screening differential counts. Such cells, however, probably should be classified in definitive differential leukocyte counts.

Subsequently it was proposed that the term "variant" lymphocyte be used for any "not normal" lymphocytes in the differential count. When larger numbers are found, the senior morphologist or clinical pathologist/hematologist should more carefully evaluate the blood film and consult with the clinician to determine whether the picture is characteristic of a viral infection, lymphoproliferative disorder, or another abnormality.

(1)Dick, F.R. The lymphocyte differential count: Does it have potential? In "Differential Leukocyte Counting," Koepke, J.A., ed. Skokie, Ill., College of American Pathologists, 1978. (2)Recommendations for improving the medical usefulness of the differential leukocyte count. In "Differential Leukocyte Counting" (see 1).


In contrast to most other laboratory tests, which have well-defined quality control procedures, there are no methods we know of to insure that activated clotting time (ACT) instruments are working properly. Can you give an acceptable method for the control of ACT instrumentation?

There are two different methods--manual and instrument--for doing this procedure. The instrumental method lends itself to quality control measures quite similar to methods commonly employed in clinical chemistry.

At our hospital we developed a system using outdated fresh frozen plasma (FFP) as the control material.(1) Batches of ACT vacuum tubes containing diatomaceous earth are specially prepared by adding 68 mmol of CaCl2 to each tube. Separate 2.5 ml aliquots of the FFP are stored at -20 C for subsequent use as controls. One unit of FFP will yield about 80 to 100 aliquots of plasma.

Each day we thaw an FFP aliquot, add 2.0 µl to an ACT tube, and time it in the instrument just as if it were a patient specimen. The various batches of FFP yield ACTs around 260 seconds with CVs of 6 to 7 per cent.
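With the figures reported (target around 260 seconds, CV of 6 to 7 per cent), a simple ±2 SD check could flag a control run. The article does not state its acceptance limits; the ±2 SD rule and the function below are my illustration of how such limits might be applied.

```python
def act_control_ok(observed_sec, target_sec=260.0, cv=0.065, n_sd=2.0):
    """Flag an ACT control result outside target ± n_sd standard
    deviations, with SD estimated as CV * target (about 17 s at a
    6.5% CV). Limits are illustrative, not from the article."""
    sd = cv * target_sec
    return abs(observed_sec - target_sec) <= n_sd * sd

# A 280 s control run falls within ±2 SD of the 260 s target;
# a 300 s run falls outside and would prompt a check of the
# instrument (temperature, timer).
print(act_control_ok(280.0), act_control_ok(300.0))
```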

We have used this QC system for about two years. It has been especially useful in finding temperature fluctuations on these instruments, which seems to be the most common problem. Timer problems can be discovered by periodic time checks.

(1)Sedor, F.A.; Mayo, E.; and Kirvan, K.E. A quality control system for the "activated clotting time" test. Clin. Chem. 33: 1261, 1987.



Is it necessary to test convalescent serum for viral serology if the acute serum previously tested negative?

Laboratory tests for viral infection are generally based on three approaches:

1. Direct detection of viral antigens, either in cells or in infected tissue or fluid specimens.

2. Isolation and identification of viruses, which may be accomplished in cell cultures.

3. Demonstration of significant increase in serum antibodies to a suspected virus during the course of an illness.

Chapter 61 in the "Manual of Clinical Microbiology"(1) describes the optimal time for collection of specimens. The first phase of the viral illness is viremia. Next there is an antibody production phase, which goes on to the convalescent period. If the initial serum specimen is collected during the viremic stage, no antibodies would be demonstrated. Therefore, depending upon the collection time during the illness, the expectation in regard to the isolation of the viral antigen or demonstration of antibody titer is extremely variable.

The acute- and convalescent-phase sera should be tested together to determine antibodies that increase in titer during the course of the illness. An acute-phase specimen should be collected as soon as possible, not later than 5-7 days after the onset of the illness. A convalescent-phase specimen is often collected 14-21 days after the onset and 7-14 days after the acute-phase specimen was collected.
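The "significant increase in titer" on paired sera is conventionally a fourfold or greater rise; the fourfold criterion is standard serologic practice that I am supplying here, since the article itself says only "significant."

```python
def significant_titer_rise(acute_titer, convalescent_titer):
    """True when the convalescent-phase titer is at least fourfold the
    acute-phase titer (conventional criterion for paired sera tested
    together). Titers are reciprocal dilutions, e.g., 1:8 -> 8."""
    return convalescent_titer >= 4 * acute_titer

# A rise from 1:8 to 1:32 is fourfold and significant; a single
# two-tube rise (1:8 to 1:16) is within dilution error.
print(significant_titer_rise(8, 32))  # → True
```

This is why a convalescent specimen is still worth testing even when the acute serum was negative: seroconversion or a fourfold rise between the paired specimens is what establishes recent infection.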

(1)Lennette, D.A. Collection and preparation of specimen for viral examination, chapter 61 in "Manual of Clinical Microbiology," 4th ed. Washington, D.C., American Society for Microbiology, 1985.


The CDC Manual of Tests for Syphilis 1969 says that quality control for VDRL on CSF should be a serum with a 1:80 or 80 dils titer. We rarely get a patient with such a high titer to use for QC. Are there commercially prepared sera for this? If not, what would you suggest?

You are correct in that on page 58 of the Centers for Disease Control Manual of Tests for Syphilis, the quality control for VDRL in cerebrospinal fluid should be a serum specimen with a 1:80 titer or higher in 0.9 per cent saline.

I have consulted with Sandra Larsen, Ph.D., head of the treponema research branch of CDC. Dr. Larsen states that a control serum with a titer of 1:80 is recommended so that protein concentrations similar to that of spinal fluid specimens can be achieved, while still containing a significant level of antibody.

One commercial source is Becton Dickinson Microbiology Systems in Cockeysville, Md., 21030, 1-(800) 638-8663. They make a positive human syphilitic control serum with insert directions for use in cerebrospinal fluid VDRL testing.

The VDRL is the test of choice for cerebrospinal fluid. Dr. Larsen and co-workers(1) compared the VDRL, RPR (Rapid Plasma Reagin 18-mm circle card test), and TRUST (Toluidine Red Unheated Serum Test) on 933 cerebrospinal fluid specimens from patients with neurologic diseases other than neurosyphilis. They found that 139 specimens were reactive using the TRUST and RPR card tests. Thus these tests demonstrated a 14 per cent false-positive reaction rate and are therefore unsuitable for cerebrospinal fluid testing.

(1)Larsen, S.A.; Hambie, E.A.; Wobig, J.H.; and Kennedy, E.J. Cerebral spinal fluid serologic test for syphilis: Treponemal and non-treponemal tests, pp. 157-162 in "Advances in Sexually Transmitted Diseases," Morisset, R., and Kurstak, E., eds. Utrecht, Netherlands, VNU Science Press, 1986.


Is the finding of spermatozoa in urine specimens in male or female patients considered clinically important? Should each institution have a policy concerning handling, processing, and/or reporting the findings of such constituents? What is the recommended procedure in handling specimens if rape evaluation is involved?

Spermatozoa can be found in the urine of males or females who have recently engaged in coitus. This is considered a normal finding that has no clinical significance.(1) Consequently, it is unnecessary for the lab to report this finding.

Each institution should have a policy and procedure concerning handling, processing, and reporting of each type of test. The procedure for microscopic examination of urine should indicate what kinds of formed elements should be reported. It is not necessary, therefore, to have a separate procedure dealing with the reporting of spermatozoa in urine.

Laboratories should also have a comprehensive procedure dealing with examination of rape victims. Examination of the urine is usually not part of a rape victim examination since the vaginal contents are usually examined.(2,3)

(1)Haber, M.H. "Urinary Sediment: A Textbook Atlas," Chicago, ASCP Press, 1981. (2)Schiff, A.F. How to handle the rape victim. Southern Med. J. 71: 509-515, 1978. (3)Rupp, J.C. The sexual assault examination. Forensic Science Gazette (Southwestern Institute of Forensic Sciences) 2: 6-8, 1971.


We recently received a spinal fluid specimen of 2 to 3 cc from an infant of less than one year. Is this too large a volume to draw from a baby?

At birth, the head of the newborn is the largest part of the body. In infancy, the brain is almost adult size. The cerebrospinal fluid fills the space between the brain and the dura and is found within the ventricular system. Unlike the blood volume, which is very small compared with the adult, the volume of cerebrospinal fluid is closer to the amount found in the adult. Pediatricians generally try to obtain approximately 3 ml of fluid when performing a spinal tap. They do not feel that removal of this amount is detrimental to the baby.


The proficiency testing program we use refuses to accept ±1 values as valid values in the VDRL unknowns. Laboratory textbooks consulted describe this variation from the known value as "inherent error of the procedure," which is accepted. Besides, the program makes no provision to prevent deterioration of these pooled specimens sent through the mail. We very often obtain results one step lower from the expected values, but the program requires us to get exactly the known dilution in order to get full credit. What do you say?

I am familiar with the College of American Pathologists' criteria for the syphilis serology survey. In the CAP survey system, the participant results are evaluated based on their agreement with the collective result obtained by all laboratories using the same method. The scores for the survey specimens as tested by the participant laboratories are averaged. The average scores of all participants are ranked by percentile, with approximately the highest 95 per cent being graded acceptable. The exact percentile for acceptability depends on the distribution of the scores.

It should be noted that quantitative results are graded on the modal value ±1 dilution. Additional information regarding the scoring system is discussed in the following references.

(1)Williams, G.W., and Bowen, H.E. Recent developments in the scoring for CAP syphilis serology survey. Pathologist 32: 177-180, 1978. (2)Rippey, J.H. A non-mathematical explanation of syphilis serology survey grading. Pathologist 34: 223-224, 1980.


Our procedure for performing sperm counts is based on Todd & Sanford's "Clinical Diagnosis by Laboratory Methods," which states that the normal range for a sperm count is 60-150 million per milliliter. One of our local physicians feels this is entirely too high. Recently, I read in the second edition of "Body Fluids: Laboratory Examination of Amniotic, Cerebrospinal, Seminal, Serous, and Synovial Fluids" by Kjeldsberg and Knight that 10-20 million per milliliter is considered the normal range. What does the Tips panel recommend as the "normal range" for a sperm count?

Most authorities consider the normal sperm count to be between 40 and 100 million sperm/ml. A number of factors other than the number of sperm are important in determining fertility, however. Most important of these are the motility of the sperm, the proportion of those with normal morphology, and the volume of the ejaculate. If all of these other characteristics are normal, pregnancy can occur with sperm counts as low as 10 million/ml. This may account for the low figure cited by Kjeldsberg and Knight. In assessing fertility, it is important to consider the sperm exam in its entirety rather than only one part of it.


What is the current recommended procedure for collecting blood for a cold agglutinins test? One reference book states: "Specimens should be collected at 37 C and then transported to the laboratory submerged in water at 37 C. When this procedure is not possible, the specimen should be warmed for 30 minutes to 37 C before the serum is separated from the cells." Another source says to keep the specimen warmed for one hour before separating the serum from the cells. If a specimen is not prewarmed, is it valid to warm it for 30 minutes before separating it?

The procedure we have used in our laboratory is described in detail by Dr. L.D. Petz in the "Manual of Clinical Immunology," edited by N.R. Rose and H. Friedman, 1976. Subsequent editions of this manual do not discuss the special technical considerations for the cold agglutinins assay. The instructions are:

1. The blood should be collected into a warmed syringe or evacuated tube and immediately immersed in a 37 C water bath or thermos flask. If the blood cannot be collected at 37 C, then the specimen should be placed in a 37 C water bath or incubator as soon as possible. The dangers of letting the blood cool are that the auto-antibody will combine with the cells and cause agglutination, fixation of complement, and loss of antibody from serum.

2. If the specimens have inadvertently cooled, warming to 37 C for at least 10 minutes will cause the cold antibody to elute from the cells back into the serum, and in most cases the autoagglutination will disperse completely. The serum complement bound to the cells during cooling will remain on the cells. Therefore, the direct antiglobulin test should be carried out on a specimen of blood collected into EDTA, as this will prevent any complement being bound in vitro, even if cooling should occur.

3. The serum should be separated from the cells strictly at 37 C. Ideally, this means working completely in a 37 C warm room or using a heated, jacketed centrifuge. Specimens transferred from a 37 C water bath and centrifuged immediately in a Serofuge at room temperature may drop approximately 7 to 8 C after only one minute of centrifugation. Therefore, it is recommended that a Serofuge be kept in an incubator at 45 C; at this starting temperature, the specimens actually spin at 37 C. It is important to remove serum from the specimens immediately after centrifugation.

The routine use of prewarmed blood collection equipment and serum separation at 37 C is recommended for cryoglobulin assay specimen collection by Moroz and Rose.(1)

Cryoglobulins that precipitate significantly at 20 to 30 C and as high as 36 C have been described(2) and may be associated with more serious symptomatology than other cryoglobulins with a lower cryoprecipitation point, present in significantly higher concentrations.(1)

(1)Moroz, L.A., and Rose, B. Cryopathies, chapter 23, p. 465, in "Immunological Diseases," second ed., Samter, M., ed. Boston, Little, Brown and Co., 1971. (2)Saha, A.; Edwards, M.A.; Sargent, A.U.; and Rose, B. Mechanisms of cryoprecipitation. I. Characteristics of a human cryoglobulin. Immuno. Chem. 5: 341, 1968.


In response to another Tips on Technology question, you stated that spermatozoa in urine has no clinical significance and, consequently, it is unnecessary for the lab to report this finding. However, don't we need to report numerous spermatozoa in urine from elderly male patients? Isn't there a possibility that spermatozoa will form a plug that may cause an obstruction and inflammation to the urogenital tract of the patient?

Males may retain their fertility into old age. Spermatozoa, therefore, can be found in large numbers in the elderly. The significance of spermatozoa is the same as in younger men.

I noticed that the person who asked this question works in a dialysis unit. In this setting, the patients usually produce little or no urine. Any semen that is passed into the urethra or bladder might have little urine to flush it away and could be found in high concentrations. Patients who produce little or no urine sometimes develop pyocystitis, pus in the bladder. My urology consultant tells me that spermatozoa or prostatic secretions do not cause any harm when they are found in the bladder or urethra.


Our laboratory has subscribed to the proposition that we should have at least 10,000 counts for our RIA methods. Apparently this is based on statistical reasons. In the interest of performing tests that are cost-effective and time-effective, various individuals have suggested using methods with less than 10,000 counts. What do you think of this suggestion? What do you recommend?

In any counting system, whether one is counting radioactivity or blood cells in an electronic blood cell counter, it is necessary to accumulate a sufficient number of counts to minimize random error. The expected standard deviation of the count equals the square root of the total number of counts. Thus if 10,000 counts are collected, the standard deviation is 100, or 1 per cent of the total.

If only 100 counts are collected, however, the standard deviation is the square root of 100, i.e., 10, or 10 per cent of the total. The fewer the counts, the greater the uncertainty of the value obtained.

In nuclear medicine, it is customary to collect 10,000 counts in order to achieve a counting reliability of 99 per cent. The same counting statistics explain why we measure the quantity of blood we do for an electronic cell count and perform at least a 100-cell blood cell differential.
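The square-root relationship above can be sketched numerically (a minimal illustration of Poisson counting statistics; the function name is ours, not tied to any particular instrument):

```python
import math

def relative_sd(total_counts):
    """Relative standard deviation (CV) of a raw count.

    For a Poisson counting process, SD = sqrt(N),
    so the relative SD is sqrt(N)/N = 1/sqrt(N).
    """
    return math.sqrt(total_counts) / total_counts

# 10,000 counts give a 1% relative SD; 100 counts give 10%.
print(f"{relative_sd(10_000):.2%}")  # 1.00%
print(f"{relative_sd(100):.2%}")     # 10.00%
```

Quadrupling the counts halves the relative error, which is why cutting counting time saves far less precision than it first appears.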


We perform antinuclear antibody testing using mouse kidney substrate. We have competent technologists performing these assays. Some physicians question our assay when we give them results that differ from those provided by the hospitals that referred these patients. What could explain these discrepancies?

There is a wide variety of substrates available with differing levels of sensitivity, advantages, and disadvantages. The variety of substrates in commercial kits for ANA testing makes comparison of data from different laboratories difficult.(1)

Mouse kidney sections were found to be superior for screening for the absence of ANA as well as for detecting smooth muscle and liver-kidney microsomal antibodies.(2) HEp-2 cells and KB cells were found to be excellent for detecting antibodies to SSA, Scl-70, PM-1, and centromere antinuclear antibodies.

In our laboratory, we have used both mouse kidney and HEp-2 cells for screening purposes, since mouse kidney has an advantage in detecting the absence of ANA in problem cases in which systemic lupus erythematosus is among the considered diagnoses. Based on quality assurance proficiency testing data, the College of American Pathologists recommends that each laboratory establish age-adjusted reference ranges for its particular methods and instrumentation, report endpoint titers in relation to these reference ranges, interpret "normal" or "abnormal" as a laboratory appraisal rather than a clinical assessment, and include a low-titer ANA control serum in each assay.(1)

(1)Nakamura, R.M., and Rippey, J.H. Quality assurance and proficiency testing for autoantibodies to nuclear antigen. Arch. Pathol. Lab. Med. 109: 109-114, 1985. (2)Molden, D.P.; Nakamura, R.M.; and Tan, E.M. Standardization of the immunofluorescence test for autoantibody to nuclear antigens (ANA): Use of reference sera of defined antibody specificity. Am. J. Clin. Pathol. 82: 57-66, 1984.



Our reference laboratory receives requests for anaerobic culturing of pilonidal cysts. We always seem to grow a mixed population of different types of anaerobes that are difficult to separate and identify. Would you comment on the flora that are usually present in this type of culture and the significance thereof? Can we simply report out mixed anaerobic flora?

Pilonidal cysts, by their nature, provide an excellent culture environment for anaerobic and facultative anaerobic bacteria. Any such organism gaining access to the cyst may colonize the cyst.

Most often these organisms are derived from the cutaneous and enteric flora, both of which contain an abundance of anaerobes. Consequently it is not unusual to recover a wide variety of bacterial species when culturing infected pilonidal cysts.

Since the standard treatment of infected pilonidal cysts (surgical drainage followed by complete surgical excision once the infection is controlled) is usually successful, it is difficult to find a pragmatic role for bacterial culture. However, there may be specific circumstances when culture could be useful, e.g., recurrence of infection after surgical removal, or failure of the initial infection to respond to surgical drainage (with or without empiric antibiotic therapy). But even in such circumstances, the interpretation of cultures with multiple microbial species is often little more than guesswork. In my view, a report indicating the presence of a mixture of B. bivius, F. necrophorum, P. magnus, and P. acnes is of no greater clinical value than one indicating mixed anaerobic flora.

In this era of increasing cost containment and limitation of resources, physicians who want specific identification of each anaerobe in such cultures should be prepared to justify their requests. Obviously, communication between the laboratory and its physician clientele is essential, and both the laboratory and clinician must be open to changes and suggestions that contribute to more cost-effective health care.


I have read an article concerning Bacteroides fragilis as one among several pathogens causing diarrhea. Do you think we should screen this organism in routine stool culture? In our laboratory, we have been isolating Bacteroides fragilis in our Campylobacter media at 42 C. Do you have any comments about this, and can you give us some reference to support that this organism is reportable?

Gram-negative anaerobic bacteria are the predominant flora of normal feces. Of these, Bacteroides is the major genus present and may be present in counts of 10^9 to 10^11 bacteria per gram of feces. Within this genus, members of the B. fragilis group are clearly the dominant species in feces. Thus routine screening for B. fragilis should yield virtually 100 per cent positive results.

Although the B. fragilis group is well recognized as an important intra-abdominal pathogen, its role as an enteric pathogen causing diarrhea is far from proven. Until B. fragilis can be shown to be an enteric pathogen and the enteropathogenic strains can be identified and readily differentiated from the normal flora, I see no utility in culturing feces for B. fragilis.

With respect to isolation of B. fragilis on the Campylobacter media, I would wonder about the incubation atmosphere being used. If the O2 concentration is low enough to readily grow B. fragilis, it may be too low for satisfactory growth of Campylobacter jejuni.


What is the best rapid test for bacteriuria? Is it as good as culture results?

There are three major approaches to determine bacteriuria in urine. They include microscopic observation, chemical tests, and culture. The microscopic analysis and chemical tests tend to be very rapid. Culture results may take up to a day or longer to reveal the presence or absence of bacteriuria.

Microscopic analysis is of two types: one can examine the urinary sediment under high power as part of the routine microscopic examination, or observe a Gram-stained film of unspun urine under oil immersion. Examination of a Gram stain tends to be very time-consuming, especially when many specimens must be examined in a routine run. Furthermore, detection of bacteriuria by this route depends on the experience and expertise of the technologist performing the examination.

There are two major chemical tests used for bacteriuria detection. One is very direct, and the other is indirect. The direct procedure is the nitrite test. A positive nitrite test indicates that bacteria that reduce urinary nitrate to nitrite are present in significant numbers. The leukocyte esterase (LE) test is indirect because it is a measure of neutrophils and not bacteriuria. A positive LE test suggests the presence of white blood cells in higher than expected quantities, which in turn suggests bacteriuria.

These assays are often impregnated into the urine dipstick along with tests for albumin, glucose, protein, and hemoglobin. The combination of nitrite and LE improves the sensitivity of this approach.
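Why the combination improves sensitivity can be sketched with the standard parallel-testing formula (a minimal illustration; the sensitivity values below are hypothetical, and the calculation assumes the two tests err independently, which is only approximately true in practice):

```python
def combined_sensitivity(s1, s2):
    """Sensitivity of two screening tests read in parallel
    (screen is positive if either test is positive),
    assuming the tests miss cases independently."""
    return 1 - (1 - s1) * (1 - s2)

# Hypothetical single-test sensitivities, for illustration only.
nitrite, le = 0.50, 0.80
print(f"{combined_sensitivity(nitrite, le):.2f}")  # 0.90
```

The cost of the gain is specificity: each added test also adds its false positives, so a combined screen trades fewer missed cases for more confirmatory cultures.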

The bioluminescence procedure is a test for bacterial adenosine triphosphate; however, it requires special equipment and is relatively time-consuming, taking up to one hour to perform.

The last procedure is a filtration-staining method called the Bac-T-Screen (Vitek Systems, Hazelwood, Mo.). Although a rapid test, it also requires a special machine.

Table I presents each of these six rapid tests for bacteriuria, with their sensitivity, specificity, predictive value of a negative test, PV(-), and approximate detection time. This information is based upon the study by Hallander, Kallner, Lundin, et al.(1)

It is obvious that none of these tests enjoys the sensitivity and specificity that we would expect of an ideal laboratory test. The independent judge of the diagnostic utility of these tests is the urine culture. A colony count is still the method of choice against which all other methods can be evaluated.

What is truly needed in rapid detection of bacteriuria is an assay with the diagnostic utility of culture that provides the clinician with results while the patient is still being seen. One final note should be added. There is controversy among infectious disease experts as to the critical threshold for a positive colony count, e.g., 10^2 CFU/ml versus 10^5 CFU/ml. Using the lower value as the indicator of bacteriuria would make these simple rapid assays appear even less sensitive.

(1)Hallander, H.O.; Kallner, A.; Lundin, A.; et al. Evaluation of rapid methods for the detection of bacteriuria (screening) in primary health care. Acta Path. Microbiol. Immunol. Scand. Sect. B. 94:39-49, 1986.


I would like to have your comments about my conclusion on holding times for urine cultures. Of 542 urine cultures that had no growth after overnight incubation, 98.7 per cent (535) were still no growth after 48 hours of incubation. The remaining 1.3 per cent (7) had growth of 10,000-100,000 yeast. The majority of these had been set up late the day before. This leads me to think that we could report out as a final report any negative urine after overnight incubation. It seems more accurate than many urine screening devices and relieves the laboratory of unnecessary incubation and paperwork.

Incubation of urine cultures overnight (16-24 hours) should certainly be adequate for routine specimens. I would suggest a cutoff time (for example, 5 p.m.), so that specimens arriving in the lab or being set up after the cutoff would be held longer, e.g., until the second morning, whichever fits the work flow best. This would also allow recovery of some of the yeasts.

Of course, one must remain flexible enough to culture urines for much longer periods and under a variety of conditions when the clinical situation indicates the possibility of unusual pathogens. Another indication for longer incubation is the presence of a monomorphic population of bacteria on initial Gram stains with no growth after overnight incubation. With these exceptions, I would agree with the conclusion you reached.


Our laboratory currently uses the Bactec 460 for blood cultures. For patients receiving antibiotic therapy, we are drawing both the non-resin containing media 6B and 7C and the resin media 16B and 17D. In the interest of cost containment, we would like to discontinue this practice and draw only the resin media on those patients receiving antibiotics. The literature and our own experience seem to suggest that these media can be used as a stand-alone system. The only advantage we can see in inoculating both sets of bottles is the increased volume of blood obtained. What is your opinion?

I agree that the major advantage of inoculating both sets of bottles is the increased volume of blood cultured. This advantage should not be underestimated, however. The volume of blood cultured is now clearly recognized as a major factor in determining the sensitivity of conventional blood culture systems for detecting bacteremia,(1,2) and there is good evidence to suggest that this is also true for the Bactec system.(3) Recognizing that many factors must be considered, the authors of Cumitech 1A(4) state that "...10 ml of blood appears to be a reasonable lower limit per culture...." Thus, to reduce the number of bottles from four to two, one would either decrease the volume of blood cultured or double the blood:medium ratio. Theoretically, the latter alternative is more apt to provide results comparable to the four-bottle draw.

When a blood culture study purporting to evaluate two or more media shows no significant difference in recovery of organisms between the different media, it is important to also note the difference between the individual media and the total. This will give some indication of the effect of volume of blood cultured. For example, a study comparing medium X with medium Y (each with 5 ml of blood inoculated) might give the following results: total positives = 100, positives with medium X = 84, positives with medium Y = 80. Such a study shows no significant difference between the two media, but does show that culturing 5 ml of blood would miss 16 to 20 per cent of bacteremias detectable by culturing 10 ml of blood.
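The arithmetic behind that hypothetical comparison can be made explicit (a minimal sketch; the function name is ours, and the figures are the illustrative ones from the example above, not real study data):

```python
def missed_fraction(total_positives, single_medium_positives):
    """Fraction of bacteremias found by the combined draw (both media,
    10 ml of blood total) that a single 5 ml medium alone would miss."""
    return (total_positives - single_medium_positives) / total_positives

total = 100              # positives detected by either medium (10 ml total)
medium_x, medium_y = 84, 80

print(f"{missed_fraction(total, medium_x):.0%}")  # 16%
print(f"{missed_fraction(total, medium_y):.0%}")  # 20%
```

The point of the comparison: "no significant difference between media X and Y" and "either medium alone misses a sizable share of the combined yield" are both true at once, because the second question is about blood volume, not medium formulation.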

One other factor should also be considered. Studies examining the blood:medium ratios of blood cultures indicate no significant difference between a standard 1:10 ratio and a 1:5 ratio--except in patients on antimicrobial therapy.(3,5) In the latter instance, better results were obtained with greater dilution of the blood. I am not aware of studies that have looked at this factor for resin-containing media. This may well be worth investigating before altering the recommended ratios.

(1)Hall, M.M.; Ilstrup, D.M.; and Washington, J.A. Effect of volume of blood cultured on detection of bacteremia. J. Clin. Microbiol. 3:643-645, 1976. (2)Washington, J.A. Conventional approaches to blood culture, pp. 41-88, in "The Detection of Septicemia," Washington, J.A., ed. West Palm Beach, Fla., CRC Press, 1978. (3)Salventi, J.F.; Davies, T.A.; Randall, E.L.; Whitaker, S.; and Waters, J.R. Effect of blood dilution on recovery of organisms from clinical blood cultures in medium containing sodium polyanethol sulfonate. J. Clin. Microbiol. 9:248-252, 1979. (4)Reller, L.B.; Murray, P.R.; and MacLowry, J.D. Cumitech 1A, "Blood Cultures II," Washington, J.A., coordinating ed. Washington, D.C., American Society for Microbiology, 1982. (5)Auckenthaler, R.; Ilstrup, D.M.; and Washington, J.A. Comparison of recovery of organisms from blood cultures diluted 10% (volume/volume) and 20% (volume/volume). J. Clin. Microbiol. 15:860-864, 1982.


Our ob/gyn physicians routinely order Chlamydia cultures on each new patient as a screening test. I have recently been investigating the use of the Chlamydia antigen detection test as opposed to an actual culture. We use a reference lab to perform these tests, and the antigen detection test is much cheaper and does not present us with the specimen handling problems associated with the culture. Have there been any studies to determine whether the antigen detection test is as sensitive as the culture in identifying Chlamydia? What specimen handling procedures are necessary to assure the best results when a culture is ordered? What are the advantages and disadvantages of the antigen detection test as opposed to the culture?

Several published papers have evaluated the performance of the two major Chlamydia detection systems (direct immunofluorescence and enzyme immunoassay) in comparison with cultures. Three of these papers are listed below.(1-3)

Overall, both of these tests seem to have a sensitivity, compared with culture, of around 90 per cent. These figures vary in different reports and appear to be influenced, at least in part, by the nature of the population being studied (e.g., male vs. female, low prevalence vs. high prevalence of Chlamydia infection, and so on) and the cutoff values for positivity in the immunofluorescent method (number of elementary bodies) and enzyme immunoassay (optical density value).

There are several important contrasts between Chlamydia culture and Chlamydia antigen detection, from the laboratory and the clinical perspective:

* Accuracy. Culture currently remains the gold standard against which other methods must be compared. The better than 90 per cent sensitivity and even higher specificity of the antigen detection methods, however, are considered quite acceptable in most clinical settings.

* Evaluation of specimen quality. A negative result should be based on analysis of an adequate specimen. The only system currently available that permits evaluation of specimen quality is the immunofluorescent method, for which the presence of epithelial cells is required.

* Specimen holding. Whereas great care (appropriate transport media at 4 C) and minimum delay (< 24 hours) are requisites for optimal recovery of Chlamydia by culture when immediate inoculation is not possible, the requisites for delay in testing by the antigen detection systems are much less stringent.

* Turnaround time. Chlamydia culture requires 24-96 hours between inoculation and reporting of results, depending on method, blind passage, etc. The antigen detection systems are same-day procedures.

* Cost. From the standpoint of materials, time, and technical expertise, culture is more expensive. When the volume of cultures is large, the use of microtiter wells instead of vials can greatly reduce this cost. For most laboratories, the antigen detection methods are more cost-efficient, and for large-volume laboratories, the enzyme immunoassay would appear to have the advantage.

(1)Lindner, L.E., et al. Identification of Chlamydia in cervical smears by immunofluorescence: Technic, sensitivity, and specificity. Am. J. Clin. Pathol. 85:180-185, 1986. (2)Ryan, R.W., et al. Rapid detection of Chlamydia trachomatis by an enzyme immunoassay method. Diagn. Microbiol. Infect. Dis. 5:225-234, 1986. (3)Chernesky, M.A., et al. Detection of Chlamydia trachomatis antigens by enzyme immunoassay and immunofluorescence in genital specimens from symptomatic and asymptomatic men and women. J. Infect. Dis. 154:141-148, 1986. [Tabular Data Omitted]
COPYRIGHT 1989 Nelson Publishing

Article Details
Title Annotation:Clinical Laboratory Reference 1989; blood bank, chemistry, hematology, immunology, microbiology
Publication:Medical Laboratory Observer
Date:Jan 1, 1989
