Sensitometry: the professional's test tool.

The consummate professional, in any profession, is the person who not only knows all the "bits and pieces," but also understands how they fit together. He or she can analyze ideas, situations and problems and offer intelligent guidance. We all can think of several people who fit this description, and most of us aspire to emulate them.

In my opinion, a professional radiologic technologist is someone who can do more than produce a quality diagnostic image; he or she is the person who understands the many factors involved in creating that image and controlling its quality.

In diagnostic imaging, quality is defined and controlled by sensitometry. Sensitometry is the measurement of a film's response to controlled exposure (sensitization) and development. Sensitometry also applies to filmless systems.

If a radiologist is able to detect a lesion by examining a patient's radiograph, it is because the film has adequate contrast, a sensitometric measurement. With controlled exposures, we can monitor processing, determine optimal processing through a gamma-response curve, troubleshoot and control quality. With controlled development, we can use sensitometry to optimize exposures, set up or change techniques, troubleshoot and control quality.

Everything that occurs in the production of the useful image must be taken into consideration for every exposure. The sum total is a sensitometric image, which we call a patient film.

Consider the individual contrast modifiers: focal spot size/target quality, tube window (inherent filtration), added filtration, kVp, subject (patient part), film, screen, Bucky/grid (forward scatter), backscatter, processing, viewing and background lighting.

The subject, of course, is a variable. Radiologic technologists are specially educated to deal with subject variability. We also use special equipment, techniques and film/screen combinations, such as "mammo" or "ortho" film, to control subject quality. In mammography, the beryllium tube window produces a different beam hardness, and therefore a different image contrast factor, than the glass window used for the higher inherent subject contrast of orthopedic radiography.

But of all these variables, which is the most important? Which factors do technologists need to be concerned about because they control contrast?

Hurter and Driffield tried to characterize how different emulsions (film types) behaved. They gave the different films equal exposures and development, and the characteristics thus defined allowed them to match different films to different jobs. The scientific tool they used to show the differences was a "stimulus-response" curve.

On the abscissa, or horizontal axis, is plotted a uniformly increasing exposure scale in which each step is the square root of 2 (about 1.41) times the last, so the steps are evenly spaced in log exposure (increments of log10 of the square root of 2, about 0.15). The ordinate, or vertical axis, is laid out in units of optical density, which is the base-10 logarithm of the reciprocal of the light transmittance: D = log10(1/T). Hurter and Driffield (H&D) charts showed fairly linear (straight-line) curves. Film/screen radiography shows an S-shaped curve because of the screen, which is an indirect exposure modifier. Filmless systems look more like the original H&D curves.
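
As a minimal sketch of that arithmetic (the 21-step wedge and the function names are illustrative assumptions, not from the article), the two axes can be generated like this:

    import math

    # Abscissa: a 21-step wedge in which each exposure is sqrt(2) times
    # the previous one, so the steps are evenly spaced in log exposure
    # (increment = log10(sqrt(2)), about 0.15).
    LOG_E_STEP = math.log10(math.sqrt(2))
    log_exposures = [n * LOG_E_STEP for n in range(21)]

    # Ordinate: optical density, the base-10 log of the reciprocal of
    # the light transmittance.
    def density(transmittance):
        """D = log10(1/T); e.g., T = 0.01 (1% of light passes) gives D = 2.0."""
        return math.log10(1.0 / transmittance)

    print(f"{density(0.01):.1f}")  # prints 2.0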

These curves also are called characteristic curves or sensitometric curves. From them, four measurements may be made: minimum density, speed, contrast and maximum density. The values can be compared and studied relative to each other, to trend charts, and against manufacturer or other published values. The values of one film can be compared to another. And the H&D graphs themselves can be overlaid to study variation. However, simply knowing that film A has more contrast than film B is not as useful as understanding how it happened.
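
As a hedged sketch of how those four readings might be taken from curve data (the specific density points are common film-sensitometry conventions assumed here, not given in the article: a speed point 1.0 above base-plus-fog, and an average gradient measured between 0.25 and 2.0 above base-plus-fog):

    def log_e_at(curve, target_density):
        # Linearly interpolate the log exposure at which the rising
        # curve reaches target_density.
        for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
            if y0 <= target_density <= y1 and y1 > y0:
                return x0 + (x1 - x0) * (target_density - y0) / (y1 - y0)
        raise ValueError("target density not reached on this curve")

    def read_curve(curve):
        # curve: list of (log_exposure, density) pairs, sorted by exposure.
        densities = [d for _, d in curve]
        d_min = min(densities)   # minimum density (base plus fog)
        d_max = max(densities)   # maximum density (shoulder)
        # Speed: log exposure needed to reach 1.0 above base plus fog
        # (a common medical-film convention, assumed here).
        speed_log_e = log_e_at(curve, d_min + 1.0)
        # Contrast: average gradient between 0.25 and 2.0 above base
        # plus fog (again a common convention, assumed here).
        x_lo = log_e_at(curve, d_min + 0.25)
        x_hi = log_e_at(curve, d_min + 2.0)
        average_gradient = (2.0 - 0.25) / (x_hi - x_lo)
        return d_min, speed_log_e, average_gradient, d_max

Values returned for two films can then be compared directly, or plotted over time on a trend chart.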

The indirect exposure system with salt intensifying screens actually is a light exposure: the film is exposed mostly by the screen's light, and the screen does not respond linearly to the radiation. There is a nonlinear toe region at the low densities and a nonlinear shoulder region at the higher densities. These are connected by a linear portion that contains the highest level of contrast. It is here we try to position the patient part for the best visualization.

Contrast is a difference, and a difference in density is needed to see subject differences between bone, fat, air or water. Density itself is of significantly less value.

Unfortunately, with all the factors that influence the final "radiographic contrast," the position of the H&D curve can shift up or down into lower contrast regions. Thus, image creation and modification is simply a process of placing the subject part in the best position on the sensitometric curve to achieve the best possible contrast. Sometimes this means not higher contrast but actually lower contrast and wider latitude (not to be confused with exposure latitude).
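
A purely illustrative sketch of that effect (the logistic curve and every parameter in it are synthetic stand-ins for a measured film/screen characteristic, not data from the article): the same subject difference in log exposure yields a much smaller density difference when the exposure lands in the toe.

    def density_at(log_e):
        # A synthetic S-shaped characteristic curve (logistic in log
        # exposure) -- an assumed stand-in for a measured H&D curve.
        d_min, d_max = 0.18, 3.2  # assumed base-plus-fog and shoulder
        return d_min + (d_max - d_min) / (1.0 + 10.0 ** (-2.5 * (log_e - 1.2)))

    SUBJECT_DELTA = 0.10  # one fixed subject difference in log exposure

    for region, center in [("straight line", 1.2), ("toe", 0.4)]:
        d_diff = (density_at(center + SUBJECT_DELTA / 2)
                  - density_at(center - SUBJECT_DELTA / 2))
        print(f"{region}: density difference = {d_diff:.2f}")
    # straight line: ~0.43; toe: ~0.02 -- the same subject difference,
    # but far less visible contrast in the toe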

Contrast neither creates nor destroys detail; it merely enhances its visibility. A correct contrast level will allow visualization on a film that is too light (low density) or too dark (high density). A film with optimal density but poor contrast, however, will have to be repeated or result in a false reading.

Unfortunately, many radiologic technology schools teach sensitometry in the same vein as the Gurney-Mott theory of the formation of the latent image: "You need to know about it, but you don't need to use it." This gives students the impression that the only people who can understand sensitometry or need to use it are QC technologists. In reality, everything we do is based on the sensitometric outcome. Every radiologic technologist needs to understand sensitometry and practice it each time a technique is chosen and whenever a film is critiqued.

Sensitometry requires a thorough knowledge of densitometry and logarithms; a density of 2.0, for example, transmits only 1 percent of the viewbox light, one-tenth as much as a density of 1.0. A film critique should never consist of a simple statement such as "The film is too dark." Instead, it should explain exactly why the film is too dark: "There is too much density in the toe, resulting in a toe lift that reduces contrast and thereby the ability to see small bits of information." Contrast, in turn, influences resolution and edge sharpness.

Radiologists are exposed to sensitometry, but they rarely are trained in it, because their job is to read films, not create them. Thus, the professional radiologic technologist should develop a dialogue with radiologists that draws out the sensitometric reasoning behind their reading preferences.

Every image is a sensitometric image. All factors that can possibly influence the final image must be considered and controlled from the sensitometric point of view. Radiologic technology is based on image formation, not patient anatomy and handling. Sensitometry is fundamental to the control of radiographic quality, because image quality is defined in sensitometric values. Sensitometry is the major scientific test tool of the equipment and supply companies. Sensitometric changes relate directly to trend chart variations.

Of course, the consummate professional in radiography already knows all of this. Sensitometry is the foundation of imaging science. Every radiologic technologist needs to know how to construct an H&D curve, calculate values, analyze the results and make professional decisions.

William E.J. McKinney is a consultant and lecturer in radiographic quality control and the author of Radiographic Processing Quality Control, published by Academy Medical, Rolling Hills Estates, Calif. Mr. McKinney is a graduate of Villanova University and formerly was a senior technical specialist of DuPont Imaging Systems, where he founded the DuPont Processing Training Center.