Technical Correspondence

VOLUME RENDERING VERSUS SURFACE RENDERING

While the article "Volume Rendering" by Karen Frenkel (Communications, Apr. 1989, pp. 426-435) is timely and its spirit is very appropriate for the non-specialist, we have strong misgivings about it for two reasons. The first is policy related, and the second is scientific. Our comments relate to volume rendering as applied to medical 3D (three-dimensional) imaging, since our own interest is in medical imaging. Prior to stating our concerns, we wish to summarize our perception of volume rendering and surface rendering.

Both of the methodologies start with a three-dimensional array of values and produce a two-dimensional image based on this array. Unavoidably, there will be a great deal of data reduction and, hence, a loss of information. The important issue is how to do the rendering so that (medically) relevant information is retained.

Surface renderers do this via a special preprocessing step: by some method or other (not necessarily a simple one such as thresholding), surfaces of the objects that are represented in the three-dimensional array are determined and are displayed by more-or-less standard computer graphics techniques. Volume renderers project the whole three-dimensional array onto the display screen, possibly also after some preprocessing. Since the preprocessing may, indeed, be surface detection, the image that is produced by a surface renderer can always be produced by a volume renderer.

As far as discussing the relative usefulness of the images themselves, there can only be a dispute if volume rendering is used in a mode other than surface rendering. It is indeed such a mode that is discussed in Frenkel's article:

the particular volume-rendering methodology is one which somehow blends information from all volume elements encountered by a line emanating from a pixel on the display screen into a single color for that pixel. The volume elements of the three-dimensional array are assigned colors and opacities (based on their original values), and the renderer performs a "volumetric compositing."

In common with the article, we use below the phrase "volume rendering" specifically to mean this approach.
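The "volumetric compositing" described above can be illustrated by a minimal sketch. Everything in it (the single gray-level channel, the sample values, the termination cutoff) is an illustrative assumption of ours, not taken from any particular renderer discussed in this correspondence.

```python
# Minimal sketch of front-to-back "volumetric compositing": each
# volume element along a ray contributes a color and an opacity,
# blended into a single color for the pixel the ray emanates from.
# One gray-level channel is used for simplicity; the sample values
# and the termination cutoff are illustrative assumptions.

def composite_ray(samples):
    """Composite (color, opacity) samples, ordered from the viewer
    into the volume; both values are floats in [0, 1]."""
    color_acc = 0.0   # accumulated color seen at the pixel
    alpha_acc = 0.0   # accumulated opacity along the ray
    for color, alpha in samples:
        # each sample is attenuated by the opacity already in front of it
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= 0.999:   # early ray termination
            break
    return color_acc

# A fully opaque first sample hides everything behind it:
print(composite_ray([(0.8, 1.0), (0.2, 1.0)]))  # prints 0.8
```

Note that how colors and opacities are assigned from the original values is exactly the classification step whose importance for both rendering approaches we stress later in this letter.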

We feel that it should be counter to the policies of a reputable journal such as Communications to publish an editorial article which is an uncritical, partisan presentation of one methodology where others exist for the same purposes. To make matters worse, the article does not even give any references to the literature, which deprives the already misled readers of a chance to get a balanced view. We consider that the claim, stated in the article as fact, of the superiority of volume rendering over surface rendering in medical imaging is neither obvious nor scientifically established. We therefore consider the statements in it to be at best opinions, at worst untrue, and, in any case, controversial at present.

In addition, the general impression of the history of rendering volumetric medical images provided by the article is false; the implication of the article is that medical 3D imaging came into being because of the development of volume rendering. In fact, medical 3D imaging predates volume rendering; see, for example, the 1970 paper of Greenleaf [6]. Even the history of volume rendering is essentially incomplete in Frenkel's article. The first published volume-rendering technique that we are aware of is due to Harris [8], who suggested the idea of mixing densities or information derived from densities to create volume renditions more than a decade ago. We have no doubt that the recent volume renderers are more sophisticated than the technique of Harris, but the fact is that the essence of the idea is due to him rather than to the people listed in Frenkel's article. Now we discuss scientific matters.

The article asserts that, philosophically, surface rendering is less appropriate than volume rendering in medical imaging because the former techniques "assume that a thin surface, suspended in air, accurately represents the original volume, but often data are taken from volumes containing fluids and tissues that interface and form local mixes. They absorb and emit light differently, information which is lost if the data are reduced to shell-like surfaces." We find this assertion objectionable for the following reasons. The information that is sought after varies considerably in medical imaging from application to application. It is usually not the case that the physician analyzes all types of tissue within the region of the body for which data are gathered. Quite often the external shape of a particular organ system is all that is of interest. It makes perfect sense to attempt to estimate where surfaces lie, since most structures within the body have tangible boundaries which the surgeon actually encounters during a surgical procedure. Just by itself, the philosophy adopted in surface rendering does not make it an inferior depictor of structure compared to volume rendering. Surface renderers do not assume that an entire volume can be represented by a surface; rather, they operate on the principle that there are identifiable objects inside the human body and that these are appropriately represented by their surfaces. The usefulness of surface rendering in a variety of medical applications has been established by a number of independent critical studies (e.g., [1, 4, 7, 16]). The philosophy of surface rendering makes sense for other reasons too.
There are established theories of vision which, based on known results in human vision from fields ranging from neurophysiology to psychophysics, have led to computational frameworks for vision that assume that "the visible world can be regarded as being composed of smooth surfaces having reflectance functions whose spatial structure may be elaborate" [11].

Frenkel's article implies that volume rendering has more precision and is less prone to artifacts than surface rendering. To us this is by no means obvious. While we have heard similar opinions expressed prior to the article, we have not yet seen a scientifically acceptable validation of such a hypothesis. The notions of precision and artifact are very complicated and cannot be treated in a general fashion independent of specific applications and observers. An artifact in the signal processing sense is not necessarily a medically relevant artifact. How well the tissue types of interest are identified--an operation generally referred to as classification--is vital to both surface and volume methods. Given the same degree of success in classification for both surface and volume rendering, it is not clear that either method is better than the other as far as the portrayal of medically useful information is concerned.

The article conveys that natural objects such as internal organs do not lend themselves to geometry-based description. This completely contradicts our experience. Though surfaces of medical objects are continuous and smooth, they can be modeled and rendered remarkably well using a large number of discrete surface elements [2, 5, 9, 10, 12]; see Figures 1-2. Though the geometry of the modeled surface is discrete, the shape of the original continuous surface can be portrayed accurately so as to show fine details such as the sutures on a human skull via a combination of techniques for object interpolation and for estimating normals to the continuous surface in the vicinity of points on the discrete surface. It is medically not necessary that a fine structure such as a suture be modeled with absolute accuracy (meaning represented precisely in the geometry of the discrete surface). All we need is that the corresponding feature in the geometric surface should lie in close vicinity of the underlying real structure. Appropriate normal estimation procedures [2] guided by the discrete geometry and the original data can make up for deficiencies due to classification. Further, data structures representing such surfaces have far fewer elements and require, typically, an order of magnitude less storage than those used to represent the three-dimensional data set in volume rendering. In addition, since for both methods of rendering the number of elements to be addressed while producing a single image is of the same order of magnitude as the total number of elements in the data structure, once preprocessing is completed, typically much less computational work needs to be done by a (special-purpose) surface renderer than by a (more general-purpose) volume renderer.
Modeling objects and surfaces in such a discrete fashion is desirable for reasons beyond rendering: it allows us access to powerful results from mathematical topology which can be applied to the study of objects and their surfaces in the discrete environment through the development of appropriate theories [5, 9, 10, 12, 13].
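The kind of gray-level normal estimation alluded to above can be sketched as follows. This is an illustrative stand-in of our own, assuming a simple central-difference gradient; the actual procedure of [2] differs in detail.

```python
# Sketch of gradient-based normal estimation: the normal near a point
# on the discrete surface is estimated from central differences of the
# original gray-level volume, not from the discrete geometry itself.
# Illustrative only; the procedure of [2] differs in detail.

import numpy as np

def gradient_normal(volume, x, y, z):
    """Estimate a unit surface normal at interior voxel (x, y, z)."""
    g = np.array([
        volume[x + 1, y, z] - volume[x - 1, y, z],
        volume[x, y + 1, z] - volume[x, y - 1, z],
        volume[x, y, z + 1] - volume[x, y, z - 1],
    ], dtype=float)
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

# A volume whose values increase linearly along x has normal (1, 0, 0):
vol = np.arange(3, dtype=float)[:, None, None] * np.ones((3, 3, 3))
print(gradient_normal(vol, 1, 1, 1))  # prints [1. 0. 0.]
```

Because the normals come from the original data rather than from the cube faces, the rendered shading can reveal structure finer than the discrete geometry itself.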

Visualization is only one aspect of a complete ability to analyze volumetric data. Mensuration [14] and interactive manipulation of structures (such as cutting out a segment of the object and relocating it) are essential capabilities in a variety of applications such as surgical planning [3, 15]. These are, at present, possible only with structure-oriented approaches such as surface rendering.

In summary, much remains unknown about the relative merits of volume and surface techniques for visualization in medical imaging. Both of these methodologies are exciting areas of research, and, in some sense, they complement each other. For example, when the object to be visualized is not well-defined (e.g., a diffuse tumor), volume rendering, in principle, seems to be more appropriate than surface rendering; but the medical efficacy of such a display will depend on how opacities and colors are assigned to data points. On the other hand, for the display of tangible objects, a high-quality surface renderer may well be preferred to any currently available volume renderer, since it is likely to be computationally less expensive and yet provide better capabilities for mensuration and for interactive manipulation. We are confident that medical imaging will greatly benefit from the development of both volume and surface rendering in the years to come.

Jayaram K. Udupa Gabor T. Herman Department of Radiology University of Pennsylvania Philadelphia, PA 19104

REFERENCES

[1] Burk, D.L., Jr., et al. Three-dimensional computed tomography of acetabular fractures. Radiology 155, (1985), 183-186.

[2] Chuang, K.S., Udupa, J.K., and Raya, S.P. High-quality rendering of discrete three-dimensional surfaces. Tech. Rep. MIPG130, Medical Image Processing Group, Dept. of Radiology, Univ. of Pennsylvania, Philadelphia, 1988.

[3] Cutting, C.C., et al. Computer-aided planning and evaluation of facial and orthognathic surgery. Comput. Plast. Surg. 13, (1986), 449-461.

[4] Gillespie, J.E., Isherwood, I., Barker, G.R., and Quayle, A.A. Three-dimensional reformations of computed tomography in the assessment of facial trauma. Clinical Radiology 38, (1987), 523-526.

[5] Gordon, D., and Udupa, J.K. Fast surface tracking in three-dimensional binary images. Comput. Vis. Graph. Image Process. 45, (1989), 196-214.

[6] Greenleaf, J.F., Tu, T.S., and Wood, E.H. Computer-generated three-dimensional oscilloscopic images and associated techniques for display and study of the spatial distribution of pulmonary blood flow. IEEE Trans. Nucl. Sci., NS-17, (1970), 353-359.

[7] Hadley, M.N., et al. Three-dimensional computed tomography in the diagnosis of vertebral column pathological conditions. Neurosurgery 21, (1987), 186-192.

[8] Harris, L.D., Robb, R.A., Yuen, T.S., and Ritman, E.L. Non-invasive numerical dissection and display of anatomic structure using computerized x-ray tomography. In Proceedings of the Society of Photo-Optical Instrumentation Engineers (Bellingham, Wash.). SPIE, 1978, pp. 10-18.

[9] Herman, G.T., and Webster, D. A topological proof of a surface tracking algorithm. Comput. Vis. Graph. Image Process. 23, (1983), 162-177.

[10] Herman, G.T., and Liu, H.K. Three-dimensional display of human organs from computed tomograms. Comput. Graph. Image Process. 9, (1979), 1-29.

[11] Marr, D. Vision. W.H. Freeman and Co., San Francisco, 1982, p. 44.

[12] Raya, S.P., and Udupa, J.K. Shape-based interpolation of multidimensional objects. IEEE Trans. Med. Imag. To be published.

[13] Sander, P.T., and Zucker, S.W. Inferring differential structure from 3-D images: Smooth cross sections of fibre bundles. Tech. Rep. TR-CIM-88-6, Computer Vision and Robotics Laboratory, McGill Research Center for Intelligent Machines, McGill University, Montreal, Canada, 1988.

[14] Trivedi, S.S., et al. Measurements on 3-D surface displays in the clinical environment. In Proceedings of the 7th Annual Conference and Exposition of the National Computer Graphics Association (Fairfax, Va). NCGA, 1986, pp. 93-110.

[15] Udupa, J.K., and Odhner, D. Display of medical objects and their interactive manipulation. In Proceedings of the 15th Canadian Conference on Computer Graphics and Computer Vision (Palo Alto, Calif.). Morgan Kaufmann, 1989, pp. 40-46.

[16] Wojcik, W.G., Ediken-Monroe, B.S., and Harris, J.H. Three-dimensional computed tomography in acute cervical spine trauma: A preliminary report. Skeletal Radiology 16, (1987), 621-629.

AUTHOR'S RESPONSE

The origin of Udupa and Herman's many accusations becomes clear upon a close, honest look at scientific matters:

In their first objection, the authors quote a passage from a section titled "The Material Mixture Model." This model was proposed by Bob Drebin, Loren Carpenter, and Pat Hanrahan of Pixar. The section's purpose is to explain why these researchers departed from geometry-based modeling. The section is based on quotes from interviews with Drebin and summarizes parts of Drebin et al.'s paper, "Volume Rendering," which Drebin delivered at the SIGGRAPH '88 panel to thousands of computer graphics specialists, and which was published in Computer Graphics, Aug. 1988, pp. 65-74. In their paper, Drebin et al. write:

An implicit assumption in surface rendering algorithms is that a model consisting of thin surfaces suspended in an environment of transparent air accurately represents the original volume. Often the data is from the interior of a fluid-like substance containing mixtures of several different materials. Subtle surfaces that occur at the interface between materials, and local variations in volumetric properties, such as light absorption or emission, are lost if the volume is reduced to just surfaces.

The corresponding passage in my article and the one found so problematic by Udupa and Herman are a summary of the above. The summary is immediately followed by a quote from Drebin. He said, "We have a three-dimensional data set and we'd like to see into that three-dimensional data set, so let's treat it as a three-dimensional image." The paragraph in its entirety is about Drebin's reasons for departing from surface rendering techniques and developing a new model.

It is also interesting that Udupa and Herman do not comment on a key passage before the one they find so offensive. In the preceding paragraph, I summarized several surface extraction techniques and ended with:

Other surface techniques output polygons at every voxel; each voxel might be treated as a cube whose faces are output as square polygons, or values at each vertex are used and estimates made of where a surface cuts through the cube.

This passage summarizes cuberille and marching cube techniques, as found in Drebin et al.'s paper:

The cuberille technique first sets a threshold representing the transition between two materials and then create[s] a binary volume indicating where a particular material is present. Each solid voxel is then treated as a small cube and the faces of this cube are output as small square polygons (Herman, 1979) ... The marching cubes technique places the sample values at the vertices of the cube and estimates where the surface cuts through the cube (Lorensen, 1987).
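The cuberille idea quoted here can be sketched in a few lines. This is purely illustrative and far simpler than the published algorithms of (Herman, 1979) or the surface tracking of the letter's reference [5].

```python
# Rough sketch of the cuberille idea: threshold the volume into a
# binary array, then output one square face wherever a solid voxel
# abuts an empty neighbor.  Illustrative only; the published
# algorithms are considerably more involved.

import numpy as np

def cuberille_faces(volume, threshold):
    """Return (voxel_index, outward_direction) pairs for the boundary
    faces of the thresholded binary volume."""
    solid = volume >= threshold
    padded = np.pad(solid, 1, constant_values=False)  # empty border
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for idx in zip(*np.nonzero(solid)):
        for d in directions:
            neighbor = tuple(idx[k] + 1 + d[k] for k in range(3))  # +1: padding offset
            if not padded[neighbor]:
                faces.append((idx, d))
    return faces

# A single solid voxel exposes the six faces of a cube:
vol = np.zeros((3, 3, 3)); vol[1, 1, 1] = 1.0
print(len(cuberille_faces(vol, 0.5)))  # prints 6
```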

The first author of "(Herman, 1979)" is the second author of the letter above, and "(Herman, 1979)" and its reference [10] [see preceding reference list] are the same paper. Drebin found an alternative to Herman's method of treating voxels as cubes, and Herman does not like it. The quarrel is between Udupa and Herman and Drebin et al., and seems to predate my article. My article makes no value judgments on the contributions of any researchers. There is nothing "partisan" about a journalist who, familiar with the literature, chooses not to quote directly from it and opts, instead, to use interview quotes. In light of Udupa and Herman's camouflaged position, I think it unnecessary to refute any other accusations. But, if my article sparks or rekindles a useful debate between factions, I am pleased.
COPYRIGHT 1989 Association for Computing Machinery, Inc.

Author: Udupa, Jayaram K.; Herman, Gabor T.
Publication: Communications of the ACM
Date: Nov 1, 1989