
Region Space Guided Transfer Function Design for Nonlinear Neural Network Augmented Image Visualization.

1. Introduction

With its ability to make objects visible and to reveal clear detail, visualization has proven to be of paramount importance for exploring meaningful properties of volume data [1]. Because it obtains two-dimensional rendering results on the screen directly from the data field, without building a geometric model in advance, volume rendering is an effective visualization method for extracting underlying information of interest from volumetric data using interactive graphics and imaging [2, 3]. Kniss et al. [4] visualized the muscle, soft tissue, and bone from the visible male head data using volume rendering and produced a set of direct manipulation widgets to make exploring such features convenient. Ching and Chang [5] rendered the features of interest in CT (Computed Tomography) images and generated a large wide-angle perspective projection view in an endoscopy to help a physician in diagnosis. Zhang et al. [1] synchronized dual-modality cardiac MRI (Magnetic Resonance Imaging) and 3D ultrasound volumes and visualized the dynamic heart by 4D cardiac image rendering. Based on volume rendering, Zhang et al. developed a platform integrating a multivolume visualization method for both heart anatomical data and electrophysiological data [6]. Hsieh et al. [7] visualized the three-dimensional (3D) geometry of the ear ossicles from segmented ossicle CT slices, presenting their spatial relation with the temporal bone to diagnose middle ear disease. To visualize brain activity conveniently, Holub and Winer [8] performed 3D and 4D volume ray casting on a tablet device in real time.

The transfer function plays a fundamental role in visualization through its capability of classifying and segmenting features of volume data, which affects both the quality of the rendered image and the user's perception of the volume data. To measure through-plane MR flow, Thunberg et al. [9] presented a visualization method which combines the magnitude and velocity images into one single image: through the transfer function, velocities are color-coded and assigned a predefined opacity, so that the relation between the measured blood flow and the underlying anatomy can be understood. Zhang presented a statistics-based method to visualize a 3D cardiac volume dataset [10] and further proposed a novel transfer function design approach for revealing detailed structures in human heart anatomy via perception-based lighting enhancement [11]. Yang presented a fusion visualization framework combining the cardiac electrophysiology pattern with the anatomy pattern through a novel multidimensional fusion transfer function [12].

Ebert et al. [13] studied the accuracy of volume rendering for arterial stenosis measurement, and the results suggested that the choice of transfer function parameters greatly affects the accuracy of volume rendering, while accurate selection of transfer function parameters remains a challenge due to the lack of meaningful guidance information and intuitive user interfaces. Existing methods for this problem are mainly object-centric, image-centric, and data-centric [14]. The object-centric approach first classifies or segments the volume data through clustering, probability models, or machine learning techniques covering artificial neural networks, support vector machines, and hidden Markov models [15-18]. The optical parameters are then specified based on the classification result.

Different from the object-centric method, an image-centric transfer function is designed on the rendered images. Through evaluation of the projected images, the parameters of the transfer function are automatically adjusted and reapplied to the original data iteratively until a satisfactory rendering result is achieved. Based on a set of rendered images, He et al. [19] presented a stochastic method to search for a satisfactory transfer function. Marks et al. [20] proposed the Design Gallery method, which provides the user with a variety of ordered graphics or animations with different appearances, generated automatically by a series of transfer functions from an input parameter vector. Users then explore this image space to search for a satisfactory transfer function.

In the data-centric approach, the parameters of the transfer function are specified by analyzing the volume data. Generally, collecting additional information about the data prior to specifying the transfer function makes the design more convenient. The scalar value of the volume data is commonly used to derive a 1D transfer function; gradient [21, 22] and curvature [23, 24] have been introduced as the second variable for two-dimensional transfer functions. Roettger et al. [25] extended the transfer function variables to spatial information: spatially connected regions were grouped and thus classified. Huang et al. [26] added spatial information into the transfer function domain and extended the number of dimensions to three; material boundaries are accurately revealed by using a designed three-dimensional cost function to drive a region growing algorithm. Some approaches classify the topological structure of the volume data for transfer function design [27, 28]. Through continuous scale-space analysis and detection filters, Correa et al. [29] obtained 3D scale fields which represent the scale of every voxel. A size-based transfer function was then proposed that uses the scale fields to map the scale of local features to color and opacity, so features with similar scalar values in complex data can be classified by their relative size. With increasing transfer function dimensionality, however, specifying the parameters properly becomes a more difficult and tedious task.

When the dimension is higher than two, it is difficult to specify the parameters of the transfer function directly, so histograms are often used to find a satisfactory one. Based on the first and second derivatives of the volume, Kindlmann and Durkin [22] built a 2D histogram in which object boundaries appear as arcs. A feature-sensitive transfer function can then be semiautomatically generated from the arcs to reveal features of interest. Lum et al. [30] employed gradient-aligned samples instead of first derivatives as the first property for creating a variant 2D histogram; the transfer function designed through this histogram classifies voxels with different degrees of homogeneity by mapping them to different optical parameters. However, with an increasing number of boundaries, their separation becomes more difficult due to intersections and overlaps.
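To make the 2D histogram idea concrete, the sketch below builds a scalar-value versus gradient-magnitude histogram of the kind described above. This is a minimal NumPy illustration on a synthetic two-material volume, not the code of any of the cited methods; the bin count and test volume are arbitrary choices.

```python
import numpy as np

def scalar_gradient_histogram(volume, bins=64):
    """Build a 2D histogram over (scalar value, gradient magnitude).

    Boundaries between two materials show up as arcs in this space,
    since the gradient magnitude peaks between the two scalar values.
    """
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, s_edges, g_edges = np.histogram2d(
        volume.ravel(), grad_mag.ravel(), bins=bins)
    return hist, s_edges, g_edges

# synthetic volume: two materials separated by a sharp boundary
vol = np.zeros((32, 32, 32))
vol[:, :, 16:] = 100.0
hist, s_edges, g_edges = scalar_gradient_histogram(vol)
```

Interior voxels of both materials fall at near-zero gradient magnitude, while the boundary voxels populate the high-gradient part of the histogram.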

In this paper, a neural network preprocessing approach for slice denoising is first implemented to improve the quality of 3D biomedical data. Then a two-dimensional transfer function for the preprocessed data is designed based on a modified 2D histogram, which is created using a novel region space based method with depth information. The features of interest in the data can thus be explored exactly. In Section 2 of this paper, the denoising method is described and a two-dimensional transfer function based on the 2D histogram is designed. The efficiency and practicability of the presented method are shown in Section 3. Finally, conclusions are drawn in Section 4.

2. Transfer Function Design

2.1. Augmentation on Slice Data. Biomedical volume data produced by current noninvasive devices such as CT and MRI scanners are usually accompanied by noise, partial volume effects, and bias. Data with serious noise or erroneous values, which cause a low SNR (Signal-to-Noise Ratio), directly affect transfer function specification and obscure the objects in the resulting image.

Spatial low-pass filtering such as median filtering and Gaussian smoothing has the advantage of reducing the amplitude of noise fluctuations. However, such filtering blurs details in the data such as lines and edges and does not preserve regional boundaries or tiny structures, which makes the resulting image too fuzzy. This hampers effective boundary enhancement for noisy data containing many details.

Although nonlinear filtering succeeds in preserving edges, it loses resolution due to the suppression of details. To mitigate this, nonlinear enhancement algorithms use boundary information and the neighborhood of a pixel to preprocess the image data [14, 31], effectively removing noise in regions with homogeneous physical properties and significantly improving image quality. Loss of information is minimized by preserving object boundaries and detailed structures, and shape is enhanced by discontinuity sharpening. Since anisotropic diffusion filtering smooths the image along the edge direction rather than orthogonal to it, the location and intensity of edges are retained. Unlike traditional methods, a neural network can learn features beneficial to the task through its hierarchical structure. Based on a multilayer perceptron neural network, Burger et al. [32] presented a denoising algorithm learned on a large image dataset. The multilayer perceptron (MLP) can be represented as a nonlinear function which maps a noisy image to a noise-free image:

h(u) = α_3 + W_3 · Θ(α_2 + W_2 · Θ(α_1 + W_1 · u)) (1)

Here the network has two hidden layers; α_1, α_2, α_3 are vector-valued biases and W_1, W_2, W_3 are the weight matrices. The function Θ operates component-wise. To realize image denoising, clean images are selected from an image dataset and a given input noise level is used to produce the corresponding noisy images. The MLP parameters are then estimated by the backpropagation algorithm so as to satisfy:

min_{W_i, α_i} Σ ‖h(u) − v‖² (2)

where u is the vector-valued noisy image input, h(u) is the mapped vector-valued denoised output, and v is the clean image. The architecture of the network is shown in Figure 1.
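As a concrete illustration of Eq. (1), the sketch below implements the forward pass of such an MLP in NumPy. The layer sizes and random weights are purely illustrative, and tanh is assumed for Θ; this is not the trained network of Burger et al.

```python
import numpy as np

def mlp_denoise(u, W, a, theta=np.tanh):
    """Eq. (1): h(u) = a_3 + W_3 · Θ(a_2 + W_2 · Θ(a_1 + W_1 · u)).

    u : flattened noisy patch; W, a : lists of weight matrices and biases.
    Θ is applied component-wise (tanh here, an assumption for illustration).
    """
    h = u
    for Wi, ai in zip(W[:-1], a[:-1]):
        h = theta(ai + Wi @ h)       # hidden layers with nonlinearity
    return a[-1] + W[-1] @ h         # linear output layer

rng = np.random.default_rng(0)
d, hdim = 16, 32                     # patch dim and hidden width (illustrative)
W = [rng.standard_normal((hdim, d)) * 0.1,
     rng.standard_normal((hdim, hdim)) * 0.1,
     rng.standard_normal((d, hdim)) * 0.1]
a = [np.zeros(hdim), np.zeros(hdim), np.zeros(d)]
u = rng.standard_normal(d)
out = mlp_denoise(u, W, a)           # denoised patch, same size as input
```

In training, the weights and biases would be fitted by backpropagation to minimize the squared error of Eq. (2) over pairs of noisy and clean patches.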

For image denoising, the MLP uses a fully connected network to process image fragments and then combines all processed segments to form the denoised image. First the noisy image is split into overlapping patches and each patch u is denoised separately. The denoised patches h(u) are then placed at the locations of their noisy counterparts, and the denoised image is obtained by averaging over the overlapping regions.
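The split-denoise-average procedure above can be sketched as follows. The `denoise_patch` argument is a stand-in for the trained MLP h(·) (the identity here, so the sketch is self-contained); patch size and stride are illustrative.

```python
import numpy as np

def denoise_image(img, patch=8, stride=4, denoise_patch=lambda p: p):
    """Patch-wise denoising with overlap averaging.

    Each overlapping patch is denoised separately; denoised patches are
    accumulated at their original locations and divided by a weight image
    counting how many patches cover each pixel.
    """
    H, W = img.shape
    acc = np.zeros_like(img, dtype=np.float64)
    weight = np.zeros_like(img, dtype=np.float64)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            p = img[y:y+patch, x:x+patch]
            acc[y:y+patch, x:x+patch] += denoise_patch(p)
            weight[y:y+patch, x:x+patch] += 1.0
    weight[weight == 0] = 1.0          # pixels missed by the patch grid
    return acc / weight

img = np.arange(64, dtype=np.float64).reshape(8, 8)
out = denoise_image(img, patch=4, stride=2)   # identity denoiser: out == img
```

A Gaussian window over each patch, as used in the experiments below, would replace the uniform weight of 1.0 per patch.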

2.2. Region Space Guided Transfer Function Design. Kindlmann and Durkin [22] added higher-order derivatives of the voxel to the transfer function domain. The extracted boundaries appear as arches in the derived histogram, whose axes represent scalar value and gradient magnitude. Although this histogram improves the selection of boundaries, the intersection or overlap of two arches, caused by different feature voxels sharing the same scalar value and gradient magnitude, may create ambiguities in boundary classification. Sereda et al. [33] proposed a multidimensional transfer function based on the LH histogram to facilitate the separation of features represented by the arches.

The LH histogram based method computes low and high values, labeled FL and FH respectively, for each sample voxel. If the gradient magnitude of a voxel is less than a threshold, the voxel is identified as an internal sample; otherwise it is considered a boundary element. For an internal voxel, FL and FH are equal to the voxel's scalar value. For voxels supposed to belong to boundaries, integration is performed along the gradient direction and its reverse in the gradient field until the gradient magnitude falls below the threshold, which yields FL and FH. The FL and FH values of all voxels are plotted in the same coordinate system, giving the LH histogram. Since FL is never larger than FH, points in the LH histogram lie only on or above the diagonal line. Points on the diagonal indicate internal voxels, whose FL and FH values are equal; the rest represent boundary voxels, whose FL and FH values are the scalar values of the two materials forming the boundary.
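The FL/FH computation can be illustrated in one dimension, where "integration along the gradient" reduces to walking uphill or downhill along a scalar profile until the local derivative drops below the threshold. This is a deliberately simplified sketch: the actual method integrates along the 3D gradient field (e.g. with second-order Runge-Kutta, as in Algorithm 1 below), and the threshold and test signal here are arbitrary.

```python
import numpy as np

def lh_values_1d(f, eps=1.0, max_steps=100):
    """1D sketch of FL/FH computation for the LH histogram.

    Interior samples (|f'| < eps) get FL = FH = f[i]. For boundary
    samples, walk downhill to find FL and uphill to find FH, stopping
    once the local derivative magnitude drops below eps.
    """
    g = np.gradient(f)
    fl, fh = np.empty_like(f), np.empty_like(f)
    for i in range(len(f)):
        if abs(g[i]) < eps:                 # interior sample
            fl[i] = fh[i] = f[i]
            continue
        lo = hi = i
        for _ in range(max_steps):          # downhill toward FL
            if lo > 0 and f[lo-1] < f[lo] and abs(g[lo]) >= eps:
                lo -= 1
            else:
                break
        for _ in range(max_steps):          # uphill toward FH
            if hi < len(f)-1 and f[hi+1] > f[hi] and abs(g[hi]) >= eps:
                hi += 1
            else:
                break
        fl[i], fh[i] = f[lo], f[hi]
    return fl, fh

# step edge between material values 0 and 10
f = np.array([0., 0., 0., 2., 5., 8., 10., 10., 10.])
fl, fh = lh_values_1d(f)
```

Every sample inside the transition is mapped to (FL, FH) = (0, 10), i.e. a single off-diagonal point for the whole boundary, while the flat interior samples land on the diagonal.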

In volume rendering, using the LH method to design transfer functions not only reduces the dependence on image segmentation, but also incorporates voxel gradient information and boundary intensity information. Due to the characteristics of medical data and clinical applications, a more centralized voxel distribution in the histogram is desirable.

A proper region space Ω is selected and each voxel is compared with the voxels in Ω. Let V(p) be the scalar or intensity value of a voxel p. The intensity mean m and variance v of all voxels within Ω are given, respectively, by:

m = (1/n) ( Σ_{p_i ∈ Ω} V(p_i) + V(p) ) (3)

v = (1/n) ( Σ_{p_i ∈ Ω} ‖V(p_i) − m‖ + ‖V(p) − m‖ ) (4)

where p_i represents an adjacent voxel in the region space of voxel p and n is the number of voxels in Ω. The criterion for identifying boundary voxels can then be formulated as in (5):

p ∈ S_inner,     if v < r,

p ∈ S_boundary,  otherwise. (5)

where S_inner is the set of voxels inside the materials and S_boundary is the set of voxels on the boundaries. Thus a voxel whose difference from the voxels in Ω falls outside the range r is considered a boundary voxel.
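Eqs. (3)-(5) can be sketched for a single voxel as follows. The cubic neighborhood used for Ω and the clipping at the volume border are assumptions for illustration; the paper leaves the exact shape of the region space open.

```python
import numpy as np

def classify_voxel(volume, idx, r, radius=1):
    """Classify one voxel as 'inner' or 'boundary' per Eqs. (3)-(5).

    Ω is taken as the cubic neighborhood of the given radius, including
    the voxel p itself, so the mean and mean absolute deviation cover
    all n voxels of the region.
    """
    z, y, x = idx
    zs = slice(max(z - radius, 0), z + radius + 1)
    ys = slice(max(y - radius, 0), y + radius + 1)
    xs = slice(max(x - radius, 0), x + radius + 1)
    region = volume[zs, ys, xs].astype(np.float64)
    n = region.size
    m = region.sum() / n                     # Eq. (3)
    v = np.abs(region - m).sum() / n         # Eq. (4)
    return 'inner' if v < r else 'boundary'  # Eq. (5)

# two-material test volume: values 0 and 100 split along x
vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 100.0
inner = classify_voxel(vol, (2, 2, 1), r=1.6)   # deep inside a material
edge = classify_voxel(vol, (2, 2, 4), r=1.6)    # right at the interface
```

A homogeneous neighborhood yields v = 0 and the voxel is marked inner; a neighborhood straddling the material interface yields a large v and the voxel is marked as a boundary voxel.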

In regions where complex boundaries exist, a single criterion will cause boundary determination errors. Since some boundaries only appear at one depth and disappear beyond another, complex boundaries can be further differentiated according to depth information. A modified 2D histogram is therefore created using the region space based method with depth information: in this paper, the points in the original histogram are further grouped according to their corresponding depth.

A 2D transfer function can then be specified on the created LH histogram by selecting relevant areas and assigning them color and opacity. The corresponding features in the volume data can thus be explored. The details of the proposed method are given in Algorithm 1.
Algorithm 1: Region space guided visualization.

   Input: Anisotropic diffused volume data.
   Output: Visualization result of biomedical volume data.

1    for each voxel p(x,y,z) do
2     choose adjacent voxels in the region space Ω;
3     calculate the intensity mean m of p;
4     calculate the variance v of p;
5     set the value of estimation radius r;
6     if v < r then
7     p is marked as an inner voxel;
8            FL = FH = V(p);
9     else
10    p is considered to be the boundary voxel;
11    use the second order Runge-Kutta method to search FL and
       FH value of p;
12    end
13   end
14   compute the depth information for each voxel;
15   construct the depth LH histogram and design the transfer function;
16   visualize the volume data according to the transfer function;
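Steps 14-15 of Algorithm 1, grouping LH histogram points by depth, can be sketched as below. The function name, the uniform depth binning, and the synthetic (FL, FH, depth) data are all illustrative assumptions; the paper does not prescribe a specific binning scheme.

```python
import numpy as np

def depth_lh_histogram(fl, fh, depth, bins=32, depth_bins=4):
    """Group LH histogram points by depth.

    Returns a stack of 2D (FL, FH) histograms, one per depth interval,
    so boundaries that exist only at certain depths can be separated.
    """
    d_edges = np.linspace(depth.min(), depth.max() + 1e-9, depth_bins + 1)
    hists = []
    for k in range(depth_bins):
        sel = (depth >= d_edges[k]) & (depth < d_edges[k + 1])
        h, _, _ = np.histogram2d(fl[sel], fh[sel],
                                 bins=bins, range=[[0, 256], [0, 256]])
        hists.append(h)
    return np.stack(hists)

# synthetic per-voxel values with FH >= FL, depths on a 0-150 scale
rng = np.random.default_rng(1)
fl = rng.uniform(0, 255, 1000)
fh = np.maximum(fl, rng.uniform(0, 255, 1000))
depth = rng.uniform(0, 150, 1000)
stack = depth_lh_histogram(fl, fh, depth)
```

A transfer function widget can then let the user pick regions in one depth layer at a time, so two boundaries that overlap in the plain LH histogram but live at different depths become separable.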


3. Results and Discussion

In this section, two datasets, a tooth dataset and a sheep heart dataset, are used to evaluate the performance of the proposed transfer function. Their sizes are 256x256x161 and 352x352x256, respectively. All experiments are carried out on a computer with an Intel Core i5 at 2.66 GHz, 4 GB RAM, and an NVIDIA GeForce GT 650 graphics card.

Biomedical volume data produced by current noninvasive devices such as CT and MRI scanners are usually accompanied by serious noise, which degrades the visualized image and blurs objects in the result. The MLP neural network is therefore applied to denoise the volume data. In the experiments, the LabelMe dataset, which contains approximately 150,000 images, is used to train the MLP. Before training, the images are padded by 6 pixels on each side. The noise level σ is set to 10. We use patches of size 39x39 to generate the predicted patch and then adopt a filter of size 9x9 to average the output patches, so the effective patch size is 47x47. The learning rate in each layer is r/N, where N is the number of input units of the current layer; the basic learning rate r was set to 0.1. To improve results slightly, we use a sliding window with stride 3, weighting the denoised patches with a Gaussian window instead of using all possible overlapping patches. Figure 2 compares the image denoising results of nonlinear enhancement and the MLP neural network on the tooth data. The Peak Signal-to-Noise Ratio (PSNR) with nonlinear enhancement filtering is 41.4294 and the Structural Similarity Index (SSIM) is 0.9325; the PSNR with the MLP method is 41.4294 and the SSIM is 0.9325. As one can see, the MLP network produces more visually pleasant results. Compared with the original data, noise in homogeneous regions is suppressed and image quality is improved.
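For reference, the PSNR figures quoted above follow the standard definition, sketched here; the 255 peak value assumes 8-bit intensities, and the constant test images are only for illustration.

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10·log10(peak² / MSE)."""
    mse = np.mean((clean.astype(np.float64) -
                   denoised.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((16, 16), 128.0)
noisy = clean + 1.0          # uniform error of 1 gray level, so MSE = 1
val = psnr(clean, noisy)     # 20·log10(255) ≈ 48.13 dB
```

SSIM is more involved (local means, variances, and covariances over a sliding window); library implementations such as scikit-image's `structural_similarity` are typically used in practice.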

In the LH histogram, points around the diagonal represent the interior of materials. Regions including those points are assigned lower opacity in the transfer function to fade unimportant information out. The remaining regions are accumulations of boundary voxels, which contain the features of interest. Figure 3 shows the LH histograms and rendering results for the original tooth data and the MLP denoised data: Figure 3(a) shows the LH histogram (left) and corresponding rendering result (right) for the original tooth dataset, and Figure 3(b) shows those for the denoised data. In Figure 3(b) the separation of points in the LH histogram is more compact, on account of the suppression of noise in homogeneous regions and the enhancement of boundaries between materials. Since the noise is suppressed effectively, boundaries between different materials can be visualized more clearly: the dentine-enamel boundary (yellow) is explored exactly and the noise around the root of the dentine is considerably removed.

Figure 4 shows the LH histograms and rendering results on the anisotropic diffusion enhanced tooth volume data using the conventional and the regional criterion based methods. The number of iterations of the nonlinear filtering is the process ordering parameter. Figure 4(a) presents the visualization result of the enhanced data using the conventional method, with the gradient magnitude threshold for investigating the FL and FH intensity profiles set to 10. Figure 4(b) shows the LH histogram and corresponding rendering constructed by the regional criterion based method, with the region range r set to 1.6. As shown in Figure 4(b), since the noise is suppressed and the path tracing for the FL and FH values starts from the regional criterion, the separated groups of points corresponding to different features are more concentrated in the LH histogram, which ensures a more accurate identification of boundary voxels. In both experimental results, the noise around the root of the dentine is considerably removed, the blurred-boundary phenomenon is eliminated, and the various boundaries of the tooth, i.e., enamel-air (white), dentine-enamel (yellow), pulp-dentine (red), and dentine-air (pink), are revealed clearly in the final image.

Figure 5 shows the rendering result after introducing depth information into the region space with the MLP augmented data. Classifying boundary voxels through the conventional LH histogram leads to confused boundary exploration: in Figure 4 there is a visible discontinuity in the pulp-dentine boundary, caused by falsely classifying boundary voxels of the dentine tissue. Because each boundary appears at a specific depth (for example, the tooth enamel lies in the region near the viewer, while the pulp lies at a depth farther away), adding depth information to the histogram construction yields distinct boundaries. In our experiment the depth of the enamel is about 120 and the depth of the pulp is about 80. As shown in Figure 5(a), since more voxels are properly classified as boundary voxels by the transfer function designed on the LH histogram with depth information in region space, the discontinuity of the pulp-dentine boundary (red) in Figure 4 is corrected and the exact pulp-dentine boundary is revealed in the rendered image. The rendering time of the proposed method is 1.1 s, which preserves the real-time interaction property of the visualization.

Figure 6 shows the denoising result for the sheep heart slice data and the rendering result obtained via the proposed region space guided transfer function. Figure 6(a) shows the original slices and Figure 6(b) the two corresponding denoised results. The preprocessing clearly decreases noise in tissues of the sheep heart such as the muscle and the fat while retaining boundary details. Fine structural features of interest in the sheep heart are thus clearly visualized through the region space based transfer function with the augmented data: in Figure 6(c), the fat of the sheep heart is colored yellow, the muscle red, and the profile of the heart white. The structures of the sheep heart are exactly explored, and the shape, spatial position, and relationships of the tissues can be observed without ambiguity. The rendering time for the sheep heart dataset is 2.6 s.

4. Conclusion

The transfer function plays a crucial role in volume rendering for directly exploring detailed information hidden in data as well as enhancing important boundaries. In this work we first apply an MLP neural network to the volume data to denoise it while preserving boundaries, which considerably improves the quality of volume data acquired by imaging devices. We then improve the LH method by incorporating regional depth information to achieve semiautomatic transfer function generation. This method reduces the influence of noise and makes the voxel distribution more centralized: in the LH histogram the distribution along the diagonal line is more concentrated, and the boundaries of important objects are effectively emphasized. The features of interest in the data can thus be found exactly by mapping the scalar values of boundary voxels, which correspond to the points in the LH histogram, to appropriate opacity and color.

https://doi.org/10.1155/2018/7479316

Data Availability

The two datasets are both open data which are available at http://visual.nlm.nih.gov/.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 61502275 and the Postdoctoral Science Foundation of China (no. 2017M622210). This work was also supported in part by the Natural Science Foundation of Shandong of Grant no. ZR2017MF051, the National Natural Science Foundation of China (NSFC) of Grant no. 61501450, and the MOE (Ministry of Education in China) Project of Humanities and Social Sciences of Grant no. 16YJC880057.

References

[1] Q. Zhang, R. Eagleson, and T. M. Peters, "GPU-based visualization and synchronization of 4-D cardiac MR and ultrasound images," IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 5, pp. 878-890, 2012.

[2] H. Pfister, B. Lorensen, C. Bajaj et al., "The transfer function bake-off," IEEE Computer Graphics and Applications, vol. 21, no. 3, pp. 16-22, 2001.

[3] Q. Zhang, R. Eagleson, and T. M. Peters, "Volume visualization: A technical overview with a focus on medical applications," Journal of Digital Imaging, vol. 24, no. 4, pp. 640-664, 2011.

[4] J. Kniss, G. Kindlmann, and C. Hansen, "Multidimensional transfer functions for interactive volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 3, pp. 270-285, 2002.

[5] Y.-T. Ching and C.-L. Chang, "A volume rendering technique to generate a very large wide-angle endoscopic view," Journal of Medical and Biological Engineering, vol. 22, no. 2, pp. 109-112, 2002.

[6] L. Zhang, C. Gai, K. Wang, W. Lu, and W. Zuo, "GPU-based high performance wave propagation simulation of ischemia in anatomically detailed ventricle," in Proceedings of the Computing in Cardiology Conference (CinC '11), pp. 469-472, Hangzhou, China, September 2011.

[7] M. S. Hsieh, F. P. Lee, and M. D. Tsai, "A virtual reality ear ossicle surgery simulator using three-dimensional computer tomography," Journal of Medical and Biological Engineering, vol. 30, no. 1, pp. 57-63, 2010.

[8] J. Holub and E. Winer, "Enabling real-time volume rendering of functional magnetic resonance imaging on an iOS device," Journal of Digital Imaging, vol. 30, no. 6, pp. 738-750, 2017.

[9] P. Thunberg and A. Kahari, "Visualization of Through-Plane Blood Flow Measurements Obtained from Phase-Contrast MRI," Journal of Digital Imaging, vol. 24, no. 3, pp. 470-477, 2011.

[10] L. Zhang, K. Wang, F. Yang et al., "A Visualization System for Interactive Exploration of the Cardiac Anatomy," Journal of Medical Systems, vol. 40, no. 6, 2016.

[11] L. Zhang, K. Wang, H. Zhang, W. Zuo, X. Liang, and J. Shi, "Illustrative cardiac visualization via perception-based lighting enhancement," Journal of Medical Imaging and Health Informatics, vol. 4, no. 2, pp. 312-316, 2014.

[12] F. Yang, W. G. Lu, L. Zhang, W. M. Zuo, K. Q. Wang, and H. G. Zhang, "Fusion visualization for cardiac anatomical and ischemic models with depth weighted optic radiation function," in Proceedings of the Computing in Cardiology Conference (CinC '15), pp. 937-940, IEEE, Nice, France, September 2015.

[13] D. S. Ebert, D. G. Heath, B. S. Kuszyk et al., "Evaluating the potential and problems of three-dimensional computed tomography measurements of arterial stenosis," Journal of Digital Imaging, vol. 11, no. 3, pp. 151-157, 1998.

[14] G. Gerig, O. Kubler, R. Kikinis, and F. A. Jolesz, "Nonlinear anisotropic filtering of MRI data," IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221-232, 1992.

[15] F.-Y. Tzeng and K.-L. Ma, "A cluster-space visual interface for arbitrary dimensional classification of volume data," in Proceedings of the 6th Joint Eurographics-IEEE TCVG Symposium on Visualization, pp. 17-24, Konstanz, Germany, 2004.

[16] P. Sereda, A. Vilanova, and F. A. Gerritsen, "Automating transfer function design for volume rendering using hierarchical clustering of material boundaries," in Proceedings of the 8th Joint Eurographics-IEEE TCVG Symposium on Visualization, pp. 243-250, Lisbon, Portugal, 2006.

[17] F.-Y. Tzeng, E. B. Lum, and K.-L. Ma, "A novel interface for higher-dimensional classification of volume data," in Proceedings of the IEEE Visualization Conference (VIS '03), pp. 505-512, Seattle, WA, USA, 2003.

[18] F.-Y. Tzeng, E. B. Lum, and K.-L. Ma, "An intelligent system approach to higher-dimensional classification of volume data," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 3, pp. 273-283, 2005.

[19] T. S. He, L. C. Hong, A. Kaufman et al., "Generation of transfer functions with stochastic search techniques," in Proceedings of the IEEE Visualization '96, pp. 227-234, San Francisco, CA, USA, 1996.

[20] J. Marks, B. Mirtich, B. Andalman et al., "Design Galleries: a general approach to setting parameters for computer graphics and animation," in Proceedings of the 1997 Conference on Computer Graphics, SIGGRAPH, pp. 389-400, August 1997.

[21] M. Levoy, "Display of surfaces from volume data," IEEE Computer Graphics and Applications, vol. 8, no. 3, pp. 29-37, 1988.

[22] G. Kindlmann and J. W. Durkin, "Semi-automatic generation of transfer functions for direct volume rendering," in Proceedings of the 1998 IEEE Symposium on Volume Visualization, VVS 1998, pp. 79-86, USA, October 1998.

[23] J. Hladuvka, A. Konig, and E. Groller, "Curvature-based transfer functions for direct volume rendering," in Proceedings of the Spring Conference on Computer Graphics 2000, vol. 16, pp. 58-65, 2000.

[24] G. Kindlmann, R. Whitaker, T. Tasdizen, and T. Moller, "Curvature-based transfer functions for direct volume rendering: methods and applications," in Proceedings of the IEEE Visualization Conference (VIS '03), pp. 513-520, USA, October 2003.

[25] S. Roettger, M. Bauer, and M. Stamminger, "Spatialized transfer functions," in Proceedings of the Eurographics/IEEE VGTC Symposium on Visualization, pp. 271-278, 2005.

[26] R. Huang, K.-L. Ma, P. McCormick, and W. Ward, "Visualizing industrial CT volume data for nondestructive testing applications," in Proceedings of the IEEE Visualization 2003, pp. 547-554, Seattle, WA, USA, 2003.

[27] I. Fujishiro, T. Azuma, and Y. Takeshima, "Automating transfer function design for comprehensible volume rendering based on 3D field topology analysis," in Proceedings of the IEEE Visualization '99, pp. 467-470, October 1999.

[28] S. Takahashi, Y. Takeshima, and I. Fujishiro, "Topological volume skeletonization and its application to transfer function design," Graphical Models, vol. 66, no. 1, pp. 24-49, 2004.

[29] C. D. Correa and K.-L. Ma, "Size-based transfer functions: a new volume exploration technique," IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1380-1387, 2008.

[30] E. B. Lum and K.-L. Ma, "Lighting transfer functions using gradient aligned sampling," in Proceedings of the IEEE Visualization 2004-Proceedings, VIS 2004, pp. 289-296, USA, October 2004.

[31] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.

[32] H. C. Burger, C. J. Schuler, and S. Harmeling, "Image denoising: Can plain neural networks compete with BM3D?" in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012, pp. 2392-2399, USA, June 2012.

[33] P. Sereda, A. Vilanova Bartroli, I. W. O. Serlie, and F. A. Gerritsen, "Visualization of boundaries in volumetric data sets using LH histograms," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 2, pp. 208-217, 2006.

Fei Yang, (1,2) Xiangxu Meng, (1) JiYing Lang, (2) Weigang Lu, (3) and Lei Liu (4)

(1) School of Computer Science and Technology, Shandong University, Jinan 250101, China

(2) School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, 264209, China

(3) Department of Educational Technology, Ocean University of China, Qingdao, 266100, China

(4) The Institute of Acoustics of the Chinese Academy of Sciences, Beijing, 100190, China

Correspondence should be addressed to Xiangxu Meng; mxx@sdu.edu.cn

Received 6 July 2018; Accepted 12 September 2018; Published 1 November 2018

Guest Editor: Shengping Zhang

Caption: Figure 1: The architecture of the MLP network.

Caption: Figure 2: Preprocessing results of the tooth dataset. From top to bottom: (a) original slice data; (b) nonlinear enhancement denoised result; (c) MLP denoised result.

Caption: Figure 3: LH histogram and rendering result of the original tooth data and MLP denoised data: (a) LH histogram and the corresponding rendering result of original data and (b) LH histogram and corresponding rendering result of denoised data.

Caption: Figure 4: LH histogram and corresponding rendering result with nonlinear enhanced tooth dataset through the conventional and region criteria based method: (a) LH histogram based on conventional method with gradient threshold of 5 and the corresponding rendering result and (b) LH histogram based on region criteria based method and the rendering result.

Caption: Figure 5: Rendering result of MLP augmented tooth dataset with two methods: (a) rendering result based on region criteria based method and (b) rendering result based on depth enhanced method.

Caption: Figure 6: Visualization result of augmented sheep heart dataset with region space guided transfer function: (a) the original sheep heart data; (b) the denoised data; (c) rendering result of sheep heart with the denoised data.
COPYRIGHT 2018 Hindawi Limited

Article Details
Title Annotation: Research Article
Author: Yang, Fei; Meng, Xiangxu; Lang, JiYing; Lu, Weigang; Liu, Lei
Publication: Advances in Multimedia
Date: Jan 1, 2018
