
Unsupervised Joint Image Denoising and Active Contour Segmentation in Multidimensional Feature Space.

1. Introduction

Unsupervised image segmentation is an important problem with many applications in science, including medical imaging. Image segmentation arises in many computer vision tasks; its aim is to divide an image into a finite number of subregions, whose features serve as the segmentation criteria. Statistical methods, such as the expectation-maximization (EM) algorithm [1] and the fuzzy C-means (FCM) clustering algorithm [2], classify the pixels according to a particular image feature used as the segmentation criterion. In general, these statistical methods perform the classification with only one segmentation criterion. However, an image contains various kinds of features, and these features may vary spatially, so relying on any single criterion of this kind is imprecise. How to extract the features of an image and how to use them as segmentation criteria are therefore central questions for segmentation.

Many works use the difference between pixel intensities, together with their spatial connectivity, to assess whether two pixels belong to the same object. Active contour models based on the level set method [3] classify the pixels by only one image feature, namely, the image intensity under an assumed uniform distribution [4-6]. Nevertheless, the image intensity varies spatially and is not necessarily described by one specific distribution. To improve precision, the works of [7, 8] extract multiple features to handle more complex image content. At the same time, additional hand-set parameters are introduced, and tuning them requires experience.

The Polyakov action was introduced into image processing by Sochen et al. [9]. This framework differs from other segmentation methods in two ways. First, images are represented as Riemannian manifolds embedded in a higher-dimensional spatial-feature manifold. Second, the Polyakov action provides an efficient mathematical framework for embedding multiple image features in higher-dimensional Riemannian manifolds via harmonic maps. Bresson et al. [10] propose active contour models based on the Polyakov action; these models map several kinds of features, for example, color and texture, into a higher-dimensional space. Because they choose a metric with hand-set parameters on the feature space, careful manual parameter tuning is required.

In this paper, the proposed active contour model is formulated in the framework of the Polyakov action [9]. Unlike the related works [7-9], the metric on the feature space manifold is defined by the invariant geometry of the image itself. Consequently, the proposed method is based purely on the geometrical features of images, without any hand-set parameters. The segmentation is implemented in two steps. First, an approximated image, which removes noise while preserving the main structures, is computed in a feature space built on the geometrical features of the original image. Second, the active contour is embedded into a feature space built on both the statistical and geometrical features of the approximated image. For efficiency, the proposed model is solved via the improved Chambolle dual formulation [10] of the minimization problem.

The paper is organized as follows. Section 2 introduces the mathematical framework based on the Polyakov action. Section 3 presents the proposed model and summarizes its numerical algorithm. Section 4 validates the model with experiments on medical images. Section 5 concludes the paper.

2. Geometrical Framework Based on Weighted Polyakov Action

Sochen et al. introduced a general geometrical framework for low-level vision based on the Polyakov action [9]. In this framework, images are represented as surfaces on a Riemannian manifold. The Polyakov action is a functional that measures the weight of a mapping $X = (X^1(\sigma), \ldots, X^m(\sigma))$ between an $n$-dimensional embedded manifold (e.g., the image manifold) $\Sigma$ with coordinates $\sigma = (\sigma^1, \ldots, \sigma^n)$ and an $m$-dimensional manifold $M$ with coordinates $(X^1(\sigma), \ldots, X^m(\sigma))$, $m > n$. A Riemannian metric $g_{\mu\nu}$ measures local distances on the embedded manifold $\Sigma$, whereas the metric $h_{ij}$ measures distances on the manifold $M$. To measure the weight of the mapping $X : \Sigma \to M$, the Polyakov action is used as a generalization of the $L_2$-norm from the embedded image manifold to the spatial-feature manifold $M$:

S[X^i, g_{\mu\nu}, h_{ij}] = \int \sqrt{g}\, g^{\mu\nu}\, \partial_\mu X^i\, \partial_\nu X^j\, h_{ij}(X)\, d^n\sigma, (1)

where $g$ is the determinant of the image metric tensor $g_{\mu\nu}$ and $g^{\mu\nu}$ is its inverse. When the metric is chosen as the induced metric, obtained by the pullback relation $g_{\mu\nu} = h_{ij}\,\partial_\mu X^i\,\partial_\nu X^j$, the Polyakov energy reduces to

S(X^i, h_{ij}) = \int \sqrt{g}\, d^n\sigma. (2)

In the related works [7, 8], the authors obtain the denoised image and the segmentation result by minimizing the energy functional (2) with respect to denoising and segmentation, respectively. In the seminal work [9], grey images are embedded in the feature space $(x, y, I(x, y))$, where $I(x, y)$ is the grey intensity value at pixel $(x, y)$, and the metric is chosen as $[h_{ij}] = \mathrm{diag}(1, 1, \beta^2)$, where $\beta > 0$ is a constant. With this metric on the feature space and the Polyakov energy, the regularization term on the intensity values is $\int \sqrt{1 + \beta^2 |\nabla I|^2}$. Although this allows the scale of the feature dimension to be set independently of the spatial dimensions, the accuracy of that scale depends on the hand-set parameter $\beta$.
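As a concrete illustration of this regularizer (our own sketch, not code from [9]; the function name, the use of np.gradient for finite differences, and unit grid spacing are assumptions), the Beltrami area element for a grey image can be computed as follows:

    import numpy as np

    def beltrami_area(I, beta=1.0):
        # Approximates the regularizer  integral sqrt(1 + beta^2 * |grad I|^2) dx dy
        # for a grey image embedded as (x, y, I(x, y)) with metric diag(1, 1, beta^2).
        Iy, Ix = np.gradient(I.astype(float))                # derivatives along rows (y) and columns (x)
        sqrt_g = np.sqrt(1.0 + beta**2 * (Ix**2 + Iy**2))    # area element sqrt(det g)
        return sqrt_g.sum()

Larger values of beta penalize intensity variation more heavily, which is exactly the kind of hand tuning the proposed model seeks to avoid.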

3. The Active Contour Model in Multifeature Space

In this work, we utilize an improved geometrical framework based on the weighted Polyakov action without any hand-set parameters. First, an approximated image is obtained by embedding the image into a feature space built from the features of the original image. Second, given the approximated image, the active contour is driven by embedding the level set function into a higher-dimensional feature space composed of the geometrical and statistical features of the approximated image.

3.1. Approximating the Image under an Improved Geometrical Framework. The original image $I(x, y)$ is defined on the image manifold $\Sigma$ with coordinates $(x, y)$. The approximated image $u$ is also defined on the image manifold $\Sigma$ and denoted by $u(x, y)$. To preserve the main edges of the original image, we extract the geometrical edge feature $\nabla_S I = (I_x^3 + I_x I_y^2,\; I_y^3 + I_y I_x^2)$, derived from the anisotropic diffusion equation [2]. Taking the intensity value $I(x, y)$ as another feature, we build the feature space $(x, y, u, \nabla_S I(x, y) - \nabla_S u(x, y), I(x, y) - u(x, y))$, denoted by $(x, y, u, f_1, f_2)$ for simplicity. To avoid the influence of hand-set parameters, we choose a metric tensor $[h_{ij}]$ on the feature space $M$ defined by the invariant geometry of the original image $I$: $[h_{ij}] = \mathrm{diag}\bigl(1, 1, 1, \sqrt{1 + I_x^2 + I_y^2 + 2 I_x I_y},\; 1/\sqrt{1 + I_{xx} + I_{yy} + 2 I_{xy}}\bigr)$. The pullback relation yields the determinant of the metric tensor $[g_{\mu\nu}]$ on the manifold $\Sigma$:

[mathematical expression not reproducible]. (3)

By analogy with the Polyakov energy (2), we obtain the approximated image $u$ by minimizing the following energy functional:

[mathematical expression not reproducible], (4)

where the weight coefficient of the second term, corresponding to the third element of the metric $[h_{ij}]$, is given by the coefficients of the first fundamental form in differential geometry. When this weight coefficient is large, the edge structure is enhanced in the vicinity of edges; otherwise, smoothing of the image is strengthened. The weight coefficient of the third term, corresponding to the last element of the metric $[h_{ij}]$, is given by the coefficients of the second fundamental form. Approximation of the intensity $I(x, y)$ is strengthened when this coefficient is large, whereas smoothing of the image is strengthened when it is small.
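To make these geometric quantities concrete, the following sketch (our own illustration, not the authors' code; the finite-difference discretization via np.gradient and the small clipping constant are assumptions) computes the edge feature $\nabla_S I$ and the two metric weights:

    import numpy as np

    def geometric_features(I):
        I = I.astype(float)
        Iy, Ix = np.gradient(I)                  # first derivatives (rows = y, columns = x)
        Ixy, Ixx = np.gradient(Ix)               # second derivatives of Ix
        Iyy, _ = np.gradient(Iy)
        # Edge feature derived from the anisotropic diffusion equation
        grad_S = (Ix**3 + Ix * Iy**2, Iy**3 + Iy * Ix**2)
        # Weight tied to the first fundamental form (third metric entry)
        alpha1 = np.sqrt(1.0 + Ix**2 + Iy**2 + 2.0 * Ix * Iy)
        # Weight tied to the second fundamental form (last metric entry);
        # the np.maximum guard against a nonpositive argument is our own safeguard
        alpha2 = 1.0 / np.sqrt(np.maximum(1.0 + Ixx + Iyy + 2.0 * Ixy, 1e-8))
        return grad_S, alpha1, alpha2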

3.2. Active Contour Evolution under the Improved Geometrical Framework. The active contour is represented as the zero level set $\{\phi(x, y) = 0\}$ of the level set function $\phi$ on the image manifold $\Sigma$. To avoid the effects of intensity nonuniformity, we extract the statistical features $(c_1, c_2)$ on local regions of size $3 \times 3$, where $c_1$ and $c_2$ denote the mean intensity in the local region inside and outside the zero level set, respectively. The feature space is $(x, y, \phi(x, y), (u(x, y) - c_1)^2 - (u(x, y) - c_2)^2)$, denoted by $(x, y, \phi(x, y), f_3)$. The metric tensor defined on this feature space is $[\tilde{h}_{ij}] = \mathrm{diag}\bigl(1, 1, 1/\sqrt{1 + I_x^2 + I_y^2 + 2 I_x I_y},\; \phi\bigr)$. The pullback relation yields the determinant of the metric tensor $[\tilde{g}_{\mu\nu}]$ on the manifold $\Sigma$:

[mathematical expression not reproducible]. (5)

According to the Polyakov energy (2), we drive the curve evolution by minimizing the following energy functional:

[mathematical expression not reproducible], (6)

where the weight of the first term acts as an edge detector: the curve evolution tends to stop where this weight decreases to zero (near edges) and continues elsewhere.
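A possible discretization of the local statistical features used above (our own sketch; the 3 x 3 averaging via scipy.ndimage.uniform_filter and taking the sign of phi to mark the inside of the contour are assumptions):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_means(u, phi, eps=1e-8):
        # 3x3 local mean intensities inside (phi > 0) and outside (phi <= 0)
        # the zero level set, computed at every pixel.
        u = u.astype(float)
        inside = (phi > 0).astype(float)
        outside = 1.0 - inside
        c1 = uniform_filter(u * inside, size=3) / (uniform_filter(inside, size=3) + eps)
        c2 = uniform_filter(u * outside, size=3) / (uniform_filter(outside, size=3) + eps)
        return c1, c2

    def region_feature(u, c1, c2):
        # f3 = (u - c1)^2 - (u - c2)^2, the statistical feature embedded with phi
        return (u - c1)**2 - (u - c2)**2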

3.3. Dual Algorithm. To apply the dual gradient algorithm, we introduce the dual variable $p$. The total variation terms in (4) and (6) can be formulated as follows:

[mathematical expression not reproducible]. (7)

The approximate formulation of the energy of our model can then be rewritten as

[mathematical expression not reproducible], (8)

where $R(c_1, c_2, u) = |(u - c_1)^2 - (u - c_2)^2|$. We then apply the split Chambolle dual algorithm [10] to solve the optimization problem.

Introducing the auxiliary variables $v_1$ and $v_2$, solving the energy functional (8) is equivalent to the following minimization problem:

[mathematical expression not reproducible], (9)

where the parameter $\theta > 0$ is chosen small to avoid smearing the edges (in this paper, $\theta = 0.15$), $\nu(z) = \max\{0, 2|z - 1/2| - 1\}$ is an exact penalty function provided the constant $\gamma$ is chosen large enough compared to $\lambda$, namely $\gamma > (\lambda/2)\,\|R\|_{L^\infty(\Omega)}$, $\alpha_1 = \sqrt{1 + I_x^2 + I_y^2 + 2 I_x I_y}$, and $\alpha_2 = 1/\sqrt{1 + I_{xx} + I_{yy} + 2 I_{xy}}$. The minimization problem (9) can be split into the following four subproblems, which are solved alternately.
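For reference, the exact penalty and the lower bound on gamma read as follows in code (a minimal sketch; the function names are our own):

    import numpy as np

    def nu(z):
        # Exact penalty nu(z) = max(0, 2|z - 1/2| - 1); it vanishes exactly for
        # 0 <= z <= 1, which is what allows the constrained problem to be relaxed.
        return np.maximum(0.0, 2.0 * np.abs(z - 0.5) - 1.0)

    def gamma_lower_bound(R, lam):
        # The penalty is exact provided gamma > (lambda / 2) * ||R||_{L^inf(Omega)}.
        return 0.5 * lam * np.max(np.abs(R))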

(a) Given the image $I$ and $v_1$, update $u$: we search for $u$ as the solution of

[mathematical expression not reproducible]. (10)

The solution of (10) is given by

u = I - v_1 - \theta\, \mathrm{div}\, p_1, (11)

where $p_1 = (p^1, p^2)$ can be updated by a fixed point method, initializing $p_1 = 0$ and updating

[mathematical expression not reproducible]. (12)

In this paper, we choose $\delta \le 1/8$ to ensure convergence.
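A sketch of this dual update in the standard (unweighted) Chambolle form is given below. It is our own illustration under the assumption that (12) follows the classical fixed-point scheme of [10]; the weighted case would require a correspondingly modified projection. After the iteration converges, u is recovered from (11) with f = I - v_1.

    import numpy as np

    def grad(u):
        # Forward differences with Neumann boundary handling.
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        # Discrete divergence, the negative adjoint of grad.
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
        dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
        return dx + dy

    def chambolle_step(px, py, f, theta, delta=0.125):
        # One fixed-point update of the dual variable p for
        # min_u ||f - u||^2 / (2 theta) + TV(u); delta <= 1/8 ensures convergence.
        gx, gy = grad(div(px, py) - f / theta)
        denom = 1.0 + delta * np.sqrt(gx**2 + gy**2)
        return (px + delta * gx) / denom, (py + delta * gy) / denom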

(b) Given $u$ and $v_2$, we search for $\phi$ by solving the following minimization problem:

[mathematical expression not reproducible]. (13)

The solution of (13) is given by

\phi = v_2 - \theta\, \mathrm{div}\, p_2. (14)

Equation (14) is solved by a fixed point method:

[mathematical expression not reproducible]. (15)

(c) Given the solution $\phi$, we search for $v_2$ by solving

[mathematical expression not reproducible]. (16)

The solution of (16) is given by

v_2 = \min \{\max \{\phi - \theta R(c_1, c_2, u), 0\}, 1\}. (17)

(d) Given $I$ and $u$, we search for $v_1$ as the solution of

[mathematical expression not reproducible] (18)

The solution of (18) is given by

[mathematical expression not reproducible]. (19)

After $v_1$ is obtained, it is used in (12) for the next iteration.

The algorithm for minimizing our model is described in the following.

Step 1. Initialize $u^0 = \phi^0 = v_1^0 = p_1^0 = p_2^0 = 0$.

Step 2. Given the fixed maximum number of iterations $T_{\mathrm{iteration}} > 0$: if $k = T_{\mathrm{iteration}}$, then stop; else go to Step 3.

Step 3. Iterate to solve the subproblems.

Step 3.1. While $|u^k - u^{k-1}| > \epsilon$ do

Step 3.2. Update the dual variable $p_1^k \to p_1^{k+1}$ by (12) and then update $u^k \to u^{k+1}$ by (11); go to Step 3.3.

Step 3.3. Given $v_2^k$, update the dual variable $p_2^k \to p_2^{k+1}$ according to (15) and then update $\phi^k \to \phi^{k+1}$ by (14); go to Step 3.4.

Step 3.4. Given $\phi^{k+1}$ and $u^{k+1}$, compute $R(c_1, c_2, u)$ and update $v_2^k \to v_2^{k+1}$ by (17). Go to Step 3.5.

Step 3.5. Given $I$ and $u$, update $v_1^k \to v_1^{k+1}$ by (19); then return to Step 3.1 while its condition holds; otherwise, go to Step 4.

Step 4. End while.
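A schematic version of Steps 1-4, reusing div, chambolle_step, and local_means from the sketches above, might look as follows. This is our own reading of the algorithm, not the authors' code: the optional initial level set phi0 (the figure captions mention initial contours), the use of the unweighted Chambolle step, and the omission of the v_1 update (its formula (19) is not reproduced here) are all assumptions.

    import numpy as np
    # div, chambolle_step, local_means: see the earlier sketches in Sections 3.2 and 3.3

    def alternating_minimization(I, phi0=None, theta=0.15, delta=0.125,
                                 T_iteration=30, eps=1e-3):
        I = I.astype(float)
        u = np.zeros_like(I); v1 = np.zeros_like(I); v2 = np.zeros_like(I)
        phi = np.zeros_like(I) if phi0 is None else phi0.astype(float)
        p1 = (np.zeros_like(I), np.zeros_like(I))
        p2 = (np.zeros_like(I), np.zeros_like(I))
        for k in range(T_iteration):                       # Step 2: fixed iteration budget
            u_old = u
            # Step 3.2: dual update for p1, then recover u from (11)
            p1 = chambolle_step(p1[0], p1[1], I - v1, theta, delta)
            u = I - v1 - theta * div(p1[0], p1[1])
            # Step 3.3: dual update for p2, then recover phi from (14)
            p2 = chambolle_step(p2[0], p2[1], v2, theta, delta)
            phi = v2 - theta * div(p2[0], p2[1])
            # Step 3.4: local statistical features and the v2 update (17)
            c1, c2 = local_means(u, phi)
            R = np.abs((u - c1)**2 - (u - c2)**2)
            v2 = np.clip(phi - theta * R, 0.0, 1.0)
            # Step 3.5: v1 is updated by (19); that formula is not reproduced in the
            # text above, so v1 is left unchanged in this sketch.
            if np.max(np.abs(u - u_old)) <= eps:           # Step 3.1 stopping test
                break
        return u, phi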

4. Experimental Results

All the experiments are run with Matlab code on a PC with a 3.2 GHz CPU and 728 MB of RAM. We show the experimental results for medical image segmentation with the Chan-Vese (CV) model [11], the structure-based level set method (SLM) [12], and the region-scalable fitting (RSF) model [4]. Figure 1 shows the experiment on a synthetic noisy image of size 266 x 313 with 10% white Gaussian noise.

As shown in Figures 1(b) and 1(c), the CV and SLM models are sensitive to noise. As shown in Figure 1(d), the proposed model is robust to noise and obtains the correct boundary. The CV model generates unwanted contours because of the strong noise. Owing to its edge detector function, the SLM model is more robust to noise than the CV model. The result of the proposed model shows that it extracts the real object boundary even when the noise is strong.

Figure 2 shows a brain magnetic resonance image (MRI) of size 397 x 397 with 2% noise and 10% intensity nonuniformity. The brain MRI mainly consists of three parts: the cerebrospinal fluid, the gray matter, and the white matter. The cerebrospinal fluid is the dark matter that appears in two places: in the middle of the brain, surrounded by the gray matter, and in the gap between the cranium and the brain. The task in segmenting the brain MRI is to extract the contour between the white matter and the gray matter. Since the contrast of the boundary between the white matter and the gray matter is lower than that of the boundary between the cerebrospinal fluid and the gray matter, the latter is often wrongly extracted as the object boundary. As shown in Figures 2(b) and 2(c), the CV model and the SLM model cannot extract the low-contrast object boundary. Although the RSF model can extract the low-contrast boundaries, it also extracts some unwanted objects. The segmentation results of the proposed model show that it is robust to noise and can extract more low-contrast boundaries.

Figure 3 shows a brain MRI of size 258 x 258 with 5% noise and 40% intensity nonuniformity. As shown in Figures 3(b) and 3(c), the drawbacks of the RSF model and the SLM model persist. Compared with the other methods, the proposed method clearly extracts more low-contrast object boundaries between the gray matter and the white matter.

Figure 4 shows the segmentation results of the active contour methods based on multilayer level set functions, which allow the cerebrospinal fluid, the gray matter, and the white matter to be extracted simultaneously. In Figures 4(b) and 4(f), the results of the CV model show that the cerebrospinal fluid is not extracted completely. As shown in Figures 4(c) and 4(g), the boundaries between the white and gray matter are not extracted completely by the SLM model. In Figures 4(d) and 4(h), the results of the proposed method based on multilayer level set functions show that the complete boundaries of the cerebrospinal fluid, the gray matter, and the white matter are extracted simultaneously. To compare convergence speed, Table 1 reports the iteration numbers and the processing time per iteration for the CV model, the SLM model based on the steepest descent method, and the proposed model, with both single- and multilayer level set functions. Table 2 reports the segmentation accuracy of the compared active contour models based on multilayer level set functions.

The data in Figure 5 is downloaded from the website [13]. We also report the segmentation accuracy in Table 3 using the Dice metric [14], computed against the ground truth provided on that website. Figures 5(c) and 5(d) show that the CV model and the SLM model only extract the boundaries with high contrast, while the object boundaries between the gray matter and the white matter are not extracted, and some of the CSF is missed. The proposed method extracts more complete CSF boundaries and preserves the cerebral cortex more faithfully.
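For completeness, the Dice metric [14] used in Tables 2 and 3 can be computed for binary masks as follows (a standard formulation; the function name is our own):

    import numpy as np

    def dice(seg, truth):
        # Dice coefficient: 2 |A intersect B| / (|A| + |B|) for binary segmentation masks.
        seg = seg.astype(bool)
        truth = truth.astype(bool)
        return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())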

5. Conclusion

In this paper, we propose a new variational model for simultaneous image segmentation and image denoising. The approximated image is obtained by embedding the approximation criteria into a specific multifeature space, and the segmentation result is then obtained by embedding the active contour into another multifeature space composed of segmentation criteria that depend on the approximated image. The segmentation and denoising problems are solved alternately by the split Chambolle dual algorithm. Comparisons with other popular segmentation models demonstrate the accuracy and efficiency of the proposed model.

Additional Points

The following are the research highlights of this paper. The proposed variational model incorporates segmentation and denoising together. Segmentation and denoising are carried out alternately within the Polyakov action framework. The improved Polyakov action framework is based purely on the geometric features of the image, without any manual parameters. Minimizing the variational model is achieved by the improved Chambolle algorithm.

http://dx.doi.org/10.1155/2016/3909645

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Natural Science Foundation of China under Grants nos. 61502244, 61402239, and 71301081; the Science Foundation of Jiangsu Province under Grants nos. BK20150859, BK20130868, and BK20130877; the Science Foundation of Jiangsu Province University (15KJB520028); the NJUPT Talent Introduction Foundation (NY213007); the NJUPT Advanced Institute Open Foundation (XJKY14012); the China Postdoctoral Science Foundation (2015M580433, 2014M551637); and the Postdoctoral Science Foundation of Jiangsu Province (1401046C).

References

[1] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society Series B, vol. 39, no. 1, pp. 1-38, 1977.

[2] J. C. Bezdek, Pattern Recognition With Fuzzy Objective Function Algorithms, Plenum Press, New York, NY, USA, 1981.

[3] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," International Journal of Computer Vision, vol. 22, no. 1, pp. 61-79, 1997.

[4] C. Li, C.-Y. Kao, J. C. Gore, and Z. Ding, "Minimization of region-scalable fitting energy for image segmentation," IEEE Transactions on Image Processing, vol. 17, no. 10, pp. 1940-1949, 2008.

[5] A. K. Mishra, P. W. Fieguth, and D. A. Clausi, "Decoupled active contour (DAC) for boundary detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 310-324, 2011.

[6] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, 2011.

[7] Q. Ge, L. Xiao, J. Zhang, and Z. H. Wei, "An improved region-based model with local statistical features for image segmentation," Pattern Recognition, vol. 45, no. 4, pp. 1578-1590, 2012.

[8] H. Wu, V. Appia, and A. Yezzi, "Numerical conditioning problems and solutions for nonparametric i.i.d. statistical active contours," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1298-1311, 2013.

[9] N. Sochen, R. Kimmel, and R. Malladi, "A general framework for low level vision," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 310-318, 1998.

[10] X. Bresson, S. Esedoglu, P. Vandergheynst, J.-P. Thiran, and S. Osher, "Fast global minimization of the active contour/snake model," Journal of Mathematical Imaging and Vision, vol. 28, no. 2, pp. 151-167, 2007.

[11] D. Mumford and J. Shah, "Optimal approximations by piecewise smooth functions and associated variational problems," Communications on Pure and Applied Mathematics, vol. 42, no. 5, pp. 577-685, 1989.

[12] B. Dizdaroglu, E. Ataer-Cansizoglu, J. Kalpathy-Cramer, K. Keck, M. F. Chiang, and D. Erdogmus, "Structure-based level set method for automatic retinal vasculature segmentation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 1, article 39, 26 pages, 2014.

[13] http://www.cma.mgh.harvard.edu/ibsr/.

[14] L. R. Dice, "Measures of the amount of ecologic association between species," Ecology, vol. 26, no. 3, pp. 297-302, 1945.

Qi Ge, (1) Xiao-Yuan Jing, (2) Fei Wu, (2) Jingjie Yan, (1) and Hai-Bo Li (1,3)

(1) College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China

(2) College of Automation, Nanjing University of Posts and Telecommunications, Nanjing 210003, China

(3) KTH Royal Institute of Technology, 10044 Stockholm, Sweden

Correspondence should be addressed to Qi Ge; geqi@njupt.edu.cn

Received 28 April 2016; Accepted 26 June 2016

Academic Editor: Giuseppina Colicchio

Caption: FIGURE 1: Segmentation for synthetic noisy image.

Caption: FIGURE 2: Segmentation for brain MRI. (a) Original image and initial level set contour. (b) Segmentation result of the CV model with 200 iterations. (c) Segmentation result of the SLM with 100 iterations. (d) Segmentation result of the RSF model with 200 iterations. (e) Segmentation result of the proposed model with 30 iterations. (f) The approximated result of the proposed model.

Caption: FIGURE 3: Segmentation for brain MRI. (a) Original image and initial level set contour. (b) Segmentation result of the CV model with 200 iterations. (c) Segmentation result of the SLM with 100 iterations. (d) Segmentation result of the RSF model with 200 iterations. (e) Segmentation result of the proposed model with 30 iterations. (f) The approximated result of the proposed model.

Caption: FIGURE 4: Segmentation for brain MRIs by the active contour methods based on multilayer level set functions. (a) and (e) are original images and initial level set contours. (b) and (f) are segmentation results of the CV model with 100 iterations. (c) and (g) are segmentation results of the SLM with 50 iterations. (d) and (h) are the results of the proposed method with 30 iterations.

Caption: FIGURE 5: Segmentation for 3D brain image: (a) original image; (b) segmentation result of the CV model with 200 iterations; (c) segmentation result of the SLM model with 100 iterations; (d) segmentation result of the RSF model with 200 iterations; (e) segmentation result of the proposed model with 30 iterations; (f) the denoising result of the proposed model; (g) the 3D original image; (h) the 3D segmentation result of the proposed method; (i) the approximated 3D image.
TABLE 1: Iteration numbers and processing time in each iteration.

                                  CV model         SLM model        Proposed model

Single level set function         0.35 sec         0.23 sec         0.18 sec
Multilayer level set functions    1.47 sec         0.62 sec         0.39 sec
Iterations for convergence        100 iterations   50 iterations    30 iterations

TABLE 2: Quantitative evaluation for brain MR-data in Figure 4.

                     CV model   SLM model   Proposed model

Overall accuracy     65.2%      34.6%       91.5%
GM (Dice metric)     78.3%      82.4%       97.2%
WM (Dice metric)     69.0%      13.5%       94.7%

TABLE 3: Quantitative evaluation for brain MR-data in Figure 5.

                     CV model   SLM model   RSF model   Proposed model

Overall accuracy     15.4%      15.7%       82.6%       88.7%
GM (Dice metric)     13.5%      14.1%       79.6%       90.4%
WM (Dice metric)     16.2%      16.8%       83.7%       87.9%
