
A Novel Fuzzy Level Set Approach for Image Contour Detection.

1. Introduction

Image segmentation is an important component of image analysis and computer vision. The results of segmentation are not always satisfactory because of low contrast, blurry boundaries, noise, and inhomogeneous intensities. Hence, image segmentation is still a quite difficult task [1].

Recently, active contour models have attracted great attention from researchers [2]. There are two kinds of active contour models. One is the snake model, which defines a parametric curve; all snake properties and behavior are specified through an energy function, and a partial differential equation controls the snake, making it evolve to reduce the energy [3]. The physical analogy can be extended: the motion of the snake can be viewed as driven by a simulated force acting on it. The other is the geometric model [4, 5]. The main difference between the two approaches is that the geometric active contour introduces a level set function representing the evolving curve into the energy function. The implicit boundary representation does not depend on a specific parameterization, and no control point mechanism needs to be employed during propagation. Level set-based active contour models have many advantages; the most important one is the ability to track topological variations of the boundary curves [6, 7].

Level set-based segmentation models can be divided into four categories: "region-based" [5, 8-11], "edge-based" [4, 12-14], "shape-prior" [15, 16], and "multifeature-based" [17]. Region-based approaches utilize global information such as region statistics, region means, and weighted means of constrained or scalable neighboring regions, and can produce more semantically meaningful results [18]. Their major limitation is that it is very difficult to define a suitable region descriptor for images with inhomogeneous intensities [19]. Some methods are based on a general piecewise model [20, 21]; these do not assume homogeneity of the intensities and are therefore able to segment images with inhomogeneous intensities, and they have been actively studied recently [5, 8]. However, such methods are quite sensitive to noise and cannot work well with blurry and weak edges, as will be discussed in detail in the Experimental Results.

The shape-prior approach incorporates shape information: it introduces a representation for deformable shapes and defines a probability distribution over the variances of a set of training shapes. Obviously, such methods cannot work well if the prior shapes of the objects are unknown.

Edge-based approaches do not need to assume homogeneity of image intensities; hence, they can be applied even to images with inhomogeneous intensities [12] and have been applied to many segmentation tasks. An edge representation is used to find the boundary curve with a strong edge response [13]. However, these approaches suffer from serious boundary leakage problems, especially when the objects have inhomogeneous intensities, and in many cases they cannot converge on the real boundaries of the objects either [22]. This will also be discussed in detail in the Experimental Results.

Multifeature-based approaches use the surround inhibition weights of individual features, including orientation, luminance, and luminance contrast. Features are combined according to a scale-guided strategy, and the combined weights are then used to modulate the final surround inhibition of the objects [17].

Blurry and weak edges, noise, and inhomogeneous intensities cause uncertainty and fuzziness, which result in poor segmentation outcomes. In fact, images carry uncertainty and fuzziness for the following reasons. (1) When a 3D scene is projected to 2D, some information is lost. (2) Some image concepts, such as edges and contrast, are uncertain and fuzzy. For instance, an edge is defined by an intensity difference between a pixel and its neighbors, but how large the difference should be is not precisely specified; it is ultimately task-dependent. Therefore, fuzzy logic should be used to handle the uncertainty and fuzziness of images [23, 24].

Many methods try to detect contours by finding high gradients of color or gray levels; as a result, they are very sensitive to noise and textures. Entropies can be used as a measure of dissimilarity, or inverse cohesion, between two or more probability distributions [25]. For example, in [26], a thresholding scheme is proposed to minimize the Tsallis cross-entropy between the original image and the thresholded image, from which the contours of the objects are obtained. Average Entropy (AE) is defined as a new information measure of regions in an image [27].

In this paper, a novel fuzzy level set active contour model is proposed. It combines the advantages of fuzzy logic and level set theory and generates much better segmentation results. In the experiments, a series of images is employed to evaluate the proposed method and compare it with existing segmentation algorithms. The experimental results demonstrate that the proposed approach can segment both synthetic and real images satisfactorily. Furthermore, it makes the evolving function converge to the real boundaries even with low contrast, inhomogeneous intensities, and blurry edges.

The rest of the paper is organized as follows. Section 2 describes the proposed fuzzy level set approach for segmentation. A variety of images have been used to test and validate the proposed approach (due to the page limit, only a few of them are shown here), and the performance is evaluated in Section 3. Finally, the conclusions are summarized in Section 4.

2. Proposed Fuzzy Level Set Approach

The proposed fuzzy edge-based level set approach consists of the following major components: fuzzification, fuzzy energy function, and evolution equation.

2.1. Image Fuzzification. Assume that the size of image I is M x N, and I(x, y) is the gray level of the pixel at coordinates (x, y). In order to apply fuzzy logic to deal with the fuzziness and uncertainty of the image, a suitable membership function is necessary. The most commonly used membership function is the standard S-function [24]:

$$\mu(I(x,y)) = \begin{cases} 0, & I(x,y) \le a, \\ 2\left(\dfrac{I(x,y)-a}{c-a}\right)^{2}, & a < I(x,y) \le b, \\ 1 - 2\left(\dfrac{I(x,y)-c}{c-a}\right)^{2}, & b < I(x,y) \le c, \\ 1, & I(x,y) > c. \end{cases} \quad (1)$$

The value of μ(I(x, y)) represents the membership of I(x, y), which is simplified as μ(x, y), and the fuzzified image is denoted by μ. Parameters a, b, and c determine the shape of the S-function. According to information theory, the maximum entropy corresponds to the maximum information, so we use the maximum fuzzy entropy principle to determine a, b, and c: we seek the combination of parameters that maximizes the entropy:

$$H(I) = \frac{1}{MN\ln 2}\sum_{x=1}^{M}\sum_{y=1}^{N} S_n\bigl(\mu(x,y)\bigr), \quad (2)$$

where H(I) is the entropy of the image and S_n(·) is the Shannon function:

$$S_n\bigl(\mu(x,y)\bigr) = -\mu(x,y)\ln\mu(x,y) - \bigl(1-\mu(x,y)\bigr)\ln\bigl(1-\mu(x,y)\bigr). \quad (3)$$

There are many ways to find the maximum of (2), such as simulated annealing, neural networks, and genetic algorithms. In this paper, we use the simulated annealing algorithm [24] to find the optimum values (a_opt, b_opt, c_opt) and to avoid getting stuck in local optima:

$$(a_{\mathrm{opt}}, b_{\mathrm{opt}}, c_{\mathrm{opt}}) = \arg\max_{a,b,c} H(I), \qquad I_{\min} \le a < b < c \le I_{\max}, \quad (4)$$

where I_min and I_max are the minimum and maximum intensity values of the image, respectively.
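As a concrete illustration, the fuzzification step of (1)-(4) can be sketched in numpy. This is only a sketch under our assumptions: the function names are ours, an exhaustive grid search with b = (a + c)/2 stands in for the simulated annealing the paper actually uses, and the step size of the search is a hypothetical choice.

```python
import numpy as np

def s_function(I, a, b, c):
    # Standard S-function membership of Eq. (1)
    I = np.asarray(I, dtype=float)
    mu = np.empty_like(I)
    mu[I <= a] = 0.0
    lo = (I > a) & (I <= b)
    mu[lo] = 2.0 * ((I[lo] - a) / (c - a)) ** 2
    hi = (I > b) & (I <= c)
    mu[hi] = 1.0 - 2.0 * ((I[hi] - c) / (c - a)) ** 2
    mu[I > c] = 1.0
    return mu

def fuzzy_entropy(image, a, b, c):
    # H(I) of Eq. (2) using the Shannon function S_n of Eq. (3);
    # clipping avoids log(0), taking 0*log(0) as 0
    mu = np.clip(s_function(image, a, b, c), 1e-12, 1.0 - 1e-12)
    s_n = -mu * np.log(mu) - (1.0 - mu) * np.log(1.0 - mu)
    return s_n.sum() / (image.size * np.log(2.0))

def maximize_entropy(image, step=8.0):
    # Exhaustive search for (a_opt, b_opt, c_opt) under the constraint
    # I_min <= a < b < c <= I_max of Eq. (4); the crossover point
    # b = (a + c) / 2 is assumed to keep the search two-dimensional.
    i_min, i_max = float(image.min()), float(image.max())
    best, best_h = None, -np.inf
    for a in np.arange(i_min, i_max, step):
        for c in np.arange(a + 2.0 * step, i_max + 1.0, step):
            h = fuzzy_entropy(image, a, 0.5 * (a + c), c)
            if h > best_h:
                best, best_h = (a, 0.5 * (a + c), c), h
    return best
```

Simulated annealing would explore the same objective but accept occasional downhill moves; the grid search above is merely the simplest way to see the criterion in action.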

With the above fuzzification process, according to information theory we retain the maximum information when transforming the image from the spatial domain to the fuzzy domain. In addition, the S-function can enhance the image [23]; that is, it can strengthen weak edges and further prevent leakage.

Then, the original image I is transformed into the fuzzified image μ according to (1).

2.2. Fuzzy Energy Function and Evolution Equation. Considering a fuzzified image μ as a real positive function defined on domain Ω, the boundaries are defined as the fuzzy zero level set of φ_μ(x, y), which will be abbreviated as φ_μ. Given the fuzzy level set function φ_μ, the basic form of the energy function in the ordinary space [10] can be adapted and transformed into the fuzzy domain:

$$E(\phi_\mu) = \alpha P(\phi_\mu) + \beta L_g(\phi_\mu) + \gamma A_g(\phi_\mu), \quad (5)$$

where α > 0 is a parameter controlling the penalty on the deviation of φ_μ from a signed distance function, β and γ are positive constants, and P(φ_μ) is the penalizing term defined by the integral below:

$$P(\phi_\mu) = \int_\Omega \frac{1}{2}\bigl(|\nabla\phi_\mu| - 1\bigr)^{2}\,dx\,dy. \quad (6)$$

P(φ_μ) characterizes how close the function φ_μ is to a signed distance function on the domain Ω.

An external energy for the function φ_μ is defined as

$$E_{\mathrm{ext}}(\phi_\mu) = \beta L_g(\phi_\mu) + \gamma A_g(\phi_\mu), \qquad L_g(\phi_\mu) = \int_\Omega g\,\delta(\phi_\mu)\,|\nabla\phi_\mu|\,dx\,dy, \qquad A_g(\phi_\mu) = \int_\Omega g\,H(-\phi_\mu)\,dx\,dy, \quad (7)$$

where δ is the univariate Dirac function, H is the Heaviside function, L_g(φ_μ) is the length of the zero level curve, A_g(φ_μ) is the weighted area of the region inside the curve and is used to speed up the curve evolution, and g is the fuzzy edge indicator function:

$$g = \frac{1}{1 + |\nabla (G_\sigma * \mu)|^{2}}, \quad (8)$$

where G_σ is the Gaussian kernel with standard deviation σ and ∗ is the convolution operator. The energy function drives the fuzzy zero level curve towards the boundaries and stops the evolution where the boundary response is strongest.
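A numpy-only sketch of the fuzzy edge indicator (8), computed once on the fuzzified image μ before the evolution begins. The separable truncated-Gaussian convolution and all function names are our own choices, not the paper's.

```python
import numpy as np

def gaussian_smooth(img, sigma):
    # Separable convolution with a truncated Gaussian kernel G_sigma
    radius = max(1, int(3.0 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)

def edge_indicator(mu_img, sigma=1.5):
    # g = 1 / (1 + |grad(G_sigma * mu)|^2), Eq. (8):
    # close to 0 on strong edges, close to 1 in flat regions
    smoothed = gaussian_smooth(np.asarray(mu_img, dtype=float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)
```

On a sharp step edge g dips well below 1 at the transition, which is exactly what slows the evolving curve there.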

Minimizing the energy function drives the evolution of the fuzzy zero level curve along the corresponding gradient flow. Using the variational method, the associated gradient flow is derived as

$$\begin{aligned} \frac{\partial \phi_\mu}{\partial t} &= \alpha\left[\Delta\phi_\mu - \operatorname{div}\!\left(\frac{\nabla\phi_\mu}{|\nabla\phi_\mu|}\right)\right] + \beta\,\delta(\phi_\mu)\operatorname{div}\!\left(g\,\frac{\nabla\phi_\mu}{|\nabla\phi_\mu|}\right) + \gamma\, g\,\delta(\phi_\mu),\\ \phi_\mu(x,y,0) &= \phi_\mu^{0}(x,y), \qquad \left.\frac{\partial\phi_\mu}{\partial \vec{n}}\right|_{\partial\Omega} = 0, \end{aligned} \quad (9)$$

where φ⁰_μ is the initial condition defined in the fuzzy domain and the last equation in (9) is the boundary condition. The initial condition can be formulated as

$$\phi_\mu^{0}(x,y) = \begin{cases} -d_0, & (x,y) \in \Omega_0 \setminus \partial\Omega_0, \\ 0, & (x,y) \in \partial\Omega_0, \\ d_0, & (x,y) \in \Omega \setminus \Omega_0, \end{cases} \quad (10)$$

where d₀ is a predetermined constant larger than 2ε (set to 4 in all experiments) and ∂Ω₀ is the initial boundary.
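This binary-step initialization is straightforward to implement; a sketch, assuming the initial region Ω₀ is supplied as a boolean mask (the helper name is ours):

```python
import numpy as np

def initial_level_set(mask, d0=4.0):
    # Eq. (10): -d0 inside the initial region Omega_0 (mask True),
    # +d0 outside; the zero level set sits on the region boundary
    return np.where(np.asarray(mask, dtype=bool), -d0, d0)
```

For a rectangular initial contour, one simply sets `mask[r0:r1, c0:c1] = True`.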

For numerical calculation, the Dirac function δ(·) is smoothed as

$$\delta_\varepsilon(x) = \begin{cases} 0, & |x| > \varepsilon, \\ \dfrac{1}{2\varepsilon}\left(1 + \cos\dfrac{\pi x}{\varepsilon}\right), & |x| \le \varepsilon, \end{cases} \quad (11)$$

where ε = 1 is used in all experiments.
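The smoothed Dirac (11) translates directly into code (the function name is ours). It is supported on |x| ≤ ε, integrates to 1, and peaks at 1/ε at x = 0:

```python
import numpy as np

def dirac_eps(x, eps=1.0):
    # Smoothed Dirac of Eq. (11): (1 / (2 eps)) * (1 + cos(pi x / eps))
    # for |x| <= eps, and 0 elsewhere
    x = np.asarray(x, dtype=float)
    bump = (0.5 / eps) * (1.0 + np.cos(np.pi * x / eps))
    return np.where(np.abs(x) <= eps, bump, 0.0)
```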

Equation (9) is discretized by the central difference, and the approximation is

$$\phi_\mu^{\,n+1} = \phi_\mu^{\,n} + \Delta t\, L\bigl(\phi_\mu^{\,n}\bigr), \quad (12)$$

where L(φ_μ) denotes the right-hand side of (9) and the spatial derivatives are approximated by the central difference operator D⁰.

The flowchart of the proposed method is described in Figure 1. The steps of the proposed method are summarized as follows:

(1) Fuzzify the original image I into μ using (1).

(2) Calculate fuzzy edge indicator function g using (8).

(3) Initialize the fuzzy level set function φ⁰_μ using (10).

(4) Calculate φ_μ^(n+1) from φ_μ^n by (12).

(5) Check whether φ_μ has converged; if it is not steady and the predetermined number of iterations has not been reached, go to step (4).
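The evolution loop of steps (3)-(5) can be sketched compactly in numpy. This is only a sketch under our assumptions: the update follows the DRLSE-style right-hand side of (9) as in [4], adapted to the fuzzy domain; `np.gradient` and a periodic five-point Laplacian stand in for the central difference operator D⁰; and the function names and default parameters are ours, not the paper's.

```python
import numpy as np

def evolve_step(phi, g, alpha, beta, gamma, dt, eps=1.0):
    # One explicit step phi^{n+1} = phi^n + dt * L(phi^n), Eq. (12),
    # where L(phi) is the right-hand side of the gradient flow (9)
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-10     # avoid division by zero
    nx, ny = gx / mag, gy / mag                  # grad(phi) / |grad(phi)|
    _, dnx = np.gradient(nx)
    dny, _ = np.gradient(ny)
    curvature = dnx + dny                        # div(grad(phi)/|grad(phi)|)
    # periodic five-point Laplacian (a simple stand-in for D^0)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    # smoothed Dirac of Eq. (11), gating the edge and area terms
    delta = np.where(np.abs(phi) <= eps,
                     (0.5 / eps) * (1.0 + np.cos(np.pi * phi / eps)), 0.0)
    ggy, ggx = np.gradient(g)
    # div(g * n) expanded as g * div(n) + grad(g) . n
    edge_term = delta * (g * curvature + ggx * nx + ggy * ny)
    return phi + dt * (alpha * (lap - curvature) +
                       beta * edge_term + gamma * g * delta)

def segment(phi, g, alpha=0.2, beta=0.5, gamma=0.3, dt=1.0, n_iter=100):
    # Steps (4)-(5): iterate until the iteration budget is exhausted;
    # note alpha * dt = 0.2 < 0.25, matching the paper's stability rule
    # (a steadiness test on phi could be added as a second stopping rule)
    for _ in range(n_iter):
        phi = evolve_step(phi, g, alpha, beta, gamma, dt)
    return phi
```

With g = 1 everywhere (no edges) and γ > 0, the contour shrinks under the area term while the penalty term keeps φ_μ close to a signed distance function, which is the qualitative behavior the section describes.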

3. Experimental Results

We conduct five groups of experiments using synthetic and real images. The same images were also tested with the IGAC (improved geometric active contours) model [4] and the LIF (local image fitting) model [5]. Due to the page limit, we only use a few of the images to demonstrate the effectiveness and usefulness of the proposed approach here. The parameters are set as follows. The time step Δt can be chosen from 0.1 to 100 and is set to 10 here; Δt and the coefficient α must satisfy α·Δt < 0.25. The coefficient β determines the smoothness of the zero level curve and can be chosen from 1 to 30. The coefficient γ of the weighted area term should be positive so that the contours shrink faster. We use α = 1/(5Δt), β = 6, and γ = 3. All parameters are determined by experiments.

In experiment 1, both the object and the background are homogeneous. The LIF method does not work well, as shown in Figure 2(b); the IGAC model performs relatively well with minor errors, as shown in Figure 2(c); the proposed method detects the boundaries even better, as shown in Figure 2(d).

In experiment 2, a more complex image with inhomogeneous intensities is tested: the background is homogeneous and the object is inhomogeneous with stepwise gray values. This experiment demonstrates that the proposed method performs better than the IGAC model on inhomogeneous images. The LIF method fails completely and cannot converge, as shown in Figure 3(b). In Figure 3(c), four regions of the object are wrongly segmented. This is because the IGAC model tends to drive the zero level curve towards the boundaries corresponding to the gradients and to stop evolving at the strongest boundary response; however, in many cases the real boundary may not have the strongest response, and the IGAC model lacks sufficient global knowledge to capture it.

In experiment 3, the proposed approach, the LIF method, and the IGAC method are applied to a real image from the Amsterdam Library of Object Images (ALOI) [28]. The result of LIF is again very poor, as shown in Figure 4(b). With the IGAC method, the ill-defined border of the box is not well connected due to leakage through the weak edges. The result of the proposed approach is shown in Figure 4(d), where the border is well connected and correctly detected.

We have also tested many images with low contrast and nonuniform illumination selected from ALOI. We can observe from Figures 5 and 6 that the proposed method produces good results, and the shapes and edges of the objects are extracted much better. The IGAC method tends to converge to the interior of the objects and obtains wrong boundaries; leakage occurs at the weak edges. The LIF method performs the poorest among these methods, as shown in Figures 5(b) and 6(b).

In experiment 4, the methods are applied to real images from other sources. The LIF method generates too many segments, as shown in Figures 7(b) and 8(b). More background regions are wrongly covered by the IGAC method, as shown in Figure 7(c). The proposed method captures the complex boundaries more accurately and achieves better performance than both the IGAC and LIF methods.

In experiment 5, we use real breast ultrasound (BUS) images [29] to evaluate the IGAC, LIF, and proposed methods. The images are very noisy, with low contrast and inhomogeneous intensities. Due to the high level of inherent speckle noise, LIF produces oversegmentation, as shown in Figures 9(b) and 10(b). In Figure 9(c), IGAC converges to a false boundary because the image is noisy and the tumor boundary is blurry. In Figure 10(c), although the tumor boundary is quite clear, IGAC still produces a wrong segmentation due to leakage. The proposed method obtains accurate results, as shown in Figures 9(d) and 10(d).

For evaluating segmentation results, three area error metrics were used: the true positive (TP) ratio, the false positive (FP) ratio, and the similarity (SI) ratio [30, 31]. They are widely used for evaluating segmentation performance. Let A_a be the object region selected by the algorithm and A_m the corresponding real object region; the three error metrics are

$$\mathrm{TP} = \frac{|A_m \cap A_a|}{|A_m|}, \qquad \mathrm{FP} = \frac{|A_a \cup A_m| - |A_m|}{|A_m|}, \qquad \mathrm{SI} = \frac{|A_a \cap A_m|}{|A_a \cup A_m|}. \quad (13)$$

The object regions obtained by the algorithms (A_a) are compared with the manual delineations (A_m), which are considered the ground truths. A higher TP ratio means that more of the real object region A_m is covered by A_a; a lower FP ratio means that less background region is covered by A_a; and a higher SI ratio implies that A_a is more similar to A_m, that is, the overall performance is better. Since LIF oversegments all the images and cannot find the major regions in the background and objects, the following discussion does not include the LIF results. The performances of the IGAC model and the proposed method are listed in Table 1.
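For boolean masks, the three metrics are a few lines of numpy. This sketch assumes the common conventions FP = |A_a \ A_m| / |A_m| and SI = |A_a ∩ A_m| / |A_a ∪ A_m|; the function name is ours.

```python
import numpy as np

def area_error_metrics(A_a, A_m):
    # Eq. (13): TP, FP, SI ratios, where A_a is the region found by the
    # algorithm and A_m is the manually delineated ground truth
    A_a = np.asarray(A_a, dtype=bool)
    A_m = np.asarray(A_m, dtype=bool)
    inter = float(np.logical_and(A_a, A_m).sum())
    union = float(np.logical_or(A_a, A_m).sum())
    tp = inter / A_m.sum()
    fp = float(np.logical_and(A_a, ~A_m).sum()) / A_m.sum()
    si = inter / union
    return tp, fp, si
```

Note that TP and FP are both normalized by the ground-truth area |A_m|, so FP can exceed 1 when the algorithm badly overshoots the object.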

The TP ratios of the proposed method are much higher than those of the IGAC model (especially in the second and third rows of Table 1), indicating that the proposed method segmented the real object regions in all images more accurately. Because of the low contrast of the edges, there are many local minima; the IGAC model may converge to some of them, and its TP ratios can be extremely low (Table 1). The FP ratios of the IGAC model are much higher than those of the proposed method, meaning that many background regions are included in the object regions generated by the IGAC model. In addition, such unsuitable regions cannot be cut off easily, and the errors directly affect the subsequent analysis. The proposed method handles blurry and weak boundaries well, and its segmentation results are more accurate and reliable. In the last row of Table 1, the FP ratio of the proposed method is slightly higher than that of the IGAC model. Because of the weak edges and blurry boundaries, the evolving function of the IGAC method tends to converge to the interior of the object; therefore, even though it has a lower FP ratio, its segmentation is severely wrong. Nevertheless, the proposed method has much higher SI ratios than the IGAC model, which demonstrates that its overall performance is much better.

4. Conclusions

In this paper, we have developed a novel level set active contour method based on fuzzy logic and variational theory. The proposed approach is more effective than existing level set methods in image segmentation due to its capability of handling fuzziness and uncertainty. Three popular area error metrics are used for evaluating segmentation performance, and the proposed method and other popular methods (the IGAC model and the LIF method) are applied to the same images for comparison. The experimental results demonstrate that the proposed method is more accurate and robust even with weak boundaries, noise, and inhomogeneous intensities, because it takes advantage of both level set theory and fuzzy logic. It may find wide applications in related areas.

Competing Interests

The authors declare that they have no competing interests.


Acknowledgments

This work is supported, in part, by the National Natural Science Foundation of China and the Civil Aviation Administration of China (Grant no. U1433103).


References

[1] Y. Wu and C. He, "A convex variational level set model for image segmentation," Signal Processing, vol. 106, pp. 123-133, 2015.

[2] D. Lui, C. Scharfenberger, K. Fergani, A. Wong, and D. A. Clausi, "Enhanced decoupled active contour using structural and textural variation energy functionals," IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 855-869, 2014.

[3] X. Gao, B. Wang, D. Tao, and X. Li, "A relay level set method for automatic image segmentation," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 2, pp. 518-525, 2011.

[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Level set evolution without re-initialization: a new variational formulation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 430-436, San Diego, Calif, USA, June 2005.

[5] K. Zhang, H. Song, and L. Zhang, "Active contours driven by local image fitting energy," Pattern Recognition, vol. 43, no. 4, pp. 1199-1206, 2010.

[6] J. Lie, M. Lysaker, and X.-C. Tai, "A binary level set model and some applications to Mumford-Shah image segmentation," IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1171-1181, 2006.

[7] Z. Lu, G. Carneiro, and A. P. Bradley, "An improved joint optimization of multiple level set functions for the segmentation of overlapping cervical cells," IEEE Transactions on Image Processing, vol. 24, no. 4, pp. 1261-1272, 2015.

[8] L. Wang, C. Li, Q. Sun, D. Xia, and C.-Y. Kao, "Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation," Computerized Medical Imaging and Graphics, vol. 33, no. 7, pp. 520-531, 2009.

[9] A. Dubrovina-Karni, G. Rosman, and R. Kimmel, "Multi-region active contours with a single level set function," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 8, pp. 1585-1601, 2015.

[10] R. Ronfard, "Region-based strategies for active contour models," International Journal of Computer Vision, vol. 13, no. 2, pp. 229-251, 1994.

[11] C. Samson, L. Blanc-Feraud, G. Aubert, and J. Zerubia, "A variational model for image classification and restoration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 5, pp. 460-472, 2000.

[12] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243-3254, 2010.

[13] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," International Journal of Computer Vision, vol. 22, no. 1, pp. 61-79, 1997.

[14] A. Vasilevskiy and K. Siddiqi, "Flux maximizing geometric flows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1565-1578, 2002.

[15] M. Rousson and N. Paragios, "Shape priors for level set representations," in Proceedings of the 7th European Conference on Computer Vision (ECCV '02), pp. 416-418, IEEE, Copenhagen, Denmark, 2002.

[16] T. Chan and W. Zhu, "Level set based shape prior segmentation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 1164-1170, San Diego, Calif, USA, June 2005.

[17] K.-F. Yang, C.-Y. Li, and Y.-J. Li, "Multifeature-based surround inhibition improves contour detection in natural images," IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5020-5032, 2014.

[18] C. Li, R. Huang, Z. Ding, J. Gatenby, D. N. Metaxas, and J. C. Gore, "A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 2007-2016, 2011.

[19] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266-277, 2001.

[20] A. Tsai, A. Yezzi Jr., and A. S. Willsky, "Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE Transactions on Image Processing, vol. 10, no. 8, pp. 1169-1186, 2001.

[21] L. A. Vese and T. F. Chan, "A multiphase level set framework for image segmentation using the Mumford and Shah model," International Journal of Computer Vision, vol. 50, no. 3, pp. 271-293, 2002.

[22] B. Liu, H. D. Cheng, J. Huang, J. Tian, X. Tang, and J. Liu, "Probability density difference-based active contour for ultrasound image segmentation," Pattern Recognition, vol. 43, no. 6, pp. 2028-2042, 2010.

[23] Y. Zhang, H. D. Cheng, J. Tian, J. Huang, and X. Tang, "Fractional subpixel diffusion and fuzzy logic approach for ultrasound speckle reduction," Pattern Recognition, vol. 43, no. 8, pp. 2962-2970, 2010.

[24] H. D. Cheng and J.-R. Chen, "Automatically determine the membership function based on the maximum entropy principle," Information Sciences, vol. 96, no. 3-4, pp. 163-182, 1997.

[25] Q. D. Katatbeh, J. Martinez-Aroza, J. F. Gomez-Lopera, and D. Blanco-Navarro, "An optimal segmentation method using jensen-shannon divergence via a multi-size sliding window technique," Entropy, vol. 17, no. 12, pp. 7996-8006, 2015.

[26] F. Y. Nie, "Tsallis cross-entropy based framework for image segmentation with histogram thresholding," Journal of Electronic Imaging, vol. 24, no. 1, Article ID 013002, 2015.

[27] O. A. Kittaneh, M. A. Khan, M. Akbar, and H. A. Bayoud, "Average entropy: a new uncertainty measure with application to image segmentation," The American Statistician, vol. 70, no. 1, pp. 18-24, 2016.

[28] J.-M. Geusebroek, G. J. Burghouts, and A. W. M. Smeulders, "The Amsterdam library of object images," International Journal of Computer Vision, vol. 61, no. 1, pp. 103-112, 2005.

[29] M. Xian, Y. Zhang, and H. D. Cheng, "Fully automatic segmentation of breast ultrasound images based on breast characteristics in space and frequency domains," Pattern Recognition, vol. 48, no. 2, pp. 485-497, 2015.

[30] J. Shan, H. D. Cheng, and Y. Wang, "Completely automated segmentation approach for breast ultrasound images using multiple-domain features," Ultrasound in Medicine and Biology, vol. 38, no. 2, pp. 262-275, 2012.

[31] H. Shao, Y. Zhang, M. Xian, and H. D. Cheng, "A saliency model for automated tumor detection in breast ultrasound images," in Proceedings of the IEEE International Conference on Image Processing, pp. 1424-1428, Quebec City, Canada, September 2015.

Yingjie Zhang, (1) Jianxing Xu, (1) and H. D. Cheng (2)

(1) College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China

(2) Department of Computer Science, Utah State University, Logan, UT 84322, USA

Correspondence should be addressed to Yingjie Zhang;

Received 24 January 2016; Revised 1 May 2016; Accepted 10 May 2016

Academic Editor: Erik Cuevas

Caption: Figure 1: The flowchart of the proposed method based on fuzzy sets.

Caption: Figure 2: Results of Leaf. (a) Original image. (b) Result by LIF method. (c) Result by IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 3: Results of Rabbit. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 4: Results of Box. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 5: Results of Diabolo. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 6: Results of Coffee can. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 7: Results of Swan. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 8: Results of Boat. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 9: Results of BUS image 1. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.

Caption: Figure 10: Results of BUS image 2. (a) Original image. (b) Result by LIF method. (c) Result by the IGAC model. (d) Result by fuzzy based approach.
Table 1: Performance of IGAC method and fuzzy based approach.

Experiment     Image         Method                TP (%)   FP (%)   SI (%)

Experiment 1   Leaf          IGAC method            99.22     1.85    98.97
                             The proposed method    99.85     0.84    99.76

Experiment 2   Rabbit        IGAC method            24.93     1.63    24.01
                             The proposed method    99.79     1.20    99.27

Experiment 3   Box           IGAC method            24.32     1.74    24.15
                             The proposed method    98.83     0.22    98.53
               Diabolo       IGAC method            68.37     0.05    68.35
                             The proposed method    99.68     0.04    99.66
               Coffee can    IGAC method            72.19     0.03    72.17
                             The proposed method    99.91     0.02    99.90

Experiment 4   Swan          IGAC method            94.73    14.24    83.51
                             The proposed method    97.26     3.53    94.74
               Boat          IGAC method            91.03     5.44    86.01
                             The proposed method    96.75     3.42    94.11

Experiment 5   BUS image 1   IGAC method           100.00    17.53    87.85
                             The proposed method    99.56     1.57    98.42
               BUS image 2   IGAC method            75.21     0.33    74.75
                             The proposed method    98.96     0.48    98.03

Publication: Mathematical Problems in Engineering, January 2016.