
Selective Feature Fusion Based Adaptive Image Segmentation Algorithm.

1. Introduction

Image segmentation plays a significant role in computer vision and can be applied to various fields such as region proposal generation [1-3], face recognition [4], and disease detection [5-8]. There are many kinds of image segmentation algorithms, such as edge-based, region-based, threshold-based, and graph-based algorithms. Algorithms based on edges [7, 9, 10] achieve good performance on images where the boundary of the object is distinct, but these methods are less resistant to noise and require higher image quality. Region-based [11-13] and threshold-based [14-16] segmentation methods merge pixels into regions by features such as color, texture, or their combination. However, for a given image, some features are redundant and play no role in segmentation; merging them with the others demands a large amount of computation. Algorithms based on graph theory [8, 17, 18] often need foreground or background seeds to initialize the energy function. Since these seeds are often labeled manually, graph-based methods may not segment images automatically. Thus, a method which can choose not only the best feature combination but also the optimal threshold is in demand.

Recent years have witnessed the emergence of various feature selection and self-adaptive image segmentation algorithms. Feature selection algorithms [19, 20] can select an effective feature subset rather than the whole feature set so as to reduce time and space consumption. Most adaptive algorithms focus on detecting cluster centers of pixels adaptively [21], changing pixels' local information during segmenting [22], or determining the best threshold [14, 15]. However, few works focus on how to combine features and choose threshold adaptively according to different images, which may achieve accurate segmentation performance for images with high variations. Our intuition is that, for a certain image, merging some kinds of features will help to produce higher-quality image segments than using only one kind of feature, yet merging other features will not. To fully utilize multiple features of a certain image, it is crucial to select and combine the most discriminative features. After obtaining the best feature combination, the most important challenge for region-based segmentation algorithm is to determine the optimal threshold that decides whether to merge small regions or not.

We have previously introduced an optimized automatic seeded region growing (OASRG) [23] algorithm that locates seeds by affinity propagation (AP) clustering algorithm and grows regions by color and edge features. Although the algorithm can segment images without human interaction, it cannot adjust threshold adaptively or select best feature combination according to different images. Based on OASRG algorithm, we further propose a selective feature fusion based image segmentation (SFF-IS) algorithm to improve the performance of previous works. In order to segment images automatically, SFF-IS algorithm inherits the method of finding seeds and extracting features from OASRG algorithm. Distinct from preceding reports, SFF-IS algorithm involves (1) adding textural features to original features, (2) selecting effective feature combination adaptively for better performance, and (3) changing the threshold during region growing according to different images.

The performance of our algorithm is quantitatively measured on PASCAL VOC 2007 segmentation dataset [24]. We use LAB, LBP, Canny features, and their combination to help to segment images. Compared to OASRG algorithm, our method improves the performance by the selective feature fusion and the adaptive threshold. Moreover, experimental results demonstrate that our algorithm can produce superior image segments than some popular approaches.

The rest of this paper is organized as follows. The Related Work section reviews previous works on feature selection and adaptive segmentation. The Our Method section describes the pipeline of our algorithm in detail. The Experiments section presents the experimental settings and results. Finally, the Conclusion section concludes this paper.

2. Related Work

2.1. Image Segmentation. In the past, some works have emerged as the state-of-the-art image segmentation methods. Watershed [25] and its improved version [26] performed a gradient ascent starting from local minima to produce segments, which were often highly irregular in shape and size. Deng and Manjunath [27] proposed the JSEG algorithm, achieving consistent segmentation and tracking results on real images and videos. Comaniciu and Meer [28] proposed a general nonparametric technique called Mean Shift. Felzenszwalb and Huttenlocher [29] introduced a graph-based image segmentation algorithm which preserved details in low-variability regions while ignoring details in high-variability regions. Most recently, a certain degree of attention was given to hybrid features that combined two or more features. Arbeláez et al. [10] developed a spectral clustering-based contour detector, whose output was combined with brightness, color, and texture gradients to generate a hierarchical region tree. Achanta et al. [30, 31] introduced the simple linear iterative clustering (SLIC) algorithm to improve segmentation performance with a good balance between speed and accuracy. Storath et al. [32] proposed a fast splitting approach to the image segmentation problem, producing results of a quality comparable with that of graph cuts and convex relaxation strategies.

2.2. Feature Selection in Image Segmentation. Feature selection is an important task in machine learning for increasing the performance of an algorithm [33]. Large numbers of features describe images in detail; on the other hand, not all features are essential for segmentation, since many of them are redundant or even irrelevant [34]. These redundant features may degrade performance. Luo et al. [35] developed an effective feature selection method which selected a group of mixed color features or channels according to the principle of least entropy of the pixel frequency histogram distribution. Liang et al. [20] developed a genetic programming based segmentation method, achieving good performance on distinguishing object from background. Feng et al. [17] proposed an interactive segmentation algorithm which selected a single feature to determine a pixel's label locally; their algorithm performed better on RGBD images with fewer user inputs. Cheng et al. [19] developed a hierarchical feature selection method which demonstrated its advantage in speed and segmentation quality. In this paper, we propose a selective feature fusion algorithm to choose the best feature set by evaluating the results of presegmentation. Our proposed algorithm fuses the selected features and applies them to the image segmentation algorithm.

2.3. Adaptive Segmentation. Methods which adjust parameters to segment different images are collectively called adaptive segmentation. Adaptive parameters are often used in clustering-based image segmentation to find the number of clusters and the corresponding centers. In this spirit, some works aimed to develop adaptive clustering-based segmentation methods, such as [21, 22]. For threshold-based methods, there are always different thresholds for different kinds of data. Hence, automatically determining the optimal threshold is especially significant for segmentation. The adaptive threshold-based algorithm in [14] adjusted the threshold while segmenting an image, taking advantage of both local and global image information. Guo and Li [15] presented a self-adaptive threshold approach in which the threshold was obtained with two-dimensional entropy and optimized with differential evolution. To get the best segments on each image, we propose an adaptive segmentation algorithm which changes the threshold during region growing according to different images.

3. Our Method

The SFF-IS algorithm proposed in this paper divides an image into multiple regions by aggregating pixels with similar characteristics around seed pixels. The pipeline of this method includes five major steps, which are shown in Figure 1. Firstly, for a given image, the seed pixels are automatically located by affinity propagation clustering algorithm. Secondly, features including color, texture, and edge are extracted for each pixel. Thirdly, a feature selector based on feedback judges which features will help to achieve better segmentation performance. Fourthly, an optimal threshold is calculated adaptively according to selected features. Finally, based on the feature combination of each pixel and the optimal threshold, a mask of segmented image is generated by region growing.

For a region growing algorithm, the location of seeds is of great significance. A seed point usually shares high similarity with its surrounding pixels and can represent the region it grows into. This characteristic of a seed point is very similar to that of a cluster center, which represents its cluster. In this spirit, we proposed the automatic seeded (AS) algorithm to locate seed points in [23]. The AS method first divides an image into K superpixel blocks and then applies the affinity propagation clustering algorithm [36] to these blocks. The experimental results in [23] show that our automatic seeded algorithm is more efficient than the manual seeded algorithm.
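The affinity propagation step can be sketched in a few lines of NumPy. This is a compact re-implementation of the standard message-passing updates of [36], not the authors' code; the toy 2-D points, the damping factor, and the preference value below are illustrative assumptions (the real algorithm runs on superpixel-block features).

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Affinity propagation [36] on a similarity matrix S (n x n).

    Returns (exemplars, labels): exemplar indices and, per point, the
    index of the exemplar it is assigned to."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        M = A + S
        idx = M.argmax(axis=1)
        first = M[np.arange(n), idx].copy()
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, 0)
        col = Rp.sum(axis=0)
        Anew = np.minimum(0, R.diagonal()[None, :] + col[None, :] - Rp)
        np.fill_diagonal(Anew, col)  # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A = damping * A + (1 - damping) * Anew
    exemplars = np.flatnonzero((A + R).diagonal() > 0)
    if exemplars.size == 0:  # degenerate case: fall back to the best candidate
        exemplars = np.array([int((A + R).diagonal().argmax())])
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars  # an exemplar represents itself
    return exemplars, labels
```

With similarities set to negative squared distances and the diagonal "preference" chosen between the within- and between-cluster similarities, well-separated groups each elect one exemplar, which then serves as a seed point.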

3.1. Feature Extraction. Different features describe different characteristics of an image. Thus, it makes sense to extract various features to better depict the image. Figure 2 shows several mask results generated by segmenting different images with different features, including color, texture, and edge features.

Color is one of the most widely used features in image segmentation. Among color spaces, RGB is one of the most commonly used models, but for the image segmentation task, the CIELAB color space has proved to be very useful due to its consistency with the human visual system [10].

Texture is another helpful property for segmentation, especially when it is combined with other features. The local binary pattern (LBP) [37] is a powerful texture feature which emphasizes local structure and thus has the advantage of robustness to rotation and nonuniform illumination [38].

Edge detectors produce the contours of objects. The Canny edge detector augments basic gradient detection with nonmaximum suppression and hysteresis thresholding [39]. Although these contours are not guaranteed to be closed, they provide a perfect complement to other features.

Therefore, LAB color, LBP texture, and Canny edge features will be extracted and fused for image segmentation.
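As a concrete illustration of the texture channel, a basic 8-neighbour LBP code can be computed with plain NumPy. In practice one would use library routines such as skimage.color.rgb2lab, skimage.feature.local_binary_pattern, and skimage.feature.canny; this minimal sketch assumes a grayscale input and omits the rotation-invariant mapping of [37].

```python
import numpy as np

def lbp8(gray):
    """Basic 3x3 local binary pattern: compare each pixel's 8 neighbours
    against the centre and pack the comparison bits into an 8-bit code."""
    H, W = gray.shape
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((H - 2, W - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code
```

A flat patch yields the all-ones code 255 (every neighbour ties the centre), while an isolated bright pixel yields 0, which is exactly the local-structure sensitivity that makes LBP useful here.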

3.2. Selective Feature Fusion. It is well known that different feature combinations provide different information. However, many features are redundant or even irrelevant for image segmentation. Selective feature fusion focuses on selecting the most relevant feature subset for a given image and merging these features.

Our intuition is that the best feature set varies from image to image. We find that the color feature does well on most images but is useless for images whose colors are discontinuous and varied; the texture feature, as a supplement, helps to segment such pictures. For the Canny feature, a formula is given to calculate the importance of the Canny edge (see (1)).

P_c = |{p_i : p_i ∈ Canny}| / |{p_j : p_j ∈ Img}| (1)

where p_i and p_j denote the pixels (x_i, y_i) and (x_j, y_j), Canny is the set of pixels belonging to the Canny edge, and Img is the set of pixels belonging to the image.
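Reading Eq. (1) as the fraction of image pixels that lie on the Canny edge (this ratio form is our interpretation of the formula), the importance score is a one-liner:

```python
import numpy as np

def canny_importance(edge_mask):
    """P_c of Eq. (1), read as the fraction of image pixels on the Canny edge.
    edge_mask: boolean array, True where the Canny detector fired."""
    edge_mask = np.asarray(edge_mask, dtype=bool)
    return edge_mask.sum() / edge_mask.size
```

A large P_c means the image is dominated by edge responses (heavy texture or noise), in which case relying on the Canny channel during growing would over-fragment the result.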

Based on the above idea, we propose a selective feature fusion (SFF) algorithm (see Algorithm 1) to choose which features are used to segment a given image.
Algorithm 1: SFF.

Input: Image Img
Output: Flag set [f_LAB, f_LBP, f_Can],
where f = 1 means the feature is selected.
Extract color feature F_LAB and edge feature F_Can;
Calculate P_c by Eq. (1);
if P_c < Th then
   [f_LAB, f_LBP, f_Can] = [1, 0, 1];
else
   [f_LAB, f_LBP, f_Can] = [1, 0, 0];
end if
Region growing with flag set [f_LAB, f_LBP, f_Can];
Calculate number of segments N_R;
if N_R = 1 then
   [f_LAB, f_LBP, f_Can] = [0, 1, 1];
end if
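Algorithm 1 reduces to a small decision procedure. In this sketch the threshold Th and the presegmentation callback are hypothetical placeholders; in the actual algorithm the callback is the region growing of Section 3.4.

```python
def sff_flags(p_c, presegment, th=0.5):
    """Selective feature fusion (Algorithm 1): return [f_LAB, f_LBP, f_Can].

    p_c        : edge-importance score from Eq. (1)
    presegment : assumed callback; runs a pre-segmentation with the given
                 flags and returns the resulting segment count N_R
    th         : assumed value of the threshold Th"""
    # Many edge pixels -> Canny would over-fragment, so drop it up front.
    flags = [1, 0, 1] if p_c < th else [1, 0, 0]
    if presegment(flags) == 1:   # one giant region: colour alone failed
        flags = [0, 1, 1]        # fall back to texture + edges
    return flags
```

The feedback step is what makes the selection adaptive: a presegmentation that collapses into a single region signals that the colour channel carries no discriminative information for this image.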

Algorithm 2: Adaptive threshold algorithm.

Input: Image feature F, seed p_i, initial threshold T,
restriction of region size N
Output: Adaptive threshold T_a
Grow region R_i from seed p_i using the criterion S(F_{p_i}, F_{p_j}) < T (Eq. (2));
Calculate the size of region R_i, get |R_i|;
if |R_i| < N then
   Calculate T_a by Eq. (2) and Eq. (3);
else
   T_a = T;
end if

3.3. Adaptive Threshold. Region growing calculates the feature similarity between a seed point and its neighbor point. By comparing the similarity with the threshold, the algorithm decides whether the neighbor point should be merged. Therefore, the threshold is an important parameter in region growing. A fixed threshold lacks generalization ability and cannot achieve accurate segmentation for images with high variations. Hence, we present an adaptive threshold method (see Algorithm 2). The similarity between two pixels is calculated by the function S(A, B) (see (2))

S(A, B) = sqrt( Σ_{i=0}^{n-1} (A_i - B_i)^2 ) (2)

where A_i and B_i are the i-th dimensions of features A and B, n is the dimension of the feature space, and S(A, B) is the similarity function between features A and B. The adaptive threshold is calculated by

T_a = (1 / |R_i|) Σ_{p_j ∈ R_i} S(F_{p_i}, F_{p_j}) (3)

where F_{p_i} and F_{p_j} are the features of p_i and p_j, R_i represents the region grown from seed p_i, and |R_i| is the number of pixels in region R_i.
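Eqs. (2) and (3) translate directly into code. Here Eq. (3) is implemented as the mean feature distance between the seed and the pixels of its pre-grown region; this averaging form, and the argument layout, are our assumptions.

```python
import numpy as np

def similarity(a, b):
    """Eq. (2): Euclidean distance between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(((a - b) ** 2).sum()))

def adaptive_threshold(seed_feat, region_feats, size_limit, initial_t):
    """Algorithm 2: keep the initial threshold T if the pre-grown region
    reached the size restriction N; otherwise adapt it to the region's
    mean distance from the seed (our reading of Eq. (3))."""
    if len(region_feats) >= size_limit:
        return initial_t
    return float(np.mean([similarity(seed_feat, f) for f in region_feats]))
```

Intuitively, an undersized region means the fixed threshold was too strict for this image, so the threshold is re-estimated from the statistics of the region itself.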

3.4. Selective Feature Fusion Based Image Segmentation. To automatically find the effective feature subset for different images, we propose a selective feature fusion based image segmentation (SFF-IS) algorithm in this section. The SFF-IS algorithm uses the selected feature subset rather than all features to segment images, iteratively merging seed pixels with their surrounding pixels according to their similarity. The decision is made by calculating the value of M(p_s, p_n) by

M(p_s, p_n) = True iff p_n ∉ Canny (when f_Can = 1) and
   S(LAB_s, LAB_n) < T_1, if f_LAB = 1 and f_LBP = 0,
   S(LBP_s, LBP_n) < T_2, if f_LAB = 0 and f_LBP = 1,
   S([LAB_s, LBP_s], [LAB_n, LBP_n]) < T_3, if f_LAB = 1 and f_LBP = 1, (4)

where p_s and p_n represent the seed point and its neighbor point; f_LAB, f_LBP, and f_Can are the selection results, where f = 1 means the corresponding feature is selected; LAB_s and LBP_s are the features of p_s, and LAB_n and LBP_n those of p_n; T_1, T_2, and T_3 are the thresholds; and Canny is the set of pixels belonging to the Canny edge. M(p_s, p_n) = True means that pixels p_s and p_n are merged. If a point belongs to Canny, it is an edge point and cannot be merged. The segmentation process is described in Algorithm 3.
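The merge test of Eq. (4) then becomes a small predicate. The pairing of T_1, T_2, T_3 with the LAB-only, LBP-only, and fused cases follows our reading of the text, and the dict-based feature layout is an illustrative assumption.

```python
import numpy as np

def should_merge(seed, neigh, flags, neigh_on_canny,
                 t_lab=8.2, t_lbp=1.6, t_fused=8.6):
    """Eq. (4): merge the neighbour into the seed's region iff it is not a
    Canny edge pixel (when edges are selected) and the selected features
    lie closer than the matching threshold.

    seed, neigh : dicts with 'lab' and 'lbp' feature lists (assumed layout)
    flags       : (f_LAB, f_LBP, f_Can) from the SFF step"""
    f_lab, f_lbp, f_can = flags
    dist = lambda a, b: float(np.sqrt(
        ((np.asarray(a, float) - np.asarray(b, float)) ** 2).sum()))
    if f_can and neigh_on_canny:     # edge points are never merged
        return False
    if f_lab and f_lbp:              # fused LAB+LBP vector against T_3
        return dist(seed['lab'] + seed['lbp'],
                    neigh['lab'] + neigh['lbp']) < t_fused
    if f_lab:
        return dist(seed['lab'], neigh['lab']) < t_lab
    if f_lbp:
        return dist(seed['lbp'], neigh['lbp']) < t_lbp
    return False
```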
Algorithm 3: SFF-IS.

Input: Image Img
Output: Mask Mask
Extract LAB color, LBP texture, and Canny edge features for Img,
get feature set [F_LAB, F_LBP, F_Can];
Send [F_LAB, F_LBP, F_Can] to SFF, get flag set
[f_LAB, f_LBP, f_Can];
Extract seed points {p | p is a seed};
Initialize Mask with zero;
while {p | p is a seed} ≠ ∅ do
    Pop p_i ∈ {p | p is a seed};
    Mask(p_i) = i;
    for each neighbor p_n of p_i do
        if M(p_i, p_n) = True then
           Append {p | p is a seed} with p_n;
           Mask(p_n) = i;
        end if
    end for
end while
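The growing loop of Algorithm 3 can be sketched as a breadth-first flood fill. This toy version uses a scalar feature map and a fixed threshold in place of the full M(p_s, p_n) test of Eq. (4).

```python
import numpy as np
from collections import deque

def region_grow(feat, seeds, thresh):
    """4-connected region growing: each seed floods into neighbours whose
    feature differs from the seed's by less than `thresh`.

    feat  : (H, W) scalar feature map (stand-in for the fused features)
    seeds : list of (row, col) seed coordinates"""
    H, W = feat.shape
    mask = np.zeros((H, W), dtype=int)
    for label, (sy, sx) in enumerate(seeds, start=1):
        if mask[sy, sx]:
            continue                 # already claimed by an earlier seed
        mask[sy, sx] = label
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < H and 0 <= nx < W and mask[ny, nx] == 0
                        and abs(feat[ny, nx] - feat[sy, sx]) < thresh):
                    mask[ny, nx] = label
                    queue.append((ny, nx))
    return mask
```

Each grown pixel is pushed back onto the queue, mirroring the "Append {p | p is a seed} with p_n" step of the algorithm.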

4. Experiments

4.1. Feature Selection. We evaluate our algorithms on the PASCAL VOC 2007 segmentation dataset [24]. We use LAB, LBP, Canny features, and their combination to segment images and calculate Absolute Adjusted Rand Index (AARI) [40] to measure the pairwise similarity between proposed segmentation and ground truth. The AARI ranges from 0 to 1.0.
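The evaluation metric is easy to reproduce. We take AARI to be the absolute value of the adjusted Rand index of [40], which matches the stated 0-to-1 range; this reading is an assumption on our part.

```python
import numpy as np

def adjusted_rand_index(a, b):
    """Adjusted Rand index [40], computed from the contingency table
    of two labelings a and b (flattened segmentation masks)."""
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    C = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(C, (ai, bi), 1)                 # contingency counts
    comb2 = lambda x: x * (x - 1) / 2.0       # "n choose 2", elementwise
    sum_ij = comb2(C).sum()
    sum_a, sum_b = comb2(C.sum(1)).sum(), comb2(C.sum(0)).sum()
    expected = sum_a * sum_b / comb2(a.size)
    max_index = (sum_a + sum_b) / 2.0
    if max_index == expected:                 # degenerate partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

def aari(a, b):
    """Absolute ARI, giving the 0-to-1 range used in our experiments."""
    return abs(adjusted_rand_index(a, b))
```

Note that the ARI is invariant to label permutations, so a segmentation need not use the same region IDs as the ground truth to score well.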

The graphical results of five example images are shown in Figure 3. It can be seen that the aeroplane in image No. 005043 is segmented by neither LAB nor Canny alone. By combining two or three features, the object is more easily distinguished from its background. Moreover, the SFF algorithm applies different feature combinations to different images, which helps to generate higher-quality segmentation masks than those of the Merge feature.

Figure 4 shows the AARI values of the five example pictures segmented by different feature combinations. We can see from Figure 4 that a single feature sometimes cannot segment an image at all. For example, the LAB color feature does not segment No. 005043 at all, getting the lowest AARI; the Canny feature performs similarly on No. 005043 and No. 006946. Fusing two features can increase the AARI, but different combinations influence the same image differently. For No. 002619, LL and LA+C both improve the performance dramatically, while LB+C only matches the AARI of LBP alone. On the other hand, the Merge feature, which consists of all three features, does not achieve the highest AARI on all five images, while the SFF feature does well on all five by choosing an effective feature subset. In conclusion, merging more features does not guarantee better performance, and selecting the best feature combination produces accurate segments.

To confirm our hypothesis, we conduct the same experiment on the training data with and without the SFF algorithm and then calculate the average AARI for each method. The result is shown in Figure 5. The LAB, LBP, and Canny features each get a very small AARI value, because none of them can depict an image in detail on its own. In addition, merging three features does not outperform two features, which supports the conclusion mentioned earlier. On the whole dataset, LA+C produces segments comparable to the SFF method. Nevertheless, the average AARI of segmentation with the SFF algorithm is the highest, outperforming the second best by 4.9%. The results show that SFF does improve the segmentation accuracy.

4.2. Adaptive Threshold. As described above, the adaptive threshold has to be initialized. In [23], our experiments showed that T_1 = 8.2 was the best threshold for the LAB color feature. In this section, we use the same approach to explore the optimal thresholds T_2 and T_3 for LBP and LAB+LBP. We segment the training data with different thresholds and calculate the average AARI accordingly. Figure 6 shows how the average AARI varies with the threshold: T_2 = 1.6 and T_3 = 8.6 are the optimal thresholds for LBP and LAB+LBP, respectively. Therefore, we set T_1 = 8.2, T_2 = 1.6, and T_3 = 8.6 to initialize the thresholds.
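The initial thresholds come from a plain grid search over the training data. A generic sketch, with score_fn standing in for "segment the training set at threshold t and return the average AARI" (the candidate grid is illustrative):

```python
def best_threshold(candidates, score_fn):
    """Return the candidate threshold with the highest average score."""
    return max(candidates, key=score_fn)
```

For example, `best_threshold([7.8, 8.2, 8.6], avg_aari_on_training)` would recover T_1 = 8.2 if the training score peaks there.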

Then, we apply the adaptive threshold algorithm to different combinations of features and calculate the AARI of the segmentation results. Figure 7 shows that our adaptive threshold algorithm achieves better performance than the fixed threshold on all four kinds of feature combinations. The experimental results suggest that our method can also improve performance when plugged into other threshold-based approaches.

4.3. Image Segmentation. We compare the performance of our algorithm with other methods: OASRG [23], WS [25], GBIS [29], JSEG [27], SLIC [30], GPU-SLIC [31], and FPVVI [32]. We use the AARI [40] and the Segmentation Covering (Covering) [10] to measure the segmentation results. The Covering evaluates the average overlap between the proposed segmentation and the ground truth. Figure 8 shows that our algorithm outperforms the other approaches on both the AARI and the Covering. Equipped with the selective feature fusion method and the adaptive threshold, the performance of the original method is improved. In conclusion, the results testify that our proposed algorithm can produce high-quality image segments in most cases.
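The Segmentation Covering of [10] weights, for each ground-truth region, its best Jaccard overlap with the proposed segments by the region's size. A direct NumPy sketch (quadratic in the number of regions, which is fine at these scales):

```python
import numpy as np

def covering(proposed, truth):
    """Segmentation Covering [10]: size-weighted best overlap of each
    ground-truth region by the proposed segmentation."""
    proposed, truth = np.asarray(proposed), np.asarray(truth)
    total = 0.0
    for g in np.unique(truth):
        G = truth == g
        best = max((np.logical_and(G, proposed == s).sum() /
                    np.logical_or(G, proposed == s).sum())   # Jaccard overlap
                   for s in np.unique(proposed))
        total += G.sum() * best
    return total / truth.size
```

A perfect segmentation scores 1.0; collapsing everything into one region is penalized in proportion to how unevenly the ground-truth regions are covered.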

5. Conclusion

In this paper, we propose a selective feature fusion based image segmentation (SFF-IS) algorithm with an adaptive threshold, which consists of seed determination, feature extraction, selective feature fusion, adaptive threshold determination, and region growing based segmentation. Our algorithm can automatically select an effective feature subset for different images and adaptively change the threshold during region growing. We conduct experiments on PASCAL VOC 2007 dataset. The results demonstrate that our algorithm proposed in this paper improves the performance of previous works and outperforms other segmentation approaches.

The contribution of our proposed algorithm involves (1) fully utilizing image features by selecting an effective feature subset for better performance and (2) adaptively changing the threshold during region growing according to different feature combination, thus achieving better segmentation performance for different images.

Feature extraction is an important procedure for image segmentation. In future work, we will extract more features to provide more information for image segmentation and extend our selective feature fusion algorithm to those features. As feature extraction using deep learning has achieved good results for image segmentation, our future work will also focus on integrating our method with deep learning models.

Data Availability

Previously reported PASCAL VOC 2007 data were used to support this study and are available at c2007/workshop/index.html. These prior studies (and datasets) are cited at relevant places within the text as [24].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

This work is partially supported by the National Key Research and Development Project under Grant no. 213, the National Natural Science Foundation of China under Grant no. 61573259, and the Special Project of Ministry of Public Security under Grant no. 20170004.


References

[1] X. Chen, H. Ma, X. Wang, and Z. Zhao, "Improving object proposals with multi-thresholding straddling expansion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2587-2595, 2015.

[2] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, "Selective Search for Object Recognition," International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013.

[3] Y. Lee, B. M. Jun, and W. Jun, "Automatic Image Tagging Model Based on Multigrid Image Segmentation and Object Recognition," Advances in Multimedia, vol. 2014, Article ID 857682, 7 pages, 2014.

[4] J. Krause, H. Jin, J. Yang, and F.-F. Li, "Fine-grained recognition without part annotations," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pp. 5546-5555, IEEE, Boston, MA, USA, June 2015.

[5] S.-L. Jui, C. Lin, W. Xu, W. Lin, D. Wang, and K. Xiao, "Dynamic incorporation of wavelet filter in fuzzy C-means for efficient and noise-insensitive MR image segmentation," International Journal of Computational Intelligence Systems, vol. 8, no. 5, pp. 796-807, 2015.

[6] P. Maji, S. Roy, and C. Lenglet, "Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation," PLoS ONE, vol. 10, no. 4, p. e0123677, 2015.

[7] A. Khadidos, V. Sanchez, and C.-T. Li, "Weighted level set evolution based on local edge features for medical image segmentation," IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1979-1991, 2017.

[8] W. Ju, D. Xiang, B. Zhang, L. Wang, I. Kopriva, and X. Chen, "Random walk and graph cut for co-segmentation of lung tumor on PET-CT-images," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5854-5867, 2015.

[9] P. Dollar and C. L. Zitnick, "Fast edge detection using structured forests," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 8, pp. 1558-1570, 2014.

[10] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, 2011.

[11] J.-L. Rose, T. Grenier, C. Revol-Muller, and C. Odet, "Unifying variational approach and region growing segmentation," in Proceedings of the Signal Processing Conference, EUSIPCO 2015, pp. 1781-1785, IEEE, Nice, Cote d'Azur, France, 2015.

[12] M. A. Hasnat, O. Alata, and A. Tremeau, "Joint Color-Spatial-Directional Clustering and Region Merging (JCSD-RM) for Unsupervised RGB-D Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2255-2268, 2016.

[13] Z. Li, G. Liu, D. Zhang, and Y. Xu, "Robust single-object image segmentation based on salient transition region," Pattern Recognition, vol. 52, pp. 317-331, 2016.

[14] Z. Ju, J. Zhou, X. Wang, and Q. Shu, "Image segmentation based on adaptive threshold edge detection and mean shift," in Proceedings of the 4th IEEE International Conference on Software Engineering and Service Science, ICSESS 2013, pp. 385-388, IEEE, Beijing, China, May 2013.

[15] P. Guo and N. Li, "Self-Adaptive Threshold Based on Differential Evolution for Image Segmentation," in Proceedings of the 2015 2nd International Conference on Information Science and Control Engineering (ICISCE), pp. 466-470, Shanghai, China, April 2015.

[16] F. Gallivanone, M. Interlenghi, C. Canervari, and I. Castiglioni, "A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions," Journal of Instrumentation, vol. 11, no. 1, pp. C01022-C01022, 2016.

[17] J. Feng, B. Price, S. Cohen, and S.-F. Chang, "Interactive segmentation on RGBD images via cue selection," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pp. 156-164, IEEE, Las Vegas Valley, NV, USA, July 2016.

[18] F. Kang, C. Wang, J. Li et al., "A Multiobjective Piglet Image Segmentation Method Based on an Improved Noninteractive GrabCut Algorithm," Advances in Multimedia, vol. 2018, Article ID 1083876, 9 pages, 2018.

[19] M. M. Cheng, Y. Liu, and Q. B. Hou, "HFS: Hierarchical Feature Selection for Efficient Image Segmentation," in Proceedings of the European Conference on Computer Vision, pp. 867-882, Springer, Amsterdam, The Netherlands, 2016.

[20] Y. Liang, M. Zhang, and W. N. Browne, "Figure-ground image segmentation using genetic programming and feature selection," Evolutionary Computation, pp. 3839-3846, 2016.

[21] R. Hettiarachchi and J. F. Peters, "Voronoi region-based adaptive unsupervised color image segmentation," Pattern Recognition, vol. 65, pp. 119-135, 2017.

[22] G. Liu, Y. Zhang, and A. Wang, "Incorporating adaptive local information into fuzzy clustering for image segmentation," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3990-4000, 2015.

[23] Q. W. Li, Z. H. Wei, and C. R. Zhao, "Optimized Automatic Seeded Region Growing Algorithm with Application to ROI Extraction," International Journal of Image and Graphics, vol. 17, no. 4, p. 1750024, 2017.

[24] M. Everingham, L. Van Gool, and C. K. I. Williams, "The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results," voc2007/workshop/index.html.

[25] L. Vincent and P. Soille, "Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, 1991.

[26] A. A. Yahya, J. Tan, and M. Hu, "A Novel Model of Image Segmentation Based on Watershed Algorithm," Advances in Multimedia, vol. 2013, Article ID 120798, 8 pages, 2013.

[27] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of color-texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.

[28] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.

[29] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient graph-based image segmentation," International Journal of Computer Vision, vol. 59, no. 2, pp. 167-181, 2004.

[30] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274-2281, 2012.

[31] C. Y. Ren, V. A. Prisacariu, and I. D. Reid, "SLIC superpixels at over 250Hz," Computer Science, 2015, .04232.

[32] M. Storath and A. Weinmann, "Fast partitioning of vector-valued images," SIAM Journal on Imaging Sciences, vol. 7, no. 3, pp. 1826-1852, 2014.

[33] S. Fong, R. Wong, and A. V. Vasilakos, "Accelerated PSO Swarm Search Feature Selection for Data Stream Mining Big Data," IEEE Transactions on Services Computing, vol. 9, no. 1, pp. 33-45, 2016.

[34] B. Xue, M. Zhang, W. N. Browne, and X. Yao, "A Survey on Evolutionary Computation Approaches to Feature Selection," IEEE Transactions on Evolutionary Computation, vol. 20, no. 4, pp. 606-626, 2016.

[35] J. Luo and J. Ma, "Effective selection of mixed color features for image segmentation," in Proceedings of the 13th IEEE International Conference on Signal Processing, ICSP 2016, pp. 794-798, IEEE, Chengdu, China, November 2016.

[36] B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, no. 5814, pp. 972-976, 2007.

[37] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.

[38] X. Cai, F. Nie, H. Huang, and F. Kamangar, "Heterogeneous image feature integration via multi-modal spectral clustering," in Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, pp. 1977-1984, IEEE, Colorado Springs, CO, USA, June 2011.

[39] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.

[40] L. Hubert and P. Arabie, "Comparing partitions," Journal of Classification, vol. 2, no. 1, pp. 193-218, 1985.

Qianwen Li, Zhihua Wei, and Wen Shen

Department of Computer Science and Technology, Key Laboratory of Embedded System and Service Computing, Tongji University, Shanghai, China

Correspondence should be addressed to Zhihua Wei;

Received 6 June 2018; Revised 20 August 2018; Accepted 28 August 2018; Published 9 September 2018

Academic Editor: Marco Roccetti

Caption: Figure 1: The pipeline of our method.

Caption: Figure 2: Different features extracted for images. Right to left: original images, LAB color, LBP texture, and Canny edges. Top to bottom: No. 000121, No. 000480, and No. 000676.

Caption: Figure 3: Graphical results of five example images segmented by different features. Right to left: No. 002227, No. 002619, No. 005043, No. 005902, and No. 006946.

Caption: Figure 4: AARI of five example images segmented by different features, where LL represents LAB+LBP. LA+C represents LAB+Canny. LB+C represents LBP+Canny. Merge represents combination feature of LAB, LBP, and Canny.

Caption: Figure 6: Average AARI of training data varying with threshold.

Caption: Figure 7: Average AARI of results segmented by four kinds of features with fixed threshold and adaptive threshold.
Figure 5: Average AARI of training data with different features.

Feature   Average AARI
LAB       0.10
LBP       0.08
Canny     0.04
LL        0.14
LA+C      0.18
LB+C      0.09
Merge     0.17
SFF       0.19

Note: Table made from bar graph.

Figure 8: Average AARI and Covering for each method tested.

Method     AARI    Covering
SFF-IS     0.56    0.20
SFF        0.52    0.19
FPVVI      0.52    0.19
OASRG      0.51    0.18
SLIC       0.38    0.17
JSEG       0.37    0.17
GPU-SLIC   0.35    0.17
WS         0.28    0.12
GBIS       0.23    0.08

Note: Table made from bar graphs.
COPYRIGHT 2018 Hindawi Limited

Article Details
Title Annotation: Research Article
Author: Li, Qianwen; Wei, Zhihua; Shen, Wen
Publication: Advances in Multimedia
Date: Jan 1, 2018