
Efficient Shape Classification using Zernike Moments and Geometrical Features on MPEG-7 Dataset


There has been a remarkable surge in multimedia information, making automatic content description necessary for searching. CBIR [1] and MPEG-7 [2] are two multimedia applications that have emerged to address this issue. Shape is a fundamental property of an object, so its description plays a key role in CBIR and MPEG-7. Compared with color and texture, the shape feature is more effective for extracting important content from an image [3]. The need to search a huge collection of images for targeted information and then retrieve it has given rise to CBIR systems [4]. Some of the important features for CBIR are shape context [5], spectral features, contour curves, shape signature [6] and shape histogram. Not only researchers, professionals and educators, but also general users share the goal of finding similar images in large collections or remotely distributed databases. For shape classification, Zernike Moments are considered among the most powerful shape descriptors. They constitute a region-based method according to [7], insensitive to noise (holes) and somewhat sensitive to occlusion and deformation [8]. Previous research shows that Zernike Moments are extracted in the spectral domain [9].

Apart from time-consuming techniques that approach the classification problem from a retrieval point of view, many classifiers have been used that give better results, such as the kernel extreme learning machine (k-ELM) classifier with an RBF kernel [10], the k-NN rule [11] and the Support Vector Machine [12]. Most current techniques provide satisfactory accuracy levels.

Earlier, many researchers have proposed different frameworks for shape classification. In the work of [12], a more discriminative representation of a shape is obtained by combining local and global features, where the local part corresponds to the shape context descriptor [5] and the global part to the blurred shape model descriptor [13]. A k-NN classifier is then used for classification to achieve better accuracy on the MPEG-7 dataset. Shape contexts are clustered into a bag-of-shape-contexts vocabulary [14], and the clustering yields a codebook. Aligned training images are fed to the blurred shape model for k-NN classification. The latest work in [11] proposes 2D shape classification using a bioinformatics approach: classification is performed on MPEG-7 by encoding the 2D shape contour through a chain code, converting it to biological sequences using three encoding strategies, and then applying a biological sequence alignment tool with a k-NN classifier.

MPEG-7, along with the 99-shapes dataset [15] and the articulated test shapes [16], has been used by [10]. The proposed method is based on a combination of three descriptors, i.e. a simplified shape signature, a region skeleton descriptor and a region area descriptor, which collect landmarks from contours and are classified using the k-ELM classifier.

The blurred shape model for binary and grey-level shape images is used by [13] on MPEG-7 along with a clefs-and-accidentals dataset, a real symbols dataset and a 17-class dataset of grey-level symbols. That work achieves better accuracy on these datasets using Error Correcting Output Codes (ECOC) with Adaboost as the base classifier.

A co-transduction algorithm for shape retrieval is introduced by [17], which combines two distance metrics with a multiclass classifier.


The proposed approach is evaluated using the MPEG-7 Core Experiment (CE) Shape-1 Part B dataset [18]. MPEG-7 is standardized as ISO/IEC 15938, the multimedia content description interface. The dataset consists of 1400 shape images belonging to 70 classes, each containing 20 images [10], as shown in Table I.

The shapes are defined by a binary mask outlining the objects. A sample set of images is shown in Fig. 1. This public dataset has been widely used for shape matching.

The proposed approach consists of three stages, as shown in Fig. 2. The first stage deals with preprocessing of the dataset images, the second stage performs shape feature extraction, and the third and last stage is responsible for classification of the images.

During preprocessing, the input image is resized so that all images have the same resolution (256×256). Two types of features are extracted from the preprocessed images, i.e. Zernike Moments and geometrical features. Classification accuracy is computed using k-NN, K*, Bagging, RandomSubSpace and GaussianProcesses.

The first type of features extracted from the resized images are Zernike Moments [19], because they are invariant to rotation, scaling and translation, though not to affine transformations. Zernike Moments are selected because of their orthogonality property [20]. The matching complexity increases as the order of the Zernike Moments increases [21]. Zernike Moments are derived from Zernike polynomials using Equation (1) and Equation (2).

V_{nm}(x, y) = V_{nm}(\rho\cos\theta, \rho\sin\theta) = R_{nm}(\rho)\exp(jm\theta)   (1)

where R_{nm}(\rho) is the orthogonal radial polynomial.

R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s}\,(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\,\rho^{n-2s}   (2)

where n - |m| is even and |m| \le n.

Zernike Moments are obtained from a complete set of complex-valued functions orthogonal over the unit disk, i.e. x^2 + y^2 \le 1. The complex Zernike Moment of order n with repetition m is stated in Equation (3).

Z_{nm} = \frac{n+1}{\pi}\sum_{x}\sum_{y} f(x, y)\, V_{nm}^{*}(x, y), \quad x^{2} + y^{2} \le 1   (3)

One of the drawbacks of moment descriptors is that it is difficult to relate high-order moments to the shape. Among the many moment-based shape descriptors, Zernike Moments are the most appropriate for the proposed shape description, and Zernike Moments based shape description has proven to give very encouraging results [22, 23].
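The moment computation in Equations (1)-(3) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names are hypothetical, the image is assumed square, and pixel coordinates are mapped onto the unit disk:

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Radial polynomial R_nm(rho) of Eq. (2)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s)
              * factorial((n + m) // 2 - s)
              * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of Eq. (3) for a square image on the unit disk."""
    N = img.shape[0]
    y, x = np.mgrid[0:N, 0:N]
    x = (2 * x - N + 1) / (N - 1)          # map columns onto [-1, 1]
    y = (2 * y - N + 1) / (N - 1)          # map rows onto [-1, 1]
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                      # keep only pixels inside the disk
    V_conj = radial_poly(rho, n, m) * np.exp(-1j * m * theta)  # V*_nm
    return (n + 1) / np.pi * np.sum(img[mask] * V_conj[mask])
```

Because rotation of the image only shifts theta, the magnitude |Z_nm| is unchanged under rotation, which is the invariance property exploited in this work.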

Geometric features are used to categorize shapes with large dissimilarities among the images. In most previous works they are combined with other shape descriptors in order to eliminate false hits, as they cannot serve as standalone shape descriptors [24]. In this research work, the shape parameters used as geometric features are aspect ratio, convexity, rectangularity, circularity, circularity ratio, solidity and irregularity.

The closely fitted rectangular box that encloses a shape is known as the bounding box. Height and width are significant parameters for characterizing a shape. These parameters are made translation, rotation and resolution invariant by defining the bounding box with respect to the principal axes of the shape. The principal axes are line segments intersecting each other orthogonally at the centroid of the shape. A principal component analysis (PCA) of the shape yields the directions of the major and minor axes. Height is measured along the major axis and width along the minor axis. Aspect ratio is the ratio of the major axis length to the minor axis length of the shape, calculated by the principal axis method using Equation (4).

AR = \frac{m_{2}}{m_{1}}   (4)

where m_{1} is the minor axis length and m_{2} is the major axis length of the shape, as shown in Fig. 3.
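The principal-axis method described above can be sketched as follows (an illustrative NumPy version; `aspect_ratio` is a hypothetical helper): PCA of the foreground pixel coordinates yields eigenvalues whose square roots are proportional to the axis lengths.

```python
import numpy as np

def aspect_ratio(mask):
    """Aspect ratio m2/m1 (major over minor axis length) via PCA, per Eq. (4)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                     # centre on the centroid
    cov = np.cov(pts, rowvar=False)             # 2x2 covariance of coordinates
    evals = np.sort(np.linalg.eigvalsh(cov))    # ascending: [minor, major]
    # axis lengths are proportional to the square roots of the eigenvalues
    return float(np.sqrt(evals[1] / evals[0]))
```

For a 10x40 rectangular mask this returns a value close to 4, and the result is unchanged by translation or rotation of the shape.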

A convex hull, i.e. the minimal convex region covering an object, is calculated for all shape images, as shown in Fig. 4.

The minimal convex polygon covering the shape yields its convex hull. The convex hull is constructed by traversing the shape contour so as to minimize the turn angle at each step. Convexity can be thought of as an elastic ribbon stretched around the boundary of the object, and is defined as the ratio of the perimeter of the convex hull of the shape, \omega_{convexhull} (in pixels), to the perimeter of the original shape, \omega (in pixels), using Equation (5).

Convexity = \frac{\omega_{convexhull}}{\omega}   (5)

Rectangularity, which shows how rectangular a shape is, i.e. how well it fills its minimum bounding rectangle, is calculated using Equation (6).

Rectangularity = \frac{A_{s}}{A_{r}}   (6)

where A_{s} is the shape area and A_{r} is the area of the minimum bounding rectangle.

Circularity, which expresses how similar the shape is to a circle, is calculated using Equation (7).

Circularity = \frac{A_{s}}{A_{c}}   (7)

where A_{s} is the shape area and A_{c} is the area of the circle having the same perimeter as the shape.

The circularity ratio, defined as the ratio of the shape's area to its perimeter, is calculated using Equation (8).

Circularity\ ratio = \frac{A_{s}}{p}   (8)

where A_{s} is the shape area and p is the shape perimeter.

Solidity, which measures the extent to which the shape is convex or concave, is calculated using Equation (9).

Solidity = \frac{A_{s}}{H}   (9)

where A_{s} is the area of the shape region and H is the area of the convex hull of the shape.
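The solidity measure of Equation (9) can be sketched as follows. This is illustrative only: Andrew's monotone chain stands in for the hull construction, and the shoelace formula gives the hull area; pixel-count versus polygon-area conventions make the value approximate.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; points is a list of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # counter-clockwise hull vertices

def solidity(mask):
    """Eq. (9): shape area over convex-hull area (shoelace formula)."""
    ys, xs = np.nonzero(mask)
    hull = convex_hull(list(zip(xs.tolist(), ys.tolist())))
    area_h = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area_h += x1 * y2 - x2 * y1
    area_h = abs(area_h) / 2.0
    return float(mask.sum()) / area_h
```

For a convex shape such as a filled rectangle the value is close to 1, while concave shapes such as a plus sign score noticeably lower.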

Irregularity, another geometric feature, measures how irregular the shape is and is defined in Equation (10).

IR = \frac{\max_{i}\sqrt{(x - x_{i})^{2} + (y - y_{i})^{2}}}{\min_{i}\sqrt{(x - x_{i})^{2} + (y - y_{i})^{2}}}   (10)

where (x, y) is the centroid of the shape and (x_{i}, y_{i}) is the coordinate of a pixel on the shape boundary. Equation (10) gives the ratio of the radii of the maximum and minimum circles enclosing the region within the shape.
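Equation (10) can be sketched as follows (an illustrative NumPy version; the boundary is taken here as foreground pixels with at least one background 4-neighbour):

```python
import numpy as np

def irregularity(mask):
    """Eq. (10): max over min distance from the centroid to the boundary."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                  # centroid (x, y)
    # boundary pixels: foreground pixels with a background 4-neighbour
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    by, bx = np.nonzero(mask & ~interior)
    d = np.hypot(bx - cx, by - cy)                 # centroid-to-boundary radii
    return float(d.max() / d.min())
```

For a filled square the ratio is close to sqrt(2) (corner radius over edge radius), and for a disk it approaches 1.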

In addition to the major features defined above, some minor features like narrow factor, perimeter ratio of diameter and form factor are also calculated [25]. The narrow factor is the ratio of the diameter d to the minor axis length m_{1} (d/m_{1}), whereas the form factor is the ratio between the area A and the perimeter p of the shape (4\pi A/p^{2}).

Five classifiers are used in this research work: Lazy.KStar [26], also known as K*; Lazy.IBk [27], also known as k-NN; Meta.Bagging [28], also known as Bootstrap Aggregation; Meta.RandomSubSpace [29], also called feature bagging or attribute bagging; and Functions.GaussianProcesses [30], also called kriging. All are used through Weka [31], a publicly available free tool.

K* is an instance-based classifier: the class of a test instance is decided based on the classes of the training instances similar to it, where similarity is determined by a similarity function. K* differs from other instance-based learners in that it uses an entropy-based distance function. The k-NN classifier is used with a suitable value of k based on cross-validation and distance weighting. Bagging, a machine learning ensemble meta-algorithm, is used to reduce variance and avoid overfitting. RandomSubSpace, also an ensemble meta-algorithm, minimizes the correlation between base classifiers by training them on random subsets of attributes. GaussianProcesses, a supervised learning algorithm, solves the classification problem with the benefit that the prediction interpolates the observations.
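The instance-based idea behind the k-NN classifier can be sketched as follows (a plain NumPy majority-vote version for illustration only; the experiments themselves use Weka's implementations):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote of its k nearest neighbours."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train)
    preds = []
    for x in np.asarray(X_test, float):
        d = np.linalg.norm(X_train - x, axis=1)    # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]       # labels of the k closest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])    # majority vote
    return np.array(preds)
```

In the experiments, the feature vectors passed to the classifiers are the Zernike Moments and geometrical features extracted in the previous stage.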


Two types of experiments were conducted: Experiment-I using cross validation and Experiment-II using percentage split. Experiment-I was carried out with 10, 15 and 20 folds. Cross validation divides the data into n parts (folds); the classification process is repeated n times, using n-1 folds for training and 1 fold for testing (a different fold each time). Experiment-II was conducted using percentage splits of 50%, 66% and 77%; the specified percentage of the data is used for training and the remainder for testing. The performance metrics used to evaluate the proposed scheme were accuracy and root mean squared error (RMSE). Accuracy is the percentage of correctly classified instances and was calculated using Equation (11). RMSE measures the difference between the values predicted by the trained model and the actual values, as defined in Equation (12); it represents the average error over the features of a particular feature vector (Zernike Moments, geometrical, Zernike Moments + geometrical) used in the classification phase of the experiment. The results obtained are shown in Table II.

Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \times 100   (11)

where TP, FN, FP and TN represent the numbers of true positives, false negatives, false positives and true negatives, respectively.

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(p_{i} - a_{i})^{2}}   (12)

where p_{i} and a_{i} are the predicted and actual values, respectively, and n is the number of instances.
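Equations (11) and (12) translate directly into code (illustrative helpers, with p_i the predicted and a_i the actual values):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Eq. (11): percentage of correctly classified instances."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * float(np.mean(y_true == y_pred))

def rmse(actual, predicted):
    """Eq. (12): root mean squared error between predicted and actual values."""
    a = np.asarray(actual, float)
    p = np.asarray(predicted, float)
    return float(np.sqrt(np.mean((p - a) ** 2)))
```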

Geometrical features (aspect ratio, convexity, rectangularity, circularity, circularity ratio, solidity, irregularity, narrow factor, perimeter ratio of diameter and form factor) and Zernike Moments were used as the feature vector for this research work. Both experiments were conducted in three phases. In the first phase, only Zernike Moments were used for classification. In the second phase, geometrical features were used. In the third phase both types of features were combined (Zernike + geometric) and classification was performed. It was observed that the combination of both descriptors produced better results on the MPEG-7 CE-Shape-1 dataset. The best results achieved using cross validation and percentage split are compared in Table III.

The results improved when 20-fold cross validation was used over the combination of Zernike Moments and geometrical features: an accuracy of 92.45% was achieved on the selected dataset. The accuracies for 20-fold cross validation and the 77% split are plotted in Fig. 5 (a) and (b), respectively.

Geometrical features, when extracted on the dataset, produced an accuracy of 85.95%. In order to improve the results it was necessary to combine the geometrical features with another descriptor. Zernike Moments, when used alone, produced an accuracy of 88.13% with 20-fold cross validation and 88.84% with a percentage split of 77%, whereas [22] reported an accuracy of 87.223% using Zernike Moments and [13] reported 80% accuracy using Zernike Moments.

The comparison shows that the results of this research outperform previous work when Zernike Moments are used. In the work of [11], the accuracy reported on the MPEG-7 dataset was 77.24% using a different feature set. An accuracy of 43.64% was obtained by [32] using Zernike Moments with a k-NN classifier (k=3), and 51.29% with Adaboost. Shape matching using shape context [14] resulted in an accuracy of 76.51%. An accuracy of 93.32% on the MPEG-7 dataset was achieved using a diffusion process [33]. In the latest work of [10], an accuracy of 91.43% was achieved by region, skeleton and contour based descriptors on the MPEG-7 dataset using k-ELM. Compared to these approaches, the proposed use of Zernike Moments presented better results. The inverse of the original shape with an extended form of Zernike Moments was used by [8], producing better accuracy but on a small dataset, whereas the dataset considered in this research is larger and the proposed method presented excellent accuracy relative to other research.


In this paper we have proposed an approach for efficient shape classification using Zernike Moments and geometrical features. The paper addresses the problem of shape classification and retrieval and proposes a framework that combines the two types of features, exploiting the uniqueness of Zernike Moments, which, when combined with geometrical features, yields better classification results. The images were classified using five classifiers, of which the k-NN classifier turned out to be the best for shape classification. The proposed approach was evaluated on the challenging MPEG-7 CE-Shape-1 dataset and achieved a high accuracy of 92.45%, outperforming existing state-of-the-art methods.


[1] T. Dharani and I. L. Aroquiaraj, "A survey on content based image retrieval," in Pattern Recognition, Informatics and Mobile Engineering (PRIME), 2013 IEEE Conference on, 2013, pp. 485-490. doi:10.1109/ICPRIME.2013.6496719

[2] C. Iakovidou, N. Anagnostopoulos, A. C. Kapoutsis, Y. Boutalis and S. A. Chatzichristofis, "Searching images with MPEG-7 (& MPEG-7-like) powered localized descriptors: the SIMPLE answer to effective content based image retrieval," in Content-Based Multimedia Indexing, 2014 IEEE 12th International Workshop on, 2014, pp. 1-6. doi:10.1109/CMBI.2014.6849821

[3] M. Anvaripour and H. Ebrahimnezhad, "Accurate object detection using local shape descriptors," Pattern Analysis and Applications, vol. 18, no. 2, pp. 277-295, 2015. doi:10.1007/s10044-013-0342-x

[4] S. Seth, P. Upadhyay, R. Shroff and R. Komatwar, "Review of content based image retrieval systems," International Journal of Engineering Trends and Technology, vol. 19, no. 4, pp. 178-181, 2015.

[5] L. Zhao, Q. Peng and B. Huang, "Shape matching algorithm based on shape contexts," IET Computer Vision, vol. 9, no. 5, pp. 681-690, 2015. doi:10.1049/iet-cvi.2014.0159

[6] A. Barman and P. Dutta, "Facial expression recognition using shape signature feature," in Research in Computational Intelligence and Communication Networks, 2017 IEEE Third International Conference on, 2017, pp. 174-179. doi:10.1109/ICRCICN.2017.8234502

[7] G. Zhang and D. Lu, "Review of shape representation and description techniques," Pattern Recognition, vol. 37, no. 1, pp. 1-19, 2004. doi:10.1016/j.patcog.2003.07.008

[8] S. Pierard, A. Lejeune and M. V. Droogenbroeck, "Boosting shape classifiers accuracy by considering the inverse shape," Journal of Pattern Recognition Research, vol. 11, no. 1, pp. 41-54, 2016. doi:10.13176/11.727

[9] S. Sharma and P. Khanna, "Computer-aided diagnosis of malignant mammograms using zernike moments," Journal of Digital Imaging, vol. 28, no. 1, pp. 77-90, 2015. doi:10.1007/s10278-014-9719-7

[10] C. Lin, C. M. Pun, C. M. Vong and D. Adjeroh, "Efficient shape classification using region descriptors," Multimedia Tools and Applications, vol. 76, no. 1, pp. 83-102, 2017. doi:10.1007/s11042-015-3021-7

[11] M. Bicego and P. Lovato, "A bioinformatics approach to 2D shape classification," Computer Vision and Image Understanding, vol. 145, pp. 59-69, 2016. doi:10.1016/j.cviu.2015.11.011

[12] S. Battiato, G. M. Farinella, O. Giudice and G. Puglisi, "Aligning shapes for symbol classification and retrieval," Multimedia Tools and Applications, vol. 75, no. 10, pp. 5513-5531, 2016. doi:10.1007/s11042-015-2523-7

[13] S. Escalera, A. Fornes, O. Pujol, P. Radeva, G. Sanchez, et al., "Blurred shape model for binary and grey-level symbol recognition," Pattern Recognition Letters, vol. 30, no. 15, pp. 1424-1433, 2009. doi:10.1016/j.patrec.2009.08.001

[14] S. Belongie, J. Malik and J. Puzicha, "Shape matching and object recognition using shape contexts," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 509-522, 2002. doi:10.1109/34.993558

[15] D. Sharvit, J. Chan, H. Tek and B. B. Kimia, "Symmetry-based indexing of image databases," in Content-Based Access of Image and Video Libraries, 1998 IEEE Workshop on, 1998, pp. 56-62. doi:10.1109/IVL.1998.694496

[16] H. Ling and D. W. Jacobs, "Shape classification using the inner-distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 2, pp. 286-299, 2007. doi:10.1109/TPAMI.2007.41

[17] X. Bai, B. Wang, C. Yao, W. Liu and Z. Tu, "Co-transduction for shape retrieval," IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2747-2757, 2012. doi:10.1109/TIP.2011.2170082

[18] L. J. Latecki, R. Lakamper and T. Eckhardt, "Shape descriptors for non-rigid shapes with a single closed contour," in Computer Vision and Pattern Recognition, 2000 IEEE Conference on, 2000, pp. 424-429. doi:10.1109/CVPR.2000.855850

[19] C. Lin and C. M. Pun, "Robust region descriptors for shape classification," in Computer Graphics, Imaging and Visualization, 2016 International Conference on, 2016, pp. 269-272. doi: 10.1109/CGiV.2016.59

[20] C. Pillai, "A survey of shape descriptors for digital image processing," International Journal of Computer Science and Information Technology and Security, vol. 3, no. 1, pp. 44-50, 2013.

[21] I. K. Kazmi, L. You and J. J. Zhang, "A survey of 2D and 3D shape descriptors," in Computer Graphics Imaging and Visualization, 2013 Tenth International Conference on, 2013, pp. 1-10. doi:10.1109/CGIV.2013.11

[22] W. Y. Kim and Y. S. Kim, "A region-based shape descriptor using zernike moments," Signal Processing: Image Communication, vol. 16, no. 1, pp. 95-102, 2000. doi:10.1016/S0923-5965(00)00019-9

[23] M. Murat, S. W. Chang, A. Abu, H. J. Yap and K. T. Yong, "Automated classification of tropical shrub species: a hybrid of leaf shape and machine learning approach," PeerJ, vol. 5, no. e3792, pp. 1-23, 2017. doi:10.7717/peerj.3792

[24] M. Yang, K. Kpalma and J. Ronsin, "A survey of shape feature extraction techniques," Pattern Recognition Techniques, Technology and Applications, Austria, pp. 43-90, 2008.

[25] S. G. Wu, F. S. Bao, E. Y. Xu, Y. X. Wang, Y. F. Chang, et al., "A leaf recognition algorithm for plant classification using probabilistic neural network," in Signal Processing and Information Technology, 2007 IEEE International Symposium on, 2007, pp. 11-16. doi:10.1109/ISSPIT.2007.4458016

[26] J. G. Cleary and L. E. Trigg, "K*: an instance-based learner using an entropic distance measure," in Machine Learning, 1995 12th International Conference on, 1995, pp. 108-114. doi:10.1016/B978-1-55860-377-6.50022-0

[27] D. W. Aha, D. Kibler and M. K. Albert, "Instance-based learning algorithms," Machine Learning, vol. 6, no. 1, pp. 37-66, 1991. doi:10.1007/BF00153759

[28] E. Bauer and R. Kohavi, "An empirical comparison of voting classification algorithms: bagging, boosting, and variants," Machine Learning, vol. 36, no. 1, pp. 105-139, 1999. doi:10.1023/A:1007515423169

[29] T. K. Ho, "The random subspace method for constructing decision forests," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832-844, 1998. doi:10.1109/34.709601

[30] C. E. Rasmussen, "Gaussian processes in machine learning," Advanced Lectures on Machine Learning, Germany, pp. 63-71, 2004. doi:10.1007/978-3-540-28650-9_4

[31] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, et al., "The WEKA data mining software: an update," ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10-18, 2009. doi:10.1145/1656274.1656278

[32] S. Escalera, A. Fornes, O. Pujol, J. Llados and P. Radeva, "Circular blurred shape model for multiclass symbol recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 41, no. 2, pp. 497-506, 2011. doi:10.1109/TSMCB.2010.2060481

[33] X. Yang, S. Koknar-Tezel and L. J. Latecki, "Locally constrained diffusion process on locally densified distance spaces with applications to shape retrieval," in Computer Vision and Pattern Recognition, 2009 IEEE Conference on, 2009, pp. 357-364. doi:10.1109/CVPR.2009.5206844

Sameera ABBAS, Saima FARHAN, Muhammad Abuzar FAHIEM, Huma TAUSEEF

Lahore College for Women University, Lahore, Pakistan

TABLE I. DETAILS OF THE MPEG-7 CE-SHAPE-1 PART B DATASET

SN  Attributes        Values

1.  Name              MPEG-7 CE-Shape-1 Part B
2.  No. of images     1400
3.  No. of classes      70
4.  Images per class    20
5.  Resolution        Variable
6.  Format            JPEG
7.  Type              Binary images
8.  File extension    jpg


TABLE II. CLASSIFICATION ACCURACY (%) AND RMSE OF THE FIVE CLASSIFIERS FOR EACH FEATURE SET

Folds / Splits                   Classifiers        Zernike Moments
                                                    Accuracy  RMSE

                                 k-NN               85.38     10.9977
                                 K (*)              87.64      9.9985
Cross Validation with 10 Folds   Bagging            79.11     12.6642
                                 RandomSubSpace     78.13     13.0708
                                 GaussianProcesses  75.92     13.4075
                                 k-NN               86.25     10.6716
                                 K (*)              88.05      9.8376
Cross Validation with 15 Folds   Bagging            79.44     12.5445
                                 RandomSubSpace     77.66     13.1445
                                 GaussianProcesses  76.25     13.3344
                                 k-NN               85.95     10.7782
                                 K (*)              88.13      9.805
Cross Validation with 20 Folds   Bagging            79.51     12.5398
                                 RandomSubSpace     78.79     12.8805
                                 GaussianProcesses  76.13     13.3552
                                 k-NN               82.65     12.0736
                                 K (*)              81.1      12.4118
Percentage Split with 50% Split  Bagging            74.06     14.0264
                                 RandomSubSpace     72.26     14.8462
                                 GaussianProcesses  75.42     14.0551
                                 k-NN               83.03     12.094
                                 K (*)              86.8      10.3827
Percentage Split with 60% Split  Bagging            78.55     12.9272
                                 RandomSubSpace     76.66     13.6474
                                 GaussianProcesses  76.95     13.502
                                 k-NN               83.9      11.6924
                                 K (*)              88.84      9.4865
Percentage Split with 77% Split  Bagging            81.18     12.3551
                                 RandomSubSpace     78.99     12.9522
                                 GaussianProcesses  78.04     13.0421

Folds / Splits                  Classifiers        Geometrical features
                                                   Accuracy  RMSE

                                k-NN               81.03     12.4286
                                K (*)              76.09     13.6452
Cross Validation with 10 Folds  Bagging            76.3      13.276
                                RandomSubSpace     75.37     13.6538
                                GaussianProcesses  54.06     17.2428
                                k-NN               81.47     12.2878
                                K (*)              76.55     13.5243
Cross Validation with 15 Folds  Bagging            76.65     13.1799
                                RandomSubSpace     76.01     13.5852
                                GaussianProcesses  54.56     17.1873
                                k-NN               81.47     12.2936
                                K (*)              76.85     13.4305
Cross Validation with 20 Folds  Bagging            77.72     12.9933
                                RandomSubSpace     75.93     13.5632
                                GaussianProcesses  54.53     17.1844
                                k-NN               74.71     14.4807
                                K (*)              71.44     14.9807
Percentage Split with 50% Split Bagging            64.7      15.662
                                RandomSubSpace     61.97     16.1656
                                GaussianProcesses  50.49     17.8934
                                k-NN               72.82     15.1231
                                K (*)              69.97     15.3922
Percentage Split with 60% Split Bagging            70.96     14.5568
                                RandomSubSpace     70.27     14.9192
                                GaussianProcesses  55.66     17.3467
                                k-NN               72.58     15.2361
                                K (*)              72.24     14.8736
Percentage Split with 77% Split Bagging            73.05     13.9657
                                RandomSubSpace     75.37     13.7419
                                GaussianProcesses  57.41     17.0133

Folds / Splits                  Classifiers        Zernike + Geometric
                                                   Accuracy  RMSE

                                k-NN               92.12      8.0757
                                K (*)              89.79      9.1119
Cross Validation with 10 Folds  Bagging            82.07     11.9472
                                RandomSubSpace     80.62     12.4797
                                GaussianProcesses  83.72     11.3976
                                k-NN               92.32      7.972
                                K (*)              89.97      9.0298
Cross Validation with 15 Folds  Bagging            80.7      12.2575
                                RandomSubSpace     81.37     12.3223
                                GaussianProcesses  83.92     11.3262
                                k-NN               92.45      7.8965
                                K (*)              89.92      9.0571
Cross Validation with 20 Folds  Bagging            82.08     11.8849
                                RandomSubSpace     81.83     12.1778
                                GaussianProcesses  83.86     11.3405
                                k-NN               86.26     10.7573
                                K (*)              85.11     11.1222
Percentage Split with 50% Split Bagging            75.96     13.7093
                                RandomSubSpace     73.53     14.299
                                GaussianProcesses  81.61     12.4223
                                k-NN               89.27      9.5503
                                K (*)              85.41     11.0207
Percentage Split with 60% Split Bagging            80.9      12.4138
                                RandomSubSpace     78.86     13.2004
                                GaussianProcesses  83.24     11.7282
                                k-NN               90.23      9.0879
                                K (*)              87.25     10.1066
Percentage Split with 77% Split Bagging            81.96     12.0208
                                RandomSubSpace     80.17     12.6404
                                GaussianProcesses  84.32     11.2634

(*) Values are given in percentages


TABLE III. BEST RESULTS: 20-FOLD CROSS VALIDATION VS. 77% PERCENTAGE SPLIT

Classifiers                  Zernike Moments
                   Cross validation  Percentage split
                   (20 folds)        (77%)

k-NN               85.95%            83.9%
K*                 88.13%            88.84%
Bagging            79.51%            81.18%
RandomSubSpace     78.79%            78.99%
GaussianProcesses  76.13%            78.04%

Classifiers            Geometrical features
                   Cross validation  Percentage split
                   (20 folds)        (77%)

k-NN               81.47%            72.58%
K*                 76.85%            72.24%
Bagging            77.72%            73.05%
RandomSubSpace     75.93%            75.37%
GaussianProcesses  54.53%            57.41%

Classifiers                  Zernike + Geometric
                   Cross validation  Percentage split
                   (20 folds)        (77%)

k-NN               92.45%            90.23%
K*                 89.92%            87.25%
Bagging            82.08%            81.96%
RandomSubSpace     81.83%            80.17%
GaussianProcesses  83.86%            84.32%
COPYRIGHT 2019 Stefan cel Mare University of Suceava

Publication: Advances in Electrical and Computer Engineering
Date: Feb 1, 2019