
RowAMD Distance: A Novel 2DPCA-Based Distance Computation with Texture-Based Technique for Face Recognition.

1. Introduction

Biometric technology applications are growing because of increasing security needs in many aspects of life. Heitmeyer considered six different biometric technologies, and among them face recognition has several advantages [1]. It is user-friendly: facial features received the highest compatibility scores among the six technologies, and face recognition is consistent with human visual perception [2]. For these reasons, along with its potential use in law enforcement and commercial applications, it has been a research topic for decades. However, recognizing a face is a difficult task because a face is a three-dimensional object subject to variations in many factors, e.g., illumination, facial expression, aging, make-up, hair style, pose, occlusion, background and image resolution. Many approaches have been proposed to build a robust face-recognition system. One of the most successful techniques used in face recognition is Principal Component Analysis (PCA) [3-8], because of the ease of its implementation, its reasonable performance [4, 5], and its effectiveness on large databases [9].

Despite its merits, PCA still demands improvement, and many researchers have focused on improving it. However, the majority of the approaches convert the 2D image matrix to a 1D vector, and this transformation has limited the improvement of PCA [10-12]. Yang et al. proposed a novel approach, called 2DPCA, that deals with the original 2D image matrices as they are, without any conversion [11]. Using 2DPCA to find the eigenvectors of the so-called covariance matrix of the gallery images is more efficient than PCA in terms of accuracy and computational time [11, 12]. Many other approaches that use two-dimensional matrices instead of vectors have been proposed as well, such as DiaPCA [12], Selected 2DPCA Coefficients [13] and Variation 2DPCA [14].

However, most 2D image matrix-based methods concentrate on feature extraction, while methods addressing the similarity function are still quite few. In the original 2DPCA method [11], the sum of the Euclidean distances between the columns of the feature matrices is used to measure the distance between two feature matrices. However, this method does not work with the subspace as a whole; instead, it considers each dimension independently, so the matrix distance is computed dimension by dimension. Consequently, this method gives equal weight to each dimension in the subspace, even though some dimensions may contain indiscriminating information such as illumination [5, 15]; its performance is therefore independent of the training data variations. DiaPCA uses the Euclidean distance between two feature matrices [12]. However, the Euclidean distance between matrices is unsuitable for all applications [16] and it does not consider the relationships between pixels [17]. In [18], an assembled matrix distance metric is proposed to measure the distance between the feature matrices. This distance is a generalized form of the previous distances, but it also deals with each dimension of the subspace separately. Meng and Zhang [19] proposed a volume matrix as a classification measurement, called the volume distance. It is then shown in [20] that the volume distance method is superior to the other methods in terms of accuracy. It can be noted that all the previous methods deal with the feature matrix by merely considering the relationship between its columns with respect to the original face matrices; the concept of subspace is thus restrained to only one scope. Therefore, a new distance method, which utilizes the rows (instead of the columns) of the feature matrix, was introduced [21]. This method is more compatible with eigenvectors calculated from a covariance matrix constructed from the outer products of the rows of the face images.

Regardless of the successes of 2DPCA-based algorithms as holistic approaches, they are sensitive to illumination variation, which has a great impact on image appearance. As a result, the variation among images of the same class under different lighting and viewing directions can be larger than the variation due to a change of class [22-26]. Hence, many approaches, including texture-based techniques, have been suggested to overcome the problem of illumination variation in face recognition [5, 27-33]. Among these, the Local Binary Pattern (LBP) [34] has become a popular technique for face representation because it is invariant to monotonic grayscale transformations. The LBP descriptor assigns a binary string or a decimal number to a pixel of an image by thresholding the intensity values of the eight neighborhood pixels with the value of the central pixel using a 3 x 3 kernel matrix. Since then, a number of LBP variants have been offered [31, 35-42]. In [35], Jin et al. proposed Improved LBP (ILBP), which gives the central pixel the largest weight since the central pixel carries more information than its neighbors. The ILBP operator also reveals the local shape by redefining the threshold as the mean of the 3 x 3 patch. To produce a more compact binary pattern, the Center Symmetric Local Binary Pattern (CS-LBP) modifies the description of interest regions [36]: only the 4 pairs of center-symmetric pixels are compared. As a result, the coding length is reduced considerably, but the important texture information contained in the central pixel is discarded, which is considered one of the drawbacks of CS-LBP. Besides, choosing an adaptable threshold is a burdensome job [37]. To overcome these issues, an improvement to CS-LBP, called Direction LBP (D-LBP) [37], has been proposed. The D-LBP descriptor classifies the local pattern based on the relation between the center pixel and the center-symmetric pixels, i.e., the pairs of opposed pixels in a circular neighborhood.

Looking at the previous texture methods, a common property can be noticed: they consider only the gray-level disparity between the pixels in a local region, which makes the computation highly sensitive to noise [38]. To cope with this problem, Junding and Shisong introduced an improvement to D-LBP (ID-LBP) [38], which considers the relationship between the center-symmetric pixels and the local gray mean. Another direction of LBP improvement, called the Multi-scale Block Local Binary Pattern (MB-LBP), is introduced in [39]. It avoids the locality of the LBP descriptor by replacing the single-pixel comparisons with comparisons of the average gray values over blocks of sub-regions; hence, more information about the image representation is captured. More recently, a high-order local pattern descriptor called the Local Derivative Pattern (LDP) [40] was proposed for face recognition. It encodes the directional pattern features held in a particular region by extracting high-order local information. Another approach to overcoming the drawbacks of LBP, which is also more suitable for face recognition, is proposed by Jabid et al. [41]. They introduce a new local feature descriptor, called the Local Directional Pattern, which produces local features by computing the edge response values in eight directions at each pixel and generating a code from their relative strengths. A descriptor that is more discriminant and less sensitive to noise in uniform regions, called Local Ternary Patterns (LTP), is introduced by Tan and Triggs [31]. Finally, Petpon and Srisuk propose a novel face representation method called the Local Line Binary Pattern (LLBP) [42]. It summarizes the local spatial structure of an image by thresholding a local window with binary weights and taking the resulting decimal number as the texture representation. The basic idea of LLBP is to compute the horizontal and vertical line binary codes, and their magnitude, independently, so that changes in image intensity can be captured.

It can be noticed from the various proposed texture-based techniques that each performs well in compensating for illumination variation. However, there is no detailed study of how each technique performs when combined with 2DPCA for face recognition. Hence, there is a need to study the effect of texture-based techniques on 2DPCA features. Thus, in this paper, we deal with the problem of compensating for changes of illumination direction as a preprocessing stage to 2DPCA, and we explore the texture-based approaches as the features for classification with different distance computation methods [11-12, 18-19, 21]. We also propose a new distance computation method called the Row Assembled Matrix Distance (RowAMD), which is a generalized form of the row distance method [21], and we consider the LBP variants above to examine the performance of the distance computation methods. From the experiments, it is observed that the proposed RowAMD achieves the highest accuracy when combined with LLBP for face recognition on the Yale Face database, the Extended Yale Face database B, the AR database and the LFW database. The results also demonstrate that the proposed method gives the best recognition accuracy compared to the existing distance methods for the 2DPCA algorithm.

The rest of this paper is organized as follows. Section 2 briefly describes LBP and its variants. In Section 3, the 2DPCA algorithm is reviewed, the existing distance computation methods are introduced, and the proposed RowAMD distance method is described in detail. Section 4 provides experimental results and discussion. Finally, the paper is concluded in Section 5.

2. Texture-Based Methods for Face Recognition

2.1 Local Binary Pattern

The Local Binary Pattern (LBP) descriptor summarizes the local spatial structure of an image using a non-parametric 3 x 3 kernel matrix. The fundamental principle of the LBP descriptor was initially proposed by Ojala et al. [34]. It assigns a binary string or a decimal number to a pixel of an image by thresholding the intensity values of the eight neighborhood pixels with the value of the central pixel, as depicted in Fig. 1. The basic LBP encodes 256 simple feature detectors in a single 3 x 3 operator. During the LBP operation, the value of the LBP code of a pixel $(x_c, y_c)$ is given by Eq. (1),

$LBP(x_c, y_c) = \sum_{i=0}^{7} s(g_i - g_c)\,2^i$ (1)

where $g_c$ is the gray value of the center pixel $(x_c, y_c)$, $g_i$ refers to the gray values of the eight equally spaced neighboring pixels, and $s(\cdot)$ is the thresholding function defined in Eq. (2).

$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}$ (2)

LBP is obtained by first concatenating these binary numbers and then converting the sequence into the decimal representation.
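As a concrete illustration, the following minimal NumPy sketch implements Eqs. (1) and (2) for a single pixel. The clockwise neighbor ordering is our assumption; the descriptor only requires that a fixed ordering be used.

```python
import numpy as np

def lbp_code(img, r, c):
    """Basic 3 x 3 LBP code of pixel (r, c), per Eqs. (1)-(2)."""
    g_c = img[r, c]
    # the eight neighbors, read clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for i, (dr, dc) in enumerate(offsets):
        g_i = img[r + dr, c + dc]
        code |= (1 if g_i >= g_c else 0) << i   # s(g_i - g_c) * 2^i
    return code

# e.g., the LBP image over all interior pixels of a grayscale array `face`:
# lbp = np.array([[lbp_code(face, r, c)
#                  for c in range(1, face.shape[1] - 1)]
#                 for r in range(1, face.shape[0] - 1)])
```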

2.2 Multi-scale Block Local Binary Pattern

Liao et al. introduced the Multi-scale Block Local Binary Pattern (MB-LBP) as an extension to the basic LBP because it provides a more complete image representation, encoding not only the microstructures but also the macrostructures of image patterns [39]. Their experiments show that the MB-LBP method significantly outperforms other LBP-based face recognition algorithms. In MB-LBP, the comparison between single pixels in LBP is simply replaced by a comparison between the average gray values of square blocks, as shown in Fig. 2.

The size of the kernel can be 3 x 3, 9 x 9, 15 x 15 and so on (3 x 3 MB-LBP is equivalent to the original LBP). An output value of the MB-LBP descriptor can be obtained from Eq. (3),

$MB\text{-}LBP = \sum_{i=0}^{7} s(g_i - g_c)\,2^i$ (3)

where $g_c$ is the average gray value of the central block, $g_i$ is the average gray value of the i-th neighboring block, and $s(\cdot)$ is the same function as in Eq. (2).
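A minimal sketch of Eq. (3) follows; it reuses the bit layout of the LBP sketch above and assumes a kernel of size 3s x 3s for a block side length s (so s = 3 gives the 9 x 9 kernel used later in the experiments):

```python
import numpy as np

def mb_lbp_code(img, r, c, s=3):
    """MB-LBP code per Eq. (3) for the 3s x 3s kernel centered at (r, c)."""
    def block_mean(br, bc):
        # average gray value of the s x s block centered at (br, bc)
        return img[br - s // 2: br + s // 2 + 1,
                   bc - s // 2: bc + s // 2 + 1].mean()
    g_c = block_mean(r, c)
    # centers of the eight neighboring blocks, clockwise from top-left
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s),
               (s, s), (s, 0), (s, -s), (0, -s)]
    code = 0
    for i, (dr, dc) in enumerate(offsets):
        code |= (1 if block_mean(r + dr, c + dc) >= g_c else 0) << i
    return code
```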

2.3 Local Line Binary Pattern

Petpon and Srisuk proposed a face representation technique named the Local Line Binary Pattern (LLBP) [42]. Its descriptor has horizontal and vertical components; the magnitude of LLBP is obtained from the binary codes of both components, as illustrated in Fig. 3. The descriptors are mathematically defined in Eqs. (4) to (7),

$LLBP_h(N, c) = \sum_{n=1}^{c-1} s(h_n - h_c)\,2^{c-n-1} + \sum_{n=c+1}^{N} s(h_n - h_c)\,2^{n-c-1}$ (4)

$LLBP_v(N, c) = \sum_{n=1}^{c-1} s(v_n - v_c)\,2^{c-n-1} + \sum_{n=c+1}^{N} s(v_n - v_c)\,2^{n-c-1}$ (5)

$LLBP_m = \sqrt{LLBP_h^2 + LLBP_v^2}$ (6)

$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}$ (7)

where $LLBP_h$, $LLBP_v$ and $LLBP_m$ are the LLBP in the horizontal direction, the LLBP in the vertical direction, and its magnitude, respectively. N is the length of the line in pixels, $h_n$ is the pixel intensity along the horizontal line, $v_n$ is the pixel intensity along the vertical line, and c = N/2 is the position of the center pixel, $h_c$ on the horizontal line and $v_c$ on the vertical line. One of the benefits of the LLBP descriptor is that it emphasizes changes in image intensity such as vertices, edges and corners.
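The sketch below follows Eqs. (4)-(7) directly; zero-based array indexing replaces the paper's one-based line positions, and boundary handling is omitted for brevity:

```python
import numpy as np

def llbp_magnitude(img, r, c, N=13):
    """LLBP magnitude at pixel (r, c) for a line of N pixels, Eqs. (4)-(7)."""
    cc = N // 2                          # zero-based index of the center pixel
    h = img[r, c - cc: c - cc + N]       # horizontal line through (r, c)
    v = img[r - cc: r - cc + N, c]       # vertical line through (r, c)

    def line_code(line):
        center = line[cc]
        code = 0
        for n in range(cc):              # left of center, weight 2^(c-n-1)
            code |= (1 if line[n] >= center else 0) << (cc - n - 1)
        for n in range(cc + 1, N):       # right of center, weight 2^(n-c-1)
            code |= (1 if line[n] >= center else 0) << (n - cc - 1)
        return code

    llbp_h, llbp_v = line_code(h), line_code(v)
    return np.sqrt(llbp_h ** 2 + llbp_v ** 2)    # magnitude, Eq. (6)
```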

3. Two-Dimensional Principal Component Analysis

3.1 Overview of Two-Dimensional Principal Component Analysis (2DPCA)

Suppose a random image matrix A of dimension m x n is projected onto an n-dimensional unitary column vector X by the linear transformation shown in Eq. (8).

Z = AX (8)

Z is called the projected feature vector of the image A. The best projection vector X, which maximizes the total scatter of the projected samples, can be determined by the trace of the covariance matrix criterion shown in Eq. (9),

$J(X) = tr(S_x)$ (9)

where J(X) is the trace of $S_x$, and $S_x$ is the covariance matrix of the projected feature vectors of the training images. Thus, $S_x$ can be denoted as in Eq. (10).

$S_x = E\left[(Z - EZ)(Z - EZ)^T\right]$ (10)

Here, E denotes the expectation operator and EZ is the mean of the projected feature vectors. Hence,

$tr(S_x) = X^T E\left[(A - EA)^T (A - EA)\right] X$ (11)

The covariance matrix of the M training face images, symbolized as $G_t$, is defined as in Eq. (12).

$G_t = \frac{1}{M} \sum_{i=1}^{M} (A_i - \bar{A})^T (A_i - \bar{A})$ (12)

Here, $A_i$ is the i-th training image, M is the number of training images and $\bar{A}$ is the average image of all training data. By calculating the eigenvectors of $G_t$ and choosing the eigenvectors $X_1, \ldots, X_d$ corresponding to the largest eigenvalues, we get the optimal projection axes. Since the size of $G_t$ is only n x n, computing its eigenvectors is more efficient than in PCA. The feature matrix of each image in the gallery is extracted by multiplying the image by the chosen eigenvectors, as given in Eq. (8); the same projection is then applied to the test image. Finally, the nearest neighbor rule is used for classification.
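To make the training procedure of Eqs. (8)-(12) concrete, here is a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def train_2dpca(images, d):
    """images: (M, m, n) array of gallery faces; returns the n x d matrix
    whose columns are the d leading eigenvectors of G_t, Eq. (12)."""
    A_bar = images.mean(axis=0)                      # average image
    G_t = sum((A - A_bar).T @ (A - A_bar) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G_t)           # G_t is symmetric
    order = np.argsort(eigvals)[::-1]                # largest eigenvalues first
    return eigvecs[:, order[:d]]

def feature_matrix(A, X):
    """Projected feature matrix Y = A X of Eq. (8); Y is m x d."""
    return A @ X
```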

3.2 Distance Computation Methods in 2DPCA

Six distance computation methods for 2DPCA are used in this work. Each of them is explained below.

3.2.1 Two-dimensional (2D) Euclidean Distance

The 2D Euclidean Distance computation method is defined in Eq. (13),

$d_{2D}(Y_t, Y_i) = \|Y_t - Y_i\|_2$ (13)

where $Y_t$ is the feature matrix of a test image and $Y_i$ is the feature matrix of a training image.

3.2.2 Column Euclidean Distance

The Column Euclidean Distance computation method is defined in Eq. (14),

$d_{col}(Y_t, Y_i) = \sum_{c=1}^{d} \|y_c^t - y_c^i\|_2$ (14)

where $Y_t$ is the feature matrix of a test image, $Y_i$ is the feature matrix of a training image, d is the number of columns and $y_c^t$, $y_c^i$ are the c-th column vectors of the test and training feature matrices, respectively.
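A minimal sketch of Eqs. (13) and (14) follows; reading the matrix 2-norm of Eq. (13) as the Frobenius norm of the difference is our assumption, since the source does not spell out the matrix norm:

```python
import numpy as np

def d_2d(Yt, Yi):
    """2D Euclidean distance, Eq. (13): Frobenius norm of the difference."""
    return np.linalg.norm(Yt - Yi)

def d_col(Yt, Yi):
    """Column Euclidean distance, Eq. (14): sum of column-wise 2-norms."""
    return np.linalg.norm(Yt - Yi, axis=0).sum()
```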

3.2.3 Assembled Matrix Distance (AMD)

The Assembled Matrix Distance computation method is defined in Eq. (15),

$d_{AMD}(Y_t, Y_i) = \left( \sum_{c=1}^{d} \|y_c^t - y_c^i\|_2^{\,p} \right)^{1/p}$ (15)

where $Y_t$ is the feature matrix of a test image, $Y_i$ is the feature matrix of a training image, d is the number of columns and $y_c^t$, $y_c^i$ are the c-th column vectors of the test and training feature matrices, respectively. p is a control variable, which is computed empirically and is equal to 0.125.
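A sketch of Eq. (15) under the reconstruction above; the placement of the exponent p follows our reading of [18] and should be treated as an assumption:

```python
import numpy as np

def d_amd(Yt, Yi, p=0.125):
    """Assembled Matrix Distance, Eq. (15)."""
    col_norms = np.linalg.norm(Yt - Yi, axis=0)   # one 2-norm per column
    return (col_norms ** p).sum() ** (1.0 / p)
```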

3.2.4 Volume Distance

The Volume Distance computation method is defined in Eq. (16),

$d_{vol}(Y_t, Y_i) = \sqrt{\det\left( (Y_t - Y_i)^T (Y_t - Y_i) \right)}$ (16)

where $Y_t$ is the feature matrix of a test image and $Y_i$ is the feature matrix of a training image.
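A sketch, assuming the volume-of-a-matrix definition of [19] (the square root of the Gram determinant of the difference matrix, for an m x d feature matrix with d <= m):

```python
import numpy as np

def d_vol(Yt, Yi):
    """Volume distance, Eq. (16): 'volume' of the difference matrix."""
    D = Yt - Yi
    return np.sqrt(np.linalg.det(D.T @ D))   # d x d Gram determinant
```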

3.2.5 Row Euclidean Distance

The Row Euclidean Distance computation method is defined in Eq. (17),

$d_{row}(Y_t, Y_i) = \sum_{r=1}^{m} \|y_r^t - y_r^i\|_2$ (17)

where $Y_t$ is the feature matrix of a test image, $Y_i$ is the feature matrix of a training image, m is the number of rows and $y_r^t$, $y_r^i$ are the r-th row vectors of the test and training feature matrices, respectively.
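This is the row counterpart of Eq. (14), sketched the same way:

```python
import numpy as np

def d_row(Yt, Yi):
    """Row Euclidean distance, Eq. (17): sum of row-wise 2-norms."""
    return np.linalg.norm(Yt - Yi, axis=1).sum()
```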

3.2.6 RowkNN Euclidean Distance

This distance uses the same idea of row features, with a kNN-like computation over rows. Considering face images, it can be noticed from the row distance that some rows in the feature matrix may be largely redundant with other rows; consequently, not all rows need to be used for classification. The simplest way of choosing rows is to take one row and then skip j row(s), because two consecutive rows usually carry almost the same information. Besides, some areas of face images, such as the forehead, are highly similar across different faces, while others are prone to face variations such as occlusion. For this reason, the k-nearest neighbor rule is applied to each candidate row: kNN between rows is used, and a sub-feature matrix is selected depending on the value of j.
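The description in [21] leaves some implementation details open; the sketch below is one plausible reading, in which every selected row votes for its k nearest gallery matrices (the row skip j and neighbor count k are illustrative values, not the tuned settings of [21]):

```python
import numpy as np

def classify_row_knn(Yt, gallery, j=1, k=3):
    """RowkNN-style classification sketch: gallery is a list of feature
    matrices; every (j+1)-th row of Yt votes for its k nearest matrices."""
    votes = np.zeros(len(gallery))
    for r in range(0, Yt.shape[0], j + 1):        # take a row, skip j rows
        d = np.array([np.linalg.norm(Yt[r] - Yg[r]) for Yg in gallery])
        votes[np.argsort(d)[:k]] += 1             # k nearest rows vote
    return int(np.argmax(votes))                  # index of the best match
```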

3.3 Row Assembled Matrix Distance (RowAMD)

Since the feature produced by 2DPCA-based algorithms is a matrix, alternative distance methods can be devised; one of them is to utilize the rows of the feature matrix. Besides this, it is shown in [18] that using a control variable p with 2DPCA-based algorithms is more efficient, because p generalizes the matrix norm and suppresses the effect of outlier values. These works motivated us to define a new distance computation method that combines the control variable p with the row distance, called the RowAMD distance. Since the feature is still the same matrix and the only mathematical difference is that the summation runs over rows instead of columns, there is no need to prove again that RowAMD is a matrix norm. Hence, the new distance can be defined as in Eq. (18).

$d_{RowAMD}(Y_t, Y_i) = \left( \sum_{r=1}^{m} \|y_r^t - y_r^i\|_2^{\,p} \right)^{1/p}$ (18)

Here, $Y_t$ is the feature matrix of a test image, $Y_i$ is the feature matrix of a training image, m is the number of rows and $y_r^t$, $y_r^i$ are the r-th row vectors of the test and training feature matrices, respectively. p is a control variable, which is computed empirically over different databases and is set to 0.47 in our work.
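Under the same reconstruction as Eq. (15), the proposed distance is a few lines of NumPy; a nearest-neighbor classifier then assigns the test image to the gallery feature matrix with the smallest RowAMD value:

```python
import numpy as np

def d_row_amd(Yt, Yi, p=0.47):
    """Proposed RowAMD distance, Eq. (18)."""
    row_norms = np.linalg.norm(Yt - Yi, axis=1)   # one 2-norm per row
    return (row_norms ** p).sum() ** (1.0 / p)

# nearest-neighbor classification over a gallery of feature matrices:
# best = min(range(len(gallery)), key=lambda i: d_row_amd(Yt, gallery[i]))
```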

4. Experiments and Discussions

4.1 Yale B and Extended Yale B Face Databases

In this section, we validate the performance of the proposed distance method with several texture-based techniques on two standard face databases, namely the Yale Face Database B [43] and the Extended Yale Face Database B [44]. We also compare the proposed distance method with state-of-the-art distance methods for the 2DPCA algorithm. The data format of the two databases is the same: Yale B comprises 10 subjects, while the Extended Yale Face Database B contains 16,128 images of 28 subjects. Fig. 4 shows example face images of one subject from the Yale Face Database B.

In our experiment, the Yale B Face Database and its extended version are combined into one large database so that the robustness of the proposed technique can be evaluated. The new database consists of 2,432 frontal face images (excluding ambient images) of 38 subjects. From these 2,432 images, 25 corrupted and bad images are discarded. The corrupted images are those for which, during acquisition, there was a small imbalance in the intensities of the odd and even fields in each frame; the bad images are those for which the corresponding strobe did not go off. Fig. 5 shows examples of corrupted and bad images. For the training set, a single image from each of the 38 subjects is selected; the remaining 2,369 images are used as test images.

Table 1 shows the recognition rates using PCA with a nearest neighbour (NN) classifier (PCA-NN) and 2DPCA with the different distance computation methods for several texture-based techniques. The texture-based techniques were tested with different parameter values; the best recognition rates, with the corresponding parameter values, are depicted in Table 1. It can be noticed from Table 1 that RowAMD and the volume distance are the most robust methods when tested with different texture-based techniques. However, the best accuracy is given by the proposed method combined with the LLBP method.

The results demonstrate how sensitive PCA-based and 2DPCA-based algorithms are to illumination variation. This sensitivity limits PCA-based methods in real applications, which are exposed to illumination variation. The results also show that by using texture-based approaches as a preprocessing step, the performance of PCA and 2DPCA can be enhanced. Table 1 shows that 2DPCA gives better rates than PCA in almost all cases, and again that the proposed method (RowAMD) and the volume distance are the most robust methods across the different texture-based techniques, with the best overall accuracy given by RowAMD. It is obvious from Table 1 that LLBP is the best texture-based technique, increasing the accuracy about 2.7 times compared to the raw data; MB-LBP is the second best. On the raw data, the row-based distance methods give the worst accuracy rates compared to the others. This is because the illumination variations affect almost all rows of the face images, making it difficult to get good results. Nevertheless, when the effect of these variations is removed by the different texture-based methods, their performance is enhanced and they become more effective than the others. This is because a row of the feature matrix in 2DPCA corresponds to the same row of the original face image: we find the principal components that give the most correlated features while still tracing some of the geometric properties of the face images. This explains the robustness of the proposed RowAMD. It can also be noted that all faces in this database are frontal. Hence, LLBP is the best texture-based technique when there are no face variations other than illumination, which means it is most suitable for face authentication, where there is cooperation from the users.

4.2 AR Face Database

AR face database contains 2,600 warped frontal color images of 100 individuals [45]. Each subject has 26 different images recorded in two different sessions separated by two weeks. Each session consists of 13 images of neutral expression, smile, anger, scream, left light on, right light on, all side lights on, wearing sunglasses, wearing sunglasses and left light on, wearing sunglasses and right light on, wearing scarf, wearing scarf and left light on, wearing scarf and right light on. The images were taken in controlled conditions of illumination and viewpoint using the same camera. All images were converted to grayscale with the size of 165 x 120. Some examples of AR database are shown in Fig. 6.

In this experiment, the neutral expression images are used for training and the other face variations are used for testing. In addition, the PCA results have been omitted because PCA shows poor recognition rates in all cases compared to 2DPCA, as illustrated in Table 1 for the Yale B and Extended Yale B Face Databases. Table 2 shows the 2DPCA recognition rates with the different distance methods for several texture-based techniques. It can be noticed clearly from Table 2 that the RowAMD distance is the most robust method when tested with different texture-based techniques. Here, the best accuracy is given by the proposed method combined with the MB-LBP method.

It can be noted from Table 2 that the best results are obtained with the MB-LBP method compared to the raw data and the other texture-based techniques. It is also obvious that MB-LBP combined with the proposed method gives the best recognition on this particular database, increasing the accuracy about two times compared with the raw data. RowkNN combined with MB-LBP has a recognition rate comparable to the proposed method. This is because the face images are well aligned, which increases the possibility of matching rows of the same class; the effects of illumination variations are relieved by the texture-based techniques; and the effect of wearing sunglasses and a scarf is relieved by skipping some outlier feature matrix rows. Here again, the face images are frontal, but other face variations, wearing sunglasses and a scarf, are incorporated along with the illumination changes. This represents a lack of cooperation from the users, who try to hide their identities; thus, this setting is more representative of face identification.

4.3 Labeled Faces in the Wild (LFW) Face Database

Labeled Faces in the Wild (LFW) [46] was developed by the University of Massachusetts, Amherst. It contains 13,233 images of 5,749 individuals with large pose, occlusion, illumination and expression variations. The database is a collection of images of famous people from the internet, and its purpose is to study the face recognition problem in an unconstrained environment. There are 1,680 persons with two or more distinct images in the database; the remaining people have just one image. The images are in JPEG format with size 250 x 250 each. A few images are grayscale, but most are color. A sub-database is derived from this data set: people who have 10 or more images are selected, and 10 images of each are chosen. Thus, the new database consists of 100 individuals with 10 images each. The images are cropped, resized to 80 x 60 and aligned using [47]. Some examples from the LFW face database are shown in Fig. 7.

In this experiment, the first image of each class is used for training, while the remaining images are used for testing. As in the AR database experiment, the PCA results have been omitted due to their poor performance. Table 3 shows the 2DPCA recognition rates with the different distance methods for several texture-based techniques. From Table 3, it can be distinctly noted that the proposed RowAMD distance method shows significant robustness with different texture-based techniques. In addition, the best accuracy is again given by the RowAMD distance method combined with the MB-LBP method.

It can be noticed from Table 3 that the best results are again obtained with the MB-LBP method compared with the raw data and the other texture-based techniques. It is also obvious that MB-LBP combined with the proposed RowAMD gives the best recognition on this particular database, increasing the accuracy about two times compared with the raw data. The results clearly show the robustness of RowAMD with different texture-based methods. It can also be noticed from the results in Tables 1, 2 and 3 that RowAMD combined with texture-based techniques in the 2DPCA subspace is superior to the other distance methods. These results also demonstrate the sensitivity of 2DPCA to illumination variations: alleviating the effect of illumination variations using texture-based techniques combined with 2DPCA improves face recognition performance compared to the raw data.

From Table 1, it is evident that LLBP outperforms the other texture-based methods. However, when tested on the AR and LFW databases, which have other face variations in addition to illumination variations, MB-LBP gives better results. This is because LLBP is more descriptive of small changes (microstructure), and is therefore more affected by small changes under different face variations. In comparison, MB-LBP describes the texture pattern of both the microstructures and the macrostructures of an image, and hence represents a more complete image pattern. As a result, it can be concluded that the RowAMD distance method is best used with LLBP for face authentication problems, where the main issue is illumination variation; for face identification, the best choice is to combine RowAMD with MB-LBP.

5. Conclusion

While PCA-based and 2DPCA-based methods are considered among the most successful algorithms for face recognition, they are still very sensitive to illumination variations. Texture-based methods can be used to enhance their performance in recognition systems. In this paper, different texture-based techniques are examined to determine the most suitable one with 2DPCA. In addition, a new distance method called Row Assembled Matrix Distance (RowAMD) is proposed. From the experiments, LLBP can be considered as the best texture-based technique among several texture-based techniques for face authentication. The recognition rate is increased about 2.7 times compared to raw data, i.e., without any face descriptor. Additionally, MB-LBP is observed to be better for face identification. It increases the recognition rate about two times compared to raw data. The results also show the robustness of the proposed RowAMD with several texture-based techniques while confirming that it offers better accuracy than existing methods.

Acknowledgement

This research is fully supported by the Malaysia Ministry of Higher Education Fundamental Research Grant Scheme (FRGS) No. 203/PELECT/6071294.

References

[1] Heitmeyer, R. "Biometric Identification Promises Fast and Secure processing of Airline Passengers," The International Civil Aviation Organization Journal, 55(9): 10-11, 2000.

[2] MRTD, Machine-readable travel documents (MRTD), 2000. Available from: <http://www.icao.int/mrtd/overview/overview.cfm>.

[3] Sirovich, L. and Kirby, M., "Low-dimensional Procedure for the Characterization of Human Faces," JOSA A, 4(3): 519-524, 1987. Article (CrossRef Link)

[4] Moon, H. and Phillips, P. J., "Computational and Performance Aspects of PCA-Based Face-Recognition Algorithms," Perception-London, 30(3): 303-322, 2001. Article (CrossRef Link)

[5] Tjahyadi, R., Liu, W. and Venkatesh, S., "Automatic Parameter Selection for Eigenfaces," Pacific Journal of Optimization, 2(2): 277-288, 2006.

[6] Meng, H. and Ke, X., "Further Research on Principal Component Analysis Method of Face Recognition," in Proc. of IEEE International Conference on Mechatronics and Automation, pp. 421-425. 2008. Article (CrossRef Link)

[7] Barnouti, N. H., Al-Dabbagh, S. S. M., and Al-Bamarni, M. H. J., "Real-Time Face Detection and Recognition using Principal Component Analysis (PCA)- back Propagation Neural Network (BPNN) and Radial Basis Function (RBF)," Journal of Theoretical and Applied Information Technology, 91(1): 28-34, 2016.

[8] Turk, M. A., & Pentland, A. P., "Face recognition using eigenfaces," in Proc. of IEEE Proceedings CVPR'91., IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991. Article (CrossRef Link)

[9] Zhao, W. and Chellappa, R., Face Processing: Advanced Modeling and Methods, Inc. Academic Press, 2006.

[10] Wiskott, L., Fellous, J. M., Kuiger, N. and Von der Malsburg, C. "Face Recognition by Elastic Bunch Graph Matching," Computer Analysis of Images and Patterns, Springer, pp. 456-463, 1997. Article (CrossRef Link)

[11] Yang, J., Zhang, D., Frangi, A. F. and Yang, J. Y., "Two-dimensional PCA: A New Approach To Appearance-Based Face Representation and Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1): 131-137, 2004. Article (CrossRef Link)

[12] Zhang, D., Zhou, Z. H. and Chen, S., "Diagonal Principal Component Analysis for Face Recognition," Pattern Recognition, 39(1): 140-142, 2006. Article (CrossRef Link)

[13] Koerich, A., Oliveira, L. S. and Britto Jr, A., "Face Recognition Using Selected 2DPCA Coefficients," in Proc. of IWSSIP 17th International Conference on Systems, Signals and Image Processing, pp. 490-494, 2010.

[14] Zeng, Y., Feng, D. and Xiong, L., "An Algorithm of Face Recognition Based on The Variation of 2DPCA," Journal of Computational Information Systems 7(1): 303-310, 2010.

[15] Draper, B. A., Yambor, W. S. and Beveridge, J. R., "Analyzing PCA-Based Face Recognition Algorithms: Eigenvector Selection and Distance Measures," Empirical Evaluation Methods in Computer Vision, 2002. Article (CrossRef Link)

[16] Meyer, C., Matrix Analysis and Applied Linear Algebra, Vol. 2, Society for Industrial and Applied Mathematics, 2000. Article (CrossRef Link)

[17] Liwei, W., Yan, Z. and Jufu, F., "On The Euclidean Distance of Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8): 1334-1339, 2005. Article (CrossRef Link)

[18] Zuo, W., Zhang, D. and Wang, K., "An Assembled Matrix Distance Metric for 2DPCA-Based Image Recognition," Pattern Recognition Letters, 27(3): 210-216, 2006. Article (CrossRef Link)

[19] Meng, J. and Zhang, W., "Volume Measure in 2DPCA-Based Face Recognition," Pattern Recognition Letters, 28(10): 1203-1208, 2007. Article (CrossRef Link)

[20] Xu, Z., Zhang, J. and Dai, X., "Boosting for learning a similarity measure in 2DPCA based face recognition," World Congress on Computer Science and Information Engineering, Vol. 7, IEEE, pp. 130-134, 2009. Article (CrossRef Link)

[21] Al-Arashi, W. and Suandi, S., "Row-KNN Distance Computation in 2DPCA Based for Face Recognition," International Journal of Academic Research Part A, 5(1): 139-145, 2013. Article (CrossRef Link)

[22] Heusch, G., Cardinaux, F. and Marcel, S., "Lighting Normalization Algorithms for Face Verification," IDIAP Communication (IDIAP-Com) 3, 2005.

[23] Phillips, P.J. Moon, H., Rauss, P. and Rizvi S. A., "The Feret Evaluation Methodology for Face-Recognition Algorithms," Image and Vision Computing 16(5): 295-306, 1998. Article (CrossRef Link)

[24] Kim, T. K. and Kittler, J., "Locally Linear Discriminant Analysis for Multimodally Distributed Classes for Face Recognition with A Single Model Image," IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3): 318-327, 2005. Article (CrossRef Link)

[25] Zhao, W., Chellappa, R., Phillips, P. J. and Rosenfeld, A., "Face recognition: A Literature Survey," ACM Computing Surveys 35(4): 399-458, 2003. Article (CrossRef Link)

[26] Moses, Y., Adini, Y. and Ullman, S., "Face Recognition: The Problem of Compensating for Changes in Illumination Direction," in Proc. of the 3rd European Conference on Computer Vision, Lecture Notes in Computer Science, Springer, pp. 286-296, 1994. Article (CrossRef Link)

[27] Anbang, X., Xin, J., Yugang, J. and Ping, G., "Complete Two-dimensional PCA for Face Recognition," in Proc. of 18th International Conference on Pattern Recognition (ICPR 2006), Vol. 3, pp. 481-484, 2006. Article (CrossRef Link)

[28] Shao, M. and Wang, Y., "Joint Features for Face Recognition Under Variable Illuminations," in Proc. of Fifth International Conference on Image and Graphics, IEEE Computer Society, pp. 922-927, 2009. Article (CrossRef Link)

[29] Wang, Y., Zhang, L., Liu, Z., Hua, G.,Wen, Z., Zhang, Z. and Samaras, D., "Face Relighting From A Single Image Under Arbitrary Unknown Lighting Conditions," IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11): 1968-1984, 2009. Article (CrossRef Link)

[30] O'Toole, A. J., Phillips, P. J., Jiang, F., Ayyad, J., Penard, N. and Abdi, H., "Face Recognition Algorithms Surpass Humans Matching Faces Over Changes in Illumination," IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9): 1642-1646, 2007. Article (CrossRef Link)

[31] Tan, X. and Triggs, B., "Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions," IEEE Transactions Image Processing, 19(6): 1635-1650, 2010. Article (CrossRef Link)

[32] Mian, A., "Illumination Invariant Recognition and 3D Reconstruction of Faces Using Desktop Optics," Optics Express, 19(8): 7491-7506, 2011. Article (CrossRef Link)

[33] Bozorgtaber, B., Azami, H. and Noorian, F., "Illumination Invariant Face Recognition using Fuzzy lda and ffnn," Journal of Signal and Information Processing, 3(1):45-50, 2012. Article (CrossRef Link)

[34] Ojala, T., Pietikainen, M. and Harwood, D., "A Comparative Study of Texture Measures with Classification Based On Featured Distributions," Pattern Recognition 29(1): 51-59, 1996. Article (CrossRef Link)

[35] Jin, H., Liu, Q., Lu, H. and Tong, X., "Face Detection using Improved LBP Under Bayesian Framework," International Conference on Image and Graphics, IEEE Computer Society, Los Alamitos, CA, USA, pp. 306-309, 2004. Article (CrossRef Link)

[36] Heikkila, M., Pietikainen, M. & Schmid, C., "Description of Interest Regions with Local Binary Patterns," Pattern Recognition, 42(3): 425-436, 2009. Article (CrossRef Link)

[37] Xiaosheng, W. and Junding, S., "An Effective Texture Spectrum Descriptor," in Proc. of IEEE 5th International Conference on Information Assurance and Security (IAS'09), Vol. 2, pp. 361-364, 2009. Article (CrossRef Link)

[38] Junding, S. Shisong, Z. and Xiaosheng, W., "Image Retrieval Based On An Improved CS-LBP Descriptor," in Proc. of The 2nd IEEE International Conference on Information Management and Engineering (ICIME), pp. 115-117, 2010. Article (CrossRef Link)

[39] Liao, S. Zhu, X., Lei, Z., Zhang, L., and Li, S. Z., "Learning Multi-scale Block Local Binary Patterns for Face Recognition," in Proc. of 2nd IAPR/IEEE International Conference on Biometrics, Lecture Notes on Computer Science, 4642, pp. 828-837, 2007. Article (CrossRef Link)

[40] Zhang, B. Gao, Y., Zhao, S. and Liu, J., "Local Derivative Pattern Versus Local Binary Pattern: Face Recognition with High-order Local Pattern Descriptor," IEEE Transactions Image Processing 19(2): 533-544, 2010. Article (CrossRef Link)

[41] Jabid, T., Kabir, M. H. and Chae, O., "Local Directional Pattern (LDP) for Face Recognition," in Proc. of Consumer Electronics (ICCE), International Conference on Digest of Technical Papers, pp. 329-330, 2010. Article (CrossRef Link)

[42] Petpon, A. and Srisuk, S., "Face Recognition with Local Line Binary Pattern," in Proc. of Fifth International Conference on Image and Graphics (ICIG'09), IEEE Computer Society, Washington, DC, USA, pp. 533-539, 2009. Article (CrossRef Link)

[43] Georghiades, A. S., Belhumeur, P. N. and Kriegman, D. J., "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6): 643-660, 2001. Article (CrossRef Link)

[44] Lee, K. C., Ho, J. and Kriegman, D. J., "Acquiring Linear Subspaces for Face Recognition under Variable Lighting," IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5):684-698, 2005. Article (CrossRef Link)

[45] Martinez, A. M. and Kak, A. C., "PCA Versus LDA," IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2): 228-233, 2001. Article (CrossRef Link)

[46] Huang, G. B., Mattar, M., Berg, T. and Learned-Miller, E., "Labeled Faces in The Wild: A Database for Studying Face Recognition in Unconstrained Environments," Technical Report 07-49, University of Massachusetts, Amherst, 2007.

[47] Peng, Y., Ganesh, A., Wright, J., Xu,W. and Ma, Y., "RASL: Robust Alignment by Sparse and Low-Rank Decomposition for Linearly Correlated Images," IEEE Transactions on Pattern Analysis and Machine Intelligence 34(11): 2233-2246, 2012. Article (CrossRef Link)

Waled Hussein Al-Arashi (1), Chai Wuh Shing (2) and Shahrel Azmin Suandi (2)

(1) Electronics Engineering Department, University of Science and Technology, Sana'a, Yemen

[e-mail: w.alarashi@ust.edu]

(2) Intelligent Biometric Group, School of Electrical and Electronic Engineering, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang, Malaysia

[E-mail:seng177@gmail.com, shahrel@usm.my]

(*) Corresponding author: Shahrel Azmin Suandi

Waled Hussein Al-Arashi is currently an Assistant Professor of Computer Engineering at the University of Science and Technology, Sana'a, Yemen. He received his B.Sc. degree in Computer and Control Engineering from Sana'a University, Yemen, in June 2004, his M.Sc. degree in Embedded Systems from Yarmouk University, Jordan, in September 2006, and his Ph.D. degree in Imaging from Universiti Sains Malaysia in April 2014. Since 2014, he has worked as an assistant professor in the Electronics Engineering Department, University of Science and Technology, Yemen. His research interests include face recognition, computer vision, pattern recognition, and neural computation.

Chai Wuh Shing received the B.Eng. degree in Electronic Engineering and M.Eng. degree from Universiti Sains Malaysia in 2010 and 2013, respectively. He is currently an entrepreneur in electronic and computer vision systems products.

Shahrel Azmin Suandi received his B.Eng. in Electronic Engineering, M.Eng. and D.Eng. degrees in Information Science from Kyushu Institute of Technology, Fukuoka, Japan, in 1995, 2003 and 2006, respectively. He is currently an Associate Professor and the Coordinator of Intelligent Biometric Group (IBG) at School of Electrical and Electronic Engineering, Universiti Sains Malaysia (USM), Engineering Campus, Penang, Malaysia. He is also the Editor at Institute of Postgraduate Study, USM. Prior to joining the university, he worked as an engineer at Sony Video (M) Sdn. Bhd. and Technology Park Malaysia Corporation Sdn. Bhd. for almost six years. His current research interests are face based biometrics, real-time object detection and tracking, and pattern classification. He has served as a reviewer to well-known international conferences and journals such as WACV, AVSS, IET Computer Vision, IET Biometrics, SPIE Journal of Electronic Imaging, Applied Soft Computing, Multimedia Tools and Applications and others.

Received January 31, 2017; revised April 20, 2017; revised June 2, 2017; accepted July 24, 2017; published November 30, 2017
Table 1. Recognition rates using different texture-based techniques and distance computation methods with PCA and 2DPCA on Yale B and Extended Yale B database (PCA-NN is PCA with a nearest neighbour classifier; the remaining columns are 2DPCA distance methods)

Technique  Parameters        PCA-NN  Euclidean  Column  AMD    Volume  Row    RowkNN  RowAMD

Raw        -                 32.04   32.04      38.30   42.31  47.33   32.08  30.28   32.04
LBP        P=8, R=1          49.29   50.46      51.38   52.17  52.84   51.00  50.00   51.37
ILBP       -                 39.60   42.98      43.48   42.90  41.56   43.19  41.10   43.32
MB-LBP     9 x 9             70.84   70.43      74.69   75.86  79.03   70.97  66.29   71.26
CS-LBP     P=8, R=2, T=0.01  65.83   65.62      69.67   68.88  70.89   66.67  59.61   67.21
D-LBP      -                 32.16   44.24      43.78   43.07  43.32   43.57  41.14   44.24
ID-LBP     -                 25.14   37.09      36.93   36.68  36.59   37.09  35.05   37.30
LDiP       k=3               33.63   37.51      37.13   36.80  35.96   37.80  37.88   38.22
LDP        2nd order         47.24   51.25      51.59   51.13  52.13   51.42  48.04   51.59
LTP        T=11              43.11   46.53      47.58   46.95  43.57   48.16  48.41   48.29
LLBP       N=23              86.67   88.43      86.74   85.09  85.13   87.84  80.86   88.84

Table 2. Recognition rates using different texture-based techniques and distance computation methods with 2DPCA on AR database

Technique  Parameters        Euclidean  Column  AMD    Volume  Row    RowkNN  RowAMD

Raw        -                 35.45      49.12   8.67   7.62    1.41   3.50    45.87
LBP        P=8, R=4          78.67      81.50   1.17   2.54    3.54   8.33    86.54
ILBP       -                 63.37      64.50   5.04   1.54    7.54   8.08    69.41
MB-LBP     6 x 6             85.91      86.62   6.62   7.00    8.62   9.90    89.91
CS-LBP     P=8, R=2, T=0.01  49.25      51.29   1.58   3.29    6.33   1.58    53.08
D-LBP      -                 76.37      79.95   8.33   1.12    0.79   0.91    82.70
ID-LBP     -                 72.91      75.58   5.57   5.54    6.58   4.45    78.95
LDiP       k=3               52.67      53.17   3.70   2.62    6.54   2.60    58.95
LDP        2nd order         76.70      79.70   5.08   8.50    0.75   6.58    82.08
LTP        T=11              71.62      70.67   7.45   7.87    5.79   6.79    72.50
LLBP       N=17              75.17      75.67   6.13   7.29    9.29   6.54    81.42

Table 3. Recognition rates using different texture-based techniques and distance computation methods with 2DPCA on LFW Face Database

Technique  Parameters        Euclidean  Column  AMD    Volume  Row    RowkNN  RowAMD

Raw        -                 39.21      41.95   43.63  47.16   38.70  34.99   37.15
LBP        P=8, R=1          69.00      69.10   68.80  68.90   70.30  65.30   71.70
ILBP       -                 62.70      62.30   61.50  61.00   63.30  55.20   63.70
MB-LBP     9 x 9             70.40      70.90   70.40  70.70   71.80  66.29   73.00
CS-LBP     T=0.01            58.40      58.40   58.00  56.80   60.00  43.11   57.20
D-LBP      -                 36.20      35.70   35.60  35.50   38.80  33.60   40.80
ID-LBP     -                 36.20      35.20   34.70  32.50   38.30  27.60   40.10
LDiP       k=3               56.50      55.60   55.90  54.20   58.00  40.70   58.60
LDP        2nd order         51.70      52.00   50.40  48.90   52.30  39.70   53.40
LTP        T=11              65.10      65.20   63.80  63.40   63.50  54.60   62.90
LLBP       N=13              64.20      64.50   64.30  64.80   65.40  61.50   66.10