
Biometric Face Recognition Based on Enhanced Histogram Approach.

1. Introduction

Biometric face recognition is a new and evolving technology that governments and businesses use to identify offenders and protect innocent people. However, manufacturers of this expensive biometric technology must address the inevitable ethical questions: what if the technology identifies the wrong person, or infringes on individual rights? Researchers and designers continually measure and test biometric methods to ensure that the right person is identified, even though the ACLU (American Civil Liberties Union) argues that such technology overrides our basic right to a private life.

However, public attitudes are becoming less negative, due in part to 9/11 and the London bombings, and to the prevalence of DNA testing. Some of these concerns have kept face recognition products from reaching their full potential, but these concerns will fall by the wayside when governments and firms acknowledge that face recognition technology is the best passive and non-intrusive recognition technology available [1,2].

Biometric identification equipment assigns a numerical value to every subject captured by high-tech cameras. Biometrics identifies the spacing between the ears, eyes, and nose, while allowing for variants such as facial hair and glasses. Biometric technologies are not yet as accurate as fingerprinting: a positive ID can be made with biometrics 95% of the time, as opposed to 99% of the time with fingerprinting, but biometrics has the advantage in image data volume, with 1.3 billion photographs of individuals in official databases versus only several hundred million sets of fingerprints on file. Biometrics comprises methods for recognizing humans based solely on one or more intrinsic physical or behavioral traits. In computing, biometrics is used as a form of identity management and access control. It is also used to identify individuals in groups under surveillance [3,4].

Thus, the aim of this research is biometric face recognition. The first step of this study is a preprocessing operation applied to the captured face image; the features are then extracted after applying histogram equalization. The results obtained from implementing this algorithm on different types of face images from the face94 database are good. The execution rate of this algorithm depends mainly on the number of operations needed to recognize the indicated face image. Introducing the equalization technique into the implemented algorithm improved the recognition rate to an effective value according to the False Accept Rate (FAR) and False Match Rate (FMR).

The rest of the paper is structured as follows. In Section 2, we present the literature review. Section 3 describes the proposed biometric face recognition system. The results and analysis are presented in Section 4. Finally, Section 5 concludes the paper.

2. Literature Review

Much research related to face recognition has been published; some of it is summarized below. Gepner et al. noted a clear facial processing deficit in autistic children, especially in the domain of emotion. The aim of their study was to evaluate the influence of motion on the recognition of facial expressions by autistic youth. The results indicate that children with autism are not significantly worse than their controls in one of the experimental conditions. Compared with previous studies showing lower performance in autistic children than in control children when presented with static faces, the data suggest that slow dynamic presentations facilitate the recognition of facial expressions by autistic children [5].

Stone et al. studied covert face recognition in neurologically intact participants, using very brief stimulus presentations to avoid stimulus awareness. In Experiment 1, skin conductance responses (SCRs) to photographs of celebrities and unknown faces were recorded; the faces were displayed for 220 msec and 17 msec in a within-participants design. In Experiment 2, associative priming was found in a familiarity decision task when the prime face was shown for 220 msec, but did not occur when the primes were presented for 17 msec. In Experiment 3, participants could differentiate between faces shown without awareness in a two-alternative forced-choice decision [6].

Posamentier and Abdi presented a review of the processing of facial identity and facial expressions. The processing of basic facial expressions is detailed in light of behavioral and neuroimaging data. While experimental studies and neuropsychological data support the existence of two systems, the neuroimaging literature gives a less clear picture, showing considerable overlap in activation patterns in response to different face processing tasks [7].

Mignault and Chaudhuri showed that, on the premise that the inclination of the human head is homologous to animal dominance displays, one can hypothesize that when a head is bowed the face should be perceived as more submissive and sadder, as displaying emotions of inferiority, and, paradoxically, as showing greater contraction of the zygomaticus major muscle. Conversely, a raised head should be perceived as more dominant and as displaying more superiority emotions. In that work, two experiments were carried out in which 3D face models were shown to 64 participants. The results confirmed the hypotheses and showed that a raised head connotes happiness [8].

Gross showed photographs of human female, orangutan, and dog (boxer) faces expressing happiness, sadness, anger, surprise, and a neutral expression to children with autism, mental retardation, and language disorders, as well as to children in a clinical control group. For each species' face, the children were asked to identify the happy, sad, angry, or surprised expressions. Children with autism performed better when viewing the whole face than when partial faces were seen, and performed no better than chance when viewing only the upper part of the face [9].

Fabregas et al. have noted that people with autism spectrum disorders (ASD) show normal activation in the fusiform gyrus when they view familiar, but not unfamiliar, faces. That study used eye tracking to examine the attentional patterns underlying familiar versus unfamiliar face processing in ASD. Eye movements of 18 participants with ASD and 17 typically developing individuals were recorded while they passively viewed three categories of faces: unfamiliar faces, a familiar face that was repeated, and an unfamiliar face that was repeated [10].

Smach et al. applied generalized Fourier descriptors to invariant search under group actions. The application of motion groups provides a general recognition methodology that generalizes the classical method of Fourier invariants of object contours. In that paper, the results of this theory were used in a support-vector-machine context for 3D object recognition. As is customary in practice, the paper classified 3D objects from 2D information; however, the method is quite general and can be applied directly to 3D data in other contexts [11].

Wang et al. built on two-dimensional principal component analysis (2DPCA), an eigenvector-based method proven to be an efficient technique for extracting and representing image features. Starting from a parametric Gaussian distribution over the image space and a spherical Gaussian noise model for the image, they derived a probabilistic 2DPCA (P2DPCA). Using this probabilistic perspective, P2DPCA was then extended to a mixture of local P2DPCA models (MP2DPCA). MP2DPCA offers a method for modeling faces in unconstrained (complex) environments [12].

Tang presented a brief overview of pattern recognition with wavelet theory, covering the following aspects: the analysis and detection of singularities with wavelets; wavelet descriptors for object shapes; invariant representation models; recognition of handwritten and printed characters; texture analysis and classification; image indexing and retrieval; classification and clustering; document analysis with wavelets; iris pattern recognition; face recognition using the wavelet transform; classification of hand gestures; character processing with the B-spline wavelet transform; wavelet-based image fusion; and others [13].

Van Belle et al. developed a new stimulus set of 60 male faces in seven depth orientations. The set can be used to investigate configural versus featural mechanisms of face processing. Configural or holistic changes are produced by altering the overall shape of the face, while featural or part-based changes are achieved by altering the local shape of internal facial features. In all faces, external facial cues were removed or standardized. The stimulus set also contains a color-coded division of each face into areas of interest, which is useful for investigating eye movements in face scanning strategies [14].

Delac et al. considered the potential to perform face recognition in the JPEG and JPEG2000 compressed domains. This is achieved by avoiding full decompression and using the transform coefficients as input to face recognition algorithms. The paper proposes a new comparison methodology and uses it to show that face recognition can be implemented effectively directly in the compressed domain. The first part uses all the available transform coefficients and shows that the recognition rates are comparable to, and in some cases even higher than, those obtained using uncompressed image pixels. The second part proposes an efficient coefficient selection method; the results show that with the proposed method the recognition rate can be significantly improved while reducing computation time. Finally, a hypothetical compressed-domain face recognition system is outlined [15].

Al-Shayea et al. designed an algorithm based on the discrete wavelet transform (DWT) and principal component analysis (PCA) that minimizes image size. A complexity reduction is achieved by optimizing the number of operations needed; this optimization not only increases the recognition rate but also reduces the execution time. By introducing DWT into the PCA algorithm, a significant recognition rate improvement was achieved [16].

Al-Ani and Al-Waisy developed a multi-view face detection approach that classifies each input image into a face or non-face class using a two-class Kernel Support Vector Classifier (KSVC). Experimental results demonstrate successful face detection over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections [17].

Al-Ani et al. implemented an algorithm based on the similarities embedded in images, utilizing a wavelet-curvelet technique to extract facial features. The implemented technique can outperform other mathematical image analysis approaches. In that work, three major experiments were carried out on two face databases (MAFD and ORL), and a higher recognition rate was obtained by implementing this technique [18].

Prasad et al. presented an efficient approach to address the representation and matching issues in the face recognition process. The implemented approach concentrates on generating the required features from the face image, and the obtained results indicated good performance of the tested system [19].

Chen et al. designed a multimodal fusion framework for face and fingerprint images using a block-based feature-image matrix, extracting a type of middle-layer semantic feature (a local fusion visual feature) from the local features [20].

Nguyen et al. highlighted the importance of super-resolution in computer vision and its impact on biometrics, not only to improve the clarity and visual appearance of the images but also to improve the recognition performance of the system [21].

Future research efforts should also consider biometric cryptosystems for the protection of sensitive biometric data in Internet of Things (IoT) applications [22]. This would provide adequate privacy for the human components of these systems.

Jegede et al. presented a clear understanding of the current and emerging trends in key-based biometric cryptosystems [23].

This paper presents an algorithm that preprocesses the image and then extracts the features after applying histogram equalization. Good results are obtained from the implementation of this algorithm. The execution rate of this algorithm depends mainly on the number of operations needed to recognize the indicated face image.

3. The Proposed Biometric Face Recognition System

Biometric characteristics can be divided into two main classes:

* Physiological is related to the shape of the body, including fingerprint, face recognition, DNA, palm print, hand geometry, iris recognition, retina, and odor.

* Behavioral is related to a person's behavior, including writing, pacing, voice, and gait.

The standard biometric face recognition system, shown in figure (1), can be divided into the following main parts: first, preprocessing is applied to the data received from the sensors; the biometric system then extracts features from the received image; after that, a template is generated and compared with the stored templates; finally, a decision is taken to select the most similar template.
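To make the matching and decision parts concrete, the following is a minimal sketch (not the paper's exact implementation) of comparing a query template against a gallery of stored templates. The Euclidean distance, the function name, and the threshold-based decision are illustrative assumptions.

```python
import numpy as np

def verify(template, gallery, threshold):
    """Compare a query template against stored templates and decide.

    gallery: dict mapping a person id to a stored template (1-D numpy array).
    Returns (person id, distance) if accepted, or (None, distance) if rejected.
    """
    best_id, best_dist = None, float("inf")
    for person_id, stored in gallery.items():
        dist = np.linalg.norm(np.asarray(template) - np.asarray(stored))
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    # Decision: accept the most similar template only if it is close enough.
    return (best_id, best_dist) if best_dist <= threshold else (None, best_dist)
```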

3.1 Performance Measurements

The following points are used as performance metrics for biometric systems:

1. False accept rate or false match rate (FAR or FMR): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percentage of invalid inputs that are incorrectly accepted.

2. False reject rate or false non-match rate (FRR or FNMR): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percentage of valid inputs that are incorrectly rejected.

3. Receiver operating characteristic or relative operating characteristic (ROC): a visual characterization of the trade-off between FAR and FRR. The matching algorithm makes its decision based on a threshold that determines how close to a template an input must be in order to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts; consequently, a higher threshold will reduce the FAR but increase the FRR. A common variation is the detection error trade-off (DET) curve, obtained by plotting both axes on normal deviate scales. This more linear graph highlights the differences at higher performance levels.

4. Equal error rate or crossover error rate (EER or CER): the rate at which acceptance and rejection errors are equal. The EER value can easily be obtained from the ROC curve, and it is a quick way to compare the accuracy of devices with different ROC curves; in general, the device with the lowest EER is the most accurate (see the sketch after this list).

5. Failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input fail. This is most often caused by low-quality inputs.

6. Failure to capture rate (FTC): in automated systems, the probability that the system does not detect a biometric entry when presented correctly.

7. Template capacity: the maximum number of data sets that can be stored in the system.
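As a small computational illustration of items 1, 2, and 4, the sketch below computes FAR, FRR, and an approximate EER from two lists of matcher scores. It assumes a labelled test set yielding genuine (same-person) and impostor (different-person) distance scores, where lower means more similar; the function names and the threshold sweep are illustrative, not taken from the paper.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR and FRR at a given threshold (lower score = more similar)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor <= threshold)   # impostor attempts wrongly accepted
    frr = np.mean(genuine > threshold)     # genuine attempts wrongly rejected
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return (threshold, approximate EER)."""
    best_t, best_gap = None, float("inf")
    for t in np.unique(np.concatenate([genuine, impostor])):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    far, frr = far_frr(genuine, impostor, best_t)
    return best_t, (far + frr) / 2.0

# Example with made-up distance scores:
# t, eer = equal_error_rate([0.10, 0.15, 0.22], [0.18, 0.35, 0.41])
```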

3.2 Implemented Database

The implemented database was retrieved from the following site, whose publisher states: "You may freely download this data for your own exclusive research purposes". A sample of the implemented database (face94) is shown in figure (2) [3].

Description of these images is illustrated below:

* Total number of individuals: 395

* Number of images per individual: 20

* Total number of images: 7900

* Gender: contains images of male and female subjects

* Race: contains images of people of various racial origins

* Age Range: most individuals are between 18 and 20 years old

* Glasses: Yes

* Beards: Yes

* Image format: 24bit color JPEG

* Camera used: S-VHS camcorder

* Lighting: artificial, mixture of tungsten and fluorescent overhead

3.3 Implemented System

The implemented system as shown in figure (3) can be divided into the following steps: preprocessing, histogram technique, feature extraction and face verification.

First step is preprocessing that includes:

Image Enhancement, in which the input image is acquired and converted into a digital image.

Size Normalization, in which the image size is adjusted to a uniform size.

RGB to Gray Scale, in which the size-normalized image is converted into gray scale.

Noise Reduction, in which an adequate filter is applied to minimize noise as much as possible.

Second step is histogram technique that includes:

Histogram Enhancement, in which the histogram process is applied to enhance the distribution of pixels over the face image.

Histogram Improvement, in which the histogram is modified to improve quality of the face image.

Third step is feature extraction that includes:

Feature Extraction, in which an appropriate technique is applied to generate specific features.

Fourth step is face verification that includes:

Template Generation, in which a template is generated to represent a specific person.

Template Comparison, in which the generated template is compared with the stored database.

Decision Making, in which a decision is taken via a selected threshold. A minimal code sketch of these steps is given below.
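The following sketch strings the preprocessing, histogram, and feature extraction steps together using PIL, NumPy, and SciPy. The uniform size (180x200), the median-filter window, and the use of the equalized image's histogram as the feature template are illustrative assumptions rather than the authors' exact parameters.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def preprocess(path, size=(180, 200)):
    """Acquire, size-normalize, convert to gray scale, and denoise an image."""
    img = Image.open(path).resize(size)               # size normalization
    gray = np.asarray(img.convert("L"), dtype=float)  # RGB -> gray scale
    return median_filter(gray, size=3)                # noise reduction

def equalize_histogram(gray):
    """Classic histogram equalization: spread pixels evenly over gray levels."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[gray.astype(np.uint8)]                 # map each pixel through the CDF

def extract_features(path):
    """Use the normalized histogram of the equalized image as the template."""
    equalized = equalize_histogram(preprocess(path))
    feats, _ = np.histogram(equalized.ravel(), bins=64, range=(0, 256),
                            density=True)
    return feats
```

Verification then proceeds as in the earlier sketch, for example accepting a claim when the distance between two extracted feature vectors falls below a chosen threshold.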

4. Results and Analysis

The face recognition algorithm is designed and implemented using the face94 face image database. Figure (4) shows one image selected from the tested database. In this figure, we can see three images: the original image, the image after adding salt and pepper noise, and the image after adding Gaussian noise, respectively. The histogram of each image is also illustrated in the mentioned figure.

After resizing the input face image, we converted the color image into a gray scale image before applying the noise removal filter. It is clear from figure (5) that the median filter is more effective in the case of salt and pepper noise than in that of Gaussian noise. The blurring that appears in part (b) of figure (4) may cause false values that affect the facial features.

It is clear from figure (6) that the Wiener filter is more effective in the case of Gaussian noise than in that of salt and pepper noise. Some noise still appears in part (a) of figure (5), and that may cause false values that affect the facial features. Figure (7) shows the result of histogram equalization after applying the median filter, which illustrates a normal distribution of pixels over the gray scale axis. It is clear from this figure that this operation has a greater effect on the face image with salt and pepper noise.

Figure (8) shows the result of histogram equalization after applying the Wiener filter, which also illustrates a normal distribution of pixels over the gray scale axis. It is clear from this figure that this operation has a greater effect on the face image with salt and pepper noise, although some noise remains. Comparing the median and Wiener filter results shown in figure (7) and figure (8) respectively, we can conclude that the median filter gives better results in this face recognition task.
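A similar comparison can be reproduced numerically rather than visually, for example by measuring PSNR after filtering each noisy copy, as in the sketch below. The noise amounts, filter window sizes, and the choice of PSNR as the quality measure are assumptions for illustration; the paper itself compares the filters through the figures.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def add_salt_pepper(img, amount=0.05, seed=0):
    rng = np.random.default_rng(seed)
    noisy = img.astype(float).copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0        # pepper pixels
    noisy[mask > 1 - amount / 2] = 255.0  # salt pixels
    return noisy

def add_gaussian(img, sigma=15.0, seed=0):
    rng = np.random.default_rng(seed)
    return np.clip(img.astype(float) + rng.normal(0.0, sigma, img.shape), 0, 255)

def compare_filters(gray):
    """Print PSNR of median- and Wiener-filtered versions of two noisy copies."""
    for name, noisy in (("salt & pepper", add_salt_pepper(gray)),
                        ("gaussian", add_gaussian(gray))):
        print(f"{name}: median={psnr(gray, median_filter(noisy, size=3)):.1f} dB, "
              f"wiener={psnr(gray, wiener(noisy, mysize=5)):.1f} dB")
```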

5. Conclusions

Face recognition has become an important field through the revolution in biometric technology and computer vision, because the face is the most visible human identifier. The implemented algorithm starts by applying a preprocessing operation to the captured face image and then extracts the features after applying histogram equalization. Good results are obtained from the implementation of this algorithm on different types of face images from the face94 database. The execution rate of this algorithm depends mainly on the number of operations needed to recognize the indicated face image. Introducing the equalization technique into the implemented algorithm improved the recognition rate to an effective value according to the False Accept Rate (FAR) and False Match Rate (FMR).

References

[1] J. Fabregas, and M. Faundez-Zanuy, "Biometric Recognition Performing in a Bioinspired System," Cognition Computing, 1:257-267, 2009.

[2] L. Sterling, G. Dawson, S. Webb, M. Murias, J. Munson, H. Panagiotides, and E. Aylward, "The Role of Face Familiarity in Eye Tracking of Faces by Individuals with Autism Spectrum Disorders," Journal of Autism and Developmental Disorders, Vol. 38, No. 9, pp. 1666-1675, 2008.

[3] L. Spacek, "Description of the Collection of Facial Images", Updated Friday, 20-Jun-2008 12:17:59 BST, http://cswww.essex.ac.uk/mv/allfaces/index.html.

[4] R. Jafri, and H. R. Arabnia, "A Survey of Face Recognition Techniques," Journal of Information Processing Systems, Vol.5, No.2, pp. 41-68, 2009.

[5] B. Gepner, C. Deruelle, and S. Grynfeltt, "Motion and Emotion: A Novel Approach to the Study of Face Processing by Young Autistic Children," Journal of Autism and Developmental Disorders, 2001, Vol. 31, No. 1, pp. 37-45, 2001.

[6] A. Stone, T. Valentine, and R. Davis, "Face recognition and emotional valence: processing without awareness by neurologically intact participants does not simulate covert recognition in prosopagnosia," Cognitive, Affective, & Behavioral Neuroscience, Vol. 1, No. 2, pp. 183-191, 2001.

[7] M. T. Posamentier, and H. Abdi, "Processing Faces and Facial Expressions," Neuropsychology Review, Vol. 13, No. 3, pp. 113-143, 2003.

[8] A. Mignault, and A. Chaudhuri, "The Many Faces of a Neutral Face: Head Tilt and Perception of Dominance and Emotion," Journal of Nonverbal Behavior, Vol. 27, No. 2, pp. 111-132, 2003.

[9] T. F. Gross, "The Perception of Four Basic Emotions in Human and Nonhuman Faces by Children with Autism and Other Developmental Disabilities," Journal of Abnormal Child Psychology, Vol. 32, No. 5, pp. 469-480, 2004.

[10] J. Fabregas, M. Faundez-Zanuy, "Biometric Face Recognition with Different Training and Testing Databases," Lecture Notes in Computer Science, Springer, Berlin, Heidelberg Vol. 5042, pp. 44-55, 2008.

[11] F. Smach, C. Lemaitre, J. Gauthier, J. Miteran, and M. Atri, "Generalized Fourier Descriptors with Applications to Objects Recognition in SVM Context," Journal of Mathematical Imaging and Vision, Vol. 30, No. 1, pp. 43-71, 2008.

[12] H. Wang, S. Chen, Z. Hu, and B. Luo, "Probabilistic two-dimensional principal component analysis and its mixture model for face recognition," Neural Computing & Applications, Vol. 17, No. 5-6, pp. 541-547, 2008.

[13] Y. Tang, "Status of pattern recognition with wavelet analysis," Frontiers of Computer Science in China, Vol. 2, No. 3, pp. 268-294, 2008.

[14] G. V. Belle, M. D. Smet, P. D. Graef, L. V. Gool, and K. Verfaillie, "Configural and featural processing during face perception: A new stimulus set," Behavior Research Methods, Vol. 41, No. 2, pp. 279-283, 2009.

[15] K. Delac, M. Grgic, and S. Grgic, "Face recognition in JPEG and JPEG2000 compressed domain," Image and Vision Computing, Vol. 27, pp. 1108-1120, 2009.

[16] Q. K. Al-Shayea, M. S. Al-Ani, and M. S. Abu Teamah, "The Effect of Image Compression on Face Recognition Algorithms," (IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 8, pp. 56-60, 2010.

[17] M. S. Al-Ani, and A. S. Al-Waisy, "Multi-View Face Detection Base on Kernel principle analysis and Kernel Support Vector Techniques,", International Journal on Soft Computing (IJSC), Vol. 2, No. 2, 2011.

[18] M. S. Al-Ani, and A. S. Al-Waisy, "Face Recognition Approach Based on Wavelet--Curvelet Technique," Signal & Image Processing: An International Journal (SIPIJ) Vol. 3, No. 2, 2012.

[19] R. S. Prasad, M. S. Al-Ani and S. M. Nejres, "An Efficient Approach for Human Face Recognition," International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 5, No. 9, 2015.

[20] Y. Chen, J. Yang, C. Wang, and N. Liu, "Multimodal biometrics recognition based on local fusion visual features and variational Bayesian extreme learning machine," Expert Systems with Applications, Vol. 64, pp. 93-103, 2016.

[21] K. Nguyen, C. Fookes, S. Sridharan, M. Tistarelli, and M. Nixon, "Super-resolution for biometrics: A comprehensive survey," Pattern Recognition, Vol. 78, pp. 23-42, 2018.

[22] A. Ur-Rehman, S. Ur-Rehman, I.U. Khan, M. Moiz and S. Hasan, "Security and Privacy Issues in IoT," International Journal of Communication Networks and Information Security, Vol. 8, No. 3, pp. 147-157, 2016.

[23] A. Jegede, N. I. Udzir, A. Abdullah, and R. Mahmod, "State of the Art in Biometric Key Binding and Key Generation Schemes," International Journal of Communication Networks and Information Security, Vol. 9, No. 3, pp. 333-343, 2017.

Qeethara Al-Shayea (1) and Muzhir Al-Ani (2)

(1) Department of Management Information Systems, Al-Zaytoonah University of Jordan, Amman, Jordan (2) Department of Information Technology, University of Human Development, Sulaimani, KRG, Iraq